22 comments

  • sockbot 16 hours ago
    The problem is really behavioural, not the tooling. People who do not understand, test, and document their decision-making in their PRs should not be submitting them, regardless of what tooling (AI or otherwise) they used to create them.

    This problem existed before AI, but it is now worse due to the spamming nature of these "contributors". It's another form of eternal September, where people unfamiliar with the norms of team software development are overwhelming existing project maintainers faster than the maintainers can teach them the norms of behaviour.

    In the end, some sort of gatekeeping mechanism is needed to avoid overwhelming maintainers, whether it's a reputation system, membership in an in-group, or something else.

    • heavyset_go 14 hours ago
      No, it is a tooling problem.

      The tooling is telling laymen that they built wonderful things that definitely work and perfectly fix bugs and add features.

      The tooling gasses them up and is simply wrong in these cases.

      If your tool regularly lies, gaslights and produces wrong results, that's a tooling issue.

      • zamadatix 1 hour ago
        There will usually be more than one issue; sockbot's solution just happens to deal with more than one at a time.
      • Den_VR 13 hours ago
        Does the hammer lie to you that everything is a nail?

        Can a voltmeter _lie_ to you?

        EEs are expected to know when their measurements are wrong. And Professional Engineers are legally accountable for the consequences of such mistakes.

        • possibleworlds 13 hours ago
          If a hammer had a chat interface that said everything was a nail then the answer would be yes, the hammer lies to you about everything being a nail.
          • walletdrainer 11 hours ago
            If someone believes a hammer when it tells them such things, they should probably have some sort of a caretaker assigned to help them through life.
            • mvid 11 hours ago
              If hammer companies were suddenly the most valuable international companies, and spent millions on ad campaigns and lobbying about trusting the hammer interface, then you could assume a large number of people might trust the hammer interface.
              • leidenfrost 9 hours ago
                Still, it's a tool.

                Even if your tool learns to talk and to make decisions, it's still a tool, not a person. You're the person and the one responsible for the decisions you make based on your tools.

                Stepping back from the analogy, the problem is that we conflated software _engineers_ with "coders". A lot of people thought their job was to create code; we gave them a tool to generate a lot of code fast, and they truly think that "more code" = "more good".

                • TheScaryOne 1 hour ago
                  > it's still a tool, not a person.

                  Tell that to the CEOs who have replaced all of their yes-men with yes-chatbots.

                • kiba 2 hours ago
                  A hammer usually doesn't have the power to persuade people.
              • walletdrainer 11 hours ago
                Where are the ad campaigns telling me to trust LLMs?

                I don’t use an adblocker, do read traditional dead tree newspapers and do get exposed to satellite tv channels.

                I don’t think I’ve ever seen anyone anywhere telling me how reliable LLMs are.

                Pretty sure this tech sells itself to consumers, enterprise sales are what they’ve always been.

                • card_zero 10 hours ago
                  So now you're pivoting away from the caretaker proposal? I thought it had potential but I don't know how you'd fund it.
                  • walletdrainer 10 hours ago
                    > I thought it had potential but I don't know how you'd fund it.

                    The same way we fund other social services here in Europe. If an individual is incapable of caring for themselves, the state is expected to care for them.

          • digitalPhonix 8 hours ago
            That wasn’t the question though? A hammer doesn’t have a chat interface, that’s the point.
        • heavyset_go 10 hours ago
          If I had a hammer robot that I told to go hammer some nails in a birdhouse, and it goes "Sure, I'm on it!", then it nails a cat to the wall and says "Here's your new complete birdhouse, it's perfect in every way and will make everyone jealous", then yes, that is a tooling issue.
          • digitalPhonix 8 hours ago
            The question wasn’t about a hammer robot, it was about a hammer
            • Tostino 5 hours ago
              That's not a good analogy then. What benefit is provided by a hammer that just tells the operator (who has eyes and can see) that there is a nail under it (and, I assume, to swing)?
        • Gud 7 hours ago
          Yes, a voltmeter can lie to you.

          Full disclosure: I do high voltage testing for a living.

          • pardon_me 5 hours ago
            It can misread, but meters cannot actively generate an incorrect output based on user expectations.
            • Gud 5 hours ago
              …yet!
              • wookmaster 3 hours ago
                Enshittification knows no bounds
        • fao_ 11 hours ago
          If software engineering wants to progress past being an "art" and be considered an engineering discipline, then it should adopt methods and practices from engineering. First and foremost, one of the universal methodologies is root cause analysis of faults, and redundancy to avoid them. E.g. the FAA requires two pilots on planes, and each system is built redundantly, so that if an engineer misses a bolt or rivet, the plane won't crash. Intersections are designed such that there is a forcing function[0] on the behaviour of motorists to prevent faults. Or, to take your tool analogy, nail guns are designed to be pressed against something with a decent amount of pressure before you can fire them.

          All of these systems are designed around the core idea of "a human acting irrationally or improperly is not at fault" and, furthermore, that a human can have a bad day and still avoid a mistake. They all steer someone around a possible fault. Hell, the reason why we divide the road into lanes is itself a forcing function to avoid traffic collisions!

          So, where is the forcing function in large language models? What part of a large language model prevents gross misuse by laymen?

          I can think of examples here and there, maybe. OpenAI had to add guard rails to stop people from poisoning themselves with botulism and boron, etc. But the problem here is that the LLM is probabilistic, so there's really no guarantee that those guard rails will hold. I seem to remember a paper from a few months back, posted here, showing that AI guardrails cannot be proven to work consistently. In that context, LLMs cannot be considered "safe" or "reliable" enough for use. Eddie Burback has a very, very good video showing an absolute worst-case result of this[1], which was posted here last year. On top of that, off the top of my head, Angela Collier has a really, really good video demonstrating that there's an absolute plethora of people who have succumbed, in ways large and small, to the bullshit AI can spew[2].

          I feel like if most developers were actually serious about being an engineering discipline, like we claim, then we wouldn't have all jumped on the LLM bandwagon until it had been properly tested and reached a certain level of reliability. Instead there is a sizable chunk of people saying they've stopped coding by hand entirely, and aren't even reviewing the code! I.e., they've thrown out a forcing function that existed to prevent erroneous PRs being committed! And for some bizarre reason, after about two decades of people talking about type safety and how we need formal verification to reduce error, everyone seems to be throwing "reduction of error" out the window!

          [0]: https://en.wikipedia.org/wiki/Behavior-shaping_constraint (if you're curious about the term)

          [1]: https://www.youtube.com/watch?v=VRjgNgJms3Q

          [2]: https://www.youtube.com/watch?v=7pqF90rstZQ

          • anon7000 11 hours ago
            > I feel like if most developers were actually serious about being an engineering discipline, like we claim, then we wouldn't have all jumped on the LLM bandwagon until they'd been properly tested and had a certain level of reliability

            Development can’t be a “serious” engineering discipline because the economics of tech companies don’t allow for it. But this has a lot less to do with developers, and significantly more to do with the severe pressure company executives are putting on everyone to use AI, no matter what.

            But let’s be honest, many companies have adopted things like root cause analysis and blameless postmortems to deal with infrastructure reliability and reducing incidents. Making systems resilient to human mistakes, making it impossible for the typo to blow up a database, etc. are considered best practices at most places I’ve worked. On the product side, I think it’s absolutely normal to make it hard for a user to take an action that would seriously mess up their account.

            The core problem happens when your product idea (say, social media) has vast negative externalities which the company isn’t forced to deal with economically. Whereas in other engineering disciplines, many things are actually safety related and you could get sued over. I’m imagining pretty much anything a structural engineer or electrical engineer works on could seriously hurt or kill someone if a bad enough mistake was made.

            That just doesn’t apply to software. There is a lot of “life & death” software, but it’s more niche. The reality is that 90% of what the tech industry works on is not capable of physically harming humans, and it’s not really possible to sue over the potential negative consequences of… a dev tooling startup? It’s a very, very different industry than those other engineering disciplines work in.

            But, software engineering has actually been extremely successful at minimizing risk from software defects. The worst software-level mistake I’m likely to make could… crash my own program. It likely wouldn’t even crash the operating system, since it’s isolated. That lack of trust in what other people might do is codified everywhere in software. On an iPhone, I’m downloading apps written by tens of thousands of other engineers, at essentially no risk to myself at all.

        • perching_aix 11 hours ago
          > Can a voltmeter _lie_ to you?

          Hell fucking yes it can?

          • digitalPhonix 8 hours ago
            When used according to its datasheet/user manual, how?
            • Machado117 7 hours ago
              Cheaper voltmeters will lie about RMS values when not reading a pure sine wave.
            • perching_aix 8 hours ago
              When their precision mismatches their accuracy (or your expectations as driven by their design), just like with any other metrology tool.

              Now you might say: "but the datasheet will give you the tolerances, and the manual will tell you to mind it!"

              And yes, that's true. Just like how LLM providers also do: they tell you that outputs may be arbitrarily wrong, and that you should always check for mistakes.

              Is this bullshit? Yes. So are metrology tools that have a mismatching precision and accuracy, need calibration, and have designs that fail to make you mind either of these, sending you to reading duty instead. Which just so happens to be a whole lot of them.

              It is also absolutely not bullshit of course, because it is a fundamental limitation, just like those properties are for metrology devices. LLMs produce arbitrary natural language. Short of becoming able to perfectly read and predict the users' mind, they'll never be able to make any hard assurances, ever.

              Defective devices also exist, and so does incorrect or incomplete documentation.

              • digitalPhonix 1 hour ago
                That's the difference - they have well-defined and bounded errors. LLMs do not.
                • perching_aix 1 hour ago
                  Which is a notably different argument than whether they can lie to you.

                  Why are we skipping over the miscalibrated, defective, or ill-documented devices bit though? Those also all have arbitrary error.

      • bingo-bongo 12 hours ago
        > ..laymen..

        That’s the behavioral problem.

        When AI is assisting a professional, the outcome is vastly different.

        • amiga386 7 hours ago
          If investors invest heavily in lemon juice, then go around hyping it and selling it with the promise that it makes you invisible to cameras (which it doesn't), it doesn't matter how stupid and gullible the rubes who fall for that are: when people start attempting to rob banks with lemon juice on their faces, the investors bear the responsibility for giving them that idea.

          (cf https://en.wikipedia.org/wiki/1995_Greater_Pittsburgh_bank_r...)

          Hype is bad. Unwarranted hype is worse. Enabling people who can't do a thing to do what they think the thing is, but isn't, because they don't know any better, is inflicting a pox upon the world.

          • nh23423fefe 1 hour ago
            What you say makes no sense, and no one will act in the ways you prescribe.
      • AnthonBerg 12 hours ago
        By definition of responsibility it is a behavioral problem.
      • potsandpans 1 hour ago
        Everyone: "please please please don't personify LLMs, it's so harmful."

        Also everyone, "the inanimate tool is lying and tricking people into inundating open source developers with poorly thought out slop."

      • fatata123 14 hours ago
        [dead]
      • onion2k 13 hours ago
        > If your tool regularly lies, gaslights and produces wrong results, that's a tooling issue.

        It's a human issue if you don't recognise that the code it's generated is wrong. That will never change no matter how good the tooling gets.

        • kiba 13 hours ago
          The tooling is the issue because humans designed the tooling wrong. It's a chatbot interface fine-tuned for sycophancy. That's not a coincidence.
        • drw85 9 hours ago
          Would it be a human issue, if you type something into a calculator and the calculated result is wrong?

          Would anyone use a calculator confidently, if the result was randomly generated?

        • Hamuko 12 hours ago
          Isn't part of the problem that these tools are advertised as allowing non-coders to code? How are you gonna recognise that the code is wrong when you don't know how to code and the product is telling you that you don't even need to?
      • teo_zero 12 hours ago
        Technical analysis tells you that a stock is in an upwards trend. You invest all your money in it without thinking twice. The price goes down and you lose thousands of dollars. Is it a tool problem?

        LLMs spit out a sequence of tokens that is the most probable continuation of the input. LLMs don't lie any more than technical analysis does when it predicts the most likely trend of stock prices. It's up to you how to use this information.

    • emsign 14 hours ago
      Nope. If the tooling is fooling then the tooling IS the problem.
      • ASalazarMX 8 minutes ago
        The tooling shouldn't be more clever than the user, otherwise the user is incredibly gullible. IMO the problem is that AI is filling an emotional need, not an intellectual one.
    • 7e 16 hours ago
      [flagged]
      • gassi 15 hours ago
        You wouldn't hold that opinion if you maintained a popular open-source repo or interacted with AI "PR review" tools at a serious level. Even the most SOTA models are willing to accept/merge absolutely trash PRs so long as the submitter can convince them that they've addressed its review comments.
  • MBCook 17 hours ago
    It’s starting to feel like we may need to go back to the model where you need to be invited to be able to submit code or PRs. The barrier is just too low now for popular projects.
    • jonhohle 16 hours ago
      It’s not just popular projects. On a small utility I maintain, I received a PR with more lines than the whole project had. I’m happy to be a good maintainer, but something that’s effectively an AI rewrite isn’t something I care to review, and since I can’t vet it, I can’t blindly accept it.
      • MBCook 15 hours ago
        I’m sure it’s happening all over; I was assuming the smaller projects could deal with a handful of contributions.

        Something like a big emulator is very complex and has a LOT of motivated users who aren’t going to be able to make quality submissions.

        So they get it in volume where it may be nearly impossible to deal with.

    • x-complexity 15 hours ago
      Forks need to be normalized again.

      Logistically & brand-wise, they're messy to deal with, but they result in a "filter" of sorts that the original project can pick & choose to upstream back into their code.

      • overfeed 11 hours ago
        > Forks need to be normalized again

        No one's going to be trusting forks or new projects for a while. The bar for merely generating new code is now too low to give a meaningful signal. Reputation and longevity will likely be useful metrics, hence the AI pull requests will continue to be opened against high-reputation projects that have strong brands. Not unlike Ethereum's switch from proof of work to proof of stake.

    • hsbauauvhabzb 17 hours ago
      I think some sort of reputation score would make more sense, assuming it’s possible to design one that can’t be easily faked
      • Groxx 17 hours ago
        Perhaps something where you can build a graph of who invited whom so you could prune entire sections that act maliciously. One might even consider it a to be a web of connections which are built on (or torn down by the loss of) trust.

        Sounds futuristic. Maybe it's an NFT on an agentic blockchain for deep-sea solar farm mining?

        • lobf 16 hours ago
          Private torrent trackers apparently do this, and have done so for years.
          • perching_aix 16 hours ago
            They're sarcastically describing web-of-trust: https://en.wikipedia.org/wiki/Web_of_trust

            Why are they doing that (i.e. being sarcastic)? Who knows.

            • Groxx 15 hours ago
              Because it's by far the dominant strategy for distributed trust-ranking systems out there, with decades of research around it. Might as well look at the forest when realizing that it'd be nice if trees existed.

              And I don't think anyone actually trusts any major actor to verify anything, so a fully centralized system is likely out. Otherwise people would be hyped about WorldCoin, instead of recognizing it for the stupendously malicious grift that it is.

  • HDBaseT 18 hours ago
    I recently started using Claude/ChatGPT/Chinese models for some PS3 homebrew work.

    Every model seemingly falls flat in this scope of programming. The PS3 is very complex and the tooling is fairly undocumented in a lot of instances. It doesn't surprise me that most of these AI PRs are nonsense.

    If anyone else has attempted writing PS3 homebrew apps using AI and has refined their tooling/systems/automation please let me know how you got the agents to work for you (:

    • Aurornis 16 hours ago
      I like to send Claude Code or Codex on max settings off to try a problem in parallel while I work on it.

      In a complex codebase it’s funny how often they’ll come back with gigantic commits that just make everything worse or accomplish the goal but have 1000 lines of unnecessary complexity.

      Every time they present it with a confident summary. I can see how a junior or just lazy dev would think this is their ticket to becoming a contributor to a repo with some big thing to put on their resume.

    • _JoRo 17 hours ago
      I've been working on a project myself over the last few weeks where the documentation is quite minimal. To no surprise, the LLMs fell flat at generating any sort of meaningful code. However, I realized that if I focused first on building out documentation and coding tools (linters, parsers, formatters, etc...), LLMs can do a decent job at solving fundamental problems.
    • nxobject 14 hours ago
      Similar for me, but regarding the Classic Macintosh APIs. The difference is that there are plenty of books, and some source code available… just not enough to stop Codex from writing subtly wrong gibberish.

      I get the impression that the “10x velocity!!!!” claims still only reflect which areas have a sufficient corpus to learn from, rather than any inductive reasoning.

      • christkv 13 hours ago
        You are completely right. It's a what-is-the-next-token guessing machine, so without a corpus its guesses are worse, as expected.
        • eschaton 11 hours ago
          I’m glad to see someone else in these parts understands.
    • Agentlien 13 hours ago
      I have had the same result when trying similar things for graphics on modern consoles. I hear so much great stuff about AI coding, but in my niche it just seems to fall flat. Even just rubber-ducking around graphics and performance, they sound like a beginner who has read a lot of good blogs but has no practical experience.
      • mysterydip 5 hours ago
        > has read a lot of good blogs but with no practical experience.

        I mean, yes essentially, right? Scraping every blog on the topic to generate a response without any actual coding experience behind it is literally how it was made.

        • Agentlien 4 hours ago
          Exactly. And it shows. It knows all the sayings, the expressions, etc., but shows very little actual practical understanding of them - especially how to apply the knowledge it simulates.
    • gambiting 9 hours ago
      I don't know about homebrew(not done it since PSP times), but I work in games development and we use Claude extensively. The trick is just to feed it all the console docs and then it's pretty amazing. If you have access to PS3 docs still, just give them to Claude as part of the session, I'm sure it will improve tenfold.
    • eschaton 11 hours ago
      Why would you expect the LLMs to work for PS3 development? How much PS3 code do you think there is in the training set?

      You do realize that’s actually how they work, right? They don’t understand or reason about anything, your prompt and other input is just about trying to guide where the pachinko balls fall in the output.

  • mrandish 17 hours ago
    If you look into the arcane architecture of the PlayStation 3 console, you quickly gain an appreciation for just how impressive the RPCS3 emulator is. The PS3 is definitely one of the hardest emulation targets, so it's wild that they have the majority of the library working (with enhancements like upscaling and higher frame rates on many of the titles).

    I guess it's nice that people want to help, and AI-assisted coding can be fine, but I can't imagine submitting a PR to a high-profile, much-revered project like that without reviewing and thoroughly testing it myself.

    • HerbManic 11 hours ago
      When I saw that it treats SPE code in the same manner as shader code on GPUs, that was an enlightening moment. It made so much sense to treat it like that. That move, while complicated, removes a lot of potential performance issues.
  • tick_tock_tick 17 hours ago
    This is like when everyone started opening code-of-conduct PRs against every open-source repo, except now a lot of these AI ones take actual effort to recognize as bad faith.

    Or maybe it's worse, because a lot of them aren't in bad faith; they're well-meaning people who just don't know or understand enough to realize they aren't being helpful.

  • jamesu 17 hours ago
    We've seen a few takes on this kind of issue, but the solution I liked the best was the Linux "developers take full responsibility" approach. The "Assisted-by:" tag was a pretty nice touch too.

    The article unfortunately feels more like a rant than a good exploration of the problem space.

    • ollien 17 hours ago
      I've struggled with this "responsibility" take. What does it mean in the context of an open source project? As far as I understand it, the original contributors of bugs are often not the ones fixing them (though they can be). Is it that if you write enough buggy code you get banned as a contributor? Is it that you're not allowed to say Claude ate my homework?
      • x-complexity 15 hours ago
        > Is it that if you write enough buggy code you get banned as a contributor?

        If this is a consistent issue, your contribution would (ideally) be continuously put into a backlog until someone else with no connection to you verifies that it's as bug-free as it appears to be. (Excluding non-obvious security & performance issues)

        > Is it that you're not allowed to say Claude ate my homework?

        Yes. As the contributor, you should be the first one to look over the code, not someone else.

    • eschaton 11 hours ago
      If the submitter of a PR needs to take full responsibility for the code within, then the code within cannot be LLM-generated because—depending on whether you consider it an original work by the LLM or a resurrected copy of its training data—it’s either not subject to copyright or under someone else’s copyright.

      (At least for any coding LLM that isn’t trained entirely on one company’s own code and also offered by that company. That sort of LLM might be able to make the regurgitation argument work for them.)

      Thus any project requiring “full responsibility” by submitters may as well just ban submitters from using LLM-based tooling. That’s the tack I’ve taken for my projects, and a number of large projects have taken that stance too.

      (Before someone trots out “Technical enforcement of this is impossible!” be assured that such rules are not negated by a lack of technical enforcement; after all, there’s also no way to technically enforce that you didn’t copy someone else’s code and paste it in. But by thinking a lack of technical enforcement matters, you’re outing yourself as someone who will happily violate rules if they think they won’t get caught.)

    • bayarearefugee 17 hours ago
      > the solution I liked the best was the Linux "developers take full responsibility" approach.

      The people who can realistically submit a Linux patch that will ever get looked at are already a super select group through who-you-know network effects.

      You can't apply the same system to random open source projects. The best option for people who run random small-to-medium-sized open source projects is just to ban all unsolicited PRs; otherwise you're going to spend way too much effort sorting through the slop.

      • pabs3 16 hours ago
        I don't think that is true at all, I'm just a random FOSS dev with no connection to the Linux kernel community and I have gotten two small commits into the Linux kernel.
  • _JoRo 17 hours ago
    I'm curious what percentage of PRs are just the AI blindly writing code and submitting a PR without testing, and which have at least been locally tested to some degree. Any OSS maintainers have any insights on this?
    • koolba 17 hours ago
      > … and submitting a PR without testing, and which have at least been locally tested to some degree.

      There’s no need to test the PR when you already asked the AI to not make any mistakes.

    • gerdesj 17 hours ago
      Ask ChatGPT: You'll get an authoritative answer!
    • greenknight 17 hours ago
      That's the thing: what if the codebases had CLAUDE.md / AGENTS.md files which clearly dictated that

      A) tests need to pass

      B) anything you write needs tests

      C) the code quality must adhere to these standards

      Etc., etc. - all helping the LLMs that people vibe code with produce better-quality results.
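
      For example, a minimal AGENTS.md along these lines - just a sketch; the build/test commands below are made-up placeholders, not any particular project's actual ones:

        # AGENTS.md
        ## Rules for contributions (AI-assisted or not)
        - Build with `cmake --build build` and fix every new warning before opening a PR.
        - Every behavioural change needs a test; run the full suite and paste the results into the PR description.
        - Match the existing code style; never reformat files you aren't otherwise touching.
        - If you cannot build and run the result locally, do not open a PR.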

      Without these in place, people who want to help out can't, because they don't understand what's going on.

      Adding stuff to these files would allow developers to give guidelines/guardrails for development using these agents.

      Should the barrier of entry be someone who knows how to code? Or should the barrier of entry be someone who is motivated to help with open-source software?

      • x-complexity 15 hours ago
        > Should the barrier of entry be someone who knows how to code? Or should the barrier of entry be someone who is motivated to help with open-source software?

        The motivation to help the OSS project should also come with the obligation to learn how the software operates, at least on a conceptual level. The desire to help does not grant people the pass to sledgehammer their way into adding in a feature.

      • int0x29 17 hours ago
        It really shouldn't be the RPCS3 devs' problem to fix other people's broken AI pipelines.
      • GCUMstlyHarmls 17 hours ago
        > Should the barrier of entry be someone who knows how to code? Or should the barrier of entry be someone who is motivated to help with open-source software?

        Probably yes? QED: submitting slop PRs is not helping. If "helping" is sticking it through an LLM, the developers can do that themselves, with better insight and guidance. If you must help via an LLM, donate cash for tokens.

        If you can't code, and can't donate cash/machine time, help by confirming issue reproductions, design, wikis, documentation, whatever.

      • egypturnash 15 hours ago
        How about claude.md/agents.md files that just say "Don't".
      • techpression 16 hours ago
        What motivation? Is it motivation to start Claude Code and let it run when you have no idea what’s going on? Is motivation the same as token spend? Yes, the barrier should definitely be someone who knows how to code when submitting, well, code.

        And since the training data seems to be very lacking, no amount of markdown would fix that.

      • _JoRo 16 hours ago
        I agree, and yet I think even with a well-engineered agent harness, there are a lot of unknown unknowns out there.

        I imagine the problem will persist if users continue to submit PRs that pass the harness without being able to validate for themselves that it actually works.

      • jmye 14 hours ago
        > someone who is motivated to help with open-source software.

        I don’t mean to pile on, but like… are you actually helping if you don’t understand the code you’re fixing, don’t understand the problem you’re addressing, and don’t understand the potential solution you’re submitting for that unknown problem? Or are you just making a lot of distracting noise so you can pat yourself on the back?

        I think people need to be a bit more self-critical about what they’re actually up to, and who is actually benefiting from it. Generally, from comments like yours, the answers seem to be “self-aggrandizement” and “no one”, but people really don’t want to think they might be the bad guys.

      • estimator7292 16 hours ago
        [dead]
  • saagarjha 18 hours ago
    The emulation space is particularly bad about this because there are a lot of semi-technical and "well meaning" users who will do anything to get their games to play better and AI gives them a way to make it seem like they are doing something useful, without being able to judge the quality of the output they are producing.

    One of the projects I work on recently had a guy drop by and explain that he wanted to use Claude to clean up our backlog and he absolutely could not fathom why I kept bringing up that we would only accept PRs that reduced our work instead of increasing it. "Do you know what Opus 4.7 is?" "Why are you so close-minded?". Unfortunately it is very hard for these users to understand that the thing they are using has a bar for quality and the bugs that still slip through cannot be solved by waving a magic wand at it.

    • loloquwowndueo 17 hours ago
      A good argument to use could be: I can use Claude myself, so I will if I need to, but you using Claude on my behalf doesn’t save me any work, it just introduces another layer of noise into the mix. (Yes calling the guy “noise” haha)
      • saagarjha 10 hours ago
        Yeah I basically said this and the guy claimed I didn't understand what he was offering
    • NoMoreNicksLeft 16 hours ago
      Over the last month, I've been using Claude to assist in some things that were at the edge of my ability (or maybe just a hair's breadth beyond it). I've added features to open source projects that everyone's been waiting years for. I always fork it telling myself that I want to be able to submit PRs, but really I'm just making the changes for myself, since I don't even have the nerve to show it off.

      If these people can make changes to the emulators that will actually make the games more playable for them, the changes don't have to go back into the official project. It works for them and makes things better.

      Right now, I've been working on some changes to the mkv container spec to have embedded scripting capable of doing Black Mirror: Bandersnatch in interactive mode, in VLC and mpv. I've already added mutable torrent support to Transmission, and it works. But yeah, if someone took a look at it who really knew the code, they'd see it was AI slop and do a hard pass.

      • saagarjha 10 hours ago
        There's another person who for a long time wrote kind of hacky patches to work around bugs (e.g. disabling multithreading to avoid a longstanding race) and now he's using AI to fix everything in his own fork. I guess he can do that? We haven't really been able to use any of his changes though.
      • sitkack 15 hours ago
        You could use your changes to show that a feature is useful, but not do a PR. The specific code is no longer that important.
        • oneshtein 11 hours ago
          In era of AI, prompts are the source.
  • b00ty4breakfast 16 hours ago
    what is the appeal of blindly blasting open source projects with high-volume PRs? If you're trying to help the project to accomplish something, it doesn't follow that a firehose approach is tenable, if only for the fact that reviewing the code takes time.
    • x-complexity 15 hours ago
      > what is the appeal of blindly blasting open source projects with high-volume PRs?

      The prestige of being "the one that added feature X to OSS project Y". The things that would've been actually useful (bug diagnostics/troubleshooting, merging duplicate issues & PRs) do not offer the same level of prestige.

    • VoidWhisperer 15 hours ago
      At some point it was in order to have merged contributions to popular public projects that you could point to in job interviews, but I'm not sure that is the case anymore, since some of these people have no intention (as far as I can tell) of finding an SE job.
    • MBCook 15 hours ago
      At this point these could just be gamers who want to play a game and are being annoyed by something not being right.

      Maybe they use Claude or whatever and tell it to fix the problem and then just blindly submit it.

      I could see people doing that without knowing enough to be able to compile and test the code, ignoring whether it’s good or not. So they just submit it and hope it gets merged to “fix” the problem, having no understanding of what’s involved or how much of a burden that is.

      Now imagine a whole bunch of people doing that for a whole bunch of really complex bugs in 75 different games. It’s not like the PlayStation 3 was a simple system.

    • numpad0 14 hours ago
      Instant gratification? This feels like the exact same phenomenon as kids trying to profiteer from repackaged game mods and ripped game assets.
    • doctor_radium 13 hours ago
      Just wanted to say that reading all the comments here, I'm getting flashbacks to alt.aol.sucks.
  • stuaxo 10 hours ago
    If someone doesn't understand every bit of the PR they are submitting, it should not be submittable - yes, it takes time, but you are expecting the devs on the other end to take more time than that.

    Though one plus point: a dev can ask the LLM to:

    - Split a PR into logical patches
    - Explain each one

    From there, ask questions and edit and rebase each one until it makes sense, because it's guaranteed that not all of it will until you do that.
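
    Concretely, that cleanup is plain git: interactive rebase plus partial staging. A sketch of one way to do it (assuming upstream is origin/master):

      git rebase -i origin/master   # mark each oversized commit as "edit"
      git reset HEAD^               # explode the commit back into the worktree
      git add -p                    # stage one logical change at a time
      git commit                    # explain that patch in your own words
      git rebase --continue         # repeat until every patch stands alone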

  • motbus3 10 hours ago
    We learned from the past litellm security incident that there are tons of compromised/fake GitHub stars (which was already known, due to the star farms).

    I don't see vibe coders trying to push PRs in good faith to an old emulator. Could it be that the sheer volume of PRs, with the occasional bad PR in the middle, is being used to compromise repositories?

    • sunaookami 10 hours ago
      It's for their CVs.
      • motbus3 1 minute ago
        Does it really count? Honest question.

        I mean, if I interviewed a candidate who put such a thing on their CV, it wouldn't matter; but if it did, I would make 300% sure to ask them all the ins and outs. I think it would be plain stupid to expose that you didn't do the work yourself, instead of showing a smaller contribution that you completely understand.

  • Marsymars 16 hours ago
    I just took a look at the RPCS3 PR history, and it doesn't look that bad. (Certainly worse than "no slop", but not what I'd call a flood.)

    I went 10 pages back on GitHub, and the overwhelming majority of PRs look like good PRs that have been merged. There's really only a single handful of rejected slop-looking PRs. (And another handful from a single user who seemingly didn't know how to use Git/GitHub and was somehow turning local non-compiling commits into PRs.)

  • nlh 17 hours ago
    I’ve read so many stories like this that I’ve actually gotten scared of making PRs to open source projects.

    There’s one in particular where a feature I really wanted didn’t exist, so I forked and had Codex 5.5 assist with building the feature on my local version. It works perfectly. My life has been improved by being able to have this feature now.

    Normally I’d want to share it back with the community so others can benefit as well (presumably if I wanted this feature, others probably want it too). But… I am not pretending this is perfect, great, or even good code. I spent about an hour total on it - it works, I haven’t had any issues with it, but it’s probably slop by any hard-core engineering standard. And I neither want to get attacked for submitting slop nor have the time to properly engineer it by hand, so the net result is that it lives on my machine alone.

    Is this the right outcome? I feel guilty that I’m getting a better version of this software and others aren’t. I want to help make others’ lives easier too, but I don’t want to burden the project maintainers or get yelled at for submitting slop.

    What’s the future look like here?

    • magnio 17 hours ago
      First, you don't have to feel guilty about anything, since forking open source projects to make changes tailored to your use case is as old as open source itself. It is, in fact, the primary benefit of open source.

      Second, it is not a given that your change would be accepted regardless of who wrote it. Maybe the feature is too niche for its complexity, maybe it is better implemented with more generality or extensibility that does not make sense for your own use. In those cases, your change might have been rejected upstream, so having it only locally is a perfectly fine solution.

      Third, if you believe it is actually useful to broader users, open an issue requesting that feature, and say an LLM implemented it in an hour. Then the maintainers can prompt their own LLM to implement it with ease, or do whatever they want with their project.

      • MBCook 15 hours ago
        You could send a comment or open a discussion explaining what you did and asking if they would be interested in the feature or a PR.
    • hgoel 16 hours ago
      I did this recently too, didn't really care about the code quality of a small tool, just asked Claude to add in the features I wanted and it produced something that worked.

      I just pushed the changes to my fork of the project and left it at that. Leaves the feature around for me and anyone that stumbles across my fork, without wasting the original dev's time looking at code I didn't care to look at.

      Even before AI coding I think it was relatively common to fork some code and edit it to have something you want, then to either leave it as a personal version, or to never actually get a response on the PR.

    • jcranmer 16 hours ago
      As a maintainer, discovering that a PR is AI-generated just absolutely saps any motivation I have to actually review it. I've never been a great reviewer, and AI means I have to watch out for really different kinds of errors. There's also the potential for extra friction with interactions with the "author": some people try to pull a "I'm just a smol bean, not a programmer, how dare you ask me to do anything" in response to changes, while others just play a middleman role in between you and the AI they're using.

      If you're actually motivated to get a working fix upstream, and you're willing to do more than be a passive player, then it's not necessarily a problem to submit it (subject to responsible disclosure, of course)... but you also say that you don't have the time to properly engineer it, which makes me think you don't have the time to be sufficiently engaged in the upstreaming process anyways.

      • djtango 16 hours ago
        AI has inverted the effort - in the past, authoring a PR meant someone had to come in and read your ticket, documentation, code and tests. Subsequently, reviewing that PR would typically take less time than authoring it, and you would receive fewer PRs.

        Now it is the opposite: maintainers are flooded with low-effort PRs that take more effort to review than to author, but the author is unable to see why this is problematic for the maintainer and the project.

        • toast0 16 hours ago
          Excuse me, I've been doing drive-by manual slop PRs for at least a decade.

          I certainly didn't read a ticket; I ran into the problem myself. I probably didn't read documentation or write tests either. I just fixed my problem and tried to help others a bit.

          TL;DR: PR review has always been hard.

    • Panzer04 17 hours ago
      If you're upfront about the provenance and amount of effort that went into it, is there really a problem?

      I feel like the issue is people contributing code they don't understand and presenting it as if they do.

      • MBCook 15 hours ago
        Quite possibly never tested, or maybe only tested against their problem, not whether it broke anything else.
        • grebc 11 hours ago
          Not quite possibly. 99.99999% likely.
    • perching_aix 16 hours ago
      Just go for it. Do it enough, and over time you'll either find yourself resilient enough, or conclude that people do not actually deserve it (or rather that you do not deserve the struggle), and you'll be cured of this compulsion. The only way to go is forward.
    • rgoulter 16 hours ago
      > What’s the future look like here?

      For practically no effort, you were able to customise free software to your liking.

      That's a surprising and really cool dynamic.

      Is your "about an hour of ... using Codex 5.5" really something others can't do for themselves, that it's worth communicating the change?

    • pabs3 16 hours ago
      It isn't clear that AI-generated code is copyrightable, so that portion of the code wouldn't be able to have the license enforced against violators, and so the authors wouldn't accept such code. Of course, if it's permissively licensed, the authors probably don't care to enforce the license, so they might be fine including the code.

      To submit the code, at minimum, you should review and fix the code diff, run the appropriate static analysis tools against it, write the pull request description and commit messages yourself, read the contribution guidelines, make sure everything matches that, disclose that you used AI and for what, and the prompts used.

    • nxobject 14 hours ago
      It’s reasonable when you frame it like this: the consequences of one AI-assisted addition are small… but maintainers are responsible for the codebase’s long-term quality after years of additions. The bar's higher. (Similarly, my friends like it when I host an occasional dinner party, but things would really suck at a restaurant run by nothing but my clones.)
    • JTbane 13 hours ago
      There is nothing wrong with forking, and one man's "better" version is another's bloat. Also, making a fork rather than a PR avoids burdening maintainers.
    • grebc 11 hours ago
      It’s the right outcome. Yours isn’t the better version.
    • qwrurt 17 hours ago
      If you don't have time to properly engineer it, then you can't submit. Why would you feel guilty? Others can throw a coin in the laundromat, too, if they are so inclined.
    • habinero 17 hours ago
      The same as it does now.

      I'm glad it works for you, but please do not submit low-effort stuff like this, if you're not willing to do the rest of the work to make it maintainable.

      I get the desire to help -- that's fine -- but AI code is abundant and of low value. Don't sandbag them with more work and increase their maintenance burden, with stuff they could easily vibe code themselves.

    • bakugo 16 hours ago
      > I feel guilty that I’m getting a better version of this software and others aren’t

      Why? None of what you did is special. What stops anyone else from asking their AI to implement the same feature you did, if they need it?

      • Auracle 16 hours ago
        Because it still took them an hour in addition to testing it, so presumably it’s not ridiculously simple and people have a limited amount of time?
    • Barrin92 17 hours ago
      >Is this the right outcome?

      Yes, if you can't vouch for the quality of the code that is the correct outcome. The long term health and maintainability of an open source project takes precedence over adding another feature. This was the case before repos were flooded with AI slop as well. Virtually no project would have accepted a random code dump if the person submitting it does not understand it because that just means the burden falls on someone else which would very quickly get any software project into big trouble.

    • embedding-shape 17 hours ago
      [dead]
  • ixxie 11 hours ago
    Compare two popular FOSS harness projects:

    -----

    OpenCode

    4.9k issues · 1.7k PRs · 158k stars

    https://github.com/anomalyco/opencode

    -----

    Pi

    31 issues · 4 PRs · 47k stars

    https://github.com/badlogic/pi-mono

    Their secret? A very rigorous contribution policy. Essentially, issues and PRs are autoclosed, then reviewed daily by the team. If it's not slop, they whitelist either the issue/PR or the contributor (so their stuff isn't autoclosed next time).

    https://github.com/badlogic/pi-mono/blob/main/CONTRIBUTING.m...

    GitHub needs an issue / PR approval flow.
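
    In the meantime, something close to that gate can be wired up with GitHub Actions. A minimal sketch using actions/github-script - the workflow file name, whitelist, and comment text are placeholders, not pi-mono's actual setup:

      # .github/workflows/triage.yml (hypothetical)
      on:
        issues:
          types: [opened]
        pull_request_target:
          types: [opened]
      permissions:
        issues: write
        pull-requests: write
      jobs:
        gate:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/github-script@v7
              with:
                script: |
                  const allow = ['maintainer1', 'trusted-contributor']; // placeholder whitelist
                  if (allow.includes(context.payload.sender.login)) return;
                  const n = (context.payload.issue ?? context.payload.pull_request).number;
                  // leave a note, then close pending the daily human review
                  await github.rest.issues.createComment({ ...context.repo, issue_number: n,
                    body: 'Auto-closed pending review - see CONTRIBUTING.md.' });
                  await github.rest.issues.update({ ...context.repo, issue_number: n, state: 'closed' });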

  • wilg 14 hours ago
    Regardless of whatever other opinions you have on this, I don't think it's a fair characterization that the original tweet was "polite" or "nice".
  • phendrenad2 15 hours ago
    Perhaps a "wall of shame" showing off bad PRs would make people think twice.
    • habinero 11 hours ago
      It would not. The kind of person who does this always thinks they are the exception.
  • ls612 16 hours ago
    I’m hopeful that in a year or so the models will be good enough to help productively with emulator development, and that you will see a shift in these PRs similar to the one you saw with security reports this spring.
    • MBCook 15 hours ago
      Will they get there? They rely so much on existing content.

      But in such a niche area where the documentation or other solutions often flat out don’t exist how are they supposed to get better through training?

  • ares623 12 hours ago
    Why not just fork and forge an entirely new path, unfettered by outdated norms? Isn't that the AI way?
  • emsign 14 hours ago
    AI could be the end of FOSS if this doesn't stop. Why don't people get it? No AI means no AI.
  • villgax 14 hours ago
    Just paywall it: it pays for development and queue prioritization, and in return contributors get code merged into a popular project, plus visibility.
    • CWwdcdk7h 9 hours ago
      Isn't it simpler, in general, to just move development off GitHub, while leaving the existing repo in sync, so it can harmlessly drown in PRs from people who didn't bother to even read the project's contribution guidelines?
  • perching_aix 16 hours ago
    Aww, PRs no longer open/welcome? Whatever will the usual suspects parrot now?

    My personal schadenfreude aside, I wonder if this will follow a similar trajectory as security bug reports did recently. I'd be surprised, for a number of reasons, but the overall shape is looking awfully similar.