posted by Fnord666 on Monday February 09, @12:24PM

Vibe Coding Is Killing Open Source Software, Researchers Argue:

According to a new study from a team of researchers in Europe, vibe coding is killing open-source software (OSS), and it's happening faster than anyone predicted.

Thanks to vibe coding, a colloquialism for the practice of quickly writing code with the assistance of an LLM, anyone with a small amount of technical knowledge can churn out computer code and deploy software, even if they don't fully review or understand all of it. But there's a hidden cost. Vibe coding relies on vast amounts of open-source software, a trove of libraries, databases, and user knowledge that's been built up over decades.

Open-source projects rely on community support to survive. They're collaborative projects where the people who use them give back, either in time, money, or knowledge, to help maintain the projects. Humans have to come in and fix bugs and maintain libraries.

Vibe coders, according to these researchers, don't give back.

The study, Vibe Coding Kills Open Source, takes an economic view of the problem and asks: is vibe coding economically sustainable? Can OSS survive when so many of its users are takers rather than givers? According to the study, no.

"Our main result is that under traditional OSS business models, where maintainers primarily monetize direct user engagement...higher adoption of vibe coding reduces OSS provision and lowers welfare," the study said. "In the long-run equilibrium, mediated usage erodes the revenue base that sustains OSS, raises the quality threshold for sharing, and reduces the mass of shared packages...the decline can be rapid because the same magnification mechanism that amplifies positive shocks to software demand also amplifies negative shocks to monetizable engagement. In other words, feedback loops that once accelerated growth now accelerate contraction."

[...] According to Koren, vibe-coders simply don't give back to the OSS communities they're taking from. "The convenience of delegating your work to the AI agent is too strong. There are some superstar projects like Openclaw that generate a lot of community interest but I suspect the majority of vibe coders do not keep OSS developers in their minds," he said. "I am guilty of this myself. Initially I limited my vibe coding to languages I can read if not write, like TypeScript. But for my personal projects I also vibe code in Go, and I don't even know what its package manager is called, let alone be familiar with its libraries."

The study said that vibe coding is reducing the cost of software development, but that there are other costs people aren't considering. "The interaction with human users is collapsing faster than development costs are falling," Koren told 404 Media. "The key insight is that vibe coding is very easy to adopt. Even for a small increase in capability, a lot of people would switch. And recent coding models are very capable. AI companies have also begun targeting business users and other knowledge workers, which further eats into the potential 'deep-pocket' user base of OSS."

This won't end well. "Vibe coding is not sustainable without open source," Koren said. "You cannot just freeze the current state of OSS and live off of that. Projects need to be maintained, bugs fixed, security vulnerabilities patched. If OSS collapses, vibe coding will go down with it. I think we have to speak up and act now to stop that from happening."

He said that major AI firms like Anthropic and OpenAI can't continue to free ride on OSS or the whole system will collapse. "We propose a revenue sharing model based on actual usage data," he said. "The details would have to be worked out, but the technology is there to make such a business model feasible for OSS."

[...] "Popular libraries will keep finding sponsors," Koren said. "Smaller, niche projects are more likely to suffer. But many currently successful projects, like Linux, git, TeX, or grep, started out with one person trying to scratch their own itch. If the maintainers of small projects give up, who will produce the next Linux?"

arXiv link: https://arxiv.org/abs/2601.15494


Original Submission

 
This discussion was created by Fnord666 (652) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Touché) by c0lo on Monday February 09, @09:31PM (13 children)

    by c0lo (156) Subscriber Badge on Monday February 09, @09:31PM (#1433165) Journal

    How much value is an AI slopper really going to provide to an OSS project?

    How much effort is required to reject AI slop from hitting the main branch of an OSS project?
    (No, it's a serious consideration; don't reply with "just use vibe code reviews", that wouldn't even be funny.)

    --
    https://www.youtube.com/@ProfSteveKeen https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 3, Insightful) by Bentonite on Tuesday February 10, @12:20AM (2 children)

    by Bentonite (56146) on Tuesday February 10, @12:20AM (#1433181)

    It would take a lot of effort to reject copyright-infringing slop from the master branch of a free software project, because at first glance the commit will usually look plausible; only on closer inspection does it become noticeable that it is random code that looks right but is wrong.

    • (Score: 4, Touché) by c0lo on Tuesday February 10, @12:52AM (1 child)

      by c0lo (156) Subscriber Badge on Tuesday February 10, @12:52AM (#1433186) Journal

      Oh, the irony of being sued for infringing the copyright of bad code would be something special.

      • (Score: 2) by Bentonite on Tuesday February 10, @02:10AM

        by Bentonite (56146) on Tuesday February 10, @02:10AM (#1433198)

        Even if code is terrible, you can still get sued if you infringe copyright by distributing it in a way that does not follow the license (for example, the MIT (Expat) license requires including a copy of the license and retaining the copyright holders' names and copyright year(s)).

        Sometimes the copyright holder does nothing, since admitting that the terrible code came from you would be an embarrassment. For example, it is claimed that the developer of QDOS (the "Quick and Dirty Operating System") copied from some other OS; QDOS was later licensed to Microsoft, which renamed it MS-DOS. If the copying happened, the license document would have falsely stated that all the copyright was in order.

        What's most likely to happen is for a few LLM slop commits to slip through and cause bugs; only then will the main developers realize that the submitter is maliciously submitting proprietary software, and they will have to waste their time cleaning out the commits and barring the submitter. (The only thing you can get banned for in a real free software project is intentionally submitting proprietary software.)

  • (Score: 4, Interesting) by JoeMerchant on Tuesday February 10, @02:04AM (9 children)

    by JoeMerchant (3937) on Tuesday February 10, @02:04AM (#1433195)

    > "just use vibe code reviews", it's wouldn't be even funny)

    It's not funny, but it's part of the answer.

    In my recent experience, you can trust the content of a "vibe code review" even less than vibe code, but it is still a worthwhile exercise (I'm going to do one tomorrow morning on a colleague's PR). The vibe code review says some outright stupid stuff; strike that and move on. Then it says some things you actually want to know about, and if you look at what it's pointing at, those are usually correct. Would you have found all that on your own, in the 30 seconds it took to skim through the vibe code review output? That's useful. Do you rely on it 100%? Hell no, but it does produce helpful output that results in a higher-quality end product, if you know how to use it.

    On the vibe code end of things, how much testing, review, refinement, and refactoring needs to be done before it's ready to show to another human? Usually a lot. Are we saving time overall? It depends. In my wheelhouse, the stuff I've done daily for the last 20 years, hell no: I can do that quicker myself, maybe calling on AI for the occasional API call structure I don't know off the top of my head (the way reading a paper manual used to suffice), but overall, when I'm in charge it goes faster. Something weird (to me), like having Rust generate an .svg-based website with server-sent events keeping the display updated in real time? Uh, yeah, I _could_ do that by hand, but I can direct an AI agent to do (a simple) one about 10x faster than I can look up all that twisted syntax that I haven't spent any significant hands-on time with, ever.

    I call AI agents: power tools. Like chain saws. With great power comes great responsibility: if you gave each member of the Phoenix, AZ high school JV football squad a chainsaw and told them to clear 40 trees from around a cabin in Yosemite, without any instruction or oversight, you're going to have some problems with misuse of power tools. Give those same chainsaws to people who know what they're doing and the chainsaws will be very helpful.

    AI everything is new; nobody is an expert (and these days I haven't seen any "Wanted: AI programmer with 15 years experience" ads like we used to get for the old new tech of the day). Our company is encouraging us to step up and "share our AI expertise" - I can't imagine that anyone has any. What I and my colleagues have are a few months of AI experience, and that's worth sharing, but what worked last November (when I was last doing daily AI programming work) and what works now (I just dove back in yesterday) are somewhat different things. To call it a rapidly evolving field is to underplay the speed with which it is changing, and the changes of the past 6 months in applying LLMs to programming tasks have been consistently toward more power and fewer miscues. It's still far from perfect, but the progress is palpable.

    --
    🌻🌻🌻🌻 [google.com]
    • (Score: 4, Insightful) by c0lo on Tuesday February 10, @02:56AM (8 children)

      by c0lo (156) Subscriber Badge on Tuesday February 10, @02:56AM (#1433212) Journal

      > It's not funny, but it's part of the answer.

      What is the rest?

      > In my recent experience, you can trust the content of a "vibe code review" even less than vibe code, but... it is a worthwhile exercise

      There's a technique for getting 1-in-10,000 accuracy in a transcription using transcribers with a 1% error rate: give the same task (with the same input) to two different, independent transcribers and reconcile the results (the chance of both making a transcription error in the same place is 1% × 1% = 1/10,000).
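That double-entry scheme can be sketched in a few lines of Python. This is a toy model with an invented corruption scheme (shifting a character's code point), not real transcription data:

```python
import random

def transcribe(text, error_rate, rng):
    """Toy transcriber: corrupts each character with probability error_rate
    by shifting it to the next code point."""
    return "".join(
        chr(ord(c) + 1) if rng.random() < error_rate else c
        for c in text
    )

def double_entry(text, error_rate, rng):
    """Two independent passes over the same input; disagreements are flagged
    for human reconciliation, agreements are accepted as-is."""
    a = transcribe(text, error_rate, rng)
    b = transcribe(text, error_rate, rng)
    flagged = sum(1 for x, y in zip(a, b) if x != y)
    # An error slips through only when BOTH passes corrupt the same
    # position the same way: roughly error_rate**2 per character.
    undetected = sum(1 for x, y, t in zip(a, b, text) if x == y and x != t)
    return undetected, flagged

rng = random.Random(42)
text = "the quick brown fox " * 1000   # 20,000 characters
undetected, flagged = double_entry(text, 0.01, rng)
# A single pass would make on the order of 200 errors; after
# reconciliation, undetected errors are on the order of 2.
```

The flagged disagreements still cost human time to reconcile, but the silent error rate drops by the promised two orders of magnitude, provided the two passes really are independent.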

      I can't quantify how worthwhile the exercise is for a "vibe code review" of a "vibe coded source"; the two exercises are not independent.

      > then it says some things you actually want to know about - and if you look at what it's pointing at, those are usually correct.

      Code linting has saved me more times than I care to admit, but it was a deterministic linter (one of the most insidious cases is "unused variable", when you meant to use the unused object but ended up using a different one). Are there cases in which an AI reviewer offers more than a deterministic linter?
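The "unused variable" trap described above can be sketched with a minimal deterministic check built on Python's `ast` module (the function and variable names here are made up for illustration):

```python
import ast

# Hypothetical buggy snippet: the value we meant to use is computed
# and then never read; the original is returned instead.
SOURCE = """
def apply_discount(order):
    discounted_total = order.total * 0.9
    return order.total   # bug: should be discounted_total
"""

def unused_locals(source):
    """Deterministic check: report names that are assigned but never read."""
    assigned, read = set(), set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):
            # Store context = assignment target; anything else is a read.
            (assigned if isinstance(node.ctx, ast.Store) else read).add(node.id)
    return sorted(assigned - read)

print(unused_locals(SOURCE))  # ['discounted_total']
```

The check is trivially simple, yet it flags exactly the bug where you built the right object and then used the wrong one, with no false nondeterminism.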

      > I call AI agents: power tools. Like chain saws.

      While the wood they cut may not be, chain saws are deterministic. Cutting wood of variable quality with a nondeterministic chainsaw is not an experience I'd dare to acquire, given the risk to life and limb.

      "A computer lets you make more mistakes faster than any invention in human history - with the possible exceptions of handguns and tequila" (and chainsaws before the computers)

      > Our company is encouraging us to step up... to call it a rapidly evolving field is to underplay the speed with which it is changing

      There have to be early adopters, but I've grown old enough to no longer enjoy being among them.

      All things considered, I'm not against AI in programming; I just don't have an incentive or motivation to try it, and my plate is full with things where vibe coding can't help (architecture/integration).

      • (Score: 1) by khallow on Tuesday February 10, @01:03PM (3 children)

        by khallow (3766) Subscriber Badge on Tuesday February 10, @01:03PM (#1433235) Journal

        > There's a technique for getting 1-in-10,000 accuracy in a transcription using transcribers with a 1% error rate: give the same task (with the same input) to two different, independent transcribers and reconcile the results (the chance of both making a transcription error in the same place is 1% × 1% = 1/10,000).

        That approach relies on the transcription processes being independent. That's a poor assumption to make for AI. There will be a lot of overlap in algorithms and data sets. They might even be using each others' output as training material.
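The damage a shared failure mode does to that formula can be illustrated with a toy Python simulation (the rates and the shared-blind-spot model are invented for illustration, not measured properties of any real AI systems):

```python
import random

def agree_on_wrong(n_trials, p_shared, p_indep, rng):
    """Fraction of trials where two reviewers both err. A shared failure
    mode (p_shared) models overlapping algorithms and training data;
    p_indep models each reviewer's independent slips."""
    both_wrong = 0
    for _ in range(n_trials):
        shared = rng.random() < p_shared        # common blind spot hits both
        a_wrong = shared or rng.random() < p_indep
        b_wrong = shared or rng.random() < p_indep
        if a_wrong and b_wrong:                  # agreement on a wrong answer
            both_wrong += 1
    return both_wrong / n_trials

rng = random.Random(0)
# Both scenarios have roughly the same ~1% marginal error rate per reviewer:
independent = agree_on_wrong(100_000, 0.0,   0.01,  rng)   # about 0.01**2
correlated  = agree_on_wrong(100_000, 0.009, 0.001, rng)   # about 0.009
```

With fully independent reviewers the agreement-on-wrong rate is near p², but once most of the error budget is shared, the undetected rate stays near the shared rate: reconciliation buys almost nothing.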

        • (Score: 2) by c0lo on Tuesday February 10, @01:58PM (2 children)

          by c0lo (156) Subscriber Badge on Tuesday February 10, @01:58PM (#1433245) Journal

          Only one paragraph down from the quoted one, it reads:

          > I can't quantify how worthwhile the exercise is for a "vibe code review" of a "vibe coded source"; the two exercises are not independent.

          • (Score: 1) by khallow on Tuesday February 10, @07:17PM (1 child)

            by khallow (3766) Subscriber Badge on Tuesday February 10, @07:17PM (#1433266) Journal
            This is a different dependency. You were proposing using multiple AIs to debug the same code and assuming via the formula that they were independent.
            • (Score: 2) by c0lo on Tuesday February 10, @09:31PM

              by c0lo (156) Subscriber Badge on Tuesday February 10, @09:31PM (#1433280) Journal

              I'm not proposing anything; I set up a context to contrast with "vibe review of vibed code", to support why I can't trust the resulting code as being better.

      • (Score: 3, Interesting) by JoeMerchant on Tuesday February 10, @03:43PM (3 children)

        by JoeMerchant (3937) on Tuesday February 10, @03:43PM (#1433253)

        >>It's not funny, but it's part of the answer.

        >What is the rest?

        Business as usual - but that's going to groan under the load of excessive detail that AI seems to put in everything it creates, at least by default. You can always ask it for a shorter summary, but at this stage I don't have confidence that it will not leave out important details while summarizing... Still, these aren't really new problems at all - people also give too much detail and leave important items out of summaries...

        >I can't quantify how worthwhile the exercise is for the "vibe code review" on a "vibe coded source", the two exercises are not independent.

        Neither can I. I can say that the exact same model will find faults when it reviews its own output, at least for the first several iterations. Eventually it settles down and "gets happy" with its own work, at least for things I have taken that far. I haven't set two independent models (like Opus 4.6 vs GPT 5.3 codex) against each other reviewing and revising each other's work; it would be interesting to see where that process lands after N iterations. Does it approach a stable result, or does it oscillate? I am fairly certain that would vary from trial to trial.

        >chain saws are deterministic. Cutting wood of variable quality with an nondeterministic chainsaw is not an an experience I'd dare to acquire, given the risk to life and limb.

        It's not a perfect analogy. Part of why I "got into" computers in the first place (in the 1980s) was because you could experiment more or less endlessly and the worst "damage" you would (typically) do would be a full system crash / reboot, then try again. No missing fingers or limbs, no incurable diseases... of course, when you let your software out into the wild it takes on additional risk (thus the MIT license standard disclaimer...) but that's what I and my colleagues are paid for: to take the company's software "to the next level" where it's not a teenager's toy anymore, but rather something you can rely on. Is AI ready to sign off on software as "ready to use"? No, HELL NO, and anybody who tries an excuse along the lines of "the AI said it was good enough" should lose whatever license and job they had that people trusted them to make the "ready to use" assurance.

        >a deterministic linter

        An interesting (to me) aspect of all this: AI is pretty good at writing deterministic parsers, and probably linters and similar stuff. If you let the non-deterministic LLM review some code, then score its responses (helpful, pedantic, irrelevant, wrong/harmful), eventually the helpful-range results should be able to be defined and added as modules to deterministic linters. This morning's review by Opus contained 15 observations: two were redundant-ish and potentially valid, another flagging a potential race condition was also good, one pointed out a potential UI/UX enhancement opportunity, and the other 11 were of limited to zero value. About 6 of the limited-value items were stale comments / dead code, which admittedly should be corrected per best practices, and the rest were basically due to Cursor's inexperience with our larger project build system: though it intuits pretty well how it works in principle, it's missing some of the actual usage practice (like default values that are overwritten before actual use), which resulted in those zero-value observations.
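The loop sketched above, grade LLM review observations and promote the recurring helpful patterns into deterministic rules, could look something like this in Python (the pattern names, grades, and the distilled regex rule are all hypothetical):

```python
import re

# Hypothetical graded log of LLM review observations: a human grades each
# finding, and patterns repeatedly graded 'helpful' graduate into rules.
scored = [
    {"pattern": "commented_out_code", "grade": "helpful"},
    {"pattern": "style_nit",          "grade": "pedantic"},
    {"pattern": "commented_out_code", "grade": "helpful"},
]

def promoted(observations, threshold=2):
    """Patterns graded 'helpful' at least `threshold` times."""
    counts = {}
    for obs in observations:
        if obs["grade"] == "helpful":
            counts[obs["pattern"]] = counts.get(obs["pattern"], 0) + 1
    return {p for p, n in counts.items() if n >= threshold}

# One distilled deterministic rule: flag commented-out code lines.
COMMENTED_CODE = re.compile(r"^\s*#\s*(if |for |while |return |\w+\s*=)")

def lint(lines):
    """Return 1-based line numbers that match the distilled rule."""
    return [i + 1 for i, ln in enumerate(lines) if COMMENTED_CODE.match(ln)]

rules = promoted(scored)
sample = ["x = 1", "# x = compute(x)  # old version", "print(x)"]
print(rules, lint(sample))  # {'commented_out_code'} [2]
```

The nondeterministic reviewer does the expensive discovery once; the cheap deterministic rule then catches that pattern forever after, with none of the LLM's variance.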

        >There have to be early adopters, but I grew old enough to no longer enjoy being among them.

        At one time, I had the title Vice President, in a small company, but nonetheless there's an expectation of leadership / management / mentoring that goes with the title. Through the years I backed away from the management roles, not because I didn't like having minions do my bidding, but because people management is inherently messy and absolutely unforgiving of mistakes, in contrast with software "experiments." To an extent, AI agents behave like minions, and they absolutely do suck up to you, too much at times. It's quite a bit of fun when they "get it right", in many ways more satisfying than building the thing myself; being able to delegate and still have it built successfully is fun, for me.

        • (Score: 2) by c0lo on Tuesday February 10, @10:03PM (2 children)

          by c0lo (156) Subscriber Badge on Tuesday February 10, @10:03PM (#1433282) Journal

          Thank you. The discussion slightly adjusted my position from "as of today, a waste of time" to "I may give it a try, no high expectations of practicality".
          The "AI is pretty good at writing deterministic parsers" and "being able to delegate and still have it built successfully" were the points that did it.

          • (Score: 2) by JoeMerchant on Wednesday February 11, @12:13AM (1 child)

            by JoeMerchant (3937) on Wednesday February 11, @12:13AM (#1433287)

            >"I may give it a try, no high expectations of practicality"

            That's where I started last April or so, and I'm not convinced I have found dependable practicality yet. I have found some things it is good at (like simple parsers) - so I guess that's practical when the needs arise. But... upper management really really wants to hear how we're working with it, so I take the opportunity to use it when I can. The 2 month hiatus from early Dec to early Feb was only 3 weeks of vacation, the other 5 I was too busy catching up from 3 weeks off to really play much with the AI tools.

            • (Score: 2) by c0lo on Wednesday February 11, @10:50AM

              by c0lo (156) Subscriber Badge on Wednesday February 11, @10:50AM (#1433320) Journal

              If you have 20 mins to spare, how accurately is this guy describing [youtube.com] the current state of vibe coding?
