If AI is so good at coding where are the open source contributions?

83 points | by thm | 7 days ago

11 comments

avbanks 7 days ago
This is exactly what I've been trying to point out: while LLMs and coding agents are certainly helpful, they're extremely over-hyped. We don't see a significant bump in open source contributions, optimizations, or innovation in general.
jsheard 7 days ago
I keep seeing users of the AI-centric VSCode forks impotently complaining about Microsoft withholding their first-party extensions from those forks, rather than using their newfound AI superpowers to whip up open source replacements. That should be easy, right? Surely there's no need to be spoonfed by Microsoft when we're in the golden age of agentic coding?
rerdavies 7 days ago
How could somebody who has actually used coding assistants be asking for evidence of open source projects that had been 100% written by an AI? That's not what the tools are used for.

Here: https://rerdavies.github.io/ About (at least) 30% of the NEW code in my open source repos is currently written by AI (older code is not). I don't think this is extraordinary. This is the new normal. This is what AI-generated source code looks like. I don't imagine anyone could actually point to a specific AI-generated chunk of code in this codebase and say with certainty that it had been written by an AI.

I can't help feeling that people who make these sorts of complaints have not used coding assistants, or perhaps have not used them recently. Are non-professional programmers writing about AI coding assistants they have never used really any better than non-programmers submitting AI-generated pull requests? I think not.
kazinator 7 days ago
Open source dev here. I *cannot* merge something generated by AI, because it is plagiarized, and therefore incompatible with the project license (pretty much any license: BSD, MIT, Apache, GPLn, ...).

A significant contribution to a project requires that the contributor put a copyright notice and license on it. The license has to be compatible with the project being contributed to. (Note: I'm not talking about copyright assignment, but, yes, that's a thing also, e.g. with GNU projects.)

You can't put your copyright notice and license on something that you generated with AI.

Small changes to existing code are in a gray area. They are based more on the existing code than on the unauthorized training data, but the latter is hanging over them like a spectre.

I won't merge anything knowing it was done with AI, even a one-liner bug fix. If AI were used to describe the issue and propose a fix, and then someone coded it based on that, I think that would be okay; that's something analogous to a clean-room approach.
jwitthuhn 7 days ago
There are some, but not a lot; current AI is a lot better at smaller, more well-defined problems than at stuff like "add this feature" or "fix this bug".

A good example is this PR for llama.cpp, which the author says was written 99% by AI: https://github.com/ggerganov/llama.cpp/pull/11453

When the problem statement can be narrowed down to "Here is some code that already produces a correct result, optimize it as much as possible", then the AI doesn't need to understand any context specific to your application at all, and the chance it will produce something usable is much higher.

Maybe it will eventually be better at big-picture stuff, but for now I don't see it.
MoonGhost 7 days ago
Probably because AI-generated code cannot be copyrighted. And the second reason: it's not as good as the AI sellers tell you.
vborovikov 7 days ago
I recently discovered an open source project which I believe was completely vibe coded: 90k lines of code, 135 commits starting from an empty project.

I cloned the repo and tried to run the examples. Nothing worked. The problem was that many data structures and variables were initialized empty/zero, but the code wasn't designed to work with empty data at all.
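A minimal sketch of that failure pattern, with invented names and shapes (not taken from the project in question): state is declared with empty defaults, and the code that reads it has no path for the empty case.

```typescript
// Hypothetical illustration of the failure mode described above: the data is
// initialized empty, but the consuming code assumes it has been populated.
interface PriceFeed {
  symbols: string[];            // initialized to [] and never filled
  latest: Map<string, number>;  // initialized to an empty Map
}

const feed: PriceFeed = { symbols: [], latest: new Map() };

function averagePrice(f: PriceFeed): number {
  // No guard for the empty case: with zero symbols the sum is 0 and the
  // division by f.symbols.length yields NaN instead of a usable number.
  const total = f.symbols.reduce((sum, s) => sum + (f.latest.get(s) ?? 0), 0);
  return total / f.symbols.length;
}

console.log(averagePrice(feed)); // NaN -- the examples "run" but nothing works
```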
flembat 5 days ago
I let a popular agentic AI refactor some code for me today. The result looked really nice, but even though it was only meant to be splitting up a file full of functions that already compiled and worked, it rewrote them all and broke them comprehensively. It then tried to research how to fix them, even though working code was right there in the file, and just kept making the code worse and worse. It also hit some limit, which meant it left the code partly refactored, ran out of its quota, left everything broken, and then, when restarted, suggested refactoring some more classes.
ramity 7 days ago
I think it's fair to say AI-generated code isn't visibly making a meaningful impact in open source. Absence of evidence is not evidence of absence, but that shouldn't be interpreted as a defense of orgs or of the fanciful predictions made by tech CEOs. In its current forms, AI feels comparable to piracy, where the real impact is fuzzy and companies claim a number is higher or lower depending on the weather.

Yes, open source projects would be the main place where these claims could be publicly verifiable, but established open source projects aren't just code--they're usually complex, organic, and ever-shifting organizations of people. I'd argue that interacting with a large group of people who have cultivated their own working process and internal communication patterns is closer to AGI than to a coding assistant, so maybe the goalposts we're using for AI PRs are too grand. I think it's expected to hear claims from within walled gardens, where processes and teams can be upended at will, that AI is making an unverifiable splash, because those are precisely the environments where AI could be the most disruptive.

Additionally, I think we're willfully looking in the wrong places when trying to measure AI impact by looking for AI PRs. Programmers don't flag PRs when they use IntelliJ or confer with X flavor of LLM(tm), and expecting mature open source projects to have AI PRs seems as dubious as expecting them to use blockchain or any other technology that could be construed as disruptive. It just may not be compatible or reasonable with their current process. Calculated change is often incremental and boring, where real progress is only felt by looking away.

I made a really simple project that automatically forwards browser console logs to a central server, programmatically pulls the file(s) from the trace, and has an LLM consume a templated prompt + error + file. It'd make a PR with what it thought was the correct fix. Sometimes it was helpful. The problem was it needed to do more than code, because the utility of a one-shot prompt to PR is low.
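A sketch of that kind of pipeline, with all names, endpoints, and the prompt template invented for illustration (the actual project may look nothing like this): the browser mirrors errors to a collector, and the server turns error + source file into the one-shot prompt whose output becomes a PR.

```typescript
// Browser side: mirror console.error to a central collector. The endpoint
// and payload shape are hypothetical.
const COLLECTOR_URL = "https://logs.example.com/ingest";

const originalError = console.error.bind(console);
console.error = (...args: unknown[]) => {
  originalError(...args);
  const payload = {
    message: args.map(String).join(" "),
    stack: new Error().stack ?? "",
    page: window.location.href,
    timestamp: Date.now(),
  };
  // Fire-and-forget: log forwarding must never break the page itself.
  void fetch(COLLECTOR_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
    keepalive: true,
  }).catch(() => {});
};

// Server side: given the captured error and the source file resolved from
// its stack trace, assemble the templated prompt handed to the LLM. The
// template wording is illustrative only.
function buildFixPrompt(
  error: { message: string; stack: string },
  filePath: string,
  fileContents: string,
): string {
  return [
    "A runtime error was captured in the browser.",
    `Error: ${error.message}`,
    `Stack trace: ${error.stack}`,
    `Offending file: ${filePath}`,
    "---- file contents ----",
    fileContents,
    "---- end of file ----",
    "Propose a minimal fix to this file and explain the change.",
  ].join("\n");
}
```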
SpecialistK 7 days ago
It's pretty obvious that AI code generation is not up to snuff for mission-critical projects (yet? ever?) - it can prove handy for small hobbyist projects with low stakes, and it may provide time savings and alternative perspectives for those who subsequently know how to sniff out BS.

But even airliner autopilot systems, which are much more mature and have a proven track record, are not trusted as a replacement for two trained experts having final control.

The overall trend I've seen with AI creations (not just in programming) is that the tech is cool and improving, but people have trouble recognizing where it's suitable and where it isn't. I've found chatbots to be fantastic for recipes and cooking advice, or banal conversations I couldn't have with real people (especially after a drink...), and pretty shoddy for real programming projects or creative endeavors. It isn't a panacea, and we'd benefit a lot from people recognizing that more.
lalith_c 7 days ago
Maybe open source projects are prohibiting code generated by AI?