
AI is stifling new tech adoption?

508 points by kiyanwang · 3 months ago | 100 comments

crazygringo · 3 months ago

No, AI isn't.

Any new tech, or version upgrade, or whatever, takes time for *people* to become familiar with it. You might as well say "Stack Overflow is stifling new tech adoption" because brand-new stuff doesn't have many Q's and A's yet. But that would be a silly thing to say.

I'm not going to adopt a brand-new database *regardless* of LLM training data cutoff, just because enough *people* haven't had enough experience with it.

And LLMs have a commercial incentive to retrain every so often anyway. It's not like we're going to confront a situation where an LLM doesn't know anything about tech that came out 5 or 10 years ago.

Early adopters will be early adopters. And early adopters aren't the kind of people relying on an LLM to tell them what to try out.
moyix · 3 months ago

One thing that is interesting is that this was anticipated by the OpenAI Codex paper (which led to GitHub Copilot) all the way back in 2021:

> Users might be more inclined to accept the Codex answer under the assumption that the package it suggests is the one with which Codex will be more helpful. As a result, certain players might become more entrenched in the package market and Codex might not be aware of new packages developed after the training data was originally gathered. Further, for already existing packages, the model may make suggestions for deprecated methods. This could increase open-source developers' incentive to maintain backward compatibility, which could pose challenges given that open-source projects are often under-resourced (Eghbal, 2020; Trinkenreich et al., 2021).

https://arxiv.org/pdf/2107.03374 (Appendix H.4)
PaulRobinson · 3 months ago

I think if you specify a technology in your prompt, any LLM should use that technology in its response. If you don't specify a technology, and that is an important consideration in the answer, it should clarify and ask about technology choices, and if you don't know, it can make a recommendation.

LLMs should not have hard-wired preferences through providers' prompt structure.

And while LLMs are stochastic parrots, and are likely to infer React if a lot of the training corpus mentions React, work should be done to actively prevent biases like this. If we can't get this right with JS frameworks, how are we going to solve it for more nuanced structural biases around ethnicity, gender, religion or political perspective?

What I'm most concerned about here is that Anthropic is taking investment from tech firms who vendor dev tooling - it would not take much for them to "prefer" one of those proprietary toolchains. We might not have much of a problem with React today, but what if your choice of LLM started to determine whether you could or couldn't get recommendations on AWS vs Azure vs GCP vs bare metal/roll your own? Or if it suggested only commercial tools instead of F/LOSS?

And to take that to its logical conclusion, if that's happening, how do I know that the history assignment a kid is asking for help with isn't sneaking in an extreme viewpoint - and I don't care if it's extreme left or right, just warped by a political philosophy to be disconnected from truth - that the kid just accepts as truth?
__MatrixMan__ · 3 months ago

Can confirm, I recently gave up on learning anything new re: data visualization and have just been using matplotlib instead. Training data for it has been piling up since 2008. The AIs are so good at it that you hardly ever have to look at the code; just ask for changes to the graph and iterate.

Honestly it's been kind of fun, but I do feel like the door is closing on certain categories of new thing. Local maxima are getting stickier, because even a marginal competence is enough to keep you there - since the AI will amplify that competence in well-trained domains by so much.

Emacs Lisp is another one. I'd kind of like to build a map of these.
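The "iterate on the graph, not the code" workflow described above can be sketched roughly like this - a minimal matplotlib example with made-up data, where each prompt to the model would translate into small edits to labeling or styling calls like these:

```python
# Minimal sketch of an AI-iterated matplotlib chart. The data and
# labels are invented for illustration; the Agg backend keeps it headless.
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

years = [2020, 2021, 2022, 2023, 2024]
adoption = [5, 11, 24, 40, 52]

fig, ax = plt.subplots()
ax.plot(years, adoption, marker="o")
ax.set_xlabel("Year")
ax.set_ylabel("Projects adopting (%)")
ax.set_title("Hypothetical framework adoption")
fig.savefig("adoption.png")  # each "make the labels bigger" request re-renders this
```

Because matplotlib's API has been stable and heavily documented since well before any training cutoff, a model can reliably map "rotate the x labels" onto the right call without the user reading the code.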
matsemann · 3 months ago

I actually asked this a while back, but got little response: https://news.ycombinator.com/item?id=40263033

> Ask HN: Will LLMs hurt adoption of new frameworks and technology?

> If I ask some LLM/GPT a React question I get good responses. If I ask it about a framework released after the training data was obtained, it will either not know or hallucinate. Or if it's a lesser-known framework the quality will be worse than for a known framework. Same with other things like hardware manuals not being trained on yet, etc.

> As more and more devs rely on AI tools in their workflows, will emerging tech have a bigger hurdle than before to be adopted? Will we regress to the mean?
lasagnagram · 3 months ago

No, new tech is just 100% extractive, wealth-generating garbage, and people are sick and tired of it. Come up with something new that isn't designed to vacuum up your data and your paycheck, and then maybe people will be more enthusiastic about it.
armchairhacker · 3 months ago

AI may be exacerbating this issue, but it's always existed.

New tech has an inherent disadvantage vs legacy tech, because there's more built-up knowledge. If you choose React, you have better online resources (official docs, tutorials, answers to common pitfalls), more trust (it won't ship bugs or be abandoned), great third-party helper libraries, built-in IDE integration, and a large pool of employees with experience. If you choose some niche frontend framework, you have none of those.

Also, popular frameworks usually have better code, because they have years of bug-fixes from being tested on many production servers, and the API has been tailored from real-world experience.

In fact, I think the impact of AI generating better outputs for React is *far less* than that of the above. AI still works on novel programming languages and libraries, just at worse quality, whereas IDE integrations, helper libraries, online resources, etc. are useless (unless the novel language/library bridges to the popular one). And many people today still write code with zero AI, but nobody writes code without the internet.
hiAndrewQuinn · 3 months ago

> Consider a developer working with a cutting-edge JavaScript framework released just months ago. When they turn to AI coding assistants for help, they find these tools unable to provide meaningful guidance because their training data predates the framework's release. [... This] incentivises them to use something [older].

That sounds great to me, actually. A world where e.g. Django and React are considered as obvious choices for backend and frontend as git is for version control sounds like a world where high quality web apps become much cheaper to build.
spiderfarmer · 3 months ago

> With Claude 3.5 Sonnet, which is generally my AI offering of choice given its superior coding ability, my "What personal preferences should Claude consider in responses?" profile setting includes the line "When writing code, use vanilla HTML/CSS/JS unless otherwise noted by me". Despite this, Claude will frequently opt to generate new code with React, and in some occurrences even rewrite my existing code into React against my intent and without my consultation.

I noticed this too. Anyone found out how to make Claude work better?
spenvo · 3 months ago

Like several other commenters in this thread, I also wrote[0] something recently on a related topic: Google's AI Overviews and ChatGPT harm the discovery of long-tail information - from a product builder's perspective. Basically, users are having a tougher time finding accurate info about your product (*even if the correct answer to their query is in Google's own search results*). And I also found the basic tier of ChatGPT hallucinated my app's purpose in a way that was borderline slanderous. AI can make it tougher (at scale) for creators trying to break through.

[0] - https://keydiscussions.com/2025/02/05/when-google-ai-overviews-and-chatgpt-get-it-very-wrong-about-your-product/
catapulted · 3 months ago

There is a counter-example for this: MCP, a standard pushed by Anthropic, provides a long txt/MD file optimized for Claude to be able to understand the protocol, which is very useful for bootstrapping new plugins/servers that can be used as tools for LLMs. I found that fascinating and it works really well; I was able to one-shot improve my Cline extension (a coding agent similar to cursor.sh) to work with existing APIs/data.

It's so easy to bootstrap that even though the standard is a couple of months old, it already has a few hundred (albeit probably low-quality) implementations to adapt to different services.

- txt/markdown for LLMs: https://modelcontextprotocol.io/llms-full.txt

- server implementations: https://github.com/modelcontextprotocol/servers#-community-servers
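For context on why the protocol is easy for an LLM to bootstrap against: MCP is JSON-RPC 2.0 under the hood, so invoking a server's tool is a single small message. A `tools/call` request looks roughly like this (the tool name and arguments here are invented for illustration; the method name and envelope follow the spec):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Berlin" }
  }
}
```

A client first calls `tools/list` to discover what the server exposes, then issues `tools/call` messages like the above - which is exactly the kind of small, regular surface an LLM can learn from one long text file.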
VMG · 3 months ago
Guess I figured out my niche as a SWE: have a later knowledge cutoff date than LLMs
jwblackwell · 3 months ago

Larger context windows are helping solve this, though.

I use Alpine.js, which is not as well known as React etc., but I just added a bunch of examples and instructions to the new Cursor project rules, and it's now close to perfect.

Gemini models have up to 2M-token context windows, meaning you can probably fit your whole codebase and a ton of examples in a single request.

Furthermore, the agentic way Cursor now behaves, automatically building up context before taking action, seems to be another way around this problem.
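The project-rules approach described above amounts to a short plain-text file the editor prepends to every request. A sketch of what such a rules file might contain (the file name, paths, and wording are illustrative, not taken from any real project):

```
# .cursorrules (illustrative)
- This project uses Alpine.js, not React. Never generate JSX or React components.
- Prefer Alpine directives (x-data, x-on, x-show, x-for) over custom JavaScript.
- Follow the component examples in docs/alpine-examples.md when generating markup.
- Use vanilla CSS; do not introduce Tailwind or other frameworks.
```

The point is that a few dozen lines of explicit instructions plus concrete examples can largely override the model's training-data bias toward the most popular framework.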
lackoftactics · 3 months ago

> OpenAI's latest models have cutoffs of late 2023.

The first paragraph is factually incorrect; the cutoff is June 2024 for 4o.

Awww, no more new JavaScript frameworks and waiting only for established technologies to cut through the noise. I don't see that as a bad thing. Technologies need to mature, and maintaining API backward compatibility is another advantage.
tobyhinloopen · 3 months ago

I noticed this as I experimented with alternatives to React, and all the ones I tried were terrible on OpenAI/ChatGPT. Either it doesn't know them, or it makes weird mistakes, or it uses very outdated (no longer working) versions of the code.

It is also annoying that most modern JS things have 4 versions of the same thing: with TS, with TS + decorators, with plain JS, with JSX, etc., so code generation picks one that isn't compatible with the "mode" you use.
physicsguy · 3 months ago

If AI stifles the relentless churn in frontend frameworks then perhaps it's a good thing.
feoren · 3 months ago

The answer to this seems obvious: continuous training of live models. No more "cutoff dates": have a process to continually ingest new information and update weights in existing models, to push out a new version every week.

Note that I said "obvious", not "easy", because it certainly isn't. In fact it's basically an unsolved problem, and probably a fiendishly difficult one. It may involve more consensus-based approaches like mixture of experts where you cycle out older experts, things like that -- there are dozens of large problems to tackle with it. But if you want to solve this, that's where you should be looking.
benve · 3 months ago

I think this is true, because I have said to myself: "it is useless for me to create a library or abstraction for the developers of my project; it is much better to write everything verbosely using the most popular libraries on the web". Until yesterday, having an abstraction (or a better library/framework) could be very convenient for saving time in writing a lot of code. Today, if the code is mostly generated, there is no need to create an abstraction. AI understands 1000 lines of code in Python pandas much better than 10 lines of code using my library (which rationalises the use of pandas).

The result will be not only a disincentive to use new technologies, but a disincentive to build products with an efficient architecture in terms of lines of code, and in particular a disincentive to abstraction.

Maybe some products will become a hell of millions of lines of code that no one knows how to evolve and manage.
milesrout · 3 months ago

Why would AI stifle tech adoption when ~nobody uses it? I think HN is in a bit of a bubble here. People on here seem to often think that everyone is using AI at work, it is really common and widely appreciated, etc.

It really, really isn't. Most people in the software industry do not use it. Its use in other industries and in the professions is even lower. AI coding tools are bad enough at widely used things like Python and JS. They are DOGSHIT at generating C or C++. They are basically terrible at doing anything other than regurgitating things from Medium blogspam tutorials.

The result is not people moving to only using technology that AI is "good" at (relatively, given it is terrible at coding anything at all). It is that the overwhelming majority don't use it at all. The thing is, nobody really talks about this because it isn't interesting _not_ to use something. You can't write many high-engagement blog posts to content-market your company by saying you still just use vim and ctags and documentation to write code, just like you did 10 years ago. That isn't noteworthy and nobody will read it or upvote it. HN is always biased by this towards the new, the noteworthy, changes to practices, etc. Just like browsing HN would lead you to believe people are rewriting their websites in new JS frameworks every 6 months. No, but posts about doing that obviously generate more engagement than 6-monthly "Update: Our website is still written in Ruby on Rails" posts would.
jimnotgym · 3 months ago
Is this such a bad result? Do we need office CRUD apps to use bleeding edge technologies?
chrisco255 · 3 months ago
This makes me fear less for web development jobs being lost to AI, to be honest. Look, we can create new frameworks faster than they can train new models. If we all agree to churn as much as possible the AIs will never be able to keep up.
mxwsn · 3 months ago

This ought to be called the qwerty effect, for how the qwerty keyboard layout can't be usurped at this point. It was at the right place at the right time, even though arguably its main design choices are no longer relevant, and there are arguably better layouts like Dvorak.

Python and React may similarly be enshrined for the future, for being at the right place at the right time.

English as a language might be another example.
killjoywashere · 3 months ago

Pathologists as a specialty have been grousing about this for several years, at least since 2021 when the College of American Pathologists established the AI Committee. As a trivial example: any trained model deployed will necessarily be behind any new classification of tumors. This makes it harder to push the science and clinical diagnosis of cancer forward.

The entire music community has been complaining about how old music gets more recommendations on streaming platforms, necessarily making it harder for new music to break out.

It's absolutely fascinating watching software developers come to grips with what they have wrought.
dataviz1000 · 3 months ago

I'm on the fence with this. I've been using Copilot with VS Code constantly and it has greatly increased my productivity. Most important, it helps me maintain momentum without getting stuck. Ten years ago I would face a problem with no solution, write a detailed question on Stack Exchange, and most likely solve it in a day or two with a lot of tinkering. Today I ask Claude. If it doesn't give me a good answer, I can get the information I need to solve the problem.

I've been thinking a lot about T.S. Eliot lately. He wrote an essay, "Tradition and the Individual Talent," which I think is pertinent to this issue. [0] (I should reread it.)

[0] https://www.poetryfoundation.org/articles/69400/tradition-and-the-individual-talent
CharlieDigital · 3 months ago

As the saying goes:

    while (React.isPopular) {
      React.isPopular = true
    }

It's actually quite sad, because there are objectively better options both for performance and memory, including Preact, Svelte, Vue, and of course vanilla.
anarticle · 3 months ago

Sadly, as a person who used to write AVX in C for real-time imaging systems: don't care, shipped.

I love dingling around with Cursor/Claude/Qwen to get a 300-line prototype going in about 3-5 minutes with a framework I don't know. It's an amazing time to be small; I would hate to be working at a megacorp where you have to wait two months to get approval to use only GitHub Copilot (terrible), in a time of so many interesting tools and more powerful models every month.

For new people: you still have to put the work in and learn if you want to transcend. That's always been true in this industry, and I say that as a 20-year vet - C, Perl, Java, Rails, Python, R, all the bash bits; every part matters, just keep at it.

I feel like a lot of this is the JS frontend community running headlong into their first sea change in the industry.
d_watt · 3 months ago

It's always been a thing with modes of encapsulating knowledge. The printing press caused the freezing of language, sometimes in a weird place.*

Where great documentation was make-or-break for an open-source project over the last 10 years, I think creating new projects with AI in mind will be required in the future. Maybe that means creating a large number of examples, maybe it means providing fine-tunes, maybe it means publishing an MCP server.

Maybe sad, because it's another barrier to overcome, but the fact that AI coding is so powerful so quickly probably means it's worth the tradeoff, at least for now.

* https://www.dictionary.com/e/printing-press-frozen-spelling/
ilrwbwrkhv · 3 months ago

> However, a leaked system prompt for Claude's artifacts feature shows that both React and Tailwind are specifically mentioned.

Damn.
owenversteeg · 3 months ago
I think as new data gets vacuumed up faster, this will be less of an issue. About a year ago here on HN I complained about how LLMs were useless for Svelte as they did not have it in their training data, and that they should update on a regular basis with fresh data. At the time my comment was considered ridiculous. One year later, that’s where we are, of course; the average cutoff of “LLM usefulness” with a new subject has dropped from multiple years to months and I see no reason that the trend will not continue.
hinkley · 3 months ago
I don’t like that this conclusion seems to be that if humans adopt every new technology before AI can train on it that their jobs will be more secure. That is its own kind of hell.
AlienRobot · 3 months ago

> Consider a developer working with a cutting-edge JavaScript framework released just months ago. When they turn to AI coding assistants for help, they find these tools unable to provide meaningful guidance because their training data predates the framework's release. This forces developers to rely solely on potentially limited official documentation and early adopter experiences, which, for better or worse, tends to be an 'old' way of doing things and incentivises them to use something else.

I can't help but feel that a major problem these days is the lack of forums on the Internet, especially for programming. Forums foster and welcome new members, unlike Stack Overflow. They're searchable, unlike Discord. Topics develop as people reply, unlike Reddit. You're talking to real people, unlike ChatGPT. You can post questions in them, unlike GitHub Issues.

When I had an issue with a C++ library, I could often find a forum thread made by someone with a similar problem. Perhaps because there are so many JavaScript libraries, creating a separate forum for each one of them didn't make sense, and this is the end result.

I also feel that for documentation, LLMs are just not the answer. It's obvious that we need better tools. Or rather, that we need tools. I feel like before LLMs there simply weren't any universal tools for searching documentation and snippets other than Googling them, but Googling never felt like the best method, so we jumped from one subpar method to another.

No matter what tool we come up with, it will never have the flexibility and power of just asking another human about it.
hinkley · 3 months ago
I’m working on a side project that actually probably could use AI later on and I’m doing everything I can not to “put a bird on it” which is the phase we are at with AI.<p>I might be willing to use a SAT solver or linear algebra on it if I ever get to that point but there’s a lot else to do first. The problem space involves humans, so optimizing that can very quickly turn into “works in theory but not in practice”. It’d be the sort of thing where you use it but don’t brag about it.
jayd16 · 3 months ago

It's pretty interesting and mildly shocking that everyone is just making the same 'who needs a new JS library' joke.

What about closed-source tooling? How do you expect an AI to ever help you with something it doesn't have a license to know about? Not everything in the world can be anonymously scraped into the yearly revision.

If AI is going to stay, we'll have to solve the problem of knowledge segmentation. If we solve that, keeping it up to date shouldn't be too bad.
pphysch · 3 months ago

I don't think this is unique to AI. There are categories of knowledge that are infested with bad practices (webdev, enterprise software), and even a direct web search will lead you to those results. AI definitely regurgitates many of these bad practices (I've seen it), but it's not obvious to everyone.

I think it's unrealistic to expect a general-purpose LLM to be a practical expert in a new field where there are potentially 0 human practical experts.
mtkd · 3 months ago

Sonnet + Tailwind is something of a force multiplier though -- backend engineers now have a fast/reliable way of making frontend changes that are understandable, without relying on someone else -- you can even give 4o a whiteboard drawing of a layout and get the Tailwind back in seconds.

On the wider points, I do think it is reducing the time coders spend thinking about the strategic situation, as they're too busy advancing smaller tactical areas which AI is great at assisting -- and I agree there is a recency issue looming. Once these models have heavy weightings baked in, how does new knowledge get to the front quickly -- and where is that new knowledge now that people don't use Stack Overflow?

Maybe Grok becomes important purely because it has access to developers and researchers talking in realtime, even if they are not posting code there.

I worry the speed at which this is happening results in younger developers not spending weeks or months thinking about something -- so they get some kind of code ADHD and never develop the skills to take on the big-picture stuff later, which could be quite a way off from AI taking on.
nektro · 3 months ago
developers using ai continue to find new and novel ways to make themselves worse
jgalt212 · 3 months ago

Along similar lines, I found Google autocomplete constricted my search space. I would only search the terms that it autocompleted.
benrutter · 3 months ago

I think anecdotally this is true; I've definitely seen worse but older technologies be chosen on the basis of LLMs knowing more about them.

That said, I also think it's a *bad choice*, and here's some good news on that front: you can make good choices, which will put you and your project/company ahead of many projects/companies making bad choices!

I don't think the issue is that specific to LLMs - people have been choosing React and similar technologies "because it's easy to find developers" for ages.

It's definitely a shame to see people make poor design decisions for new reasons, but I think poor design decisions for dumb reasons are gonna outlive LLMs by some way.
photochemsyn · 3 months ago

The central issue is the high cost of training the models, it seems:

> Once it has finally released, it usually remains stagnant in terms of having its knowledge updated. This creates an AI knowledge gap. A period between the present and AI's training cutoff... The cutoff means that models are strictly limited in knowledge up to a certain point. For instance, Anthropic's latest models have a cutoff of April 2024, and OpenAI's latest models have cutoffs of late 2023.

Hasn't DeepSeek's novel training methodology changed all that? If the energy and financial cost of training a model really has drastically dropped, then frequent retraining including new data should become the norm.
jleask · 3 months ago

The underlying tech choice only matters at the moment because, as software developers, we are used to that choice being important. We see it as important because *we* currently are the ones that have to use it.

As more and more software is generated, and the prompt rather than the code becomes how we define software - i.e. we shift up an abstraction level - how it is implemented will become less and less interesting to people. In the same way that product owners now do not care about technology, they just want a working solution that meets their requirements. Similarly, I don't care how the assembly language produced by a compiler looks most of the time.
avbanks · 3 months ago

LLM-based AI tools are the new No/Low Code.
datadrivenangel · 3 months ago

This is the same problem as Google/search engines: a new technology has less web presence, and thus ranks lower in the mechanisms for information distribution and retrieval until people put in the work to market it.
Eridrus · 3 months ago

This will be solved eventually on the AI model side. It isn't some law of nature that it takes a million tokens for an AI to learn something; just the fact that we can prompt these models should convince you of that.
tomduncalf · 3 months ago

I was talking about this the other day - to some extent it feels like React (and Tailwind) has won, because LLMs understand it so deeply due to the amount of content out there. Even if they do train on other technologies that come after, there maybe won't be the volume of data for them to gain such a deep understanding.

Also, it doesn't hurt that React has quite a stable/backwards-compatible API, so outdated snippets probably still work... and in Tailwind's case, I suspect the direct colocation of styles with the markup makes it a bit easier for AI to reason about.
NiloCK · 3 months ago

I, too, wrote a shittier version of this a little while back: https://www.paritybits.me/stack-ossification/

Another observation since then: good documentation for newer tech stacks will *not* save the LLM's capabilities with that tech. I think the reason, in short, is that there's no shortcut for experience. Docs are book learning for tech stacks - millions (billions) of lines of source code among the training data are something else entirely.
mncharity3 months ago
In contrast, I suggest AI could <i>accelerate</i> new tech adoption.<p>&gt; if people are reluctant to adopt a new technology because of a lack of AI support, there will be fewer <i>people [emphasis added]</i> likely to produce material regarding said technology, which leads to an overall inverse feedback effect. Lack of AI support prevents a technology from gaining the required critical adoption mass, which in turn prevents a technology from entering use and having material made for it,<p>At present. But what if this is a transient? It depends on the new technology&#x27;s dev team being unable to generate synthetic material. What happens when they can create for themselves a fine tune that translates between versions of their tech, and between &quot;the old thing everyone else is using&quot; and their new tech? One that encapsulates their &quot;idiomatic best practice&quot; of the moment? &quot;Please generate our rev n+1 doc set Hal&quot;? &quot;Take the new <i>Joe&#x27;s ten thousand FAQ questions about topic X</i> list and generate answers&quot;? &quot;Update our entries in [1]&quot;? &quot;Translate the <i>Introduction to Data Analysis using Python</i> open-source textbook to our tech&quot;?<p>The quote illustrates a long-standing problem AI can help with - just reread it swapping &quot;AI support&quot; to &quot;documentation&quot;. Once upon a time, releasing a new language was an ftp-able tar file with a non-portable compiler and a crappy text-or-PS file and a LISTSERV mailinglist. Now people want web sites, and spiffy docs, and Stack Overflow FAQs, and a community repo with lots and lots of batteries, and discuss, and a language server, and yes, now LLM support. But the effort delta between spiffy docs and big repo vs LLM support? Between SO and LLM latency? That depends on how much the dev team&#x27;s own LLM can help with writing it all. 
If you want dystopian, think lots of weekend "I made my own X!" efforts easily training transliteration from an established X, and running a create-all-the-community-infrastructure-for-your-new-X hook. Which auto-posts a Show HN.<p>AI could at long last get us out of the glacial pace of stagnant progress which has characterized our field for decades. Love the ongoing learning of JS churn? Just wait for HaskellNext! ;P<p>[1] <a href="https://learnxinyminutes.com/" rel="nofollow">https://learnxinyminutes.com/</a> <a href="https://rigaux.org/language-study/syntax-across-languages.html" rel="nofollow">https://rigaux.org/language-study/syntax-across-languages.ht...</a> <a href="https://rosettacode.org/wiki/Category:Programming_Languages" rel="nofollow">https://rosettacode.org/wiki/Category:Programming_Languages</a> ...
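The "generate it yourself" speculation above can be sketched concretely. Below is a hypothetical, minimal generator that turns an old-to-new API rename map into prompt/completion pairs a dev team might feed into a fine-tune. The record format and function name are assumptions for illustration, not any vendor's actual training schema.

```python
def synth_training_pairs(api_changes: dict[str, str]) -> list[dict]:
    """Turn a map of {old_api: new_api} into instruction-tuning records.

    Hypothetical sketch: real synthetic-data pipelines would also vary
    phrasing and validate generated code, which is omitted here.
    """
    pairs = []
    for old, new in api_changes.items():
        # A question-answer pair about the deprecation itself...
        pairs.append({
            "prompt": f"The call `{old}` is deprecated. What replaces it?",
            "completion": f"Use `{new}` instead of `{old}`.",
        })
        # ...and a code-rewrite pair showing the migration in context.
        pairs.append({
            "prompt": f"Rewrite this line for the new release: result = {old}(data)",
            "completion": f"result = {new}(data)",
        })
    return pairs
```

A team could run this over their changelog at each release, which is exactly the kind of self-serve material the comment imagines.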
delichon3 months ago
Working in Zed I'm full of joy when I see how well Claude can help me code. But when I ask Claude how to use Zed, it's worse than useless, because its training data is old compared to Zed, and it freely hallucinates answers. So for that I switch over to Perplexity calling OpenAI and get far better answers. I don't know if it's more recent training or RAG, but OpenAI knows about recent Zed GitHub issues where Claude doesn't.<p>As long as the AI is pulling in the most recent changes, it wouldn't seem to be stifling.
trescenzi3 months ago
Generative AI is fundamentally a tool that enables acceleration. Everything mentioned in this post is already true without Gen AI. Docs for new versions aren't as easy to find until they aren't as new. This is even true for things in the zeitgeist: anyone around for the Python 2 to 3 or React class-to-hooks transitions knows how annoying that can be.<p>Yes, new programmers will land on Python and React for most things. But they already do. And Gen AI will do what it does best and accelerate. It remains to be seen what'll come of that trend acceleration.
nbuujocjut3 months ago
Related: <a href="https://www.mjlivesey.co.uk/2025/02/01/llm-prog-lang.html" rel="nofollow">https://www.mjlivesey.co.uk/2025/02/01/llm-prog-lang.html</a><p>It doesn't matter if a minority of passionate techies will still be up for new tech; if the average developer, just wanting to get the job done and relying on LLMs, finds it harder, that will be a significant barrier.
montjoy3 months ago
The lack of new training data also makes it bad at projects that are still maturing, because it will suggest outdated code, or worse, mix and match old and new syntax and generate something completely broken.<p>I worry that the lack of new examples for it to train on will self-reinforce old syntax with bad patterns.<p>If the "AI" could actually store its mistakes and corrections from interactive sessions long-term, I think it would greatly alleviate this problem, but that opens up another whole set of problems.
carlosdp3 months ago
I don't think this is a bad thing. Pretty much all of the author's examples of "new and potentially superior technologies" are really just different flavors of developer UX for doing the same things you could do with the "old" libraries/technologies.<p>In a world where AI is writing the code, who cares what libraries it is using? I don't really have to touch the code that much, I just need it to work. That's the future we're headed for, at lightning speed.
bilater3 months ago
This is precisely why I have said that every new framework/library should expose an endpoint with all its docs and examples in one single page, in markdown, plain text, or whatever format suits LLMs best, so you can easily copy it into a model's context. You want to make it as easy as possible for LLMs to be aware of how your software works. The fancy nested navigation guide walkthrough thing is cool for users but not optimized for this flow.
pmuk3 months ago
I have noticed this. I think it also applies to the popularity of the projects in general and the number of training examples the model has seen.<p>I was testing GitHub Copilot's new "Agent" feature last weekend and rapidly built a working app with Vue.js + Vite + InstantSearch + Typesense + Tailwind CSS + DaisyUI.<p>Today I tried to build another app with Rust and Dioxus, and it could barely get the dev environment to load, kept getting stuck on circular errors.
lherron3 months ago
I don't know how you solve the "training data and tooling prompts bias LLM responses towards old frameworks" part of this, but once a new (post-cutoff) framework has been surfaced, LLMs seem quite capable of adapting using in-context learning.<p>New framework developers need to make sure their documentation is adequate for a model to use it when the docs are injected into the context.
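The in-context route is mechanically simple. A minimal sketch of assembling such a request, where the message-dict format mirrors common chat-completion APIs and the character budget is a crude stand-in for real token counting (both are assumptions here; the actual API call is omitted):

```python
def build_prompt_with_docs(docs: str, question: str,
                           budget_chars: int = 12000) -> list[dict]:
    """Build chat messages that teach a post-cutoff framework in-context."""
    # Clip the docs to a rough context budget; a real implementation
    # would count tokens and pick the most relevant sections instead.
    clipped = docs[:budget_chars]
    system = (
        "You are assisting with a framework released after your training "
        "cutoff. Rely ONLY on the documentation below, not on memory:\n\n"
        + clipped
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]
```

The framework's job is then just to publish docs good enough that this injection works, which is the parent comment's point.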
zkmon3 months ago
People used to live in villages and places that were not connected by roads. Now that we have roads, any place not connected by a road is seen as a rough place. The difficulty is created by the usage of roads and vehicles; it was not perceived or felt back in those days. So technology and assistance create newly perceived problems.
richardw3 months ago
I tried a new agent library with a model a few weeks ago. Just pasted the relevant API docs in and it worked fine.<p>However, while I’m proud of the outcomes, I’m not proud of the code. I’m not releasing anything open source until I feel it’s mine, which is another step. I’d be a bit embarrassed bringing another dev on.<p>“I’m Richard and I’m using AI to code” Support Group: “Hi Richard”
orbital-decay3 months ago
So... it slows down adoption by providing easier alternatives for beginners? I guess you could look at it that way too.<p>Eventually it will go one of two ways, though:<p>- models will have enough generalization ability to be trained on new stuff that has passed the basic usefulness test in the hands of enthusiasts and shows promise<p>- models will become smart enough to be useful even for obscure things
booleandilemma3 months ago
Seems like a short-term problem. We're going to get to the point (maybe we're already there?) where we'll be able to point an AI at a codebase and say "refactor that codebase to use the latest language features" and it'll be done instantly. Sure, there might be a lag of a few months or a year, but who cares?
kristianp3 months ago
&gt; Claude’s artifacts feature<p>The article mentions that Claude’s artifacts feature is opinionated about using React and will even refuse to code for Svelte Runes. It's hard to get it to use plain JavaScript because React is in the system prompt for artifacts. Poor prompt engineering in Claude.
slevis3 months ago
Looks like I might be in the minority, but I disagree with this prediction. Better models will also be better at abstracting, and we have seen several examples (e.g. the paper "LIMO: Less Is More for Reasoning") showing that with a small amount of training data, models can outperform larger models.
JimboOmega3 months ago
Has there been any progress or effort on solving the underlying problem?<p>I'm not entirely sure why AI knowledge must be close to a year old, and clearly this is a problem developers are aware of.<p>Is there a technical reason models can't be, for instance, a month behind rather than close to a year?
j453 months ago
If people are skipping one shelf of tech and jumping to the next shelf up, with only AI trying to cover everything, and are let down, maybe there is an opportunity to point out more realistic options in the interim that span both shelves.
__MatrixMan__3 months ago
The Arrows of Time by Greg Egan (Orthogonal, Book 3) deals with something analogous to this: Our characters must break themselves out of a cycle which is impeding innovation. If you like your scifi hard, the Orthogonal series is a lot of fun.
evanjrowley3 months ago
Neovim core maintainer TJ DeVries expressed similar concerns in a video earlier this year: <a href="https://youtu.be/pmtuMJDjh5A?si=PfpIDcnjuLI1BB0L" rel="nofollow">https://youtu.be/pmtuMJDjh5A?si=PfpIDcnjuLI1BB0L</a>
OutOfHere3 months ago
Always get the response with and without a web search; the web search may yield a newer solution.<p>Also, each package should ideally provide an LLM-ingestible document. Upload this for the LLM, and have it answer questions specific to the new package.
conradfr3 months ago
I was thinking the other day how coding assistants would hinder new languages adoption.
amelius3 months ago
This is like saying in the 90s that Google Search would stifle tech adoption...<p>I don't buy it. AI can teach me in 5 minutes how to write a kernel module, even if I've never seen one. AI brings more tech to our fingertips, not less.
memhole3 months ago
I've wondered this myself. There was a post about Gumroad a few months ago where the CEO explained the decision to migrate to TypeScript and React. The decision was in part because of how well AI generated those, iirc.
janalsncm3 months ago
I’ve been out of web dev for a while, but maybe the problem is there’s a new framework every 6 months and instead of delivering value to the end user, developers are rewriting their app in whatever the new framework is.
at_3 months ago
Anecdotally, working on an old Vue 2 app I found Claude would almost always return "refactors" as React + Tailwind the first time, and need nudging back into using Vue 2.
ofirg3 months ago
While it is true that there is a gap between what most LLMs "know" and the current time, that gap is getting smaller, not larger, with time. It is also possible to teach a model past the knowledge cutoff with tools, and an LLM might be encouraged to be aware of the gap and reach out for the latest information when it might have changed (pi is still pi, but the country with the most people might have changed).
razodactyl3 months ago
Not entirely sure it's a hard fact, but this is definitely an example of bias in an AI system.<p>More reason to decouple and think for ourselves.
1970-01-013 months ago
Are we really at the point where we are concerned that abstraction levels are not being abandoned as quickly as they were yesterday?
ZaoLahma3 months ago
Seems plausible, especially in combination with the AI-coma that occurs when you tab-complete your way through problems at full speed.
jadbox3 months ago
Upgrading to Tailwind v4 was horribly frustrating, as every AI insisted on using v3 even though it technically knew the v4 API.
mring336213 months ago
I don't know how this is surprising.<p>LLM-provided solutions will reinforce existing network effects.<p>Things that are popular will have more related content...
ripped_britches3 months ago
This should not be relevant with Cursor being able to include docs in every query. For those who don’t use this, I feel for ya.
drbojingle3 months ago
I think LLMs will be great for languages like Elm, personally. Especially with agents that can operate in an eval loop.
IshKebab3 months ago
Huh, how long until advertisers pay to get their product preferred by AI? If it isn't already happening...
jgalt2123 months ago
If you can build an app that an AI cannot, then you have some sort of n-month head start on the competition.
spaceguillotine3 months ago
If the only new feature is AI, it's not worth the upgrade. Outside the lil tech bubble, people hate it.
cushychicken3 months ago
…Isn’t this the website that constantly encourages people to "choose boring technology" for their web tech startups?<p>Aren't a reasonable portion of the readers here people who bemoan the constant learning-curve hellscape of frontend development?<p>And now we're going to be upset that tools that help us work faster, which are trained on data freely available on the internet and thus affected by the volume of training material, decide to (gasp) <i>choose solutions with a greater body of examples?</i><p>Just can't satisfy all the people all the time, I guess! SMH.
lcfcjs63 months ago
There is an enormous fear of AI in the mainstream media, but the thing that excites me the most is health care. AI will find the cure to Alzheimer's and countless other diseases; there's no doubt about it. This simple fact is enough to make it acceptable.
g9yuayon3 months ago
&gt; Once it has finally released, it usually remains stagnant in terms of having its knowledge updated... meaning that models will not be able to service users requesting assistance with new technologies, thus disincentivising their use.<p>I find such an argument weak. We could say the same thing about a book: "Once <i>The Art of Computer Programming</i> is finally published, it usually remains stagnant in terms of having its knowledge updated, thus disincentivizing people from learning new algorithms."
casey23 months ago
Truly and honestly, 99% of developers haven't even heard of ChatGPT or Copilot, let alone the general public. It's a self-imposed problem for the orgs that choose to use such tools. More to the point, recency bias is so much stronger that I'd rather have a system that points people to the current correct solution than a slightly better solution that is somehow harder to understand despite its claimed simplicity by fanatics.
thecleaner3 months ago
Shove the docs in as context. Gemini has a 2M-token context length.
ausbah3 months ago
I do wonder if this could be mitigated by sufficiently popular newer libraries submitting training data showing their library in action.
zombiwoof3 months ago
Yup, Python pretty much wins due to training data.
ramoz3 months ago
We could call this the hamster-wheel theory.
Rehanzo3 months ago
Does anyone know what font is used here?
jgalt2123 months ago
Herein lies the key for IP protection: never use cloud-hosted coding tools, as the world will soon be able to copy your homework at zero cost.
tajd3 months ago
Yeah, maybe. But I think the thing I like is that it takes me a much shorter amount of time to create solutions for my users and myself. Then I can worry about “tech adoption” once I’ve achieved a relevant solution for my users.<p>If performance is an issue then sure, let’s look at options. But I don’t think it’s appropriate to expect that sort of level of insight into an optimised solution from LLMs; but maybe that’s just because I’ve used them a lot.<p>They’re just a function of their training data at the end of the day. If you want to use new technology you might have to generate your own training data, as it were.
skeeter20203 months ago
I don't agree, because the people using these tools for their work were never doing innovative tech in the first place.
tiahura3 months ago
Perhaps reasoning will help?
yieldcrv3 months ago
Eh, a cooldown period between the fanfare of a new thing and some battle testing before it gets added to the next AI's training set is a good thing.<p>The delay is like 8 months for now; that's fine.<p>I think this is also great for some interview candidate assessments: you have new frameworks that AI can't answer questions about yet, and you can quiz a candidate on how well they are able to figure out how to use the new thing.
highfrequency3 months ago
I have definitely noticed that ChatGPT is atrocious at writing Polars code (which was written recently and has a changing API) while being good at Pandas. I figure this will mostly resolve when the standard reasoning models incorporate web search through API documentation + trial and error code compilation into their chain of thought.
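The Pandas/Polars gap is easy to see side by side. The runnable part below is the long-stable pandas idiom; the Polars equivalent is shown in a comment from memory, and its history (e.g. `groupby` being renamed `group_by` in later releases) is exactly the kind of churn that trips up a model with a fixed cutoff:

```python
import pandas as pd

# Pandas: API stable for years, so models have seen countless examples.
df = pd.DataFrame({"team": ["a", "a", "b"], "pts": [1, 2, 5]})
totals = df.groupby("team", as_index=False)["pts"].sum()

# The Polars equivalent (from memory; verify against current docs, since
# names like `groupby` -> `group_by` have changed between releases):
#   import polars as pl
#   pl_df = pl.DataFrame({"team": ["a", "a", "b"], "pts": [1, 2, 5]})
#   totals = pl_df.group_by("team").agg(pl.col("pts").sum())
```

A model trained before a rename will confidently emit the old spelling, which is the "atrocious at Polars" behavior described above.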
anal_reactor3 months ago
Not a problem. I'm sure that being able to work well with new information is the next goal most researchers are working towards, so the entire post feels like a boomer complaining "computers are bad because they're big and bulky" thirty years ago, not being able to imagine the smartphone revolution.
stevemadere3 months ago
This is truly terrible.<p>What happened to a new JS front-end library every week?<p>If this keeps up, we won't get to completely throw away all of our old code and retool every two years (the way we've been operating for the last 20 years).<p>How will we ever spend 85% of our time spinning up on new JS front-end libraries?<p>And don't even get me started on the back end.<p>If AI had been around in 2010, we'd probably still have some people writing apps in Rails.<p>OMG, what a disaster that would be.<p>It's a good thing we just completely threw away all of the work that went into all of those gems. If people had continued using them, we wouldn't have had the chance to completely rewrite all of them in Node and Python from scratch.
_as_text3 months ago
I know what this will be about without reading.<p>Python 3.12-style type annotations are a good example imo: no one uses the type statement because of dataset inertia.
tolerance3 months ago
So what.<p>…if society continues to delegate more of its work to AI, then we are going to fall back into the grip of the fact that some people are better at things than others and some are worse, and this is what lies beneath the bridge of relying or not relying on AI to leverage your capacity to think and act on what you feel.<p>I think that people who are willing to put in effort for their crafts <i>without AI</i> will be the ones willing to try out new things and seek opportunities for ingenuity in the future. I think that the problem people have with this idea is that it runs counter to notions related to—<i>ahem</i>—<p>diversity, equity and inclusion…<p>On one hand, and on its little finger, is the legitimate concern that if companies who develop LLMs are not transparent about the technologies they make available to users when generating code, then they'll hide all the scary and dangerous things that they make available to the people who'll think, act and feel corrupt regardless of the tools they wield to impose disadvantages onto others. But I don't think that will make a difference.<p>The only way out is hard work, in a world bent on making the work easy after it makes you weak.