As always, citation needed.

(Also, grain of salt required, because this is a blatant marketing post.)

Look, I've been hearing "the models will get better and make these core problems go away" since it became common to talk about "the models" at all. Maybe they will some day! But also, and critically, maybe they won't.

You also have to consider the future where some companies spend an additional $50-100k per developer and DON'T see any of this supposed increase in performance, because these "trust me, it'll happen this time" promises didn't come true. This is the kind of bet that can CRATER companies, so it's not surprising to see some hesitation here, a desire to wait and see whether the football gets yanked away yet again.

Plus, and I believe most damningly, this article appears to be engaging in the classic technocratic failure mode: mistaking social problems for technical ones.

Obviously, yes, developers solve technical problems, but that is not all they do, and at the higher levels it becomes the least of what they do. More and more, a good developer ensures that they are solving the RIGHT problem in the RIGHT WAY. They're consulting with managers, (ideally) users, other teams, a whole host of people, to ensure the right thing is built at the right time with the right features, and that the right sacrifices are being made. LLMs are classically bad at this.

The author dismissively calls this "getting stuck" and handwaves its importance away, saying that the engineer will be able to unstick the model at first (if we're putting armies of "vibe coding" junior engineers, who haven't had time enough in their careers to develop this skill, in charge of the LLMs, HOW?), and then makes the classic claim that "the models will get better," predicting they will eventually be able to do it themselves (if this is an intractable problem for LLMs, and so far the evidence has been leaning that way, again, HOW?).

Forgive that appalling grammar. I am het up. But note well what I'm doing: I'm asking "should we even be doing this?" That is something these models a) will have to do well to accomplish what the author insinuates they will, and b) have been persistently terrible at.

I'm going to remain skeptical for now, since it seems that's my one remaining superpower versus these LLMs, and I guess I'll need to keep that skill sharp if I want to avoid the breadline in this author's future. =)