It's complex - an essay in and of itself if we're to respond properly, but I'll try to keep things brief here.<p>First, let's address the tooling side. While the current crop of "code completion tools" built out of or around LLMs are quite capable in their own right, they're not exactly "free thinkers" like we can be. Rather, their output is limited by a combination of the training data, the model itself, and - increasingly - the user's ability to put their ideas into a prompt that generates the desired output consistently. So there's already a huge hurdle just on the tooling side to overcome before we can begin "improving", one tied as much to the capabilities of the product as to the capabilities of the end user. I would argue that this is the most immediate hurdle to cross if we want to see meaningful improvements to code as a whole.<p>In addition to that immediate hurdle, there are three more issues on the tooling front:<p>* The existing training data is largely bad, bloated, or insecure code (generally scraped from publicly-available social media and repositories), because code security and efficiency have only recently become priorities for large development companies or outfits, as they seek to dodge lawsuits (security) and increase margins (efficiency)<p>* LLMs aren't very good at teaching a user how to think better about a problem, only at making them better at phrasing their prompt to get closer to a possible solution<p>* LLMs are stuck in a predictive framework that mandates an answer for the customer, as opposed to a human, who is able to say "I don't know" and go off to learn more about the thing they're stuck on.<p>Ultimately, the tooling is helping novice or entry-level developers and hobbyists write better code, but only because the models were trained on code from more senior or professional developers who also shared it publicly.
Senior developers and above may find utility in writing code faster with LLMs, but they aren't nearly as likely to write better code as a result of the tooling, at least by my subjective reasoning.<p>Now let's switch to the business side of things, which I already touched on above. Businesses haven't been interested in secure or efficient code until very recently, as we began bumping up against the limits of physical hardware in x86-64 land and lawsuits for failures became more of an existential threat. This means a lot of the code in public samples fits the "done is better than good" mantra of modern business practice, rather than improving on prior releases; and even if a business has taken the time to create more secure or efficient code, it likely hasn't shared it, since that code is a core part of its competitive advantage or product line. It will take years, maybe a decade, before LLM training sets have enough "superior" data to outweigh the "inferior" data, and during that time the status quo - barring a literal revolution in computing - is likely to remain.<p>Admittedly, all of this is my subjective POV from infrastructure-world, and could be way off base; YMMV, buyer beware, caveat emptor, etc.