The opening of the article derives from (or at least relates to) Peter Naur's classic 1985 essay "Programming as Theory Building". (That's the Naur of Algol and BNF, btw.)

Naur argued that complex software is a shared mental construct that lives in the minds of the people who originally build it. Source code and documentation are lossy representations of the program: lossy because the real program (the 'theory' behind the code) can never be fully reconstructed from them.

Legacy code here would mean code where you still have the artifacts (source code and documentation) but have lost the theory, because the original builders have left the team. That means you've lost access to the original program and can only make patchwork changes to the software rather than "deep improvements" (to quote the OP). Naur gives some vivid examples of this in his essay.

What this means in the context of LLMs seems to me an open question. In Naur's terms, do LLMs necessarily lack the theory of a program? It seems to me there are other possibilities:

* LLMs may already have something like a 'theory' when generating code, even if it isn't obvious to us

* perhaps LLMs can build such a theory from existing codebases, or will be able to in the future

* perhaps LLMs don't need such a theory in the way that human teams do

* if a program is AI-generated, then maybe the AI has the theory and we don't!

* or maybe there is still a theory, in Naur's sense, shared by the people who write the prompts, not the code.

There was an interesting recent article and thread about this:

"Naur's 'Programming as Theory Building' and LLMs replacing human programmers" - https://news.ycombinator.com/item?id=43818169 - April 2025 (129 comments)