> code is not literature

One thing I've thought about is how AI assistants are actually turning code into literature, and literature into code.

In old-fashioned programming you could roughly observe a correlation between programmer skill and linear composition: the best programmers wrote a program out in one pass, top to bottom, without breaks. Then came a pre-modern era when that practice was criticized in favor of things like TDD, doc-first design, and interfaces, but it probably still holds within the subtasks of those methods. Now LLM agents operate the same way: a stronger model will write everything at once, while a weaker model has to be guided through many stages of refinement. It also turns the programmer into a kind of literary agent, feeding prose descriptions piece by piece to match the capabilities of the model, but still in linear fashion.

And I can't help but think this points to an inadequacy of the language itself. There ought to be a programming language that lets arbitrary complexity be expressed as deterministic, linear code, since humans seem to have an innate comfort with that. One question I have here is why postfix notation is so unpopular compared to infix or prefix, when complex expressions in postfix read more like literature: details are stated first and build up into larger concepts (sketch at the end of this comment). Is it just because of school? Could postfix help close the STEM/humanities gap?

I see LLMs as translators. That's not a new observation, since translation is what they were built for, but here they translate between two very different structures of language, which is why they have to grow in parameters with the size of the task rather than process it linearly with limited memory, as in the original spoken-language-to-spoken-language task. If mathematics and programming were more like spoken language, the task would seemingly be massively simpler. So maybe the problem for us too is the language, not the intelligence.
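
To make the postfix point concrete, here's a minimal sketch in Python (the expression and the eval_postfix helper are mine, purely illustrative): the same arithmetic written three ways, plus the single-pass, single-stack evaluation that lets postfix be read strictly left to right, with small pieces combining into larger results.

    # The same expression, three notations:
    #   Infix:   (2 + 3) * (4 - 1)   -- needs parentheses / precedence rules
    #   Prefix:  * + 2 3 - 4 1       -- operator first, details after
    #   Postfix: 2 3 + 4 1 - *       -- details first, combined as you go

    def eval_postfix(tokens):
        """Evaluate a postfix expression left to right with one stack."""
        ops = {
            "+": lambda a, b: a + b,
            "-": lambda a, b: a - b,
            "*": lambda a, b: a * b,
            "/": lambda a, b: a / b,
        }
        stack = []
        for tok in tokens:
            if tok in ops:
                b = stack.pop()           # most recent operand
                a = stack.pop()           # the one before it
                stack.append(ops[tok](a, b))
            else:
                stack.append(float(tok))  # operands are pushed as they appear
        return stack.pop()

    print(eval_postfix("2 3 + 4 1 - *".split()))  # 15.0

No lookahead, no nesting to hold in your head: you read each token once, in order, and the meaning accumulates, which is roughly what I mean by "reads like literature."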