The previous discussion from 2017 was on point: most of programming is figuring out what you actually want, and that problem is not going to be solved by computers any time soon. However, I think there is a sub-problem where computers *can* help. One of the biggest issues when writing large, long-lived systems is that the code becomes more complicated than the original problem. Anybody who's worked on legacy systems will be familiar with that issue: figuring out how to do what you want within the confines of the existing code is sometimes more difficult than analysing the original problem.

I tend to make my living on legacy code. In fact, I enjoy it more than greenfield projects because it's trickier. Also, because nobody else wants to touch that code, I have more freedom :-) And (if I'm honest) it's an easy escape from the occasional colleague who always insists that "it must be like X" because they lack the experience to see things in terms other than X. When you get to the point of "Well, we can't reasonably *do* X because of all our legacy problems. What's plan B?", there is a lot more room for compromise and the task becomes a lot more fun (IMHO).

But on greenfield projects, having a tool that helps you understand when you are making the code worse would be incredibly helpful. We've got some metrics (cyclomatic complexity, class size, method size, etc.), but it's still pretty easy to write code with terrible consequences even while satisfying every linting rule (see the sketch at the end).

This is where I can see the potential for AI and ML to help: a kind of permanent pair-programming partner that keeps an eye on the big picture. For example, warning you when you have too many options in your API, or when your architecture is not isolated enough (or too isolated). These are fuzzy judgement calls, so it's hard to encode them as specific rules. I suspect there are some startups looking at this, but I admit I haven't followed their progress at all.
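
To make the metrics point concrete: the classic checks are easy to compute mechanically, which is exactly why they miss the big picture. Here's a minimal sketch of a cyclomatic-complexity counter using Python's ast module. The node list is a rough approximation of my own choosing; real tools like radon or mccabe count more constructs (boolean operators, comprehensions, and so on):

    import ast

    # Branch-introducing nodes; a rough subset of what real tools count.
    BRANCH_NODES = (ast.If, ast.IfExp, ast.For, ast.While, ast.ExceptHandler)

    def cyclomatic_complexity(source: str) -> int:
        """Approximate McCabe complexity: 1 + number of branch points."""
        tree = ast.parse(source)
        return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

    src = "def f(x):\n    if x > 0:\n        return 1\n    return 0"
    print(cyclomatic_complexity(src))  # -> 2

A linter can flag when that number crosses a threshold, but it can't tell you that the function shouldn't exist in the first place. That's the kind of fuzzy judgement call I'd want the permanent pair-programming partner for.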