I don't think the shift to chat was intentional. It's a byproduct of how conversations with LLMs work. In order to create continuity in a conversational thread, the interface must submit all of the previous inputs and outputs along with the last prompt. The "low hanging fruit" for such a UI is a chat box that just spills out the same data, all the inputs and outputs in sequential order.<p>Doing something beyond that makes it a RAG app, one that looks at each LLM response and then runs traditional programming code when appropriate. These kinds of UIs are much harder to build than chat because you have to know exactly what tools (traditional programming functions) you want to call and under what circumstances. It's a complicated app aside from the LLM aspect, and so a true RAG app with a custom UI for a particular use case or data set creates more of a moat than just a chat UI.