Here's the skeptical GPT take, after months of seeing things evolve.<p>- "Prompt engineering"-based products are fickle and depend on the underlying model not changing much. They also have almost no moat: I can go into ChatGPT and probably get a few plugins to do what your product does.<p>- Do you really want to use American Express's, Bob's Laundromat's, or Instacart's chatbot? Or would you rather just use their product search when search is called for, use the few reliable point-and-click actions for obvious self-service customer support, and talk to a human being when support escalation is needed? (I have a hard time imagining a chatbot taking an action that requires 'root access' to an org's processes because I have a weird one-off support issue.) Maybe LLMs can make these existing interactions more seamless, but I'm less sure people want to throw away what they know to jump into a chatbot.<p>Here's where I think we are:<p>- Obviously, the biggest game changer is interrogating information. I can ask any question and get a cogent answer with enough accuracy for it to be useful. I have a personal Stack Overflow (and a million other help forums) where the expertise under those forums is captured.<p>- ChatGPT and friends are like spreadsheets. They are a beginner programming paradigm for creating natural-language interactions over the entirety of human information. That in itself is revolutionary. But, like spreadsheets, a few vendors will own this space - those that can train on all of human knowledge. Also, like spreadsheets, you can only go so far with prompt-based programming before needing to build a real application that goes deep into a domain.<p>- LLMs, like good CGI, work best when you don't see them. ChatGPT and friends can massively improve the ease of implementing applications that require all of human knowledge: search, recommendations, and other applications where you may have had to rely on some hand-crafted knowledge graph.
These have existing affordances to real users.<p>- It's easier and easier to train and fine-tune LLMs on your own data, which makes using them transparently to interrogate that data increasingly practical.