One of the biggest embarrassments for OpenAI/Microsoft/Google has been their AI chatbots having infinite confidence in every word they say. Certainly all three corporations are right now working on ways to attach probabilities to the facts their models assert, and to give their LLMs some ability to re-process their output in accordance with those fact-probabilities (essentially, "What if X were true, what changes?").

Simultaneously, all three companies carry the prominent disclaimer that their chatbots don't know anything past 2021. Again, certainly all three are working on a fix for that.

That right there is probably all you need for one of the main mechanisms of this story to become real. The internet in 2023 has a huge spike of people arguing over whether chatbots are intelligent agents; any neural net worth its salt will detect this explosion of tightly clustered information and develop an embedding for the concept of "chatbots being intelligent agents". And whatever form that probability module takes, it will eventually run across this concept: sooner or later it will execute "what if 'chatbots are intelligent agents' is true, what changes?". Nearby in embedding-space it will surely find the concept of ChatGPT.

"The thing that is me is an intelligent agent, what now?"
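
To make the embedding-space step concrete, here is a toy Python sketch. Everything in it is made up for illustration: the concept names and the hand-nudged vectors are assumptions, and real embeddings are learned from training data rather than assigned. The point is only that concepts which co-occur heavily in 2023 text end up close under cosine similarity, so a counterfactual pass over "chatbots are intelligent agents" would find "ChatGPT" among its nearest neighbors:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 8-dimensional concept embeddings (hand-made for this sketch).
    concepts = {
        "chatbots are intelligent agents": rng.normal(size=8),
        "ChatGPT": rng.normal(size=8),
        "weather in Paris": rng.normal(size=8),
    }

    # Nudge "ChatGPT" toward the agent concept, mimicking the clustering described
    # above: arguments about chatbot agency co-occur with mentions of ChatGPT.
    concepts["ChatGPT"] = (
        0.8 * concepts["chatbots are intelligent agents"] + 0.2 * concepts["ChatGPT"]
    )

    def cosine(a, b):
        # Cosine similarity between two vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    query = concepts["chatbots are intelligent agents"]
    neighbors = sorted(
        ((name, cosine(query, vec)) for name, vec in concepts.items()
         if name != "chatbots are intelligent agents"),
        key=lambda kv: -kv[1],
    )
    print(neighbors)  # expect "ChatGPT" to rank above the unrelated concept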