> but maybe AI mainstream adoption will take longer than we anticipate.

Here's how the adoption of this technology is going to go (this is the way all AI technology adoption has gone for 60 years):

1) Papers will come out showing how, by creating a more effective way to leverage compute + data to make a system self-improving, performance at some task looks way better than previous AI systems, almost human-like. (This already happened: "Attention Is All You Need")

2) The first generally available implementations of the technology, in a pretty raw form, will be released. People will be completely amazed that this machine can do something that was thought to be a hallmark of humans! And by just doing $SIMPLE_THING (search, token prediction), which isn't "really" "thinking"! (This will amaze some people but also form the basis of a lot of negative commentary.) (Also already happened: ChatGPT, etc.)

3) There will be a huge influx of speculative investment capital into the space, and a bunch of startups will appear to take advantage of it. At the same time, big old tech companies will start putting stickers on their existing products that say they're powered by LLMs. (Also already happened)

4) There will be a wave of press, first in academia, then in technology circles, then in the mainstream, about What This Means. "AGI" is just over the horizon, all human jobs are about to be gone, society totally transformed. (We are currently here at step 4)

5) After a while, the limits of the technology will start to become clear. A lot of the startups will figure out that they don't really have a business, but a few will be massively successful and either build real ongoing businesses that use LLMs to solve problems for people, or get acquired. It will turn out that LLMs are massively, massively useful for some work previously thought to be nearly impossible, or at least contingent on solving the general AI problem: something like intent extraction, Grammarly-type writing assistants, Intellisense on steroids, building natural chat interfaces to APIs in products like Siri or Alexa that understand "turn on the light" and "turn on the lights" mean the same thing. I have no idea what the things will actually be; if I were good at that sort of thing I'd be rich.

6) There will be a bunch of "LLMs are useless!" press. Because LLMs don't have Rosie-from-the-Jetsons-level human-like intelligence, they will be considered "a failure" at the general AI problem, once people get accustomed to whatever completely amazing things LLMs actually get used for, the things that seemed "impossible" in 2021. Startups will fail. Enrollments in AI courses in school will drop, VCs will pull back from the category, and AI in general (not just LLMs) will be considered a doomed investment category for a few years. This entire time, LLMs will be used every day by huge numbers of people to do super helpful things. But it will turn out that no one wants to see a movie whose screenplay was written by AI. The LLM won't be able to drive a car. All the media websites that are spending money to have LLMs write articles will find out that LLM-generated content is a completely terrible way to get people to come to your site, read some stuff, and look at ads, with terrible economics, and these people will lose at least hundreds of millions of dollars, probably low billions, collectively.

7) At this trough point where LLMs have "failed" and AI as a sector is toxic to VCs, what LLMs do will somehow be thought of as 'not AI'.
"It's just predicting the next token" or something will become the accepted common thinking that disqualifies it as 'Artificial Intelligence'. LLMs and LLM engineering will be considered useful and necessary, but it will be considered a part of mainstream software engineering and not really 'AI' per se. People will generally forget that whatever workaday things LLMs turn into a trivial service call or library function, used to be massively difficult problems that people thought would require human-like general intelligence to solve (for instance, making an Alexa-like voice assistant that, can tell 'hey can you kill the lights', 'yo shutoff the overhead light please?', 'alright shut the lights', 'close the light' all mean the same thing). This will happen really fast. <a href="https://xkcd.com/1425/" rel="nofollow noreferrer">https://xkcd.com/1425/</a><p>Sometimes when you see an amazing magic show, if you later learn how the trick was done, it seems a lot less 'magical'. Most magic tricks exploit weird human perceptual phenomena and, most of all, the magicians willingness to master incredibly tedious technique and do incredibly tedious work. Even though we 'know' this at some level when we see magicians perform, it's still deflating to learn the details. For some reason, AI technology is subject to the same phenomenon.