I've been in this industry for just over a decade now, and interested in tech/computers a few years longer than that. I'm still a few decades away from being a graybeard, but I can say with some confidence that all this hype is mostly fluff. I'm not saying it's meaningless---we're probably not headed for another AI winter when the bubble bursts. After this hype cycle, I think we'll carry a few valuable concepts forward, while the investors quietly go looking for the next buzzword.<p>I make this claim based on the following observations:<p>- As with other aspects of life, conservative forecasts are <i>often</i> closer to the truth than extreme optimism or pessimism, especially in the long run; there is almost always a regression to the mean, so to speak. The technologists burning with an almost religious zeal for these AI models make egregious claims like "Stable Diffusion/ChatGPT <i>IS</i> (human?) learning" or "There is no difference between a human learning from an artist's corpus and a neural net training on said corpus", and build extremely shaky forecasts on top of them.<p>- Tech and philosophy aside, there are plenty of other hurdles AI must clear to become the disruptive technology it is portrayed to be. My least controversial example: people ask whether LLMs will replace search engines anytime soon, but notably, ChatGPT can't fetch you the news, and "conversing" with it is an amusing yet cumbersome interface. In the future, I think search engines and LLMs will share utility; in my own experience using ChatGPT for programming, it is great for greenfield projects but terrible for working with pre-existing code. (Also, by saying "AI is not yet disruptive", I don't mean its economic impact will be negligible---some people will definitely be affected, just not the people, nor in the ways, we currently expect.
Also, my money is on the domain experts being mostly safe from this impact.)<p>- I've seen this hype cycle before: a new conceptual/business/technological framework shows an amusing or promising use case, which is then exploited to death by a wealth of startups and existing products pivoting to make it a "core" value prop. Remember when Google went mobile-first, so everyone prepared to be mobile-first too? When everything was social? That led to toothbrushes and culinary products having APIs so you could build mobile apps for them. Remember VR? Blockchain? Maybe I'm too jaded, or maybe I'm right. Time will tell.<p>There's a tongue-in-cheek saying that advanced AI is whatever current AI can't do; in other words, researchers will never achieve "advanced AI" because the goalposts keep moving. As a tech enthusiast who even did undergraduate AI research a decade ago, I feel bad harboring such negative prognoses about progress that is, hands down, impressive. But the way I like to think of it is not that AI progress is failing or underwhelming, but that it is helping us understand more of our humanity: when we move the goalposts, it's because we see AI not doing something we take for granted when dealing with actual humans.