Why is every CEO predicting AGI in the next 2 to 5 years? I get the need to market yourself to the world, but this seems a bit much in terms of embellishment.

Don't get me wrong: I fully believe in the potential of current-gen AI. I work in the field myself. But it seems pretty obvious to me that these models are, for the most part, just memorizing and interpolating at huge scale with limited generalizability. I just don't see how we get from current-gen models to AGI without a huge paradigm shift, yet the statements from these CEOs imply that no such shift is necessary.

I just don't get what strategy they are following. Wouldn't these embellishments eventually come back to haunt them when their promises are not realized? This seems like Tesla FSD all over again. Then too, it was claimed that scale alone was enough to achieve the objective.
> Wouldn't these embellishments eventually come back to haunt them when their promises are not realized? This seems like Tesla FSD all over again.

Is Tesla "haunted" by its repeated failure to deliver Full Self-Driving? I haven't been following closely, but it doesn't really seem like it.

Having something concrete to promise seems like a winning tactic: you don't want to actually *deliver* the thing, because then you'd have to train the public to expect something *else* (and that takes more creativity). Whether it's AGI, FSD, the Second Coming, a border wall, universal Medicare, peace in the Middle East... there doesn't seem to be any particular downside to promising the public something that repeatedly fails to happen. After all, if it didn't happen *yesterday*, that means it could still happen *tomorrow*, so you'd better make sure you're prepared!
There will be useful applications that change the world, and a new hype cycle to chase, so everyone will forget the outrageous proclamations and just focus on the new new thing.
What happened when they failed to deliver The Metaverse(TM)? Surprisingly little, really, at least on a macro scale, because everyone had moved on to the next fad. I'd assume that within a few years there will be a new Big New Thing, and a lot of the LLM stuff will be quietly written off. So it goes (though the amounts of money involved _are_ particularly large this time round, granted, and it may be painful for some).
Not sure, but it won't be a big deal. History is a great resource: see the earlier rounds of "AI is just around the corner" and "we'll all have flying cars" in the decades after WW2.
If we were somehow stuck with our current models, there would still be about 10 years of finding more and more useful ways to use them. And if we don't have AGI in 5 years, we'll have something quite close to it.