Unless you're using what I would consider a pretty idiosyncratic definition of "real", then I'd say LLMs <i>are</i> "real AI". Less than perfect AI, sure. Less than "human level" in some areas, sure. But then again, human intelligence isn't exactly perfect either.<p>I would argue that intelligence is nothing more than the ability to display behavior that would be considered intelligent in context. In that regard, we've had "artificial intelligence" for decades, and it has simply been improving slowly over time in terms of its capabilities and its fidelity to "actual human" intelligence.<p>Now if by "real AI" you mean "AI like in the sci-fi movies, AI that is fully equal to humans in every regard and probably exceeds humans in some regards"... then clearly we're not there <i>yet</i>, but I still see no reason in principle to think that it won't happen in time. That is, is there any particular reason to think that LLMs are the last advancement in AI that will ever take place?
I think we already have the pieces to create AIs that will match human performance on most intellectual work that people do in companies. I believe you will see systems that combine several LLMs in a conversation of generation and evaluation. Those models may call other models, APIs, and plugins. It will not be fully sentient AGI, but models will be combined to form applications that can, for example, replace any accounting professional; then prompt engineers will fine-tune the application for a single company's accounting process. An accounting department might have its staff reduced 90% to just the CFO, VPs, and prompt engineers.<p>I think these systems will be built over time, but we will only see a marginal impact on productivity due to inertia. Then suddenly there will be a billion-dollar company run by 5 people with graphics cards, and the old Fortune 500 will shit the bed.
Yes, but without consensus on the definition for a long time. We might be debating "AI or not" for far longer than it took to arrive, and I'm a little worried about what this entails for our relationship with it.
Yes - but an LLM is a long, long way from what I'd call "real AI".<p>From what I've heard, the "AI" folks have yet to produce anything even close to a common crow's social savvy, or its ability to handle the real world.
This is real AI.<p>Confusing terminology and decades of science fiction have brainwashed people into expecting it to be perfect in order to qualify as real.