>> A lot of what Sutskever says is wild. But not nearly as wild as it would have sounded just one or two years ago. As he tells me himself, ChatGPT has already rewritten a lot of people’s expectations about what’s coming, turning “will never happen” into “will happen faster than you think.”

In the '90s, NP-complete problems were considered hard, and today they are easy; or at least there are a great many instances of NP-complete problems that can be solved in practice, thanks to algorithmic advances like Conflict-Driven Clause Learning (CDCL) for SAT (a concrete sketch follows at the end of this comment).

And yet we are nowhere near finding efficient decision algorithms for NP-complete problems, or even knowing whether they exist, nor can we easily solve *all* NP-complete instances.

That is to say, you can make a lot of progress in solving specific, special cases of a class of problems, even a great many of them, without making any progress towards a solution to the general case.

The same lesson applies to general intelligence and LLMs: LLMs solve a (very) special case of intelligence, the ability to generate text in context, but make no progress towards the general case of understanding and generating language at will. I mean, LLMs don't even model anything like "will"; only text.

And perhaps that's not as easy to see for LLMs as it is for SAT, mainly because we don't have a theory of intelligence (let alone of artificial general intelligence) as well developed as the theory we have for SAT. But it should be clear that if a system trained on the entire web, capable of generating smooth, grammatical language, and often even language that makes sense, has still not achieved independent, general intelligence, then that's not the way to achieve it.
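
To make the SAT point concrete, here's a minimal sketch of solving one small CNF instance with a CDCL solver. I'm assuming the python-sat (PySAT) package here; any CDCL solver such as MiniSat or Glucose would illustrate the same point.

    # Minimal sketch: solving a small CNF-SAT instance with a CDCL solver.
    # Assumes the python-sat package is installed: pip install python-sat
    from pysat.solvers import Glucose3

    # Clauses in DIMACS-style integer encoding: 1 means x1, -1 means NOT x1.
    # Formula: (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
    clauses = [[1, 2], [-1, 3], [-2, -3]]

    with Glucose3(bootstrap_with=clauses) as solver:
        if solver.solve():
            # A satisfying assignment: positive literals True, negative False.
            print("SAT:", solver.get_model())
        else:
            print("UNSAT")

Solvers like this routinely dispatch industrial instances with millions of clauses, which is exactly the "many special cases" point above: enormous practical progress, zero progress on the general, worst-case question.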