He almost makes a good point when he questions whether “human imitative” AI could solve the other problems we face, seeing as humans aren’t that smart (especially not in large numbers when participating in complex systems).

But the distinction he makes between ML and AI is crucial. What he’s really talking about is AGI: general intelligence. And he’s right, we don’t have a single example of AGI to date (few-shot and one-shot models notwithstanding, since they only manage that for narrow tasks).

The majority mindset in AI research seems to be (and I could be wrong here, since my exposure is mostly ML papers) that the difference between narrow AI and general AI is simply one of magnitude: that GPT-3, given enough data and compute, would pass the Turing test, ace the SAT, drive our cars, and tell really good jokes.

But this belief that the difference between narrow and general intelligence is one of degree rather than kind may be rooted in what this article points out: the historical baggage of AI almost always signifying “human imitative”.

But there is no reason that AGI must be superintelligent, or human-level intelligent, or even dog-level intelligent.

If narrow intelligence is not really intelligence at all (but more akin to instinct), then the dumbest mouse is more intelligent than AlphaGo and GPT-3, because although the mouse has exceedingly low general intelligence, AlphaGo and GPT-3 have none at all.

There is absolutely nothing stopping researchers from focusing on mouse-level AGI. Moreover, it seems likely that going from zero intelligence to infinitesimal intelligence is a harder problem than going from infinitesimal intelligence to superintelligence. The latter may merely be an exercise in scale, while the former requires a breakthrough of thought that asks why a mouse is intelligent but an ant is not.

The only thing stopping researchers is that the answer to this question is really uncomfortable, outside their area of expertise, and weighed down by historical baggage. It takes courage for researchers like Yoshua Bengio to utter the word “consciousness”, though he does a great job of reframing it with Thinking Fast and Slow’s System 1/System 2 vocabulary. Still, the hard problem of consciousness, and the baggage of millennia of soul/spirit as an answer to that hard problem, makes it exceedingly difficult for well-trained scientists to contemplate the rather obvious connection between general intelligence and conscious reasoning.

It’s ironic that those who seek to use their own conscious reasoning to create AGI are in denial that conscious reasoning is essential to AGI. But even if consciousness and qualia are a “hard” problem that we cannot solve, there’s no reason to assume that creating consciousness is equally hard. In fact, we know from our own experience that the material universe is quite capable of accidentally creating consciousness (and thus general intelligence). If we can train a model to summarize Shakespeare, surely we can train a model to be as conscious, and as intelligent, as a mouse.

We’re only one smart team of focused AI researchers away from Low-AGI. My bet is on David Ha. I eagerly await his next paper.