Hinton qualified that statement by adding, <i>"but I do think there's going to have to be quite a few conceptual breakthroughs."</i> He explicitly conditioned his belief on future conceptual breakthroughs that have not been made yet. Also, his statement is not a prediction; it starts with "I do believe."<p>Here's the full quote, copied verbatim from the article:<p><i>> I do believe deep learning is going to be able to do everything, but I do think there’s going to have to be quite a few conceptual breakthroughs. For example, in 2017 Ashish Vaswani et al. introduced transformers, which derive really good vectors representing word meanings. It was a conceptual breakthrough. It’s now used in almost all the very best natural-language processing. We’re going to need a bunch more breakthroughs like that.</i><p>Please don't criticize him or the article without first reading it in full.
<i>Now it’s hard to find anyone who disagrees, he says.</i><p>Either that's an accurate claim, or it's representative of a cliquish, echo-chamber approach to AGI in some quarters.<p>Either way, it's depressing.
I’m tired of Hinton overpromising. Deep learning is not going to do everything, and he’s promoting an irresponsible position.<p>Has there really been a huge breakthrough since the initial wave of CNNs? I guess transformers/attention, but I don’t consider GPT-3 to solve <i>any</i> problem at all.
> In particular, some recent work at Google has shown that you can do fine motor control and combine that with language, so that you can open a drawer and take out a block, and the system can tell you in natural language what it’s doing.<p>Does anyone know which paper that was?
> What do you believe to be your most contrarian view on the future of AI?<p>> Well, my problem is I have these contrarian views and then five years later, they’re mainstream.<p>Has every single one of his contrarian views panned out? What an arrogant quote. This just diminished him in my eyes.
If a cyborg were to "do everything" on deep learning would it have a meaningful model of reality or would it simply be behaving as if it did?
I basically buy that DL will be effective, but I think the real innovation will have to be in power efficiency, e.g., how many GPUs and how much RAM does GPT-<i>x</i> need?
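<p>As a rough back-of-envelope sketch (assuming fp16 weights at 2 bytes per parameter and roughly 80 GB of usable memory per GPU, and ignoring activations, KV caches, and framework overhead, which add substantially more):<p><pre><code>def serving_memory_estimate(n_params: float, bytes_per_param: int = 2,
                            gpu_mem_gb: float = 80.0):
    """Weight memory in GB and a floor on GPU count; overhead not included."""
    weight_gb = n_params * bytes_per_param / 1e9
    gpus = -(-weight_gb // gpu_mem_gb)  # ceiling division
    return weight_gb, int(gpus)

# GPT-3 is reported to have ~175 billion parameters.
gb, gpus = serving_memory_estimate(175e9)
print(f"~{gb:.0f} GB of weights, at least {gpus} such GPUs before overhead")
# ~350 GB of weights, at least 5 such GPUs before overhead
</code></pre>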
I am very skeptical of the current approaches (supervised learning) making the quantum leaps being promised. This needs a paradigm shift toward weakly supervised learning plus fine-tuning for specific tasks (à la human learning).
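<p>For what it's worth, here is a minimal sketch of the pretrain-then-fine-tune pattern I mean, with toy data, a made-up input-masking objective, and a hypothetical 2-class task head (not any specific paper's method):<p><pre><code>import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))

# Phase 1: "pretraining" on unlabeled data with a proxy objective
# (reconstructing masked-out inputs; real systems use language modeling, etc.).
decoder = nn.Linear(64, 32)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(100):
    x = torch.randn(16, 32)                       # stand-in for unlabeled examples
    mask = (torch.rand_like(x) > 0.15).float()    # randomly hide ~15% of inputs
    loss = nn.functional.mse_loss(decoder(encoder(x * mask)), x)
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: fine-tuning the pretrained encoder on a small labeled task.
head = nn.Linear(64, 2)                           # hypothetical 2-class task head
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for _ in range(20):
    x, y = torch.randn(8, 32), torch.randint(0, 2, (8,))   # stand-in labeled data
    loss = nn.functional.cross_entropy(head(encoder(x)), y)
    opt.zero_grad(); loss.backward(); opt.step()
</code></pre>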