> deep learning must be supplemented by other techniques if we are to reach artificial general intelligence

I don't think anyone of note ever disputed that.

Having said that, a thousand times yes to the author's concerns. Deep learning is AI's cryptocurrency in terms of overhype, although its main proponents are not to blame for that.
This is somewhat of an opinion piece. We need more articles like it to counterbalance the "AI is the new electricity" crowd. Hyping deep learning isn't healthy.
Almost all the concerns in the paper are active research topics, and some have partial solutions that use some sort of deep learning approach. Depending on your viewpoint, you could say that some of these approaches are hybrid solutions, but that is really just a matter of interpretation. No one is denying that the stated concerns are valid. But equally, no one would say that the knowledge gained from current deep learning research will be useless in the future. Some aspects may well require radically new ideas, but I doubt that future methods will use nothing from the current ones.

E.g.:

3.1. Deep learning thus far is data hungry. First, you could argue that at a low level, an animal or human also gets an enormous amount of visual and audio input, so it is data hungry as well. Second, evolution has already done some sort of pretraining/pre-wiring that helps, using millions of years of data. Related to this are unsupervised learning and reinforcement learning. As for learning from small amounts of data, there are the active research topics of one-shot, zero-shot, and few-shot learning. Meta-learning is also related.

3.2. Deep learning thus far is shallow and has limited capacity for transfer. Transfer learning, meta-learning, and multi-task learning are active research areas that deal with this.

3.3. Deep learning thus far has no natural way to deal with hierarchical structure. There are various approaches for this too. This is also an active research area.

3.4. Deep learning thus far has struggled with open-ended inference. This is also an active research area.

3.5. Deep learning thus far is not sufficiently transparent. This too is an active research area. And you could argue that the biological brain suffers from the same problem.

3.6. Deep learning thus far has not been well integrated with prior knowledge. This is also an active research area.

Etc.
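To make the transfer/few-shot point under 3.1 and 3.2 concrete, here is a toy sketch (my own illustration, not from the paper): a frozen "pretrained" feature extractor is reused as-is, and only a small linear head is fit on a handful of new-task examples. All names and numbers here are made up for the sketch; the "pretrained" weights are just random features standing in for representations learned on a large source task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" feature extractor: a fixed linear map plus a
# nonlinearity. In real transfer learning these weights would come from
# training on a large source task; here they are random stand-ins.
W_pre = rng.normal(size=(8, 2))

def features(x):
    # Never updated below -- this is the "frozen" part of transfer learning.
    return np.tanh(x @ W_pre.T)

# Few-shot target task: only 10 labeled points, label = 1 if x0 + x1 > 0.
X = rng.normal(size=(10, 2))
y = (X.sum(axis=1) > 0).astype(float)

# Fit only the small linear head on the frozen features (least squares).
F = features(X)                                  # shape (10, 8)
w, *_ = np.linalg.lstsq(F, y, rcond=None)

train_acc = ((F @ w > 0.5) == (y > 0.5)).mean()
```

The point of the sketch is the division of labor: the expensive, data-hungry part (the feature extractor) is reused, so the new task only has to estimate a handful of head parameters, which a few examples can support.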
Different perspectives and research backgrounds converging on the same limits of a given tool is very useful for defining its boundaries while containing the hype. More generally, it still seems inefficient (and very risky from a regulator's point of view) to deploy fully autonomous AI agents in dynamic, imperfect, human environments, e.g. self-driving cars in ordinary traffic.
This isn't a great paper (as you can tell by how often the author cites himself).

It isn't really worth responding to: it either attacks claims that were never made, or is so outrageously wrong that it appears to be trolling.