
Deep Learning: A Critical Appraisal [pdf]

85 points by sarosh over 7 years ago

6 comments

vadimberman over 7 years ago
> deep learning must be supplemented by other techniques if we are to reach artificial general intelligence

I don't think anyone major ever disputed that.

Having said that, thousand times yes to the author's concerns. Deep learning is AI's cryptocurrency in terms of being overhyped, although its main proponents are not to blame for that.
Comment #16085227 not loaded
Comment #16084498 not loaded
Comment #16084519 not loaded
Comment #16086372 not loaded
Comment #16084529 not loaded
starchild_3001 over 7 years ago
This is somewhat of an opinion piece. We need more articles like it to counterbalance the "AI is the new electricity" crowd. Hyping deep learning isn't healthy.
albertzeyer over 7 years ago
Almost all concerns in the paper are active research topics and do have certain solutions which use some sort of deep learning approach. Depending on the viewpoint and interpretation, you could say that some of these approaches are hybrid solutions, but this is really just a matter of interpretation. No one is really denying that the stated concerns are valid concerns. But also, no one would say that the knowledge gained from current deep learning research will not be useful in the future. Of course, maybe for some aspects you would need more radically new ideas, but I doubt that nothing from the current methods will be used in some way in future methods.

E.g.:

3.1. Deep learning thus far is data hungry. First, you could argue that on a low level an animal/human also gets quite a lot of visual and audio input, so it's data hungry as well. Then, you could argue that evolution already did some sort of pretraining/pre-wiring which helps, using millions of years of data. Related to this is the topic of unsupervised learning and reinforcement learning. Then, for learning with small amounts of data, there are the active research topics of one-shot learning, zero-shot learning, and few-shot learning. Related is also meta-learning.

3.2. Deep learning thus far is shallow and has limited capacity for transfer. Transfer learning, meta-learning, and multi-task learning are active research areas which deal with this.

3.3. Deep learning thus far has no natural way to deal with hierarchical structure. There are various approaches for this as well. This is also an active research area.

3.4. Deep learning thus far has struggled with open-ended inference. This is also an active research area.

3.5. Deep learning thus far is not sufficiently transparent. This too is an active research area. And then, you could also argue that the biological brain suffers from the same problem.

3.6. Deep learning thus far has not been well integrated with prior knowledge. This is also an active research area.

Etc.
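(A minimal sketch of the kind of transfer learning mentioned under 3.2, assuming a PyTorch/torchvision setup; the 10-class task and the dummy batch are illustrative placeholders, not anything from the paper or the comment.)

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-18 and freeze its weights so the
# backbone's learned features are reused rather than retrained.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for a hypothetical 10-class task.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Only the new head's parameters are optimized.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch, standing in for a small target dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
logits = backbone(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
```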
Comment #16088658 not loaded
irickt over 7 years ago
More context here: https://news.ycombinator.com/item?id=16083325
DrNuke over 7 years ago
Different perspectives and research backgrounds converging on the same limits of a given tool is very good for defining a boundary while containing the hype. More generally, it still seems inefficient (and very risky from a regulator's point of view) to deploy full AI agents in dynamic, human, imperfect environments, e.g. self-driving cars in ordinary traffic flow.
nl over 7 years ago
This isn't a great paper (as you can tell by how often the author cites himself).

It isn't really worth responding to - it's either attacking claims which were never made, or so outrageously wrong it appears to be trolling.
Comment #16084909 not loaded
Comment #16084946 not loaded