This was a long presentation. Yann gets to the issue named in the title of the post about a third of the way through; the first third could probably be called "What's right with deep learning, or how DL works."

For each problem, he explores some salient ideas or ways to address it.

TL;DR:

* Theory: we don't always have good explanations for why deep learning works.

* Reasoning: stick a CRF (conditional random field) on top of a deep net.

* Memory: we need a "hippocampus": memory networks, neural embeddings.

* Unsupervised learning: how do we speed up inference in a generative model? Sparse autoencoders, sparse models...

For those who could use an overview of neural nets and how some of them work, this may be useful: http://deeplearning4j.org/neuralnet-overview.html
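Since the talk name-checks sparse autoencoders as one route to unsupervised learning, here is a rough idea of what that looks like: an autoencoder trained with an L1 penalty on its hidden activations so only a few units fire per input. This is just my own numpy sketch, not code from the talk; the layer sizes, learning rate, and penalty weight (n_in, n_hidden, lr, lam) are made-up illustrative values.

    # Minimal sparse autoencoder sketch: reconstruction loss + L1 penalty on codes.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, lr, lam = 64, 16, 0.1, 1e-3   # illustrative values only

    W_enc = rng.normal(0, 0.1, (n_in, n_hidden))
    W_dec = rng.normal(0, 0.1, (n_hidden, n_in))
    b_enc = np.zeros(n_hidden)
    b_dec = np.zeros(n_in)

    def relu(x):
        return np.maximum(x, 0.0)

    X = rng.normal(size=(256, n_in))   # toy data standing in for real inputs

    for step in range(200):
        # Forward pass: encode, then reconstruct.
        h = relu(X @ W_enc + b_enc)
        X_hat = h @ W_dec + b_dec

        # Loss = mean squared reconstruction error + L1 sparsity penalty on h.
        recon = X_hat - X
        loss = np.mean(recon ** 2) + lam * np.mean(np.abs(h))

        # Backward pass (plain gradient descent).
        n = X.shape[0]
        d_Xhat = 2 * recon / (n * n_in)
        d_Wdec = h.T @ d_Xhat
        d_bdec = d_Xhat.sum(0)
        d_h = d_Xhat @ W_dec.T + lam * np.sign(h) / (n * n_hidden)
        d_h *= (h > 0)                 # ReLU gradient
        d_Wenc = X.T @ d_h
        d_benc = d_h.sum(0)

        for p, g in ((W_enc, d_Wenc), (W_dec, d_Wdec), (b_enc, d_benc), (b_dec, d_bdec)):
            p -= lr * g

The relevant point for the "speed up inference" question is that once trained, getting the code for a new input is just one forward pass through the encoder, rather than an iterative optimization as in classic sparse coding.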
Here is the video: http://techtalks.tv/talks/whats-wrong-with-deep-learning/61639/