With all due respect, this is quite a random reading list, and appears to be more a result of the 'if "deep learning" in post.title: post.upvote()' trend on Hacker News ...

I'll pick on two entries under the "Classics" section, simply because I know the author and have used their work, so I'm in no way saying the work isn't useful (it certainly can be, in the right spot!), but it isn't "classic" and isn't what I'd recommend to early readers at all.
"Uncertainty in Deep Learning" and "Dropout as a Bayesian Approximation" were both published within the last year and a half and is a PhD thesis + paper on interpretations of neural networks in a Bayesian fashion. "Classic" for a paper + thesis less than two years old is quite a stretch even for the fast moving field of deep learning.
The same holds true for many of the other papers in "Classics", such as batch norm, which is (a) recent, (b) certainly not the only such technique (see layer norm, recurrent batch norm, ...), and (c) has real complications in implementation[1] (a rough sketch of the core pitfall is at the end of this comment).

As the simplest example, why is the original dropout paper[7] not under "Classics"? It's an elegant paper, fundamentally important for current neural networks, and far more classic than "Dropout as a Bayesian Approximation" or "Dropout Rademacher Complexity of Deep Neural Networks", both of which are listed.

I'm also highly dubious of the noted neuroscience connection: most deep learning researchers use very little from neuroscience.

Again, this list may be helpful to the creator of the repo and tailored toward their specific research direction, but it is not useful for readers from Hacker News or those aiming to get their start in deep learning. Why so many upvotes? Zero comments? Zero discussion?

If you want a book, check out the Deep Learning book[2]. If you want a course on RNNs, check out CS224d[3]. If you want a course on CNNs, check out CS231n[4]. If you want to get down and dirty in a practical software engineering way, check out Fast AI[5]. If you want summaries of select recent deep learning papers in GitHub format, check out Denny Britz's notes[8]. There are many other starting points, but those are my default suggestions.

If you really want to start learning, this isn't the right list for you, and I'd like to suggest a more sane and potentially tailored path. Seriously. If you reply with what you want, I'll do my best to suggest a starting point.

Background: I'm a deep learning researcher who publishes papers and articles[6].

[1]: http://www.alexirpan.com/2017/04/26/perils-batch-norm.html

[2]: http://www.deeplearningbook.org/

[3]: http://cs224d.stanford.edu/

[4]: http://cs231n.github.io/

[5]: http://course.fast.ai/

[6]: http://smerity.com/articles/2016/google_nmt_arch.html

[7]: https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf

[8]: https://github.com/dennybritz/deeplearning-papernotes
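PS: since I wave at batch norm's implementation complications above without explaining, here's a rough numpy sketch of the core train/test discrepancy that articles like [1] dig into. This is a toy under my own assumptions: arbitrary shapes, no learned scale/shift parameters, and a real implementation would track exponential moving averages of the statistics rather than the stand-in below.

    import numpy as np

    rng = np.random.default_rng(0)

    def batchnorm(x, mean, var, eps=1e-5):
        return (x - mean) / np.sqrt(var + eps)

    # training time: normalize with the current batch's statistics
    batch = rng.standard_normal((64, 8)) * 3.0 + 1.0
    train_out = batchnorm(batch, batch.mean(axis=0), batch.var(axis=0))

    # test time: you must switch to statistics collected during training
    # (stand-in below; real implementations track exponential moving averages).
    # If you forget and keep using per-batch statistics, a single example's
    # output now depends on whatever else happens to be in its batch.
    running_mean, running_var = batch.mean(axis=0), batch.var(axis=0)
    x = rng.standard_normal((1, 8))
    test_out = batchnorm(x, running_mean, running_var)
    print(test_out)

Forgetting that mode switch is exactly the kind of subtle bug that makes "just add batch norm" less classic-textbook-simple than the list implies.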