Tech Echo (科技回声)

A tech news platform built with Next.js, providing global tech news and discussion content.


Foundations of deep learning

101 points | by aidanrocke | almost 8 years ago

2 comments

Smerity · almost 8 years ago
With all due respect, this is a quite random reading list, and appears more a result of the 'if "deep learning" in post.title: post.upvote()' trend on Hacker News ...

I will pick on two under the "Classics" section simply as I know the author and have used their work, so am not in any way saying the work isn't useful (it certainly can be in the right spot!), but it isn't "classic" or what I'd recommend for early readers at all. "Uncertainty in Deep Learning" and "Dropout as a Bayesian Approximation" were both published within the last year and a half and are a PhD thesis + paper on Bayesian interpretations of neural networks. "Classic" for a paper + thesis less than two years old is quite a stretch even for the fast-moving field of deep learning. The same holds true for many of the other papers in "Classics", such as batch norm, which is (a) recent, (b) certainly not the only such technique (see layer norm, recurrent batch norm, ...), and (c) has complications in implementation[1].

As the simplest example, why is the original dropout paper[7] not under classics? It's an elegant paper, fundamentally important for current neural networks, and is more classic than "Dropout as a Bayesian Approximation" or "Dropout Rademacher Complexity of Deep Neural Networks", which are both listed.

I'm also highly dubious of the noted neuroscience connection - most deep learning researchers use very little from neuroscience.

Again, this list may be helpful to the creator of the repo and tailored toward their specific research direction, but it is not useful for readers from Hacker News or those aiming to get their start in deep learning. Why so many upvotes? Zero comments? Zero discussion?

If you want a book, check out the Deep Learning book[2]. If you want a course for RNNs, check out CS224d[3]. If you want a course for CNNs, check out CS231n[4]. If you want to get down and dirty in a practical software engineering way, check out Fast AI[5]. If you want summaries of select recent deep learning papers in GitHub format, check out Denny Britz's notes[8]. There are many other starting points, but those are my default suggestions.

If you really want to start learning, this isn't the right list for you and I'd really like to suggest a more sane and potentially tailored path. Seriously. If you reply with what you want, I'll do my best to suggest a starting point.

Background: I'm a deep learning researcher who publishes papers and articles[6].

[1]: http://www.alexirpan.com/2017/04/26/perils-batch-norm.html
[2]: http://www.deeplearningbook.org/
[3]: http://cs224d.stanford.edu/
[4]: http://cs231n.github.io/
[5]: http://course.fast.ai/
[6]: http://smerity.com/articles/2016/google_nmt_arch.html
[7]: https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
[8]: https://github.com/dennybritz/deeplearning-papernotes
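For readers wondering what the original dropout paper[7] actually proposes, here is a minimal NumPy sketch of "inverted" dropout, the variant most frameworks use today. This is an illustration written for this discussion, not code from the paper; the function name and signature are my own:

```python
import numpy as np

def dropout(x, p_drop=0.5, train=True, rng=None):
    """Inverted dropout: during training, zero each unit with probability
    p_drop and rescale the survivors by 1/(1 - p_drop) so the expected
    activation is unchanged. At test time the input passes through as-is."""
    if not train or p_drop == 0.0:
        return x
    rng = rng if rng is not None else np.random.default_rng()
    # Boolean mask: True where the unit survives
    mask = rng.random(x.shape) >= p_drop
    return x * mask / (1.0 - p_drop)

# Roughly half the activations are zeroed, the rest doubled, so the
# mean activation stays close to the input's mean.
acts = np.ones(10_000)
dropped = dropout(acts, p_drop=0.5, rng=np.random.default_rng(0))
```

The rescaling during training (rather than at test time, as in the paper's original formulation) is what makes inference a plain forward pass with no extra bookkeeping.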
minimaxir · almost 8 years ago
You can't take papers and reupload them to GitHub uncited from the original source publication.