科技回声 (Tech Echo)

A tech news platform built with Next.js, serving global tech news and discussion.


Theoretical Motivations for Deep Learning

93 points | by rndn | over 9 years ago

5 comments

chriskanan · over 9 years ago
There is a recent 5-page theoretical paper on this topic that I thought was pretty interesting, and it tackles both deep nets and recurrent nets: http://arxiv.org/abs/1509.08101

Here is the abstract:

This note provides a family of classification problems, indexed by a positive integer k, where all shallow networks with fewer than exponentially (in k) many nodes exhibit error at least 1/6, whereas a deep network with 2 nodes in each of 2k layers achieves zero error, as does a recurrent network with 3 distinct nodes iterated k times. The proof is elementary, and the networks are standard feedforward networks with ReLU (Rectified Linear Unit) nonlinearities.
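The flavor of that separation can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's exact construction: it assumes the standard tent-map trick, where two ReLU units compute one "tooth" and composing k such layers yields a function with exponentially many oscillations, which a small shallow net cannot match.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent_layer(x):
    # Two ReLU units plus a linear combination compute the tent map on [0, 1]:
    # t(x) = 2x for x <= 1/2, and 2 - 2x for x > 1/2.
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_tent(x, k):
    # A depth-k network with 2 ReLU nodes per layer computes the k-fold
    # iterated tent map, which oscillates on the order of 2^k times --
    # each extra layer doubles the number of teeth, while a shallow net
    # would need exponentially many units to produce the same shape.
    for _ in range(k):
        x = tent_layer(x)
    return x
```

For example, `deep_tent(0.25, 2)` follows 0.25 → 0.5 → 1.0, and one more layer sends it back to 0.0; the width never grows, only the depth.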
arcanus · over 9 years ago
1) I am curious about learning more about the statement: "Deep learning is a branch of machine learning algorithms based on learning multiple levels of representation. The multiple levels of representation corresponds to multiple levels of abstraction."

What evidence exists that the 'multiple levels of representation', which I understand to generally be multiple hidden layers of a neural network, actually correspond to 'levels of abstraction'?

2) I'm further confused by: "Deep learning is a kind of representation learning in which there are multiple levels of features. These features are automatically discovered and they are composed together in the various levels to produce the output. Each level represents abstract features that are discovered from the features represented in the previous level."

This implies to me that this is "unsupervised learning". Are deep learning nets all unsupervised? Most traditional neural nets are supervised.
dnautics · over 9 years ago
I wonder if "lots of data" is wrong. If I show you, say, twenty similar-looking Chinese characters in one person's handwriting, and the same twenty in another person's handwriting, you'll probably do a good job classifying them (though perhaps not have an easy time) with very little data.
ilurk · over 9 years ago
What tools did you use to make those nice pictures?

(Didn't read it yet, though; will do when I have time.)
memming · over 9 years ago
Nice. Very well organized.