
Training Recurrent Neural Networks [pdf]

88 points by e19293001 over 8 years ago

4 comments

e19293001 over 8 years ago
I found this in the neural-networks course on Coursera [0]. The author of this paper was discussed as an example of what recurrent neural nets can now do.

Here's the description from the slide:

> - Ilya Sutskever (2011) trained a special type of recurrent neural net to predict the next character in a sequence.
> - After training for a long time on a string of half a billion characters from English Wikipedia, he got it to generate new text.
>   - It generates by predicting the probability distribution for the next character and then sampling a character from this distribution.
>   - The next slide shows an example of the kind of text it generates. Notice how much it knows!
>
> Some text generated one character at a time by Ilya Sutskever's recurrent neural network:
>
> In 1974 Northern Denver had been overshadowed by CNL, and several Irish intelligence agencies in the Mediterranean region. However, on the Victoria, Kings Hebrew stated that Charles decided to escape during an alliance. The mansion house was completed in 1882, the second in its bridge are omitted, while closing is the proton reticulum composed below it aims, such that it is the blurring of appearing on any well-paid type of box printer.

[0] - https://www.coursera.org/learn/neural-networks/
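The generation loop the slide describes (predict a distribution over the next character, sample from it, repeat) can be sketched in a few lines. This is only an illustration: a toy bigram count model stands in for the trained RNN, and all names here (`build_bigram_model`, `sample_next`, `generate`) are made up for the sketch.

```python
import random

def build_bigram_model(text):
    # Count character bigrams; counts[a][b] ~ how often b follows a.
    counts = {}
    for a, b in zip(text, text[1:]):
        counts.setdefault(a, {}).setdefault(b, 0)
        counts[a][b] += 1
    return counts

def sample_next(model, ch, rng):
    # Sample the next character from the model's distribution after `ch`.
    dist = model.get(ch)
    if not dist:
        return rng.choice(sorted(model))  # fall back to a random known char
    chars = list(dist)
    weights = [dist[c] for c in chars]
    return rng.choices(chars, weights=weights, k=1)[0]

def generate(model, seed, length, rng):
    # One character at a time: predict a distribution, sample, append.
    out = [seed]
    for _ in range(length):
        out.append(sample_next(model, out[-1], rng))
    return "".join(out)

corpus = "the mansion house was completed in 1882 the second in its bridge"
model = build_bigram_model(corpus)
print(generate(model, "t", 40, random.Random(0)))
```

Sutskever's model replaces the bigram table with an RNN whose hidden state summarizes the whole prefix, but the sampling loop itself is the same.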
joe_the_user over 8 years ago
Also needs a 2013 designation.

Especially important as neural-net knowledge seems to be evolving quickly.

And perhaps someone can explain why this paper matters relative to the plethora of papers and approaches "out there".
the8472 over 8 years ago
> The certificate is only valid for the following names: www.cs.toronto.edu, cs.toronto.edu
eveningcoffee over 8 years ago
Considering that it is from 2013 and from Hinton's team, are the Restricted Boltzmann Machines actually necessary for this?