
Neural Turing Machines

60 points by willwill100 over 10 years ago

3 comments

teraflop over 10 years ago
Previous discussion: https://news.ycombinator.com/item?id=8487807
iandanforth over 10 years ago
I'm going to be stupid in public in the hope that someone will correct me.

1. I'm not clear on the point of this paper.

There are a lot of buzzwords and an extremely diverse set of references. The heart of the paper seems to be a comparison between Long Short-Term Memory (LSTM) recurrent nets and their NTM nets. But they don't expose the network to very long sequences, or to sequences broken by arbitrarily long delays, which are what LSTM nets are particularly good at. They seem to make the jump from "LSTM nets are theoretically Turing complete" to "LSTM nets are a good benchmark for any computational task."

2. The number of training examples seems huge.

For many of the tasks they trained over hundreds of thousands of sequences. This seems like very, very slow learning. If I'm meant to interpret these results as a network learning a computational rule (copying, sorting, etc.), is it really that impressive if it takes 200k examples before it gets it right? (Not sarcasm, I really don't know.)
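
For concreteness, here is a minimal sketch of the kind of LSTM copy-task baseline under discussion, assuming PyTorch; the sequence length, hidden size, and training-step count are illustrative placeholders, not the paper's actual configuration:

    # Minimal LSTM copy-task sketch (illustrative; not the paper's setup).
    # The net sees a random bit sequence, then a delimiter, then must
    # reproduce the sequence from its recurrent state alone.
    import torch
    import torch.nn as nn

    SEQ_LEN, BITS, HIDDEN = 10, 8, 128

    class CopyLSTM(nn.Module):
        def __init__(self):
            super().__init__()
            self.lstm = nn.LSTM(input_size=BITS + 1, hidden_size=HIDDEN,
                                batch_first=True)
            self.out = nn.Linear(HIDDEN, BITS)

        def forward(self, x):
            h, _ = self.lstm(x)
            return self.out(h)

    def make_batch(batch_size=32):
        seq = torch.randint(0, 2, (batch_size, SEQ_LEN, BITS)).float()
        inp = torch.zeros(batch_size, 2 * SEQ_LEN + 1, BITS + 1)
        inp[:, :SEQ_LEN, :BITS] = seq      # presentation phase
        inp[:, SEQ_LEN, BITS] = 1.0        # delimiter flag
        return inp, seq                    # recall-phase input is all zeros

    model = CopyLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(200_000):  # the "hundreds of thousands of sequences" at issue
        inp, target = make_batch()
        logits = model(inp)[:, SEQ_LEN + 1:, :]  # predictions during recall
        loss = loss_fn(logits, target)
        opt.zero_grad()
        loss.backward()
        opt.step()

Whether 200k such sequences counts as "slow" is exactly the question raised above; the sketch just makes the scale concrete.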
macrael over 10 years ago
Does a "typical" neural network not have any storage to speak of? When I've seen examples of neural networks working, it's seemed like they work in cycles in some way, with the states of each "neuron" affecting the state of others. Is that not potentially storage?
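
The "cycles" described here correspond to a recurrent network's hidden state, which does act as a limited form of storage; a tiny illustrative sketch, again assuming PyTorch:

    # A recurrent cell's hidden state carries information forward, so each
    # step's output depends on everything fed in so far. Illustrative only.
    import torch
    import torch.nn as nn

    cell = nn.RNNCell(input_size=4, hidden_size=16)
    h = torch.zeros(1, 16)        # hidden state starts empty
    for t in range(5):
        x = torch.randn(1, 4)     # new input each timestep
        h = cell(x, h)            # h now reflects inputs 0..t
    # h is the only "memory": fixed-size and overwritten every step, which
    # is what the NTM's external, addressable memory is meant to improve on.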