
Turing Machines Are Recurrent Neural Networks (1996)

63 points by jaybosamiya over 9 years ago

3 comments

csantini over 9 years ago
TLDR: Given any Turing Machine you can build a Recurrent Neural Network that does the same thing.

You can then apply known facts about Turing machines, e.g. the Halting Problem, so you can't generally predict when the neural network is going to reach the final state.
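To make the shape of that argument concrete, here is a toy sketch in Python. It is not the paper's construction, and the machine below is finite, but it shows the idea of running a state-transition system as a recurrent update v_{t+1} = step(W v_t), where the only way to learn whether the "halt" node ever switches on is to keep iterating.

```python
import numpy as np

# Toy sketch: a tiny state machine realized as a recurrent update
#   v_{t+1} = step(W @ v_t)
# States are one-hot vectors; column j of W is the one-hot code of the
# successor of state j.  (A genuine Turing-machine embedding would also
# have to encode an unbounded tape in the node values, which is exactly
# what makes "when does it reach the final state?" undecidable.)

step = lambda x: (x >= 0.5).astype(float)  # hard-threshold "neuron"

W = np.zeros((4, 4))
W[1, 0] = W[2, 1] = W[0, 2] = 1.0  # states 0 -> 1 -> 2 -> 0 -> ... (a loop)
W[3, 3] = 1.0                      # state 3 is the final state (absorbing)

v = np.array([1.0, 0.0, 0.0, 0.0])  # start in state 0
for t in range(100):
    v = step(W @ v)
    if v[3] == 1.0:                 # the only way to find out: run it
        print(f"reached the final state at step {t}")
        break
else:
    print("did not reach the final state within 100 steps")
```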
kazinator over 9 years ago
The paper seems to show that recurrent networks are Turing Machines, not the other way around.

They are finite state machines, unless the variables stored at the nodes are of unlimited size (so that a pushdown stack or whatever of unlimited depth could be represented in a variable). That goes against the "spirit", if you will, of neural networks, which maintain some small amount of state at each node.

All finite state machines are Turing Machines, of course. Just the reverse isn't true.

I don't see the point of the Example, which clearly shows a representation of a computation in the form of a finite-sized matrix with bounded values, iterating on a vector. Such a contraption is not a Universal Turing Machine no matter what values you stuff into it and what initial vector you use.

Ironically, the paper was *written on* a computer built on sequential logic, which is a network of combinational gates with feedback: essentially a recurrent neural network, and its purpose is evidently to argue that such a thing can compute.

Ah, and we have this flight of fancy toward the end:

*The variables can be continuous, not only integer valued.*

Come on, there is no such thing. Continuous variables are not *constructable* (only some of them are). They are only an abstraction in math. A continuous variable effectively consists of infinite bits. Even those that are called computable, like the number pi, still cannot be computed to completion. What "computable" means is that there is an algorithm which makes progress toward a more and more accurate value: for any specific integer n, the function "find the n-th digit of pi" completes in a finite number of steps. That doesn't mean we can have a continuously valued variable and stick pi in it.

Moreover, in this fantasy world where we have true continuous variables, why can't we also have them in the original language whose elements are equated with the neural network? After all, *"for each variable V in the program, there exists a node Nv"*. If node Nv is continuous, so must be the variable V in the program.

What the paper really shows is that the syntax of some abstract algorithmic, imperative steps can be transliterated into the syntax of a network with feedback which does the same thing.
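For what it's worth, the "variable V → node Nv" transliteration described above can be sketched in a few lines of Python. This is only an illustration under my own assumptions (a saturated gate and a clamp-at-zero nonlinearity; the paper's exact activation may differ), but it shows the point that the node values simply are the program's variable values:

```python
import numpy as np

# Sketch of transliterating the toy program
#     while a > 0:  a -= 1;  b += 1
# into a network with feedback: one node per program variable, updated by a
# fixed piecewise-linear map each "tick".  (The saturated gate and the clamp
# at zero are assumptions here, not necessarily the paper's construction.)

relu   = lambda x: np.maximum(x, 0.0)    # clamp at zero
satlin = lambda x: np.clip(x, 0.0, 1.0)  # saturated-linear unit

def network_step(v):
    a, b = v
    gate = satlin(a)                     # 1 while a >= 1, 0 once a has hit 0
    return relu(np.array([a - gate, b + gate]))

v = np.array([5.0, 0.0])                 # a = 5, b = 0
for _ in range(10):
    v = network_step(v)
print(v)                                 # [0. 5.] -- same result as the loop
```

Nothing here goes beyond a finite, bounded iteration unless the node values are allowed unbounded magnitude or precision, which is exactly the objection above.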
j2kun over 9 years ago
> An interesting question arises, for example, whether the class of NP-complete problems could be attacked more efficiently in the network environment!

It is well known that training even a two-layer feed-forward neural network is NP-hard, so I doubt that this is a reasonable direction. This was even known in 1993. [1]

[1]: https://people.csail.mit.edu/rivest/pubs/BR93.pdf