Last time I was at UMass Amherst it was Bong Day or something, and people were out blithely blazing up on the lawn of the university square.<p>Fittingly, this article reads like a stoner's ramblings. It doesn't get into what exactly a "Super-Turing" machine is capable of that a Turing machine is not. Some googling turned up some theoretical possibilities (oracles and such) which do not appear to be physically buildable.
Here's the Science paper that describes super-Turing computation:<p><a href="http://binds.cs.umass.edu/papers/1995_Siegelmann_Science.pdf" rel="nofollow">http://binds.cs.umass.edu/papers/1995_Siegelmann_Science.pdf</a><p>I haven't had time to do anything more than skim it. My initial bogometer reading is not quite pegged, but it's close.
The article did not make sense so I dug up some of the papers in question. Those do not make sense either.<p>It appears to be a poorly executed and overstated attempt at (re-)discovering high-order inductive computational models. These are equivalent to Turing models unless you happen to have a hypercomputer at your disposal.<p>The endless parade of low-quality claims like this is why no one takes AI research seriously, even the quality work.
I just hate the headlines. In one of the first lectures of my postgrad program, our prof told us how to examine research papers and claims critically. He said, "If anyone claims to have come up with something 'revolutionary', you should be ten times more skeptical about his claims. It is very likely that the person doesn't know what he is talking about."<p>Of course, in these cases the news reporters are more to blame than the scientists who worked on the problem.
The article is very confused, but I think what it is trying to say is that the researchers are building a hardware-based recurrent neural network that operates on real numbers (i.e. analog). Hence it would be exponentially more powerful than a Turing machine. The possibility of building a physical device that can harness the reals to infinite precision is a <i>big</i> assumption.<p>So this device would be an example of a hypercomputer. Its existence would disprove the Church-Turing thesis. It would also be very difficult to verify that it was actually calculating what it claimed [1]. As a hypercomputer it could by definition solve the halting problem. It is also more powerful than a quantum computer.<p>Simple argument: a Turing machine can <i>inefficiently</i> simulate a quantum computer, but by definition a Turing machine cannot simulate a hypercomputer, which can compute incomputable functions. And since it could tell whether any program will halt, it could compute Chaitin's constant. One consequence of that is the resolution of the twin prime conjecture, Goldbach's conjecture, and other open number theory problems. It should also be able to compute Solomonoff's universal prior and hence act in a Bayes-optimal manner. Strong AI.<p>If what is claimed can be done, then this is a very big deal.<p>Some other implicit or explicit arguments in this article:<p>- The human brain is more than a Turing machine<p>- A Turing machine will not be able to realize AI<p>- <i>"Classical computers work sequentially and can only operate in the very orchestrated, specific environments for which they were programmed."</i><p>I do not buy any of those arguments, and I am not sure what that last quote is supposed to mean.<p>[1] <a href="http://www.complex-systems.com/pdf/18-1-6.pdf" rel="nofollow">http://www.complex-systems.com/pdf/18-1-6.pdf</a><p><a href="http://www1.maths.leeds.ac.uk/~pmt6sbc/docs/davis.myth.pdf" rel="nofollow">http://www1.maths.leeds.ac.uk/~pmt6sbc/docs/davis.myth.pdf</a>
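To make the halting-oracle step concrete, here is a minimal sketch (my own illustration, not from any of the papers): asking a halting oracle whether an unbounded counterexample search halts would settle Goldbach's conjecture in one query. On a real computer we can only run the bounded version of the search.

```python
# Sketch: reducing Goldbach's conjecture to one halting-problem query.
# Only the *bounded* search below is runnable; a hypercomputer is,
# by definition, a device that could answer the unbounded question.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_holds(n):
    """True if the even number n > 2 is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def first_counterexample(limit):
    """Bounded search: first even n in [4, limit] violating Goldbach, else None.
    A halting oracle asked "does the unbounded version of this loop ever
    return?" would resolve the conjecture outright."""
    for n in range(4, limit + 1, 2):
        if not goldbach_holds(n):
            return n
    return None
```

The same one-query trick works for any conjecture that is refutable by a single finite counterexample, which is why a halting oracle knocks over so many open number-theory problems at once.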
From an earlier paper:<p>> such super-Turing capabilities can only be achieved in cases where the evolving synaptic patters [sic] are themselves non-recursive (i.e., non Turing-computable)<p>"Interactive Evolving Recurrent Neural Networks are Super-Turing", 2012, Jérémie Cabessa
<a href="http://jcabessa.byethost32.com/papers/CabessaICAART12.pdf" rel="nofollow">http://jcabessa.byethost32.com/papers/CabessaICAART12.pdf</a><p>So: create a neural network whose weights change according to a non-Turing-computable pattern, and its output might not be Turing-computable either.
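The flip side of that quote is easy to see in code. Below is a toy evolving recurrent neuron (my own illustration; the names and update rule are not from the Cabessa paper): because the weight-update function is itself an ordinary computable function, the whole system is trivially Turing-simulable. The super-Turing claim only gets off the ground if the weight sequence is non-computable, and nothing physical is known to supply such a sequence.

```python
# Toy "evolving" recurrent neuron. evolve() is a computable update
# rule, so this entire system is an ordinary program -- nothing
# super-Turing happens unless the weight sequence is non-computable.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run(inputs, w0=0.5, evolve=lambda w, t: w + 0.01 * math.sin(t)):
    """Recurrent neuron whose single weight changes at every step."""
    state, w = 0.0, w0
    trace = []
    for t, x in enumerate(inputs):
        state = sigmoid(w * state + x)   # recurrent step
        w = evolve(w, t)                 # computable evolution => simulable
        trace.append(state)
    return trace
```

Swap `evolve` for a lookup into a genuinely non-recursive sequence and the simulation argument breaks, which is exactly the (unbuildable) case the paper relies on.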
Wish there were a bit more information here. The article's a little breathless without telling us exactly how this practically improves on ANNs and the Turing model, and I couldn't find a more accurate description of the paper.<p>Hypercomputation models that depend on things like infinite-precision real numbers have been around for a while, including in Siegelmann's work, so I'm curious to know what specific advance is being reported here in "Neural computing".
For anyone else who's slightly miffed by the presence of three columns, only one of which contains the actual story:<p><a href="http://www.readability.com/articles/dbycne79" rel="nofollow">http://www.readability.com/articles/dbycne79</a>