It's an interesting article, but I feel a bit bait-and-switched by the title; this is a definition of consciousness so weak that it isn't very interesting. I don't think I can accept a definition of consciousness that boils down to "shows the same perceptual illusions as the human perceptual system".
Fascinating study with some fairly straightforward follow-on experiments. I worry about these things getting the same bad rap that plagued 'fuzzy' logic because the investigators try too hard to describe them in human terms. So rather than saying "a new selection mechanism for data assimilation, based on mechanisms that have been proposed for neural processes," they say "It's almost conscious!", which sets off a whole bunch of distractions that help neither the underlying algorithm development nor an understanding of its applicability to things like machine learning or process control.
Strange; the "button test" says humans take 200ms to press a button once a green light flashes.<p>I set that experiment up on Engineering Days at my college. Kids who put their faces right up to the light could press the button in under 30ms. The human detector response seemed to depend upon the amplitude of the neurological signal!
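The measurement itself needs nothing fancy; a minimal sketch (Python, with a fixed `time.sleep` standing in for the human subject, since there's no real light or button attached here) is just two timestamps:

```python
import time

def measure_reaction(press_delay_s):
    """Time from the 'green light' to a simulated button press.

    press_delay_s is a stand-in for the subject's reaction; in a real
    rig you'd block on the button's input line instead of sleeping."""
    t_light = time.perf_counter()        # green light flashes
    time.sleep(press_delay_s)            # subject reacts
    t_press = time.perf_counter()        # button press detected
    return (t_press - t_light) * 1000.0  # latency in milliseconds

latency_ms = measure_reaction(press_delay_s=0.2)
print(f"reaction time: {latency_ms:.0f} ms")
```

In a real setup you'd also randomize the delay before the light comes on, so subjects can't anticipate the flash.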
In case someone's interested in why this doesn't tell us much about what's so ridiculously <i>mysterious</i> about consciousness and subjectivity:<p><a href="http://consc.net/chalmers/" rel="nofollow">http://consc.net/chalmers/</a> [David Chalmers's website]<p><a href="http://instruct.westvalley.edu/lafave/nagel_nice.html" rel="nofollow">http://instruct.westvalley.edu/lafave/nagel_nice.html</a> [Classic paper by Thomas Nagel]
"Functional consciousness" is a fine goal if it helps the bot behave in a way that is useful. I don't know how we would even know if we ever achieved subjective consciousness in a computer, except as it says at the end of the article: 'We can't even prove that each of us isn't the only self-aware being in a world of zombies. "Philosophers have been dealing with that for more than 2000 years," says Franklin. Perhaps we will simply attribute subjectivity to computers once they become sufficiently intelligent and communicative, he says.'
On the other hand, it's a little flaky that they tuned a parameter to get human-like response times. That's just statistical fitting.<p>We built it to hit this number, and then we're surprised when it does!<p>Cool calculation process, though.
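The circularity is easy to make concrete. In a toy sketch (invented numbers, not the article's actual model), the "match" to human reaction time is guaranteed by construction:

```python
TARGET_MS = 200.0  # human reaction time the model is supposed to reproduce

def model_response_ms(cycle_ms, n_cycles=10):
    # Hypothetical model: total response time is just n_cycles
    # processing cycles of cycle_ms each.
    return cycle_ms * n_cycles

# The "fit": choose the free parameter so the model hits the target.
cycle_ms = TARGET_MS / 10

# The "result": the model reproduces human reaction times. Of course it does.
print(model_response_ms(cycle_ms))  # 200.0
```

A fitted parameter only becomes evidence if the model then predicts something it wasn't tuned on.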
Where's the source code for this?<p>There are related papers here, <a href="http://ccrg.cs.memphis.edu/papers.html" rel="nofollow">http://ccrg.cs.memphis.edu/papers.html</a>, but I'd like to see some code.<p>This is reminiscent of Funk2, <a href="http://www.funk2.org/about/" rel="nofollow">http://www.funk2.org/about/</a>; it would be interesting to compare and contrast the two approaches.