The Turing Test was a very clever way of describing an AI without having to get into dead end philosophical arguments about what is or isn't intelligence (Ants have very complex social structure and engineering abilities - but are they intelligent?).<p>Turing picked something uniquely human and used it as a baseline. Unfortunately, what we got is passing the cargo cult turing test, as described in the Chinese Room Experiment.<p>I think what we all had hoped for, however, was HAL. What we are going to get is more and more iterations of cleverbot.
One thing I've always wondered about turing tests: Wouldn't AI's need to lie a hell of a lot in order to pass it.<p>For example, if I asked someone to tell me the capital city of every country in the world, I'd be very surprised if they could. However, a half decent AI could do this easily. But what if I pushed it further and started to ask really complex maths questions (something computers are much better at than humans) then It would become clear very quickly that I'm talking to a machine.<p>Also, humans have holes in their knowledge. For example, given the question "Who is the prime minister of the Netherlands?" the answer for most people is going to be, "I don't know". Or what about "Which team won the first ever FA cup?". Despite not knowing the answer (The Royal Electrical & Mechanical Engineers) most people would hazard a guess (Manchester United, Liverpool) and be wrong.<p>Programming an AI to play dumb would be relatively easy. But what use is an AI that lies? Passing the test may well be possible, but what use is Artificial intelligence that pretends to as dumb as humans?
I find it hilarious that <i>half an hour</i> after this is posted here, there are no comments. Because, you're right, O HN Reader: they're not even close. But you already knew that without reading TFA, didn't you? And just to satisfy your curiosity, they're talking about Watson-style machines, but then following this talk with quotes about "huge challenges " remaining, which are apparently too insignificant to mention in the title.
Like some others have said, the Turing Test is more a human mimicry test than a test of intelligence or consciousness. We humans love to anthropomorphize, so tricking us into believing a machine is human shouldn't be how we gauge the effectiveness of our AI.<p>I ran across these "Fundamental Principles of Cognition" that might do a better job:<p>Principle 1. Object Identification (Categorization)<p>Principle 2. Minimal Parsing ("Occam’s Razor")<p>Principle 3. Object Prediction (Pattern Completion)<p>Principle 4. Essence Distillation (Analogy Making)<p>Principle 5. Quantity Estimation and Comparison (Numerosity Perception)<p>Principle 6. Association-Building by Co-occurrence (Hebbian Learning)<p>Principle 6½. Temporal Fading of Rarity (Learning by Forgetting)<p>See: <a href="http://www.foundalis.com/res/poc/PrinciplesOfCognition.htm" rel="nofollow">http://www.foundalis.com/res/poc/PrinciplesOfCognition.htm</a><p>Also, Hofstadter suggests some similar "essential abilities for intelligence".<p>1. To respond to situations very flexibility.<p>2. To make sense out of ambiguous or contradictory messages.<p>3. To recognize the relative importance of different elements of a situation.<p>4. To find similarities between situations despite differences, which may separate them.<p>5. To draw distinctions between situations despite similarities, which may link them.
Despite its hyped-up title, this Wired news piece contains no real news of scientific advances. In fact, I can summarize it in one sentence: "with more and more data coming online, and with sophisticated techniques for collecting, organizing, and processing all this data, computers might soon be able to fool the Turing Test."
Here is one of my favourite articles on the Turing Test.<p>The method described in this article appears similar in its approach, "Suppose, for a moment, that all the words you have ever spoken, heard, written, or read, as well as all the visual scenes and all the sounds you have ever experienced, were recorded and accessible".<p><a href="https://sites.google.com/site/asenselessconversation/" rel="nofollow">https://sites.google.com/site/asenselessconversation/</a>
I think of the Turing test not as an actual experiment that can be performed, but more of a first crack at a working definition of intelligence.<p>Sort of like Shannon said "Let's leave 'meaning' to the psychologists, and define 'information' based on properties inherent in the message itself" and ended up revolutionizing information theory.<p>Turing is saying "Stop bickering over 'comprehension' and 'intent'. Can we just agree that if a machine can fool an intelligent human being into thinking it is also an intelligent human being - based only on its information output rather than its physical shape - that machine deserves the label 'intelligent'?"<p>And I agree. Philosophers can argue about the internal state of that mind all they want. But if I can converse and crack jokes with my new computer buddy, I have no qualms about calling him intelligent. At least until he blue screens and finally fails the test by spitting out a hex dump.
We already have a sort of AI that can be used for real world applications - it's called the Internet. Using Google, Facebook, Wikipedia and dozens of other sites, it's relatively easy to create a robot that can do quite a lot of things - the problem is that creating the actual physical body of the robot is expensive - humans are cheaper and still do everything better.<p>We don't even need AI, we need robots for specific tasks that would also be programmed to work around any potential issues (most of which can be identified and programmed if the field of application is narrow enough) - making them create workarounds/solutions for new problems would be awesome and all, but it's not necessary, IMO.
A few months ago there was a rumor about a Google X AI project that passes the Turing Test 93% of the time in an IM conversation:<p><a href="http://www.webpronews.com/is-google-x-all-about-highly-intelligent-robots-2011-12" rel="nofollow">http://www.webpronews.com/is-google-x-all-about-highly-intel...</a>
If siri 5.0 or Cyc 1000000.0 passes the Turing test, it will be the same story all over again...the goal posts will be moved one more time. People will say that passing the Turing test is not really a mark of true intelligence, its just an imitation!
Ah, another rehash the "AI springs forth from complexity" argument. This is entertainingly laid out in "The Moon is a Harsh Mistress", "Colossus: The Forbin Project", "The Adolescence of P1", "Man Plus", etc.
We are moving into a post-literate world. "Artificial Intelligence" is a field of endeavor, not an entity that can try to pass a test. The headline is not as bad as movie stars and news anchors using objective pronouns as the subject of a sentence, but it still shows that the language is changing, and not for the better, IMHO.