> Passing the Turing Test is not a sensible goal for
Artificial Intelligence<p>Yes, but I never understood it as that. I thought it was more a philosophical thought experiment about humans and a reply to people who don't/didn't believe in (strong) AI. "You think humans are intelligent and have consciousness - but how do you know that another person has that?"
Since there are no comments after 4 hours, I'll write a longer one.<p>I read this paper a while back and learned that the Turing Test, even if it was purely a test for imitating a human rather than specifically a woman, is still greatly underspecified. It depends a lot on who is being tested, what questions they ask, how long they can have the conversation, and how the human being tested responds.<p>I would like to propose a different test that is more clearly specified and has more modern-day relevance, which I call the imPoster test.<p>The imPoster Test evaluates whether a human can judge whether a sample of social sharing activity is from someone they already know, or whether it was generated by computer software (a bot) to imitate that person, such as a Twitter or Instagram feed. The relationship between the judge and the person they're evaluating is therefore controlled rather than random. It can be set up like the original Turing Test Imitation Game, where the judge sees the feeds from both the bot and their friend but doesn't know which is which. This test has a few desirable properties compared to the Turing Test:<p>1) Variation from the conversation is reduced, as the communication is purely one-way, and the judge is trying to determine if it is a known specific person rather than a random person (who can exhibit an unpredictably wide range of behaviors).<p>2) We're testing a form of digital identity of a person, which can build on more parts of artificial intelligence (user modeling, computer vision, NLP, social graph), rather than ad-hoc conversation, which is mostly NLP.<p>3) Rather than a binary "pass" or "fail" decision, it is possible to compute a score based on how many posts a judge correctly labels as belonging to the poster.<p>I'd be interested in hearing if others think this is an interesting challenge as an AI benchmark.
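To make point 3 concrete, here's a minimal sketch of how such a per-post score could be computed. The data layout is purely hypothetical (the proposal doesn't specify one): each post gets a ground-truth label for whether it came from the real person, paired with the judge's guess.

```python
def imposter_score(judgments, truths):
    """Fraction of posts the judge labeled correctly.

    judgments: list of bools, the judge's guess per post
               (True = "this is my friend's post").
    truths:    list of bools, ground truth per post
               (True = actually written by the friend).
    """
    if len(judgments) != len(truths):
        raise ValueError("each judgment needs a ground-truth label")
    correct = sum(j == t for j, t in zip(judgments, truths))
    return correct / len(truths)

# Example trial: judge sees 5 posts, 3 genuinely from the friend.
truths    = [True, False, True, True, False]
judgments = [True, True,  True, False, False]
print(imposter_score(judgments, truths))  # → 0.6
```

A score like this also makes bots comparable to each other (lower judge accuracy means a more convincing imitation), which a bare pass/fail verdict doesn't.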
> Why should we take it as our goal to build something which is just like us? A dog would never win any imitation game; but there seems to be no doubt that dogs exhibit cognition, and a machine with the cognitive and communicative abilities of a dog would be an interesting challenge for AI (and might usefully be incorporated, for example, into automobiles.)<p>This is such a good point.<p>After all it's called the "imitation game" -- is it really about AI?
It seems like this is the kind of grand theory that harms actual research and development: instead of deriving theory from experience, we force our experience to conform to a pre-set theory. This is a top-down approach when, in science and engineering, we might rather agree on a bottom-up design (to some degree, obviously).
The Turing test may be a decent first approximation, but not the long-term goal. Consider:<p>1. We can't give a satisfactory definition of intellect or consciousness, but
2. We know that humans have it.<p>Therefore the Turing test becomes the natural choice.
It's just like blind auditions for musicians.
There is a recent related article from Amazon Science if anyone is interested:<p>"Does the Turing Test pass the test of time?"
<a href="https://www.amazon.science/latest-news/does-the-turing-test-pass-the-test-of-time" rel="nofollow">https://www.amazon.science/latest-news/does-the-turing-test-...</a>