One thing I've always wondered about turing tests: Wouldn't AI's need to lie a hell of a lot in order to pass it.<p>For example, if I asked someone to tell me the capital city of every country in the world, I'd be very surprised if they could. However, a half decent AI could do this easily. But what if I pushed it further and started to ask really complex maths questions (something computers are much better at than humans) then It would become clear very quickly that I'm talking to a machine.<p>Also, humans have holes in their knowledge. For example, given the question "Who is the prime minister of the Netherlands?" the answer for most people is going to be, "I don't know". Or what about "Which team won the first ever FA cup?". Despite not knowing the answer (The Royal Electrical & Mechanical Engineers) most people would hazard a guess (Manchester United, Liverpool) and be wrong.<p>Programming an AI to play dumb would be relatively easy. But what use is an AI that lies? Passing the test may well be possible, but what use is Artificial intelligence that pretends to as dumb as humans?