> However, work on artificial intelligence, especially general intelligence, will be improved by a clearer idea of what intelligence is. One way is to give a purely behavioural or black-box definition. In this case we have to say that a machine is intelligent if it solves certain classes of problems requiring intelligence in humans, or survives in an intellectually demanding environment.

What if my expectation is that it be able to decide whether an algorithm, given some inputs, will halt? Or that it be able to decide whether a proposition is true within an axiomatic system? Maybe the better question is why Turing himself was optimistic about intelligent machines.
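(A minimal sketch of the standard diagonal argument for why that first expectation can't be met by any machine; `halts` and `paradox` are hypothetical names introduced here, and the decider is assumed only to derive the contradiction.)

```python
# Assume, purely for contradiction, that a perfect halting decider existed:
def halts(program, arg):
    """Hypothetical: True if program(arg) eventually halts, False otherwise.
    No such total function can exist; this stub only marks the assumption."""
    raise NotImplementedError("assumed for the sake of contradiction")

def paradox(program):
    """Do the opposite of whatever the decider predicts about program(program)."""
    if halts(program, program):
        while True:      # predicted to halt -> loop forever instead
            pass
    else:
        return           # predicted to loop -> halt immediately

# Asking halts(paradox, paradox) forces a contradiction:
#   True  -> paradox(paradox) loops forever, so the decider was wrong;
#   False -> paradox(paradox) halts, so the decider was wrong.
# Either way the assumed decider fails, so no such machine can exist.
```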