Silly. So very silly.<p>First of all: he doesn't use his definition of intelligence consistently. According to his usage of the word, one being can be truly intelligent while another can _appear_ perfectly intelligent but in reality be a dumb machine that merely simulates intelligence. That's his chess example. But he _defines_ AI as the science of making machines do things that lead us to believe they are intelligent. A chess computer that outsmarts me most definitely fits that behavioural definition.<p>He repeatedly claims that "Technological artifacts do not have a will or a desire". There is no reason to assume that you need some primal force to get a will or a desire (perhaps any sufficiently complex system will develop one as a side effect), and there is no reason to assume a will or a desire can't be perfectly emulated.<p>He claims that there are no signs that computer processing speed will eventually overtake that of the human brain. I say that computer processing speed is undeniably improving rapidly, and that the number of tasks that computers can do is rapidly increasing. Unless there is a hard limit somewhere, the default assumption should be that we will eventually be overtaken.<p>Computer AIs can play Chess, Checkers and Super Mario. They can create art and compose music. They can drive some vehicles, and land planes at night. How about science? Some proofs are found and verified almost entirely by computer. Some proofs are so complex they can only be checked by computer. In many research fields a single human is almost guaranteed to contribute nothing; a computer, on the other hand, can probably brute-force its way to many new discoveries.<p>He again makes the claim that computers are innately unable to feel compassion or empathy, only to finish with the dumbest remark of the article: "I don't think they will be very good at faking fouls". Whether to fake a foul is a matter of basic game theory (see the toy sketch below). And I think that robots will completely crush humans at soccer, even if they lack strategy. The moment robots are good enough to take the ball away from a pro human player, they will be able to do so consistently. Even if they run slower and shoot only semi-accurately, they will never make big mistakes. And history shows that in any game where computers can compete, humans have to play (almost) perfectly to even stand a chance (see: Chess / Checkers / Poker). We meat bags with our 100ms+ response times will never be in the same league as robots: either we will be far superior to them, or they will run circles around us.
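To spell out that game-theory remark: deciding whether to fake a foul is just an expected-value calculation, which is exactly what machines are good at. A toy sketch in Python; every number in it is an invented assumption, purely for illustration:

```python
# Toy expected-value model of "faking a foul" (all payoffs and
# probabilities are made-up assumptions, not real data).

def dive_ev(p_awarded: float, gain_foul: float,
            p_booked: float, cost_card: float) -> float:
    """Expected payoff of diving: the referee awards the foul with
    probability p_awarded and books the diver with probability p_booked."""
    return p_awarded * gain_foul - p_booked * cost_card

# Hypothetical payoffs: a free kick near the box is worth +1, a yellow card -2.
ev = dive_ev(p_awarded=0.3, gain_foul=1.0, p_booked=0.1, cost_card=2.0)
print(ev)  # 0.3*1.0 - 0.1*2.0 ≈ 0.1 > 0: under these numbers, diving pays off
```

A robot that estimates those probabilities from match data would dive exactly as often as the numbers justify; nothing about faking fouls is beyond a machine.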
His sticking point is that the brain or mind could be a physical system that is not computable, i.e. not equivalent to a Turing machine. But if we accept the assumption that the mind is a physical system, then it would only be a matter of time before we found the reason for the difference in computational ability (perhaps something like Penrose's quantum arguments for the mind), and those systems would still be within reach of implementation in some kind of computer. The brain is, after all, a combination of proteins, amino acids, and a few other chemicals; at worst we would have to move to chemical computers (which are already being researched).<p>So even following his own arguments, it would only be a matter of time before sentient machines equivalent to our own minds are created.<p>His arguments against AI are mostly 'just because' and offer little evidence.
You should probably just read Eliezer Yudkowsky on this topic. <a href="http://yudkowsky.net/singularity/ai-risk" rel="nofollow">http://yudkowsky.net/singularity/ai-risk</a>
<i>Soccer robots can move quickly, punch the ball hard and get it accurately into the net, but they cannot look at the pattern of the game and guess where the ball is going to end up.</i><p>I'm pretty sure that computers already play sports (minus the robots) acceptably well.
"But accepting mind as a physical entity does not tell us what kind of physical entity it is. It could be a physical system that cannot be recreated by a computer." Child Please. Why is anybody even listening to this kook?
Just as an aside, Noel Sharkey taught a neural networks course I took, but everything I learnt about them came from implementing a back-prop net in Occam for another class!
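For anyone who hasn't implemented one, a back-prop net really is just a couple of dozen lines. A minimal NumPy sketch (not the original Occam code, obviously) that learns XOR with a single hidden layer:

```python
# Minimal one-hidden-layer back-propagation network, trained on XOR
# with full-batch gradient descent (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: deltas from the chain rule (squared-error loss).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates (learning rate 0.5).
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should be close to [[0], [1], [1], [0]]
```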
In this interview he is suggesting that people stop a self-fulfilling prophecy before it is too late. I think his fear is that if people believe superior AI is inevitable, researchers will work from that assumption as if it were an established fact, eventually creating systems which might pose threats or general harm to humanity. Other than that, only time will tell who is more correct.
>I'm an empirical kind of guy, and there is just no evidence of an artificial toehold in sentience. It is often forgotten that the idea of mind or brain as computational is merely an assumption, not a truth. When I point this out to "believers" in the computational theory of mind, some of their arguments are almost religious. They say, "What else could there be? Do you think mind is supernatural?" But accepting mind as a physical entity does not tell us what kind of physical entity it is. It could be a physical system that cannot be recreated by a computer.<p>This sounds like the same kind of argument the "intelligent design" people use. "What else could there be?" is not a religious argument; it's a perfectly reasonable question. And to suggest that there are physical systems that cannot be recreated by computers shows a lack of fundamental theoretical understanding of computation: the physical Church-Turing thesis is precisely the claim that any realizable physical process can be simulated by a Turing machine.
While I acknowledge the possibility that "real" intelligence may be inherently non-computational, it's ridiculous to claim that it "equally might be" so (which I take to mean roughly 50% probable). I claim 99.9% certainty that a computational process will demonstrate general human intelligence within the next hundred years, given that civilization isn't decimated before then. We don't even have to be clever enough to organize superpowerful computers into rudimentary general intelligence; we just need to understand our own brains at a fine enough level to emulate them computationally.<p>And, of course, you don't need anything close to general intelligence to do well in sports.
This was posted 5 days ago, 3 days before this reposting. At that time I left this comment: I thought this might be something like Eliezer's argument against developing a GAI until it can be made provably Friendly; instead I just got arguments exactly like the ones from 1903 that said heavier-than-air flight by men was impossible. Go back and read some of them; the arguments were almost identical. Some of the arguments are currently true, but some of them amount to "I can't do it, and no one else has done it, therefore there must be some fundamental reason it can't be done".
Some very interesting points and discussion here; what surprises me is that there is no mention yet of the major thought experiment addressing many of the AI/intelligence arguments made here: the 29-year-old Chinese Room paper by John Searle:<p><a href="http://en.wikipedia.org/wiki/Chinese_room" rel="nofollow">http://en.wikipedia.org/wiki/Chinese_room</a><p>In sum:<p>(A1) "Programs are formal (syntactic)."
(A2) "Minds have mental contents (semantics)."
(A3) "Syntax by itself is neither constitutive of nor sufficient for semantics."
(C1) "Programs are neither constitutive of nor sufficient for minds."
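Worth noting: the syllogism is deductively valid only under a strong reading of A3 (formal systems have no semantics at all, rather than merely syntax not guaranteeing semantics). A minimal Lean sketch, with predicate names of my own choosing, makes this explicit:

```lean
-- Sketch of Searle's syllogism (predicate names are hypothetical).
-- A3 is given the strong reading "formal things have no semantics";
-- under the weaker reading "syntax does not entail semantics",
-- C1 no longer follows deductively.
variable {Thing : Type} (Program Formal Semantic Mind : Thing → Prop)

theorem chinese_room
    (A1 : ∀ x, Program x → Formal x)       -- programs are formal
    (A2 : ∀ x, Mind x → Semantic x)        -- minds have semantic contents
    (A3 : ∀ x, Formal x → ¬ Semantic x)    -- strong reading of A3
    : ∀ x, Program x → ¬ Mind x :=         -- C1: programs are not minds
  fun x hp hm => A3 x (A1 x hp) (A2 x hm)
```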