Seriously, why won't this two-plus-decade-old argument die? It's sophomoric and gets picked apart in second-year cognitive systems classes by 19-year-olds. It glaringly misunderstands and misrepresents the field it attacks. If people are genuinely interested in AI they should take a basic course in it at university or pick up a real book on the subject.
Richard Gabriel has a nice write-up on this, well worth reading, especially if you're in the sputtering 'zomg this is SO dumb' camp. It's also a much quicker read than the entirety of the Stanford page.

http://www.dreamsongs.com/Searle.html

"Searle's argument is subtle in a way that seems to confuse intelligent readers."
I've never been able to figure out how this (and most other philosophical arguments against AI) doesn't apply just as well to individual neurons. Though I suppose if one wants to frame it as an argument that neither computers nor human brains are capable of intelligence, I might be persuaded.

In fact, for the sake of argument, I claim that I am, in fact, just an elaborate system of symbolic manipulation with no actual comprehension or conscious experience, a bunch of meaningless neural impulses with no greater understanding of English than the "Chinese Room" has of Chinese; and I invite anyone to attempt to persuade me otherwise.
I think you could make the same argument for lots of things computers do. Could Grand Theft Auto 4, including the character AI and per-pixel 3D rendering, be implemented by people following instructions on cards? Yes, because Turing machines yadda yadda. But it's inconceivable to non-programmers. The Chinese room argument is convincing for the same reason: doing something AI-like requires billions of steps and non-programmers can't imagine building up something that complex from primitive operations.
From my reading over the last few months, I think the unanswered question of subjective experience/qualia comes down to the Born Probabilities. http://lesswrong.com/lw/py/the_born_probabilities/
I don't understand why people are willing to even accept the premise. The system as described (a book of instructions for manipulating Chinese symbols) can't usefully answer the question "what time is it?"; why should I believe it could carry on a lucid-but-very-slow conversation in Chinese?

Imagine that also in the room is a triangle with four sides. Now the Chinese Room Argument disproves AI and geometry! What subject do you want to demolish next?
I have a grudge against written philosophy. The meaning of words is subjective: it is defined by whatever relations you draw in your brain when you read or hear a word. This not only makes words rather unimportant, it turns your mind into a philosophical minefield. Written philosophy constantly operates under the assumption that words have some universal meaning. People seek the "intelligence" or "truth" they have in their heads, but they keep redefining it based on new insights obtained during the search. As a result, all these concepts seem unattainable.

If you take the focus off the word, much of your prejudice disappears. You see the relationships, the structure, the observations, the logic, the nuance. You can see that you used two distinct meanings of the word "intelligence": one is a set of expected reactions, the other is your consciousness. Now this raises a logical question: are these two things the same? The Chinese room shows that's not necessarily the case. However, might that feeling you call consciousness be a side-effect of the particular type of Chinese room that's going on in your head? That's how you get interesting philosophy.
The field Searle is attempting to dismiss is now referred to as artificial general intelligence (AGI), not AI.

Also, philosophers should try to realize that we have this thing called science now.