TechEcho — a tech news platform built with Next.js, providing global tech news and discussions.

John Searle: Consciousness in Artificial Intelligence [video]

71 points by nolantait over 9 years ago

11 comments

bhickey over 9 years ago
There isn't much new here. Skip ahead to the first audience question from Ray Kurzweil (http://www.youtube.com/watch?v=rHKwIYsPXLg&t=38m51s).

Kurzweil, in summary, asks: "You say that a machine manipulating symbols can't have consciousness. Why is this different from consciousness arising from neurons manipulating neurotransmitter concentrations?" Searle gives a non-answer: "My dog has consciousness because I can look at it and conclude that it has consciousness."
DonaldFisk over 9 years ago
I think Searle's mostly correct and Kurzweil's completely wrong on this. It took me a long time to understand Searle's argument, because Searle conflates consciousness and intelligence, and this confuses matters. Understanding Chinese is a difficult problem requiring intelligence, but I don't think it requires consciousness.

It is important to distinguish between "understanding Chinese" and "knowing what it's like to understand Chinese". We immediately have a problem: knowing what it's like to understand Chinese involves various qualia, none of which is unique to Chinese speakers.

So I'll simplify the argument. Instead of a room with a book containing rules about Chinese, and a person inside who doesn't know Chinese, we have a room with some coloured filters, and a person who can't see any colours at all (i.e. who has achromatopsia). Such people (e.g. http://www.achromatopsia.info/knut-nordby-achromatopsia-p/) will confirm they have no idea what it's like to see colours. If you shove a sheet of coloured paper under the door, the person in the room will place the different filters on top of the sheet in turn and, by seeing how dark the paper then looks, be able to determine its colour, which he'll write on the paper and pass back to the person outside. The person outside thinks the person inside can distinguish colours, but the person inside will confirm that not only can he not, he doesn't even know what it's like. Nothing else in the room is obviously conscious.

Apropos of the dog, this is the other-minds problem. It's entirely possible that I'm the only conscious being in the universe and everyone else (and their pets) are zombies. But we think that people, dogs, etc. are conscious because they are similar to us in important ways. Kurzweil presumably considers computers to be conscious too. Computers can be intelligent, and maybe in a few years or decades will be able to pass themselves off over the Internet as Chinese speakers, but there's no reason to believe computers have qualia (i.e. know what anything is like), and given the above argument, every reason to believe that they don't.
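The colour-filter room is a step-by-step procedure, so it can be sketched as a toy simulation (all names and transmission values here are hypothetical, chosen only to illustrate the thought experiment): the operator only ever compares darkness readings, yet the room as a whole answers colour questions correctly.

```python
# Toy model of the colour-filter room. The colour-blind operator never
# perceives colour, only relative brightness through each filter.

# Hypothetical brightness of each paper colour seen through each filter,
# on a 0.0-1.0 scale (illustrative values, not real optics).
TRANSMISSION = {
    "red":   {"red_filter": 0.9, "green_filter": 0.1, "blue_filter": 0.1},
    "green": {"red_filter": 0.1, "green_filter": 0.9, "blue_filter": 0.1},
    "blue":  {"red_filter": 0.1, "green_filter": 0.1, "blue_filter": 0.9},
}

def room_answer(paper_colour: str) -> str:
    """What the operator writes on the paper and passes back outside.

    The operator only picks the filter under which the paper looks
    brightest; the mapping from filter name to colour word is a rule
    in the room, not an experience of colour.
    """
    readings = TRANSMISSION[paper_colour]        # darkness seen per filter
    brightest = max(readings, key=readings.get)  # e.g. "red_filter"
    return brightest.removesuffix("_filter")     # rule: filter name -> colour word

print(room_answer("green"))  # the outside observer concludes "he sees colour"
```

Nothing in `room_answer` represents what green looks like; the correct output comes entirely from the lookup rules, which is exactly the point of the argument.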
nova over 9 years ago
I can only recommend reading this paper: http://www.scottaaronson.com/papers/philos.pdf

It really lives up to its title. Suddenly computational complexity is not just a highly technical CS matter anymore, and the Chinese Room paradox is explained away successfully, at least for me.
amoruso over 9 years ago
Searle makes two assertions:

1) Syntax without semantics is not understanding.
2) Simulation is not duplication.

Claim 1 is a criticism of old-style Symbolic AI, which was in fashion when he first formulated his argument. This is obviously right, but we're already moving past it. For example, word2vec or the recent progress in generating image descriptions with neural nets. The semantic associations are not nearly as complex as those of a human child, but we're past the point of just manipulating empty symbols.

Claim 2 is an assertion about the hard problem of consciousness. In other words, about what kinds of information-processing systems would have subjective conscious experiences. No one actually has an answer for this yet, just intuitions. I can't really see why a physical instantiation of a certain process in meat should be different from a mathematically equivalent instantiation on a Turing machine. He has a different intuition. But neither one of us can prove anything, so there's nothing else to say.
cromwellian over 9 years ago
The systems response is pretty much the right answer. You can put yourself at any level of reductionism of a complex system and ask how in the hell the system accomplishes anything. If you imagine yourself running a simulation of the universe's physics on paper, you may ask yourself: how does this simulation create jellyfish?

I think people fall for Searle's argument the same way people fall for creationist arguments that make evolution seem absurd. Complex systems that evolve over long periods of time have enormous logical-depth complexity and exhibit emergent properties that really can't be computed analytically, but only by running the simulation and observing macroscopic patterns.

If I run a cellular automaton that computes the sound-wave frequencies of a symphony playing one of Mozart's compositions, and it takes trillions of steps before even the first second of sound is output, you can rightly ask, at any state: how is this thing creating music?
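The cellular-automaton point can be made concrete with a minimal sketch (Rule 110, an elementary CA known to be Turing-complete; the setup here is purely illustrative, not the symphony machine itself): each step is a trivial local table lookup, and no single step reveals anything about what the system computes in the long run.

```python
# Minimal elementary cellular automaton (Rule 110, which is Turing-complete).
# Each update is a local 3-cell table lookup; anything the CA "computes"
# only appears as a macroscopic pattern across many steps.

RULE = 110

def step(cells: list[int]) -> list[int]:
    """One synchronous update of a row of 0/1 cells (fixed 0 boundaries)."""
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        # Encode (left, centre, right) neighbourhood as a 3-bit index 0..7.
        neighbourhood = padded[i] * 4 + padded[i + 1] * 2 + padded[i + 2]
        # The rule number's bits are the lookup table for the new state.
        out.append((RULE >> neighbourhood) & 1)
    return out

# Start from a single live cell and run a few steps.
row = [0] * 20 + [1] + [0] * 20
for _ in range(10):
    row = step(row)
# Inspecting any single intermediate row tells you nothing about the
# long-run structures (gliders, etc.) that emerge over many steps.
```

This is the reductionist trap the comment describes: every line of `step` is transparent, yet the question "where in this code is the computation happening?" has no answer at the level of a single update.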
spooningtamarin over 9 years ago
Consciousness and understanding are human-created symbolism. Talking about them seriously is a waste of time.

I could be an empty shell imitating a human perfectly; no other human would detect my lack of consciousness, and nothing would be different. From their perspective I exist; from mine, I don't.

How does one know that I really understand something? Maybe I can answer all the questions needed to convince them?
kriro over 9 years ago
It's pretty frustrating to watch. It feels like an endless repetition of "well, humans and dogs are conscious because that's self-evident". There's no sufficient demarcation criterion other than "I know it when I see it" that he seems to apply. [I guess having a semantics is his criterion, but he doesn't elaborate on a criterion for that.]

The audience question about intelligent design summed up my frustration nicely (or rather the amoeba-evolving part of it).
sethev over 9 years ago
I think what it boils down to is that Searle believes consciousness is a real thing that exists in the universe. A simulation of a thing isn't the same as the thing itself, no matter how accurate the outputs. The Chinese Room argument just amplifies that intuition (my guess is that the idea of a room was inspired by the Turing Test).

I think studying the brain (as opposed to philosophical arguments) is the thing that will eventually answer these kinds of questions, though.
pbw over 9 years ago
I think the argument about consciousness is vacuous. Searle admits we might create an AI which acts 100% like a human in every way.

Nothing Searle says stands in the way of creating intelligent or super-intelligent entities. All Searle is saying is that those entities won't be conscious.

No one can prove this claim today. But more significantly, I think it's extremely likely no one will ever prove the claim. Consciousness is a private subjective experience. I think it's likely you simply cannot prove it exists or doesn't exist.

Mankind will create human-level robots and we'll watch them think and create and love and cry, and we'll simply not know what their conscious experience is.

Even if we did *prove* it one way or the other, popular opinion would be unaffected.

Some big chunk of people will insist robots are conscious entities who feel pain and have rights. And some big chunk of people will insist they are not conscious.

It might be our final big debate. An abstruse *proof* is not going to change anyone's mind. Look at how social policies are debated today. Proof is not a factor.
orblivion over 9 years ago
So, supposing there's any chance that it has consciousness, is there any sort of movement doing all it can to put the brakes on AI research? If it's true, it's literally the precursor to the worst realistic (or hypothetical, really) outcome I can fathom, which has been discussed before on HN (simulated hell, etc.). I'm not sure why more people aren't concerned about it. Or is it just that there's "no way to stop progress", as they say, and this is just something we're going to learn to live with, the way we live with, say, the mistreatment of animals?
nnq over 9 years ago
This guy is so smart but at the same time such an idiot. SYNTAX and SEMANTICS are essentially the SAME THING. It's only a context-dependent difference, and this difference is quantitative, even if we still don't have a good enough definition of what the quantitative variables underlying them are. You must have a really "fractured" mind not to instantly "get it". And "INTRINSIC" is simply a void concept: nothing is intrinsic; everything (the universe and all) is obviously observer-dependent. It just may be that the observer can be a "huge entity" that some people choose to personalize and call God.

It's amazing to me that people with such a pathological disconnect between mind and intuition can get so far in life. He's incredibly smart and has great intuition, but when exposed to some problems he simply can't CONNECT his REASON with his INTUITION. *This is a MENTAL ILLNESS and we should invest in developing ways to treat it, seriously!*

Of course "the room + person + books + rule books + scratch paper" can be self-conscious. You can ask the room questions about "itself" and it will answer, proving that it has a model of itself, even if that model is not specifically encoded anywhere. It's just like mathematics: if you have a procedural definition for the set of all natural numbers (i.e. a definition that can be executed to generate the first and the next natural number), you "have" the entire set of natural numbers, even if you don't have them all written down on a piece of paper. In the same way, if you have the processes for consciousness, you have consciousness, even if you can't pinpoint exactly "where" in space and time it is. Consciousness is closer to a concept like "prime numbers" than to a physical thing like "a rock": you don't need a space and time for the concept of prime numbers to exist in; it just is.

His way of "depersonalizing" conscious "machines" is akin to Hitler's way of depersonalizing Jews, and this "mental disease" will probably lead to similar genocides, even if the victims will not be "human"... at least in the first phase, because you'll obviously get a HUGE retaliation in reply to any such stupidity, and my bet is that such a retaliation will be what ends the human race.

Now, of course the Chinese room discussion is stupid: you can't have "human-like consciousness" with one Chinese room. You'd need a network of Chinese rooms that talk to each other and also operate under constraints that make their survival dependent on their ability to model themselves and their neighbours, in order to generate "human-like consciousness".
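The natural-numbers analogy in the comment above corresponds to a lazy, procedural definition. A minimal sketch (purely illustrative): the rule below "is" the whole infinite set, even though no element exists until asked for.

```python
from itertools import islice

def naturals():
    """Procedural definition of the natural numbers: a rule that yields
    the first element and, from any element, the next one. The infinite
    set is never written down, yet the rule 'contains' all of it."""
    n = 0
    while True:
        yield n
        n += 1

# Only the numbers we actually ask for are ever materialised.
first_five = list(islice(naturals(), 5))
print(first_five)  # [0, 1, 2, 3, 4]
```

By the comment's analogy, having the generating process is having the set; no stored listing of the elements, and no particular "place" where the set lives, is required.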