
Ask HN: Opinion about Searle's Chinese Room?

12 points · wsieroci · over 9 years ago
Hi,

I would like to know your opinion about John Searle's Chinese Room argument against "Strong AI".

Best, Wiktor
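
(Background, for readers new to the argument: Searle imagines someone locked in a room who follows a rule book for manipulating Chinese symbols, producing fluent-looking replies without understanding a word. A minimal sketch of that kind of rule-following responder, with a tiny two-entry rule book invented purely for illustration, might look like this in Python:)

    # A minimal, illustrative sketch of the rule-following system Searle describes.
    # The rule book below is a tiny invented lookup table, not anything from Searle;
    # the point is that producing correct replies this way involves no understanding.

    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
    }

    def chinese_room(symbols: str) -> str:
        """Return whatever the rule book dictates, with no model of what the symbols mean."""
        return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

    if __name__ == "__main__":
        for question in ("你好吗？", "你会说中文吗？"):
            print(question, "->", chinese_room(question))

Searle's claim is that scaling the rule book up, however far, changes nothing essential: the operator, or the program, still attaches no meaning to the symbols it shuffles.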

7 comments

cousin_it · over 9 years ago
Some people are uncomfortable with the idea that brains can work on "mere" physics. But what's the alternative? A decree from Cthulhu saying human brains are the only devices in the universe that can communicate with consciousness, and all other objects are doomed to be dumb matter? I think that's really unlikely. So it seems like we must bite the bullet and admit that the computation implemented in the Chinese room exhibits genuine understanding. I'm not completely certain, because we don't really understand consciousness yet and we could always find new evidence, but the evidence so far points in one direction.
fallingfrog · over 9 years ago
Well, a Chinese room can simulate a computer, a computer can run a physics simulation, a brain is a physical thing, and brains are conscious. So it's pretty open and shut imho; yes, a Chinese room can be conscious. That much is clear. Now, what it means to say that something is conscious is another question. I've never seen the word rigorously defined.
lazyant · over 9 years ago
we had a thread just yesterday (https://news.ycombinator.com/item?id=10867791) and also last month (https://news.ycombinator.com/item?id=10740748)
chrisdevereux · over 9 years ago
Its popularity in undergraduate philosophy courses reflects the fact that it is easy to criticise, more than it tells us anything interesting about consciousness.
makebelieve · over 9 years ago
you might want to watch this lecture by Searle at Google: https://www.youtube.com/watch?v=rHKwIYsPXLg
makebelieve · over 9 years ago
Searle is making an argument about awareness: a computer system is explicitly unaware of any of its content. Its programs are functions, and the data also plays a purely functional role. In essence, computers cannot engage in acts of meaning. The programmers and users are the ones engaged in acts of meaning.

For instance, saying a program "has a bug" is a completely misleading statement. No programs have bugs. It is impossible for a program to have a bug, just as it is impossible for a physical process to "do something wrong". Programs do what they do, just as molecular processes do what they do. The concept of error and meaning does not exist in a program, just as it does not exist in the physical universe. Meaning (and errors and bugs are a kind of meaning) is something outside programs and outside physics. When a program "has a bug", it means the programmer screwed up, not the program. A program cannot produce errors, because programs, and computer systems in general, do not have the capacity for meaning. This is what Searle is demonstrating with his argument.

This is true for all the popular computational approaches we have today. However, because the human brain appears to function in a purely physical way, and computers function in a purely physical way, it should be theoretically possible to create a computer system that is conscious and aware of meaning just as we are. You refer to this as "Strong AI"; others refer to it as Artificial General Intelligence; I refer to it as machine consciousness. Solving the machine consciousness problem means understanding how awareness, meaning, and representation in general work, then building a computer system that engages in representation and instantiates awareness.

If an actual person were put into Searle's box, the person would learn Chinese. Also, the person could 'intentionally' produce incorrect answers, annoying the "programmers" who set the box up in the first place. But a modern computer system cannot 'intentionally' produce errors. It is completely nonsensical to talk about computers as having intention at all. Programmers have intention, not computers.

Solving the intentionality problem is the other leg of machine consciousness. Elon Musk, Stephen Hawking, Nick Bostrom and others make arguments about the dangers of an AI (of any variety) which may acquire intentionality and representational ability, while ignoring the actual deep problems embedded in acquiring those abilities.

Awareness, representation, and intention are so fundamental to experience that we have a very difficult time understanding when they happen and when they do not. We see a representational world all around us, yet, very explicitly, there are no representations at all in the physical world.

I believe machine consciousness is possible, but none of the existing approaches will get us there. Searle's Chinese Room is one succinct argument as to why.

The approach I am taking is a kind of metabolic computing, where single-function processes interact in a way similar to molecular interactions, and those processes are developed to produce computational structures like membranes and DNA and eventually "cells". These cells then form multi-cellular structures. These multi-cellular structures and the underlying "molecular" interactions instantiate representations and representational processes, like a nervous system: a computational nervous system which embodies representation, intention, sensation, action, and imagination, and, because it engages in representation-making, would be aware.

I would love to hear someone describe how any kind of computational approach can produce meaning inside a computer system. We produce meaning and representations so easily that it is hard to grasp the difference of perspective necessary to see how representations must form. If someone has an easier approach than the one I am taking, I would be very interested in seeing how they solve the problems of meaning and intention with code.
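
(Purely as an illustration of the layering described above, not a working example of it: a toy sketch in which single-function "molecules" react to signals inside a membrane-bounded "cell". The names Molecule, Cell, and membrane, and the rules they follow, are hypothetical and invented for this sketch; nothing here produces meaning or awareness, which is exactly the open problem the comment points to.)

    # Toy sketch of the "metabolic computing" layering described in the comment above:
    # single-function "molecule" processes react to signals inside a "cell" whose
    # membrane filters what may enter. All names here are hypothetical illustrations.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List


    @dataclass
    class Molecule:
        """A single-function process: transforms one signal into another."""
        name: str
        react: Callable[[str], str]


    @dataclass
    class Cell:
        """A membrane-bounded collection of molecules with internal state."""
        membrane: Callable[[str], bool]                  # decides which signals may enter
        molecules: List[Molecule] = field(default_factory=list)
        state: Dict[str, str] = field(default_factory=dict)

        def absorb(self, signal: str) -> None:
            if not self.membrane(signal):
                return                                   # signal bounces off the membrane
            for m in self.molecules:
                self.state[m.name] = m.react(signal)     # each molecule reacts independently


    def run(cells: List[Cell], signals: List[str]) -> None:
        """A 'tissue': several cells exposed to the same stream of signals."""
        for s in signals:
            for c in cells:
                c.absorb(s)


    if __name__ == "__main__":
        cell = Cell(
            membrane=lambda s: s.startswith("nutrient:"),
            molecules=[Molecule("upper", str.upper),
                       Molecule("length", lambda s: str(len(s)))],
        )
        run([cell], ["nutrient:glucose", "toxin:lead"])
        print(cell.state)  # only the nutrient signal was "metabolised"

Whether stacking such layers ever amounts to representation or awareness is, of course, the question the comment leaves open.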
dbpokorny · over 9 years ago
In theory we could rig up a quad copter with a machine gun, loudspeaker, and AI, with programming that demands that every human it encounters be interrogated with the question, "do I, a mechanical computer, have the ability to think?" and, if the human gives a negative reply, it uses the machine gun on the human until the human is dead or decides to utter a positive response: "yes, this flying death machine can think".

Whether or not we say "machines can think" is a political question, and political power comes out of the barrel of a gun. If a machine can wield political power, then it can get you to say "machines can think", because truth is inherently political.