Searle is making an argument about awareness: that a computer system is explicitly unaware of any of its content, that its programs are functions, and that its data also plays a purely functional role. In essence, computers cannot engage in acts of meaning. The programmers and users are the ones engaged in acts of meaning.

For instance, saying a program "has a bug" is a completely misleading statement. No program has bugs. It is impossible for a program to have a bug, just as it is impossible for a physical process to "do something wrong". Programs do what they do, just as molecular processes do what they do. The concepts of error and meaning do not exist in a program, just as they do not exist in the physical universe. Meaning (and errors and bugs are a kind of meaning) lives outside programs and outside physics. When a program "has a bug", it means the programmer screwed up, not the program. A program cannot produce errors, because programs, and computer systems in general, do not have the capacity for meaning. This is what Searle is demonstrating with his argument.

This is true of all the popular computational approaches we have today. However, because the human brain appears to function in a purely physical way, and computers function in a purely physical way, it should be theoretically possible to build a computer system that is conscious and aware of meaning just as we are. You refer to this as "Strong AI". Others refer to it as Artificial General Intelligence. I refer to it as machine consciousness. Solving the machine consciousness problem means understanding how awareness, meaning, and representation in general work, then building a computer system that engages in representation and instantiates awareness.

If an actual person were put into Searle's box, the person would learn Chinese. The person could also 'intentionally' produce incorrect answers, annoying the "programmers" who set the box up in the first place. But a modern computer system cannot 'intentionally' produce errors. It is completely nonsensical to talk about computers as having intention at all. Programmers have intention, not computers.

Solving the intentionality problem is the other leg of machine consciousness. Elon Musk, Stephen Hawking, Nick Bostrom, and others make arguments about the dangers of an AI (of any variety) that might acquire intentionality and representational ability, while ignoring the deep problems embedded in acquiring those abilities.

Awareness, representation, and intention are so fundamental to experience that we have a very difficult time noticing when they happen and when they do not. We see a representational world all around us, but, quite explicitly, there are no representations at all in the physical world.

I believe machine consciousness is possible, but none of the existing approaches will get us there. Searle's Chinese Room is one succinct argument as to why.

The approach I am taking is a kind of metabolic computing, where single-function processes interact in a way similar to molecular interactions, and those processes are developed to produce computational structures like membranes and DNA and eventually "cells". These cells then form multi-cellular structures. These multi-cellular structures and the underlying "molecular" interactions instantiate representations and representational processes, like a nervous system. A computational nervous system that embodies representation, intention, sensation, action, and imagination would, because it engages in representation-making, be aware.

I would love to hear someone describe how any kind of computational approach can produce meaning inside a computer system. We produce meaning and representations so easily that it is hard to grasp the shift of perspective needed to see how representations must form. If someone has an easier approach than the one I am taking, I would be very interested in seeing how they solve the problems of meaning and intention with code.
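Since the last paragraph asks for code: here is a minimal toy sketch of the "metabolic" layering described above, assuming a chemistry-style multiset-rewriting system. Every name in it (Compartment, REACTIONS, the molecule types) is a hypothetical illustration of mine, not anyone's actual system, and it claims nothing about meaning or awareness. It only shows single-function local rules composing into higher-level structures:

    # Toy "metabolic computing" sketch: single-function processes ("reactions")
    # act on typed tokens ("molecules") inside a compartment ("membrane").
    import random
    from collections import Counter

    # Each reaction consumes a multiset of molecule types and produces another.
    # Like molecular interactions, each rule is a single, purely local function.
    REACTIONS = [
        (Counter({"A": 2}), Counter({"B": 1})),          # 2A -> B
        (Counter({"B": 1, "C": 1}), Counter({"M": 1})),  # B + C -> M (membrane unit)
        (Counter({"M": 4}), Counter({"CELL": 1})),       # 4 membrane units -> "cell"
    ]

    class Compartment:
        """A membrane-bounded pool of molecules; reactions fire only locally."""
        def __init__(self, molecules):
            self.pool = Counter(molecules)

        def step(self):
            """Fire one randomly chosen applicable reaction, if any."""
            applicable = [r for r in REACTIONS
                          if all(self.pool[m] >= n for m, n in r[0].items())]
            if not applicable:
                return False
            inputs, outputs = random.choice(applicable)
            self.pool -= inputs
            self.pool += outputs
            return True

    if __name__ == "__main__":
        random.seed(0)
        c = Compartment({"A": 16, "C": 8})
        while c.step():
            pass
        print(c.pool)  # higher-level "CELL" structures emerge from local rules

The point of the sketch is only the architecture: nothing above the local rules is programmed in, and the "cells" appear as a consequence of the rules rather than by design. Whether layering like this ever amounts to representation, rather than just more function-shuffling, is exactly the open problem.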