While I personally believe modelling the human brain is the wrong path to follow for creating a general AI, I think the arguments raised in the article are effectively metaphysical. This seems like an extension of the arguments about whether animals are capable of having consciousness. If one were to take the writings of Descartes and replace every instance of the word 'animal' with the word 'computer,' the resulting argument would mesh very well with this article.

I don't think anyone would argue that a simulation of photosynthesis stores energy the way actual photosynthesis does; however, I have no doubt it is possible to create a process that produces the same outputs as photosynthesis from the same inputs. In the same sense, what makes a human is not the process by which we think, but the thoughts we generate. You don't necessarily need to have a brain to think, and you don't need to have understanding to think.

For instance, I would heartily disagree with the assertion that computers cannot be human because they cannot understand the meaning behind a symbol. This implies that a) humans must always understand the meaning behind symbols in order to use them effectively, and b) it is not possible to create a machine which can understand symbols. Understanding symbols is a function of the brain, but understanding how symbols are created can be abstracted, and we call that field of study semiotics.

We are capable of using symbols long before we understand exactly what they mean. People who are born deaf and mute still smile when they are happy, and they still cry when they are sad. These symbols change throughout our lives in response to the society around us, but the fundamental aspects of the symbols are inherent in every person. If our understanding of symbols is driven entirely by biology, is there a functional difference between being driven by an emotion and being driven by a process which fully replicates being driven by emotion?

I can't say that I think there is a difference. Pleasure and pain are effectively physical phenomena, in the sense that they exist outside the part of the brain that forms our sentient selves. If we build a machine that is capable of thought and provide it with external feedback based on the decisions it makes internally, I think there is a very good chance we will reach a level of complexity that allows for sentience.

At a certain level, you have to wonder why we would even create an exact replica of the human brain. What advantages would a computer that thinks like a human have? I can only begin to imagine the ethical implications of putting a human mind inside a computer. Would turning it off be murder? How would you raise a human who is born inside a computer? What psychological pitfalls would that person face?

I think it is much more likely that we will recognize that we have created a sentient AI long after it has actually happened. The structure of machines means that a sentient machine will exhibit its sentience in a way that we as humans would not immediately recognize as sentience.

Stanford has a great philosophy encyclopedia that would be a good starting point for anyone who'd like to think about this some more:

http://plato.stanford.edu/entries/consciousness-animal/