While I instinctively agree with the author's conclusion, I have some questions about his argument.<p>1) The foundational assumption is that, by "sentient", Kurzweil means "Homo-complete":<p><pre><code> > Because of some of the criteria Kurzweil has set for sentient machines (e.g. that they have emotional systems indistinguishable from those of humans), I like to go ahead and assume that the kind of machine Kurzweil is talking about would have fears, inhibitions, hopes, dreams, beliefs, a sense of aesthetics, understanding (and opinions about) spiritual concepts, a subconscious "mind," and so on.
</code></pre>
I'm not sure how Kurzweil feels about that, but to me the whole point of creating sentient machines is to achieve human-like intelligence without all the hindrances and idiosyncrasies that come with being human. I'm thinking of something like Data (or Spock) from Star Trek. Are these characters not "sentient"? I know they're not real, but they don't seem obviously implausible or self-contradictory.<p>Furthermore, wouldn't some humans fail to qualify as "sentient" (or, at least, "Homo-complete") under this definition? What about early Homo sapiens? Do we know for sure they had the capacity for schizophrenia or autism? Is it at least conceivable that very early humans lacked the capacity to love in the same way as modern humans? How about future generations? If they find a cure for depression, won't tomorrow's pill-popping, ultra-content humans still be "sentient", even though they'd fail this definition?<p>2) Even if you grant him this assumption, the author seems to be saying that the only way to be "Homo-complete" is to be a "Homo" (pardon me). That may very well be true, and I think the author makes a compelling argument that it is. But I still don't see why being human is the only way to be "sentient".