It seems to me that a lot of people just assume that we will be able to program machines that are "self-aware". We know so little about our own consciousness, and artificial intelligence != artificial consciousness.<p>I am surprised by how hard it is to find any dissenting views on this speculative claim.
Define "consciousness".<p>That's the fundamental problem, that we don't know what it is. We know what it looks like in a human, and we know what it feels like to ourselves. But we don't have any rigorous, non-intuitive idea of what it means.<p>For myself, I think of consciousness as the ability to watch yourself think - of being aware of your thought process. By that definition, yes, artificial consciousness could be possible - <i>but first you have to have a machine that thinks.</i> And now we're hung up on trying to find a definition of "think" that's rigorous...<p>I like the way that axotty labeled the claim as "speculative". It is, even though that speculative assumption is the dominant paradigm of AI. But it definitely is speculative, at least at this time, because actual evidence is quite lacking.
Well, it depends on how you define "consciousness", and that's far from a settled issue as I understand it. But taking my own (naive) idea of what it means to be "conscious", and what I think I know about AI, I don't see any reason to think we won't achieve artificial consciousness. In fact, I wouldn't reject out of hand the notion that some machine somewhere is already conscious, and we just don't know it.