For achieving general AI, why are we hung up on the question of consciousness?<p>We don't have a way to prove consciousness in humans, let alone AI. For all I know, everyone around me on earth is an NPC bot like the hosts in Westworld, and I am the only conscious person. I have no way to prove otherwise.<p>Does it matter if an AGI robot is conscious, if it behaves and acts like a normal intelligent human being? Once it passes a "human-like" Turing test, such that there is no way to distinguish a real human from an AGI robot, then the question of consciousness is moot anyway, correct?
First, it has ethical implications: there are experiences we wouldn't want to inflict on conscious beings that we wouldn't think twice about running on unconscious algorithms. Second, it is not clear that general intelligence can be achieved *without* consciousness. Our only evidence so far is the human brain, which has it.<p>For example, a central feature of consciousness is a kind of self-awareness, and also existential awareness, both of which contribute to intelligence. An agent that is not good at understanding its own existence could not answer certain types of questions that humans can, and therefore is not yet a general intelligence. It may know all kinds of data, but it would not handle self-referential tasks, or questions about its place in the world, as well as a human. That said, it is easy to imagine a future where we just have a variety of really smart AIs, one per domain, and get the same results as having AGI without needing to emulate the human brain.
> Does it matter if an AGI robot is conscious<p>In terms of its ability to function, I can't see why it would.<p>In terms of your right to put it in a car crusher, I can see potential problems.
I agree, and think consciousness as we know it is a red herring. I suspect it is but one type of OS for a general intelligence, and not a particularly useful one.<p>We have a real problem with anthropomorphizing AI. We want it to be like us because being like us would validate our design. It's not going to be like us. Honestly, we probably won't even recognize it at first.
I think you're asking the wrong question, or you're confused. People on this website are hung up on whether a large language model is conscious. We don't currently have an artificial general intelligence, so we're not hung up on a consciousness question for an artificial general intelligence.
I tend to think consciousness is probably something we subjectively assign importance to, but that it doesn't really describe anything meaningful.<p>I personally suspect consciousness is just a side-effect of empathy, because it's very hard to empathise with mechanical "unconscious" agents. It therefore benefits us all to have an illusion of consciousness; otherwise why would I care if you're in pain, or care about your feelings? And we see that attitude manifest in psychopaths. Wood on fire vs. a human on fire is only different in our minds because of our perception of consciousness, and without that perception your concern would probably be more utility-based.<p>So perhaps it matters if we don't want computers to treat us like the machines we are, but I think it's possible there are other ways to achieve empathy without an illusion of consciousness.<p>That's just my take, though. I don't think consciousness really matters for AGI, and like you say, there's really no way we'd know if a computer is conscious anyway. I'd actually lean toward it not being conscious if a computer claimed to be. I tend to think the best indicator of consciousness wouldn't be the claim of it, but acting irrationally. I feel confident the people around me are conscious, not because they tell me they are, but because I've seen them do objectively stupid things which are explainable by irrational conscious feelings like fear or pride.