If the body is simply a data-processing machine, working on data acquired via our keen or not-so-keen senses, why does the article end up concluding that AIs are not conscious?<p>Perhaps they are not as conscious as we are, the same way a mole isn't as conscious as we are. And by that logic, adding more sensory modes to an AI would make it more conscious.<p>The only real difference is self-agency. It's embedded in us via the dopaminergic circuitry. Current AIs merely act on human directions; they piggyback on our dopaminergic circuitry to start and act.
They don't start acting on their own unless prompted.<p>Now, why humans do anything at all is its own question... Why do cells, molecules, atoms, etc. do anything?<p>My hunch is that the governing principle is the maximization of potential energy.
That would require increasingly stable structures.<p>That principle would drive existence, and consciousness would just be an emergent quality of the interactions that occur while trying to maximize potential energy.<p>That's also why I don't understand how some AI researchers can fail to be at the very least wary of what a self-directed, unaligned AI might decide to do in the future. Heck, humans have decimated whole animal populations. Or even other human populations... Humans are funny.
Then again, beware the argument from authority: these PhDs are mere humans. We can't follow them blindly.