> <i>Given this definition, it’s clear that artificial systems cannot feel empathy. They do not know what it’s like to feel something. This means that they cannot fulfil the congruence condition. Consequently, the question of whether what they feel corresponds to the asymmetry and other-awareness condition does not even arise. What artificial systems can do is recognise emotions, be it on the basis of facial expressions, vocal cues, physiological patterns or affective meanings; and they can simulate empathic behaviour by ways of speech or other modes of emotional expression.<p>Artificial systems hence show similarities to what common sense calls a psychopath.</i><p>Ugh. That's so much motivated reasoning it might start to fall into the "not even wrong" category. I have seen the same semantic trick before, used to declare - purely by playing with definitions, without any empirical observation - that animals cannot have emotions.<p>A lot of the vocabulary around consciousness and emotions is defined on top of <i>human</i> subjective experience, simply because that is the only kind we have access to and the only kind we know for certain exists. But this also means those terms are, strictly speaking, only applicable to humans and not to animals (or AI), because we only ever define them for humans. That is the core problem behind the "What is it like to be a bat?" essay, and the reason we have words like "nociception" to describe "neurological pain responses" in animals without implying anything about a subjective experience of pain.<p>It's important to stress that none of this means we <i>know</i> that animals don't feel pain, don't have emotions, don't have conscious thought, and so on. It just means the terms become inapplicable to animals for formal reasons. However, the slide from "we don't (and possibly can't ever) know" to <i>we know they don't</i> is often a very convenient one, especially if you want to inflict things on animals that would definitely cause pain and suffering if they were conscious.<p>For animals, the situation has luckily changed somewhat over the last decades: more scientists are now calling for the opposite working assumption, namely that (many) animals do have a consciousness - not a human one, but one comparable to ours in core aspects, such as the capacity to experience pain. (See the "Cambridge Declaration on Consciousness".)<p>I feel we're running into a similar danger with AI. I don't want to claim that LLMs have consciousness - and we can be sure they don't have <i>human</i> consciousness; that's impossible given the way they work. But the article repeatedly confuses "don't know"/"not applicable" with "we know they don't" (and then brings a number of other terms into the mix that, paradoxically, would require human consciousness to even be applicable) in order to conclude something like psychopathy.<p>You don't have to buy into any and all AGI fantasies, but this is intellectually dishonest.