'Empathetic' AI has more to do with psychopathy than emotional intelligence

24 points by porterde about 1 year ago

6 comments

scotty79 about 1 year ago
> As a psychologically informed philosopher, I define genuine empathy [...]

Psychology actually recognizes different kinds of empathy.

https://en.m.wikipedia.org/wiki/Empathy#Classification
kordlessagain about 1 year ago
Defining empathy with no mention of compassion? To have human empathy for another is to take some action, no matter how small, while echoing the feeling itself.
Lockal about 1 year ago
Related: https://www.empatheticfiring.com/
m3047 about 1 year ago
Was expecting something about a hypothesis or finding that psychopaths are more likely to empathize with AI, or to feel that it empathizes with them. Didn't find it.

(Before Tim Leary became the Acid Guru he did groundbreaking work in group psychotherapy, and in particular wrote a book about personality diagnosis based on group interactions. As a parlour game you can apply the same principles to personality diagnosis of the cultures which emerge around software languages and tools.)
barrenko about 1 year ago
Common possible misconception - it's not that psychopaths don't possess empathy; they can just turn it on or off.

"He who can destroy a thing has the real control of it."
xg15 about 1 year ago
> *Given this definition, it's clear that artificial systems cannot feel empathy. They do not know what it's like to feel something. This means that they cannot fulfil the congruence condition. Consequently, the question of whether what they feel corresponds to the asymmetry and other-awareness condition does not even arise. What artificial systems can do is recognise emotions, be it on the basis of facial expressions, vocal cues, physiological patterns or affective meanings; and they can simulate empathic behaviour by ways of speech or other modes of emotional expression. Artificial systems hence show similarities to what common sense calls a psychopath.*

Ugh. That's so much motivated reasoning it might start to fall into the "not even wrong" category. I have seen the same semantic trick before, used to declare - purely by playing with definitions, without any empirical observations - that animals cannot have emotions.

A lot of the vocabulary around consciousness and emotions is defined on top of *human* subjective experiences, simply because those are the only ones we have access to and the only ones we definitely know exist. But this means those terms are literally only applicable to humans and not to animals (or AI), because we only ever defined them for humans. That's the core problem raised by the "what is it like to be a bat" essay, and the reason we have words like "nociception" to describe neurological pain responses in animals without implying anything about a subjective experience of pain.

It's important to stress that none of this means we *know* that animals don't feel pain, don't have emotions, don't have conscious thought, etc. It just means the terms become inapplicable to animals for formal reasons. However, the confusion between "we don't (and possibly can't ever) know" and "*we know they don't*" is often a very convenient one, especially if you want to inflict things on animals that would definitely cause pain and suffering if they had a consciousness.

For animals, the situation has luckily changed somewhat in recent decades, and more scientists are calling for the opposite default assumption: that (many) animals do have a consciousness - not a human one, but one comparable to humans in core aspects, such as the capacity to experience pain. (See the "Cambridge Declaration on Consciousness".)

I feel we're facing a similar danger with AI. I don't want to claim that LLMs have consciousness - and we can be sure they don't have *human* consciousness; that's impossible given the way they work. But the article confuses a lot of "don't know"/"not applicable" with "we know they don't" (and then brings a number of other terms into the mix that, paradoxically, *would* require human consciousness to even be applicable) to conclude something like psychopathy.

You don't have to buy into any and all AGI fantasies, but this is intellectually dishonest.