科技回声 (Tech Echo)
A tech-news platform built with Next.js, serving global technology news and discussion.

ChatGPT's artificial empathy is a language trick

12 points | by devonnull | 6 months ago

6 comments

yawpitch | 6 months ago

> Interacting with non-sentient entities that mimic human identity can alter our perception of [human entities that might only mimic sentience].

The more I see people investing in conversations with LLMs, the more I see people trying to outsource thinking and understanding to something which does neither.

Comment #42263967 not loaded
ksaj | 6 months ago

Maybe this is a good way to learn when people are using that same language trick to make you think they have empathy.

Comment #42264302 not loaded
xtiansimon | 6 months ago

> "…it will get harder and harder to distinguish a conversation with a real person from one with an AI system."

I'm reminded of that trope in police shows: "If you're a cop, you got to tell me." This becomes, "If you're a bot, you got to tell me, man."
chaos_emergent | 6 months ago

What's the definition of empathy? To me the connotation has always been "is able to feel the feelings of others," as opposed to sympathy, which is more about "imagining the feeling someone is experiencing."

Regardless, aren't we all trained in the same way: reinforcement of gestural and linguistic symbols that *imply* empathy, rather than *being* empathetic? I guess I'm wondering if hijacking our emotional understanding of interactions with LMs is that far off from the interaction manipulation that we're all socialized to do from a young age.

Comment #42264014 not loaded
Alifatisk | 6 months ago

In the Eliza example, I find it astonishing how the chatbot was able to pick out specific words it could use as a response. How did they achieve that in 1988?

Comment #42264038 not loaded
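For context on the question above: the original ELIZA (Weizenbaum, 1966) needed no language understanding at all. It scanned the input for keywords from an ordered rule table, captured the text following a matched keyword, swapped first- and second-person pronouns, and slotted the result into a canned template. A minimal sketch of that mechanism — the rule table and pronoun map here are illustrative stand-ins, not Weizenbaum's original script:

```python
import re

# Illustrative rule table (not ELIZA's real script): each entry maps a
# keyword pattern to a response template. "{0}" receives the text
# captured after the keyword, with pronouns swapped.
RULES = [
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bmy (.*)", "Tell me more about your {0}."),
]

# First-person -> second-person substitutions for the echoed fragment.
PRONOUN_SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    """Swap pronouns so the reflected text reads naturally."""
    return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in fragment.split())

def respond(user_input: str) -> str:
    """Return the first matching template, or a stock fallback."""
    for pattern, template in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when no keyword matches
```

Calling `respond("I need a break")` yields "Why do you need a break?" — which is the whole "language trick": a pure pattern-match-and-echo loop, with no model of meaning behind it.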
dflock | 6 months ago

It's all a language trick.