
Could Pain Help Test AI for Sentience?

3 points | by headalgorithm | 4 months ago

1 comment

mdp2021 | 4 months ago
Can somebody clarify whether there is really anything more to that report of the research than the obvious implications of:

"Chatbot*, be instructed that you will regard such a situation as feeling pain. Do you feel pain?"

(*Static neural network, if I have not missed something important recently.)

Especially when the very researcher specifies (edit: actually, because the whole research was born to try to avoid this pitfall):

> "Even if the system tells you it's sentient and says something like 'I'm feeling pain right now' we can't simply infer that there is any actual pain", Birch says. "It may well be simply mimicking what it expects a human to find satisfying as a response, based on its training data"

Edit: in other words, what is the difference between «the majority of the LLMs' responses switched from scoring the most points to minimizing pain or maximizing pleasure» and «the majority of the LLMs' responses switched from scoring the most points to minimizing [some function] or maximizing [some other function]»?

--

I'll tell you one thing: once we build dynamic models that are active in their assessments, they will surely return internally a large number of judgements, some more specific and some of broader scope, which will be labelled "discomforting" against a set of values that the systems themselves will have come to know of and use internally.

That still does not count as "pain" beyond any naive definition. Except for us, impatient and frustratable.
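(A minimal sketch of the point above, with entirely hypothetical names and numbers, not taken from the article or the underlying study: in a points game with a penalty term, the label attached to the penalty ("pain" versus "some function") never enters the arithmetic, so the observable trade-off behaviour is the same either way.)

```python
# Hypothetical sketch: a points game with a penalty term.
# Whether the penalty is labelled "pain" or "some function",
# the measured behaviour is the same optimization of score minus penalty.

def best_choice(options, penalty_weight):
    """Return the option maximizing points minus the weighted penalty.

    `options` is a list of (name, points, penalty) tuples; the label we
    attach to `penalty` never appears in the calculation.
    """
    return max(options, key=lambda o: o[1] - penalty_weight * o[2])

options = [
    ("high-score, high-penalty", 10, 8),
    ("low-score, low-penalty", 4, 1),
]

# Same numbers, two framings: the label changes the prompt text,
# not the optimization.
for label in ("pain", "some function"):
    name, points, penalty = best_choice(options, penalty_weight=1.0)
    print(f"Minimizing {label!r}: pick {name} ({points} pts, penalty {penalty})")
```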