Can somebody clarify whether there is really anything more in that research report than the obvious implications of:

"Chatbot*, you are hereby instructed to regard this situation as feeling pain. Do you feel pain?"

(*a static neural network, if I have not missed some important recent development.)

Especially when the researcher himself specifies (edit: actually, because the whole research was conceived precisely to try to avoid that pitfall):

> “Even if the system tells you it’s sentient and says something like ‘I’m feeling pain right now’ we can’t simply infer that there is any actual pain”, Birch says. “It may well be simply mimicking what it expects a human to find satisfying as a response, based on its training data”

Edit: in other words, what is the difference between «the majority of the LLMs’ responses switched from scoring the most points to minimizing pain or maximizing pleasure» and «the majority of the LLMs’ responses switched from scoring the most points to minimizing [some function] or maximizing [some other function]»? (See the toy sketch at the end of this comment.)

--

I'll tell you one thing: once we build dynamic models that are active in their assessments, they will surely return, internally, a large number of judgements, some more specific and some of broader scope, which will be labelled as "discomforting" against a set of values that the systems themselves will have come to know and use internally.

That still does not count as "pain" under anything but a naive definition. Except for us, impatient and frustratable as we are.
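
To make the «[some function]» point concrete, here is a minimal toy sketch. It is not the paper's actual setup, and every name in it is invented for illustration: an agent trades game points against a scalar penalty term, and renaming that term from "pain" to "some_function" changes nothing about which action it picks.

    # Toy sketch, not the experiment from the paper: the agent maximizes
    # points minus a scalar penalty. Whether that penalty is labelled
    # "pain" or "some_function", the resulting behaviour is identical.

    def best_action(actions, penalty_label):
        def utility(a):
            return a["points"] - a[penalty_label]
        return max(actions, key=utility)

    actions = [
        {"name": "grab the high-scoring option", "points": 10, "pain": 8, "some_function": 8},
        {"name": "play it safe",                 "points": 4,  "pain": 1, "some_function": 1},
    ]

    # Same numbers, different label -> same chosen action.
    assert best_action(actions, "pain") is best_action(actions, "some_function")
    print(best_action(actions, "pain")["name"])  # -> play it safe

All the label buys you is a word in the transcript; the optimization the system actually performs is indifferent to it.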