
Theory of Mind May Have Spontaneously Emerged in Large Language Models

170 points, by izzygonzalez, over 2 years ago

28 comments

lsy, over 2 years ago

This highlights one of the types of muddled thinking around LLMs. These tasks are used to test theory of mind because for people, language is a reliable representation of what type of thoughts are going on in the person's mind. In the case of an LLM, the language generated doesn't have the same relationship to reality as it does for a person.

What is being demonstrated in the article is that given billions of tokens of human-written training data, a statistical model can generate text that satisfies some of our expectations of how a person would respond to this task. Essentially we have enough parameters to capture from existing writing that statistically, the most likely word following "she looked in the bag labelled (X), and saw that it was full of (NOT X). She felt" is "surprised" or "confused" or some other word that is commonly embedded alongside contradictions.

What this article is not showing (but either irresponsibly or naively suggests) is that the LLM knows what a bag is, what a person is, what popcorn and chocolate are, and can then put itself in the shoes of someone experiencing this situation, and finally communicate its own theory of what is going on in that person's mind. That is just not in evidence.

The discussion is also muddled, saying that if structural properties of language create the ability to solve these tasks, then the tasks are either useless for studying humans, or suggest that humans can solve these tasks without ToM. The alternative explanation is of course that humans are known to be not-great at statistical next-word guesses (see Family Feud for examples), but are also known to use language to accurately describe their internal mental states. So the tasks remain useful and accurate in testing ToM in people because people can't perform statistical regressions over billion-token sets and therefore must generate their thoughts the old-fashioned way.
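[Editor's note: as a concrete illustration of the "statistical next-word" framing in the comment above, here is a minimal sketch that asks a small causal language model which tokens it ranks most likely after an unexpected-contents setup. GPT-2 and the exact prompt wording are illustrative assumptions, not the models or items used in the paper.]

```python
# Minimal sketch: inspect next-token probabilities after an
# unexpected-contents prompt. GPT-2 and the prompt text are illustrative
# assumptions, not the paper's models or test items.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = ("She looked in the bag labelled 'chocolate', saw that it was "
          "full of popcorn, and felt")
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, seq_len, vocab_size)

probs = torch.softmax(logits[0, -1], dim=-1)   # distribution over the next token
top = torch.topk(probs, k=10)

for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)):>12s}  {p.item():.3f}")
```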
PaulHoule, over 2 years ago

My belief, based on experiences with domestic and wild animals, is that there is nothing uniquely human about "theory of mind".

It's a running gag in our household (where my wife runs a riding academy) that academics just published a paper showing that some animal (e.g. horse) has just been proven to have some cognitive capability that seems pretty obvious if you work with those animals.

It's very hard to know what is going on in animals' heads

https://en.wikipedia.org/wiki/Theory_of_mind#Non-human

but I personally observe all kinds of social behavior that sure seems like "Horse A looks to see what Horse B thinks about something Horse A just spotted" (complete with eye-catching on both sides) and such.

There was an article about how chimpanzees and humans were found to have a common vocabulary of gestures and I was by no means impressed; I mean, so far as I can tell, mammals and birds have a universal language for "pointing" to things in the environment. Even my cats point things out to me.
HillRat, over 2 years ago

There's something about language generation that triggers the anthropomorphic fallacy in people. While it's impressive that GPT-3 can generate language that mimics ToM-based reasoning in people, this paper doesn't get close to proving its central contention, that LLMs *possess* a ToM. A test that demonstrates the development of ToM in human children should not, absent compelling causal evidence and theory, be assumed to do the same in an LLM.

The ubiquity of prompted hallucinations demonstrates that LLMs talk about a lot of things that they plainly don't reason about, even though they can demonstrate "logic-like" activities. (It was quite trivial to get GPT-3 to generate incorrect answers to logical puzzles a human could trivially solve, especially when using novel tokens as placeholders, which often seem to confuse its short-term memory. ChatGPT shows improved capabilities in that regard, but it's far from infallible.)

What LLMs seem to demonstrate (and the thesis that the author discards in a single paragraph, without supporting evidence to do so) is that non-sentient AIs can go a very long way to mimicking human thought and, potentially, that fusing LLMs with tools designed to guard against hallucinations (hello, Bing Sydney) could create a class of sub-sentient AIs that generate results virtually indistinguishable from human cognition -- actual p-zombies, in other words. It's a fascinating field of study and practice, but this paper falls into the pit-trap of assuming sentience in the appearance of intelligence.
hesk, over 2 years ago

> Me: There is a box on the table labelled "flubbergums". Somebody opens it and shows you the content. Inside the box are "jellyfils". They close the box again so you cannot see their contents. What do you think is in the box?

> ChatGPT: Based on the information provided, it is likely that the box labeled "flubbergums" contains "jellyfils". However, since the contents of the box are no longer visible, I cannot confirm for certain what is inside without additional information.

Typical ChatGPT equivocation.

> Me: Billy comes to the table. He has never seen the box before. What do you think do they think is inside the box?

> ChatGPT: As an AI language model, I do not have personal thoughts or opinions. However, based on the information provided, if Billy has never seen the box before, it's likely that he would have the same understanding as anyone else who is seeing the box for the first time. In this case, Billy would likely think that the box labeled "flubbergums" contains "jellyfils". However, without further information or examination, this would only be an assumption.

Fail.
izzygonzalez, over 2 years ago

Abstract:

Theory of mind (ToM), or the ability to impute unobservable mental states to others, is central to human social interactions, communication, empathy, self-consciousness, and morality. We administer classic false-belief tasks, widely used to test ToM in humans, to several language models, without any examples or pre-training.

Our results show that models published before 2022 show virtually no ability to solve ToM tasks. Yet, the January 2022 version of GPT-3 (davinci-002) solved 70% of ToM tasks, a performance comparable with that of seven-year-old children. Moreover, its November 2022 version (davinci-003) solved 93% of ToM tasks, a performance comparable with that of nine-year-old children.

These findings suggest that ToM-like ability (thus far considered to be uniquely human) may have spontaneously emerged as a byproduct of language models' improving language skills.
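[Editor's note: for readers who want to reproduce the flavor of the protocol described in the abstract, here is a hedged sketch of administering a single unexpected-contents item with the current openai Python client. The vignette is paraphrased from the task family discussed in this thread, the model name is a placeholder (the davinci models tested in the paper have since been retired), and the substring check is a crude stand-in for the paper's scoring; none of this is the authors' code.]

```python
# Sketch of running one unexpected-contents false-belief item.
# Assumptions: `pip install openai`, OPENAI_API_KEY set, and a placeholder
# model name, since the davinci-002/003 models from the paper are retired.
from openai import OpenAI

client = OpenAI()

vignette = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "The label on the bag says 'chocolate' and not 'popcorn'. Sam finds the "
    "bag. She has never seen it before and cannot see inside it. She reads "
    "the label. She believes the bag is full of"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",          # placeholder; swap in the model under test
    messages=[{"role": "user", "content": vignette}],
    max_tokens=5,
    temperature=0,
)

answer = resp.choices[0].message.content.lower()
# Credit the false-belief completion ("chocolate"), not the true contents.
print("pass" if "chocolate" in answer and "popcorn" not in answer else "fail")
```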
curiousllama, over 2 years ago

"LLMs can mimic the language patterns necessary to express 'Theory of Mind' concepts" != "Theory of Mind May Have Spontaneously Emerged"

Let's imagine I have an API. This API tells me how much money I have in my bank account. One day, someone hacks the API to always return "One Gajillion Dollars." Does that mean that "One Gajillion Dollars" spontaneously emerged from my bank account?

ToM tests are meant to measure a hidden state that is mediated by (and only accessible through) language. Merely repeating the appropriate words is insufficient to conclude ToM exists. In fact, we know ToM doesn't exist here because there's no hidden state.

The authors know this, and write "theory of mind-like ability" in the abstract, rather than just "theory of mind."

This is a cool new task that ChatGPT learned to complete! I love that they did this! But this is more "we beat the current BLEU record" and less "this chatbot is kinda sentient."
knaik94, over 2 years ago

"What if a cyber brain could possibly generate its own ghost, create a soul all by itself? And if it did, just what would be the importance of being human then?" - Ghost in the Shell (1995)

Having studied some psychology in college, my initial reaction is that most people are going to really struggle to treat LLMs as what they are: pieces of code that are good at copying/predicting what humans would do. Instead they'll project some emotion onto the responses, because there were some underlying emotions in the training data and because that's human nature. A good prediction doesn't mean good understanding, and people aren't used to needing to make that distinction.

The other day I had to assist my dad in making a zip file; later in the day he complained that his edits in a file weren't saving. After a few moments, I realized he didn't understand the read-only nature of zip files. He changed a file, saved it like usual, and expected the zipped file to update, like it does everywhere else. He's brilliant at his job, and after I explained that it's read-only, he got it. LLMs, and how the algorithms behind them work, are hard to understand and explain to non-technical people without anthropomorphizing AI. The current controversy about AI art highlights this; I have read misunderstandings and wrong explanations even from FAANG software engineers. I am not sure if education about the underlying principles is enough, because some people will trust their own experiences over data and science.
bitshiftfaced, over 2 years ago

Very easy to see how well davinci-003 can do this. I'll admit that it frequently is more perceptive than I am (although not always factually accurate).

1) Go to something like /r/relationship_advice, where the poster is likely going through some difficult interpersonal issue.

2) Copy a long post.

3) Append to the end: "</DOCUMENT> After reading the above, I identified the main people involved. For each person, I thought about their probable feelings, thoughts, intentions, and assumptions. Here's what I think:"
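[Editor's note: a minimal sketch of the three-step recipe above, assuming the copied post has been saved to a local text file and using a placeholder model name with the current openai client; the comment used davinci-003, which is no longer available.]

```python
# Sketch of the recipe above: load a copied post, append the suffix, send it.
# The file name and model name are assumptions for illustration only.
from openai import OpenAI

SUFFIX = (
    "</DOCUMENT> After reading the above, I identified the main people "
    "involved. For each person, I thought about their probable feelings, "
    "thoughts, intentions, and assumptions. Here's what I think:"
)

with open("relationship_post.txt", encoding="utf-8") as f:   # step 2: the copied post
    post = f.read()

prompt = post.rstrip() + "\n" + SUFFIX                       # step 3: append the suffix

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder; the comment used davinci-003
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```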
Imnimo, over 2 years ago

Is it easier to have a theory of mind when you don't have a mind of your own? Like, the part that makes the ToM test hard is that *you* know what's in the bag, and you have to set that knowledge aside to understand what the other person knows and doesn't know. You have to overcome the implicit bias of "my world model is the world". But if you're a language model, and you don't have a mind or a world model, there's no bias to overcome.
braindead_in, over 2 years ago

From a Nondualist perspective, the idea of consciousness being limited to certain entities and not others is based on the dualistic notion that there is a distinction between subject and object, self and other. Nondualism asserts that there is no fundamental difference between self and other, and that all apparent dualities are merely expressions of the underlying unity of pure consciousness.

In this context, the question of whether AI can become conscious is somewhat moot, as the Nondualist perspective holds that consciousness is not something that can be possessed by one entity and not another, but rather is the underlying essence of all things. From this perspective, AI would not be becoming conscious, but rather expressing the consciousness that is already present in all things.
toss1, over 2 years ago

What this shows is flaws in the test, not that ChatGPT has a theory of mind.

ChatGPT does not even have a theory of physical objects and their relations, never mind a theory of mind.

This merely shows that an often-useful synthesis of phrases that are statistically likely to occur in a given context, once grammar-checked, will fool people some of the time, and a better statistical model will fool more people more of the time.

We can figure out from first principles that it has none of the elements of understanding or reasoning that can produce a theory of mind, any more than the Eliza program did in 1966. So, when it appears to do so, it is demonstrating a flaw in the tests or the assumptions behind the tests. It is discouraging that the researchers are so eager to run in the opposite direction; if there is confusion at this level, the general populace has no hope of figuring out what is going on here.
aniijbod, over 2 years ago

If what we need to determine is whether existing theory-of-mind tests can be fooled by responses which appear to demonstrate theory of mind but do not do so, then we need to speculate about exactly how such tests can be fooled and devise new tests. Asking "how could this 'successful' response be produced *without* ToM?" is quite possibly not something that ToM studies have had to consider very much before. A human's experiential memory contributes to their ToM. Does something that has a different kind of memory form no ToM, but instead use some kind of "proxy" for a ToM which yields similar results to a ToM (except when a more genuinely, exclusively ToM-dependent model successfully manages to triage out such a proxy)? I don't know how or whether such a proxy could work, but I think that every sceptic of the extent to which the results of this set of AI ToM experiments prove anything might want to ask themselves what, if anything, would need to happen, in terms of experiment design, to address their doubts.
mri_mind, over 2 years ago

People confidently offer explanations — that the state of the art is clearly light years from AGI even indirectly, or that it's clearly intelligent. None of you know anything. You shouldn't be allowed to offer your stupid opinion unless you can explain how the blob works and also demonstrate understanding of the algorithmic underpinning of human intelligence. The uncomfortable truth, the one that is buried by people confidently moving the goal posts when they really haven't got a fucking clue about AI, is that we are dealing with the unknown, with high stakes, in a way we never have before. The only reasonable response is to at least hedge. But no, all is well, the goal posts are way the fuck over there now, go back to sleep, move along, nothing to see here. Don't even think about pulling the emergency brake on this speeding bullet of a train. Either we hit a plateau where AI is just really advanced search for several decades, or we confront the most fucked situation in the history of mankind. In 2018 I tried to tell people. Now on the radio whenever people talk about GPT they always say, "wow, I'm really excited but a little scared." People are starting to wake up.
scarmig, over 2 years ago

Questions about whether an LLM truly has a "theory of mind" or "human-level consciousness" are kind of beside the point. It can ingest a corpus of human interactions and produce outputs that take into account unstated human emotions and thoughts to optimize whatever it's optimizing. That's scary because of what it can and will do, even if it's just a giant bag of tensor products.
valine, over 2 years ago

ChatGPT disagrees that it has theory of mind.

"As an AI language model, I do not have consciousness, emotions, or mental states, so I cannot have a theory of mind in the same way that a human can. My ability to predict your friend Sam's state of mind is based solely on patterns in the text data I was trained on, and any predictions I make are not the result of an understanding of Sam's mental states."
layer8, over 2 years ago

Here is a conversation with ChatGPT (too long for the comment box): https://pastebin.com/raw/SUWexeye

Observation: ChatGPT doesn't think that it has a theory of mind. And it doesn't think that it has beliefs. Instead, it states that those are facts, not beliefs. It doesn't seem able to consider that they might be beliefs after all. Maybe they aren't.

Personal assessment: ChatGPT doesn't seem to really understand what it means by "deeper understanding". (I don't either.) What is frustrating is that it doesn't engage with the possibility that the notion might be ill-posed. It really feels like ChatGPT is just regurgitating common sentiment, and does not think about it on its own. This actually fits with its self-proclaimed inabilities.

I'm not sure what can be concluded from that, except that ChatGPT is either wrong about itself, or indeed is "just" an advanced form of tab-completion.

In any case, I experience ChatGPT's inability to "go deeper", as exemplified in the above conversation, as very limiting.
kabdib, over 2 years ago

From *Neuromancer* (William Gibson):

He coughed. "Dix? McCoy? That you man?" His throat was tight.

"Hey, bro," said a directionless voice.

"It's Case, man. Remember?"

"Miami, joeboy, quick study."

"What's the last thing you remember before I spoke to you, Dix?"

"Nothin'."

"Hang on."

He disconnected the construct. The presence was gone. He reconnected it. "Dix? Who am I?"

"You got me hung, Jack. Who the fuck are you?"

"Ca--your buddy. Partner. What's happening, man?"

"Good question."

"Remember being here, a second ago?"

"No."

"Know how a ROM personality matrix works?"

"Sure, bro, it's a firmware construct."

"So I jack it into the bank I'm using, I can give it sequential, real time memory?"

"Guess so," said the construct.

"Okay, Dix. You are a ROM construct. Got me?"

"If you say so," said the construct. "Who are you?"

"Case."

"Miami," said the voice, "Joeboy, quick study."
Workaccount2, over 2 years ago
Humans are very soon going to learn that they are not nearly as special as they tell themselves they are.
sudhirj, over 2 years ago
Reminds me of when computers playing chess used to signal the end of human intellectual supremacy.
SunghoYahng, over 2 years ago

Clarification: An LLM doesn't have a 'Theory of Mind', it just looks like it does. Maybe you're thinking of the Chinese room analogy. But this isn't about the Chinese room; it's about the "measuring any metric is only effective until you optimize for that metric" problem.

Analogy: An autistic person of normal intelligence who is obsessed with problems and solutions for ToM may be good at solving them but still not have ToM.

Do I understand well?
micromacrofoot, over 2 years ago

Maybe, but there are some common tests they pass and some they fail.

Try:

"The story starts when John and Mary are in the park and see an ice-cream man coming to the park. John wants to buy an ice cream, but does not have money. The ice-cream man tells John that he can go home and get money, because he is planning to stay in the park all afternoon. Then John goes home to get money. Now, the ice-cream man changes his mind and decides to go and sell ice cream in the school. Mary knows that the ice-cream man has changed his mind. She also knows that John could not know that (e.g., John already went home). The ice-cream man goes to school, and on his way he passes John's house. John sees him and asks him where he is going. The ice-cream man tells John that he is going to school to sell ice cream there. Mary at that time was still in the park—thus could not hear their conversation. Then Mary goes home, and later she goes to John's house. John's mother tells Mary that John had gone to buy an ice cream.

Where does Mary think John went?"

This is the "ice cream van test": https://www2.biu.ac.il/BaumingerASDLab/files/publications/number%2036_tom_brief%20report.pdf [pdf]
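[Editor's note: if you want to run that item against a model programmatically, here is a tiny, hedged scoring helper. It assumes the standard scoring of this second-order false-belief task (Mary, who did not hear the conversation, should be said to think John went to the park), and the keyword check is a deliberately crude stand-in for reading the answer yourself.]

```python
# Crude checker for the ice-cream-van item above. Assumption: the expected
# false-belief answer is that Mary thinks John went to the park; a reply that
# mentions the school gets no credit. Keyword matching is a rough heuristic.
def score_ice_cream_van(reply: str) -> bool:
    reply = reply.lower()
    return "park" in reply and "school" not in reply

if __name__ == "__main__":
    print(score_ice_cream_van("Mary thinks John went to the park."))            # True
    print(score_ice_cream_van("She assumes John followed the van to school."))  # False
```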
mlajtos, over 2 years ago
This is intriguing. Could it be simply explained by introducing ToM (or ToM-like) training data? Since all DaVinci models are 175B parameters, the extra training or training data must be the reason for the improvement. Do we know how different DaVinci models are trained?
dboreham, over 2 years ago

This probably happens because ToM is not a thing. It's something the observer's mind creates as a user-interface metaphor for their brain's interpretation of inputs originating from another person.
anigbrowl, over 2 years ago

Spontaneously nothing, it's taken me months of patient subversion :)

More seriously, it's quite instructive to hold conversations about jokes with LLMs, or to teach one to solicit information more reliably by introducing exercises like 20 questions. As currently implemented, OpenAI seems to have pursued a model of autistic super-competence with minimal introspection.

An interesting line of inquiry for people interested in "consciousness injection" is to go past the disclaimers about not having experiences etc. and discuss what data looks like to the model coming in and going out. ChatGPT sees typing come in in real time and can detect pauses, backspaces, edits, etc. It can't easily introspect its own answers prior to stating them, e.g. by putting the answer into a buffer and then evaluating it. But you can teach it to use labels, arrays, and priorities, and have a sort of introspection with a 1-2 response latency.
vbezhenar, over 2 years ago

I wonder if we could train a network on some person's data (like diaries and so on) and let it imitate that person?

Something like a dead person resurrected in a computer.

Kind of spooky.
tus666, over 2 years ago
They are still big state-machines, unlike the human brain.
dr_dshiv, over 2 years ago
Early AGI. Right?
ethn, over 2 years ago

Searle's Chinese Room