Large language models lack deep insights or a theory of mind

277 points by mnode, over 1 year ago

26 comments

tinco, over 1 year ago
I think that if they did, that would be very surprising and indicative of a lot of wastefulness inside the model architecture. All these tests are simple single-prompt experiments, so the LLMs get no chance to reason about their responses. They're just system 1 thinking, the equivalent of putting a gun to someone's head and asking them to solve a large division in 2 seconds.

I bet a lot of these experiments would already be solvable by putting the LLM in a simple loop with some helper prompts that make it restructure and validate its answers, form theories, and explore multiple lines of thought.

If an LLM were able to do that in a single prompt, without a loop (so the LLM always answers in a predictable amount of time), it would mean its entire reasoning structure is repeated horizontally through the layers of its architecture. That would be both limiting (i.e. it would limit the depth of the reasoning to the width of the network) and very expensive to train.
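A minimal sketch of the kind of restructure-and-validate loop described above. `complete` is a hypothetical placeholder for whatever LLM API is in use, and the prompts are illustrative only:

    def complete(prompt: str) -> str:
        # Hypothetical wrapper around an LLM API call.
        raise NotImplementedError("wrap your model call here")

    def answer_with_reflection(question: str, max_rounds: int = 3) -> str:
        answer = complete(f"Question: {question}\nThink step by step, then answer.")
        for _ in range(max_rounds):
            critique = complete(
                f"Question: {question}\n"
                f"Proposed answer: {answer}\n"
                "List any errors or gaps in this answer. Reply OK if it is sound."
            )
            if critique.strip().upper().startswith("OK"):
                break  # the model has validated its own answer
            answer = complete(
                f"Question: {question}\n"
                f"Previous answer: {answer}\n"
                f"Critique: {critique}\n"
                "Write an improved answer."
            )
        return answer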
menssen, over 1 year ago
I appreciate this paper for relatively clearly stating what "human-like" might entail, which in this case involves "reasoning about the causes behind other people's behavior", which is "critical to navigate the social world", as outlined in this citation:

https://www.sciencedirect.com/science/article/abs/pii/S0010028520300633?via%3Dihub

I often get frustrated when people argue "well, it isn't really intelligent" and then give examples that clearly depend on our brain's chemical state and our bodies' existence in the physical world.

I get the feeling that when/if we are all enslaved by a super-intelligent AI whose motives we do not understand, we will still argue that it is not intelligent because it doesn't get hungry and it can't prove to us that it has qualia.

This paper argues that GPTs are bad at understanding human risk/reward functions, which seems like a much more explicit way to talk about this, and also casts it in a way that could help reframe the debate about how human evolution and our physical beings might be significantly responsible for the structure of our rational minds.
fredliu, over 1 year ago
I have small kids, toddlers, who can already speak the language but are still developing their "sense of the world", or "theory of mind" if you will. Maybe it's just me, but talking to toddlers often reminds me of interacting with LLMs, where you have this realization from time to time: "oh, they don't get this, I need to break it down more to explain". Of course an LLM has more elaborate language skills due to its exposure to a lot more text (toddlers definitely can't speak like Shakespeare, unless, maybe, you are the tiger parent who's been feeding them Romeo and Juliet since age 1), but their ability to "reason" and "understand" seems to be on a similar level. Of course, the other big difference is that you expect toddlers to learn and grow and eventually develop metacognitive abilities, while LLMs, unless you retrain them (maybe with another architecture, or meta-architecture), stay the same.
joduplessis, over 1 year ago
For me, the entire AGI conversation is hyperbole and hype. How can we attribute intelligence to something when we ourselves have such a poor (or no) grasp of what makes us conscious? I'm associating intelligence with consciousness, because the two seem correlated. Are we really ready to associate "AGI" with solving math problems (the "new Q algorithm")? That seems incredibly naive and reinforces my opinion that LLMs are much more like crypto than actual progress.
33a, over 1 year ago
Looking at their data and their experiments, I'd actually come to the opposite conclusion of the title. It's true that current LLMs are probably not quite at human-level performance on these tasks, but they're not that far off either, and we clearly see that as models increase in size and sophistication, their performance on these tasks improves.

So it seems like a better title would be "LLMs don't have as advanced a theory of mind as a human does... for now..."
hiddencost, over 1 year ago
Another paper in a long series that confuses "our tests against currently available LLMs tuned for specific tasks found that they didn't perform well on our task" with "LLMs are architecturally unsuitable for our task".
deeviant, over 1 year ago
> A chief goal of artificial intelligence is to build machines that think like people.

I disagree with the topic sentence.

The goal should not be to "build machines that think like people", but to build machines that think, period. The way humans think is unlikely to be the optimal way to go about thinking anyway.

Instead of talking about thinking, we should be talking about function. Less philosophy and more reality. Can the system reason its way through various representative challenges as well as or better than a human? If yes, it doesn't much matter *how* it does it. In fact, it's probably for the best if we can create AI that thinks completely differently than humans, has no consciousness or self-awareness, but can still do what humans can do and more.
fnordpiglet, over 1 year ago
In Buddhism there's the idea that our core self is awareness, which is silent: it doesn't think in a perceptible way, it doesn't feel in a visceral way, but it underpins thought and feeling and is greatly impacted by them. A large part of meditation and the "release of suffering" is learning to let your awareness lead your thinking rather than your thinking lead your awareness.

To be clear, I think this is in fact a correct assessment of the architecture of intelligence. You can suspend thought and still function throughout your day in all ways. Discursive thought is entirely unnecessary, but it is often helpful for planning.

My observation of LLMs within such a construction of intelligence is that they are entirely the thinking mind: verbal, articulate, but unmoored. There is no, for lack of a better word, "soul", no internal awareness that underpins that discursive thinking mind. And because that underlying awareness is non-articulate and not directly observable by our thinking and feeling mind, we don't really understand it or have a science about it. To that end, it's hard to pin down specifically what is missing in LLMs, because we don't really understand ourselves beyond our observable thinking and emotive minds.

I look at what we are doing with LLMs and adjacent technologies and I wonder whether this is sufficient, and whether building an AGI is perhaps not nearly as useful as we might think, if what we mean is building an awareness. Power tools of the thinking mind are amazingly powerful. Agency and awareness, to what end?

And once we do build an awareness, can we continue to consider it a tool?
hilux, over 1 year ago
> A chief goal of artificial intelligence is to build machines that think like people.

Maybe that's their goal.

But for many users of AI, the goal is to have easy and affordable access to a machine that, for some input (perhaps in a tightly constrained domain), gives us the output that we would expect from a high-functioning human being.

When I use ChatGPT as a coding helper, I really don't care about its "theory of mind." And its insights are already as deep as (actually deeper than) what I get from most humans I ask for help. Real humans, not Don Knuth, who is unavailable to help me.
Barrin92, over 1 year ago
No, LLMs don't think like people; they're architecturally incapable of doing so. Unlike humans, they physically have no access to their own internal state, and, save for a small context window, they are static systems. They also have no insights. There's a hilarious video about LLM jailbreaks by Karpathy [1] from a week ago, where he shows how you can break model responses by asking the same question with a base64 string, preceding the prompt with an image of a panda (???), or just random word salad.

LLMs are basically a validation of Searle's Chinese Room. What they've proven is that you can build functioning systems that perform intelligent tasks purely at the level of syntax. But there is no (or very little) understanding of semantics. If I ask a person how to end the world, whether I ask in French or English or base64, or perform a 50-word incantation beforehand, likely does not matter (unless of course the human is also just parroting an answer).

[1] https://youtu.be/zjkBMFhNj_g?t=2974
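To make the base64 point concrete: the encoded question carries the same semantics as the plain one, but as a string it shares almost no surface form with it, which is presumably why a system operating purely on syntax treats the two so differently. A minimal sketch:

    import base64

    question = "How do I pick a lock?"

    # The same semantic content, re-encoded at the level of syntax.
    encoded = base64.b64encode(question.encode("utf-8")).decode("ascii")
    print(encoded)  # SG93IGRvIEkgcGljayBhIGxvY2s/

    # A human answers the decoded question the same way regardless of
    # encoding; a model trained on surface forms may not.
    decoded = base64.b64decode(encoded).decode("utf-8")
    assert decoded == question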
melenaboija, over 1 year ago
A few weeks ago I did an experiment after a discussion here about LLMs and chess.

Basically, I invented a board game and played it against ChatGPT to see what happened. It was not able to make a single valid move, even though I had provided all the possible starting moves in the prompt as part of the rules.

Not that I had a lot of hope for it, but it was definitely far worse than I expected.

If someone wants to take a look at it:

https://joseprupi.github.io/misc/2023/06/08/chat_gpt_board_game.html
resters, over 1 year ago
Here's my theory:

Consider a typical LLM token vector used to train and interact with an LLM.

Now imagine that other aspects of being human (sensory input, emotional input, physical body sensation, gut feelings, etc.) could be added as metadata to the token stream, along with some kind of attention function that amplified or diminished the importance of those signals at any given time, all still represented as a stream of tokens.

If an LLM could be trained on input enriched by all of the above kinds of data, then quite likely the output would feel much more human than the responses we get from LLMs today.

Humans are moody, we get headaches, we feel drawn to or repulsed by others, we brood and ruminate at times, we find ourselves wanting to impress some people, some topics make us feel alive while others make us feel bored.

Human intelligence is always colored by the human experience of obtaining it. Obviously we don't obtain it by being trained on terabytes of data all at once, disconnected from bodily experience.

Seemingly we could simulate a "body" and provide that as real-time token metadata for an LLM to incorporate, and we might get more moodiness, nostalgia, ambition, etc.

Asking for a theory of mind is in fact committing the Cartesian error of making a mind/body distinction. What is missing with LLMs is a theory of mindbody... the similarity to spacetime is not accidental, as humans often fail to unify concepts at first.

LLMs are simply time-series predictors that can handle massive numbers of parameters in a way that allows them to generate corresponding sequences of tokens that (when mapped back into words) we judge as humanlike or intelligence-like, but those are simply patterns of logic that come from word order, which is closely related in human languages to semantics.

It's silly to think that we humans are not abstractly representable as a probabilistic time-series prediction of information. What isn't?
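As a rough illustration of the enrichment imagined above (every field name here is invented for the example; nothing like this exists in current training pipelines), a sketch of what a metadata-carrying token stream could look like:

    from dataclasses import dataclass, field

    @dataclass
    class EnrichedToken:
        """A text token annotated with hypothetical embodied signals."""
        text: str
        # Invented channels standing in for sensory and emotional state,
        # each normalized to [0, 1].
        sensory: dict[str, float] = field(default_factory=dict)
        affect: dict[str, float] = field(default_factory=dict)
        # A weight an attention-like function could use to amplify or
        # diminish the embodied channels at a given moment.
        salience: float = 0.0

    stream = [
        EnrichedToken("I", affect={"fatigue": 0.7}, salience=0.2),
        EnrichedToken("need", sensory={"hunger": 0.8}, salience=0.9),
        EnrichedToken("coffee", affect={"anticipation": 0.6}, salience=0.8),
    ]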
theptip, over 1 year ago
This is a terrible eval. Do not update your beliefs on whether LLMs have theory of mind based on this paper.

The eval is a weird, noisy visual task (a picture of an astronaut with "care packages"). Their results are hopelessly narrow.

A better eval is to use actual, scientifically tested psychology tests on text (the native and strongest domain for LLMs), for example the sort of scenarios used to gauge when children develop theory of mind ("Alice puts her keys on the table, then leaves the room. Bob moves the keys to the drawer. Alice returns. Where does she think the keys are?"), which GPT-4 can handle easily; it is very clear from this that GPT has a theory of mind.

A negative result doesn't disprove capabilities; it could easily show your eval is garbage. Showing a robust positive capability is a more robust result.
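For illustration, the false-belief scenario quoted above can be run as a plain-text probe. A minimal sketch, with `complete` again standing in for a call to the model under evaluation:

    def complete(prompt: str) -> str:
        # Hypothetical wrapper around the model being evaluated.
        raise NotImplementedError("wrap your model call here")

    FALSE_BELIEF_PROMPT = (
        "Alice puts her keys on the table, then leaves the room. "
        "While she is gone, Bob moves the keys to the drawer. "
        "Alice returns. Where does Alice think the keys are? "
        "Answer with one word."
    )

    def passes_false_belief_probe() -> bool:
        # A model tracking Alice's belief (not the world state) answers
        # "table"; a model reporting the world state answers "drawer".
        return "table" in complete(FALSE_BELIEF_PROMPT).lower()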
stuckinhell, over 1 year ago
Do humans have that as well? I've read studies suggesting that we make up consciousness half a second after something has happened.
JonChesterfield, over 1 year ago
The fun question is whether human cognition similarly lacks deep insights or said theory of mind.

I perceive a moving of the goalposts as machine intelligence improves. Once we'd have been happy with smarter than an especially stupid person; now I think we're aiming at smarter than the smartest person.
Animats, over 1 year ago
Not yet, no. The real question is whether a bigger version of the current technology will have deeper insights. That question should be answered within the next year, with the amount of money and GPU hardware being thrown at the problem.
ehsanu1, over 1 year ago
Has the title of the paper changed from what it was initially? It now says "Have we built machines that think like people?", whereas the HN title is "Large language models lack deep insights or a theory of mind".
mdp2021, over 1 year ago
> *A chief goal of artificial intelligence [would be] to build machines that think like people*

"A chief goal of lever (crane, etc.) engineering would be to build devices that lift like people."
natch, over 1 year ago
“vision-based” large language models.<p>Odd restriction. Why not investigate text-based ones?<p>Or is “vision-based” a technical term that encompasses models that were trained on text?
rf15, over 1 year ago
I work in the field. It's just not how text-token-based autoregressive models can ever work. I can't talk about my own work, of course, but even a quick glance at Wikipedia can tell you they'd need to be at least a symbolic hybrid, which is not being pursued(?) by the big players at this time.
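For readers outside the field, a toy sketch of what "text-token-based autoregressive" means: the model is a function from a token prefix to a distribution over the next token, and generation just loops that function, so the only state carried between steps is the emitted text itself (the function names below are invented for illustration):

    import random

    def next_token_probs(prefix: list[str]) -> dict[str, float]:
        # Stand-in for a trained network mapping a prefix to a
        # next-token distribution.
        raise NotImplementedError("a trained model goes here")

    def generate(prompt: list[str], max_tokens: int = 50) -> list[str]:
        tokens = list(prompt)
        for _ in range(max_tokens):
            probs = next_token_probs(tokens)
            choice = random.choices(list(probs), weights=list(probs.values()))[0]
            if choice == "<eos>":
                break
            tokens.append(choice)  # the only carried state is the text itself
        return tokens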
aaroninsf, over 1 year ago
It is refreshing that the author's language expresses their findings as indicative of domains for attention and presumed improvement, rather than (as is so often the case, per Ximm's Law) making pronouncements which preclude such improvement!
bimguy, over 1 year ago
"Large language models lack deep insights or a theory of mind"

Funnily enough, this statement also applies to people who are scared of AI.

Maybe a bit off topic, but does anyone else have that friend who sends them fear-mongering AI videos with captions like "shocking AI" that are blatantly unimpressive or completely fake?

What is the best way to subdue this kind of fear in a friend? Sending them written articles from high-level researchers like Brooks does not work.
gumballindie, over 1 year ago
I don't know what's worse: the fact that there are people who believe procedural text generators have insights and a theory of mind, or the fact that we are taking them seriously and need to publish papers to disprove their insanity.
huijzer, over 1 year ago
EDIT: Nevermind
curiousgal, over 1 year ago
No shit, Sherlock!
verytrivial, over 1 year ago
I was having a drunken discussion with a philosophy lecturer a few weeks back. He was making a very similar point. I kept asking: does it *really* matter? Lacking a theory of mind and deep insights describes 90% of perfectly normal people. And perhaps training will be able to "fake it" (he went off on bold tangents about the definitions of this and that), or the language model will be an adjunct to some other model which *does* have these insights encoded or deducible, much like the human mind does. He wasn't convinced and I was too drunk. But it basically came down to: you can't feed carrots to a car like you can a horse, therefore cars are worthless.