
LLMs can't perform "genuine logical reasoning," Apple researchers suggest

115 points, by samizdis, 7 months ago

12 comments

rahimnathwani, 7 months ago
Discussed the day before yesterday: https://news.ycombinator.com/item?id=41823822

And the day before that: https://news.ycombinator.com/item?id=41808683
Comment #41850672 not loaded
wkat4242, 7 months ago
LLMs were never designed for this. In Apple's language: "you're holding it wrong".

It's an impressive technology, but its limits are largely overlooked in the current hype cycle.

AI researchers have known this from the start and won't be surprised by this, because it was never intended to be able to do this.

The problem is the customers who are impressed by the human-sounding bot (sounding human is exactly what an LLM is for) and mentally ascribe human skills and thought processes to it. And start using it for things it's not, like an oracle of knowledge, a reasoning engine or a mathematics expert.

If you want knowledge, go to a search engine (a good one like Kagi), which can be AI-assisted like Perplexity. If you want maths, go to Wolfram Alpha. For real reasoning we need a few more steps on the road to general AI.

This is the problem with hype. People think a tech is the be-all and end-all for everything and no longer regard its limitations. The metaverse hype had the same problem, even though there are some niche use cases where it really shines. But now it's labelled a flop because the overblown expectations of all the overhyped investors couldn't be met.

What an LLM is great at is the human-interaction part. But it needs to be backed by other types of AI that can actually handle the request, and for many use cases that tech still needs to be invented. What we have here is a toy dashboard that looks like the dashboard of a real car, except it's not connected to one. The rest will come, but it'll take a lot more time. Meanwhile, making LLMs smarter will not really solve the problem that they're inherently not the tool for the job they're being used for.
Comment #41848665 not loaded
Comment #41859077 not loaded
Comment #41850458 not loaded
Comment #41846934 not loaded
Comment #41846276 not loaded
gota, 7 months ago
This seems to be a comprehensive repeat of the "Rot13" and "Mystery Blocks World" experiments as described by Prof. Subbarao Kambhampati.

Rot13 meaning that LLMs can't do Rot-3, Rot-4, ..., Rot-n, except for Rot13 (because that's in the training data).

Mystery Blocks World being a trivial "translation" (by direct replacement of terms) of a simple Blocks World. The LLMs can solve the original, but not the "translation", surprisingly, even when provided with the term replacements!

Both are discussed in Prof. Subbarao's Machine Learning Street Talk episode.
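
To make the Rot-n point concrete, here is a minimal sketch of the generic shift cipher those experiments refer to; ROT-13 is just the n = 13 special case, which is the one models tend to handle. The function name and test strings are illustrative, not taken from the experiments themselves.

    def rot_n(text: str, n: int) -> str:
        """Apply a Caesar shift of n positions to ASCII letters; leave other characters unchanged."""
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord("A") if ch.isupper() else ord("a")
                out.append(chr((ord(ch) - base + n) % 26 + base))
            else:
                out.append(ch)
        return "".join(out)

    # ROT-13 is its own inverse; any other shift needs the complementary shift to decode.
    assert rot_n(rot_n("Hello, world", 13), 13) == "Hello, world"
    assert rot_n(rot_n("Hello, world", 4), 26 - 4) == "Hello, world"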
TexanFeller, 7 months ago
A couple of years ago I heard an interview that has stuck with me. The speaker said that human "reasoning" didn't evolve to reason in the logical sense, but to _provide reasons_ likely to be accepted by other humans, allowing better survival by manipulating other humans. This matches my perception of most people's reasoning.

What's funny is that AI is now being trained by a human accepting or rejecting its answers, probably not on the basis of the rigor of the answer, since the temp worker hired to do it is probably not a logician, mathematician, or scientist. I suspect most people's reasoning is closer to an LLM's than we would be comfortable admitting.
Comment #41849034 not loaded
Comment #41849335 not loaded
Comment #41848837 not loaded
airstrike, 7 months ago
> OpenAI's ChatGPT-4o, for instance, dropped from 95.2 percent accuracy on GSM8K to a still-impressive 94.9 percent on GSM-Symbolic.

In other words, ChatGPT continues to dominate. A 0.3-point drop might as well be noise.

Also, the original, allegedly more expensive GPT-4 (can we call it ChatGPT-4og??) is conspicuously missing from the report...
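
For a rough sense of whether a 0.3-point drop is distinguishable from sampling noise, here is a back-of-envelope sketch assuming a test set of about 1,319 problems (the size of GSM8K's test split); the actual GSM-Symbolic protocol averages over many generated variants, so this is only an approximation, not the paper's analysis.

    import math

    # Standard error of a binomial proportion at the reported GSM8K accuracy.
    n = 1319          # assumed number of test problems (GSM8K test split)
    p = 0.952         # reported GSM8K accuracy for ChatGPT-4o
    se = math.sqrt(p * (1 - p) / n)

    print(f"one standard error ~= {se * 100:.2f} percentage points")   # ~0.59
    print(f"observed drop        = {95.2 - 94.9:.2f} percentage points")  # 0.30

On this crude estimate the drop is well within one standard error, which is consistent with the commenter's "noise" reading.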
osigurdson, 7 months ago
Companies have been salivating at the possibility of firing everybody and paying ChatGPT $20 per month instead to run the entire business. I don't have any moral objections to it, but I find it incredibly naive. ChatGPT / LLMs help a bit; that's it.
Comment #41866415 not loaded
randcraw, 7 months ago
One thing I like about this effort is their attempt to factor out the caching of prior answers due to having asked a similar question before. Given the nearly eidetic memoization ability of LLMs, no cognitive benchmark can be meaningful unless the LLM's question history can somehow be voided after each query. I think this is especially true when measuring reasoning, which will surely benefit greatly from the caching of answers from earlier questions into a working set that enhances its associations on future similar questions, which only *looks* like reasoning.
Comment #41850681 not loaded
cyanydeez, 7 months ago
The best thing LLMs do is add to the theory of p-zombies among the population.

Instead of the dead-internet theory, we should start finding out what percentage of the population is no better than an LLM.
Comment #41844099 not loaded
Comment #41845749 not loaded
Comment #41843898 not loaded
jokoon, 7 months ago
Finally, some people are using basic cognitive science to evaluate AI.

Also, they mapped an insect brain.

Seems like my several comments suggesting AI scientists should look at other fields did get some attention.

That probably makes me the most talented and insightful AI scientist on the planet.
bubble12345, 7 months ago
I mean, so far LLMs can't even do addition and multiplication of integers accurately. So we can't really expect too much in terms of logical reasoning.
Comment #41844128 not loaded
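
One way to check the arithmetic claim above is to generate random multi-digit products and compare a model's answers against exact arithmetic. The ask_model hook below is a hypothetical placeholder, not a real API; it is stubbed with exact arithmetic so the harness runs end to end, and wiring in an actual LLM client is left to the reader.

    import random

    def ask_model(prompt: str) -> str:
        # Hypothetical hook: replace the body with a call to an actual LLM client.
        # Stubbed with exact arithmetic so this sketch runs as-is.
        a, b = map(int, prompt.split("*"))
        return str(a * b)

    def probe(trials: int = 20, digits: int = 6) -> float:
        """Fraction of random multi-digit multiplications answered exactly right."""
        correct = 0
        for _ in range(trials):
            a = random.randint(10 ** (digits - 1), 10 ** digits - 1)
            b = random.randint(10 ** (digits - 1), 10 ** digits - 1)
            correct += ask_model(f"{a}*{b}").strip() == str(a * b)
        return correct / trials

    if __name__ == "__main__":
        print(f"exact-match accuracy: {probe():.0%}")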
krick, 7 months ago
First off, I want to say it is kind of baffling to me that this is some kind of novel "research", and that it's published by Apple of all companies in the field. I could be more forgiving that some journalists try to sell it as "look, LLMs are incapable of logical reasoning!", because journalists always shout loud stupid stuff, otherwise they don't get paid, apparently. But still, it's hard to justify the nature of this "advancement".

I mean, what is being described seems like a super basic debugging step for any real-world system. This is the kind of thing not-very-advanced QA teams in boring banks do to test your super-boring, not-very-advanced back-office bookkeeping systems. After this kind of testing reveals a number of bugs, you don't erase the bookkeeping system and conclude banking should be done manually on paper only, since computers are obviously incapable of making correct decisions; you fix these problems one by one, which sometimes means not just fixing a software bug but revising the whole business logic of the process. But this is, you know, routine.

So, not being aware of what these benchmarks everyone uses to test LLM products actually are (please note, they are not testing LLMs as some kind of concept here, they are testing *products*), I would assume that OpenAI in particular, and any major company that released their own LLM product in the last couple of years in general, already does this super-obvious thing. But why does this huge discovery happen now, then?

Well, obviously, there are two possibilities. Either none of them really do this, which sounds unbelievable: what do all these highly paid genius researchers even do then? Or, more plausibly, they do, but don't publish it. This one sounds reasonable, given there's no OpenAI, but AltmanAI, and all that stuff. Like, they compete to make a better general reasoning system; *of course* they don't want to reveal all their research.

But this doesn't really look reasonable to me (at least, at this very moment) given how basic the problem being discussed is. I mean, every school kid knows you shouldn't test on data you use for learning, so "peeking into answers when writing a test" only to make your product perform slightly better on popular benchmarks seems super cheap. I can understand when Qualcomm tweaks processors specifically to beat AnTuTu, but trying to beat problem-solving by improving your crawler to grab all the tests on the internet is pointless. It seems they should actively try not to contaminate their training step with popular benchmarks. So what's going on? Are the people working on these systems really that uncreative?

This said, all of this only applies to the general approach, which is to say it's about what the article *claims*, not what it *shows*. I personally am not convinced.

Let's take the kiwi example. The whole argument is framed as if it's obvious that the model shouldn't have subtracted those 5 kiwis. I don't know about that. Let's imagine this is a real test, taken by real kids. I guarantee you, most (all?) of them would be rather confused by the wording. Like, what should we do with this information? Why was it included? Then they will decide whether they should or shouldn't subtract the 5. I won't try to guess how many of them will, but the important thing is, they'll have to make this decision, and (hopefully) nobody will suddenly multiply the answer by 5 or do some meaningless shit like that.

And neither did the LLMs in question, apparently.

In the end, these students will get the wrong answer, sure. But who decides it's wrong? Well, of course, the teacher does. Why is it wrong? Well, "because it wasn't said you should discard small kiwis!" Great, man, you also didn't tell us we shouldn't discard them. This isn't a formal algebra problem; we are trying to use some common sense here.

In the end, it doesn't really matter what the teacher thinks the correct answer is, because it was just a stupid test. You may never really agree with him on this one, and it won't affect your life. Probably you'll end up making more than him anyway, so there's your consolation.

So framing situations like this as proof that the LLM gets things objectively wrong just isn't right. It got them subjectively wrong, as judged by the opinion of the Apple researchers in question, and some other folks. Of course, this is what LLM development essentially is: doing whatever magic you deem necessary to get it to give more subjectively correct answers. And this returns us to my first point: what is OpenAI's (Anthropic's, Meta's, etc.) subjectively correct answer here? What is the end goal anyway? Why does this "research" come from "Apple researchers" and not from one of these companies' tech blogs?
fungiblecog, 7 months ago
No shit!