Order Doesn’t Matter, But Reasoning Does

14 points, by spaintech, 2 months ago

6 comments

scribu, 2 months ago
The HN submission title is editorialized in a non-helpful way. Why beat a dead horse instead of focusing on what’s actually new in TFA?

The linked paper proposes an obvious-in-retrospect form of data augmentation: shuffle the order of the premises, so that the model can’t rely on spurious patterns. That’s kinda neat.
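A minimal sketch of that kind of order-centric augmentation, assuming each training example is a dict with "premises" (a list of strings) and "conclusion" keys; the field names and data shape here are hypothetical, not taken from the paper:

```python
import random

def augment_premise_order(example, n_variants=3, seed=0):
    """Expand one example into variants with shuffled premise order.

    Only the order in which the premises are presented changes; the
    conclusion stays fixed, so the model cannot lean on positional
    patterns. The "premises"/"conclusion" fields are illustrative
    assumptions, not the paper's actual schema.
    """
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        shuffled = example["premises"][:]  # copy before shuffling
        rng.shuffle(shuffled)
        variants.append({"premises": shuffled,
                         "conclusion": example["conclusion"]})
    return variants

# Usage: one logical-deduction example becomes several
# order-shuffled copies for training.
example = {
    "premises": ["All birds have wings.",
                 "Tweety is a bird.",
                 "Things with wings can fly."],
    "conclusion": "Tweety can fly.",
}
for v in augment_premise_order(example):
    print(v["premises"])
```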
spaintech, 2 months ago
When a language model is trained for chain-of-thought reasoning, particularly on datasets with a limited number of sequence variations, it may end up memorizing predetermined step patterns that seem effective but don’t reflect true logical understanding. Rather than deriving each step logically from the previous ones and the given premises, the model might simply follow a “recipe” it learned from the training data. As a result, this adherence to learned patterns can overshadow genuine logical relationships, causing the model to rely on familiar sequences instead of understanding why one step logically follows from another.

In other words, language models are advanced pattern recognizers that mimic logical reasoning without genuinely understanding the underlying logic.

Perhaps we need to shift our focus to the training phase for better performance?
spaintech, 2 months ago
If an LLM’s logic is derived primarily from its training phase… essentially, by following patterns it has previously seen… doesn’t that underscore the critical role of training? We invest significantly in reinforcement learning and subsequent processes, so if the paper’s claim is accurate, perhaps we need to explore innovative approaches during the training phase.
fancyfredbot, 2 months ago
The title is actually “Order Doesn’t Matter, But Reasoning Does: Training LLMs with Order-Centric Augmentation”.
farts_mckensy, 2 months ago
Statistical inference is a form of logic. Can it do pure logical deduction? No. And neither can humans without some underlying pattern recognition to form premises. This notion of true “understanding” is a fantasy.
srveale, 2 months ago
Is there someone you’re trying to disprove? LLMs are inherently statistical, as opposed to other techniques that rely on symbolic or logical relationships. I’m no expert, but this is one of the very first things I learned when taking a class on neural networks.