The HN submission title is editorialized in a non-helpful way. Why beat a dead horse instead of focusing on what’s actually new in TFA?

The linked paper proposes an obvious-in-retrospect form of data augmentation: shuffle the order of the premises, so that the model can’t rely on spurious patterns. That’s kinda neat.
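For intuition, here's a minimal sketch of what that kind of augmentation could look like, assuming a toy record with a list of premise strings (the field names and data layout are made up for illustration, not taken from the paper's actual pipeline):

    import random

    def shuffle_premises(example, n_copies=3, seed=0):
        # Produce extra training copies whose premises are randomly permuted,
        # so premise position stops being a usable (spurious) signal.
        rng = random.Random(seed)
        out = [example]
        for _ in range(n_copies):
            premises = example["premises"][:]
            rng.shuffle(premises)
            out.append({**example, "premises": premises})
        return out

    # Hypothetical usage with a made-up example record:
    sample = {
        "premises": [
            "All penguins are birds.",
            "No birds in this zoo can fly.",
            "Pingu is a penguin in this zoo.",
        ],
        "conclusion": "Pingu cannot fly.",
    }
    for ex in shuffle_premises(sample):
        print(ex["premises"])

The model then sees the same logical content under several orderings, so memorizing a fixed premise sequence buys it nothing.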
When a language model is trained for chain-of-thought reasoning, particularly on datasets with limited variation in step ordering, it may end up memorizing predetermined step patterns that look effective but don’t reflect true logical understanding. Rather than deriving each step from the previous ones and the given premises, the model may simply follow a “recipe” it learned from the training data. This adherence to learned patterns can overshadow genuine logical relationships: the model relies on familiar sequences instead of understanding why one step follows from another.

In other words, language models are advanced pattern recognizers that mimic logical reasoning without genuinely understanding the underlying logic.

Maybe we need to shift our focus to the training phase to get better performance?
If an LLM’s logic is derived primarily from its training phase, essentially by following patterns it has previously seen, doesn’t that underscore the critical role of training? We invest significantly in reinforcement learning and subsequent post-training processes, so if the paper’s claim is accurate, perhaps we need to explore more innovative approaches during the training phase itself.
Statistical inference is a form of logic. Can it do pure logical deduction? No. And neither can humans without some underlying pattern recognition to form premises. This notion of true "understanding" is a fantasy.
Is there someone you're trying to disprove? LLMs are inherently statistical, as opposed to other techniques that rely on symbolic or logical relationships. I'm no expert, but this is one of the very first things I learned when taking a class on neural networks.