Assessing the reasoning capabilities of large language models (LLMs) poses a significant challenge, particularly in distinguishing reasoning from memorization.

For instance, when an LLM answers "2 + 2 = 4," it relies on repetition in its training data rather than an understanding of arithmetic. This behavior parallels Daniel Kahneman's "System 1" thinking—fast and reflexive.

Yet on more complex tasks, such as adding large numbers or solving multi-step puzzles, LLMs typically fail unless they can access external tools.

This inability to shift to "System 2" thinking—slow, deliberate reasoning—remains a fundamental limitation.

Vendors have addressed this by integrating tools like calculators, a useful addition that works around the inability of LLMs to reason.

But how can progress be measured accurately if simple reasoning tasks are offloaded to tools?

## Tricky Questions: A Flawed Metric

To overcome this challenge, researchers have crafted "tricky" questions designed to test reasoning, such as:

> "You have 3 apples, and I give you 2 more—but one is much smaller. How many apples do you have?"

An LLM might misinterpret the detail about size as a cue to exclude the smaller apple. While such tests highlight weaknesses, they mainly probe linguistic ambiguity rather than reasoning. Moreover, as vendors train models to handle these patterns, the tests lose diagnostic value.

Instead, we propose focusing on straightforward tasks that require deliberate reasoning and cannot be solved through pattern recognition.

## A Reasoning Benchmark Framework

*Effective evaluation demands benchmarks that are clear, simple, and tool-free*.

We propose the following milestones:

1. *Basic Arithmetic Competence*: A reasoning model should reliably compute sums, products, or powers of large numbers without external tools (a minimal check for this milestone is sketched in the appendix below).
2. *Execution of Simple Algorithms*: The model should perform basic algorithmic tasks, such as sorting a list, computing a factorial, or simulating a logic circuit, without external tools.
3. *Structured Puzzles*: The model should solve puzzles such as sudoku or nonograms without external tools.
4. *Strategic Gameplay*: The model should play games such as tic-tac-toe, checkers, or chess without external tools.
5. *Novel Problem Solving*: Finally, a capable reasoning system should propose original solutions to well-defined mathematical or logical problems. Generating new proofs or contributing insights to unsolved problems would demonstrate a high degree of reasoning aptitude.

These benchmarks establish a baseline for reasoning but do not imply artificial general intelligence (AGI).

At the same time, we can use these benchmarks to reject claims that LLMs are somehow "close" to AGI.

## External Tools and Transparency

Proprietary LLMs often integrate tools to enhance performance, but this makes it impossible to evaluate the underlying models in isolation.

To ensure fair assessment, vendors should provide a way to disable tools during evaluations.

## Simplicity as a Strength

Critics may argue that simple benchmarks fail to capture real-world complexity. Yet, as the arithmetic example shows, simplicity can illuminate reasoning processes without sacrificing rigor.

Straightforward tasks like multi-step computations and logical puzzles reveal essential reasoning skills without relying on tricky or convoluted questions.

## Conclusion

Evaluating reasoning in LLMs does not require convoluted tests. Transparent, tool-free benchmarks grounded in deliberate problem-solving provide a clearer measure of progress.
By focusing on tasks that demand "System 2" thinking, we can set meaningful milestones for development.

No LLM should be deemed closer to AGI if it cannot solve simple reasoning problems independently. Transparency and simplicity are essential for advancing our understanding of these systems and their potential.
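
## Appendix: A Minimal Tool-Free Arithmetic Check

To make the first milestone concrete, here is a minimal sketch of a tool-free arithmetic harness. It assumes only a generic `ask_model(prompt) -> str` callable supplied by the evaluator; the function name, prompt format, and scoring rule are illustrative choices, not part of any vendor API. Ground truth is computed locally, so the check never depends on the model's own tool use.

```python
import random
import re


def make_arithmetic_item(digits: int = 8) -> tuple[str, int]:
    """Generate one large-number multiplication question and its ground-truth answer."""
    a = random.randint(10 ** (digits - 1), 10 ** digits - 1)
    b = random.randint(10 ** (digits - 1), 10 ** digits - 1)
    return f"Compute {a} * {b}. Reply with the number only.", a * b


def extract_integer(reply: str) -> int | None:
    """Pull the last integer out of a free-form reply, tolerating thousands separators."""
    matches = re.findall(r"-?\d[\d,]*", reply)
    return int(matches[-1].replace(",", "")) if matches else None


def score(ask_model, n_items: int = 100, digits: int = 8) -> float:
    """Return the fraction of items answered exactly right.

    `ask_model` is a hypothetical prompt -> reply callable; tool use must be
    disabled on the model side for the score to say anything about reasoning.
    """
    correct = 0
    for _ in range(n_items):
        prompt, truth = make_arithmetic_item(digits)
        if extract_integer(ask_model(prompt)) == truth:
            correct += 1
    return correct / n_items
```

Exact-match scoring keeps the harness free of judgment calls, and the same pattern extends to the algorithmic tasks in the second milestone, for example by checking a model-produced sorted list against Python's own `sorted`.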