
Machine learning and information theory concepts towards an AI Mathematician

109 points by marojejian 7 months ago

8 comments

youoy 7 months ago
I feel that here the authors think about mathematics as just the language and not what it talks about. We know at least since Gödel that mathematics is more than just the formal system. I even remember going to a talk by Gromov and him getting "angry" at a question from the audience along the lines of "what if we suppose this extra thing and change this other thing in that theorem", and him saying that it's stupid to think up new theorems for the sake of doing it, that the language of mathematics is like metaphors speaking about something else.

In my experience learning math, their claim that "The central hypothesis is that a desirable body of theorems better summarizes the set of all provable statements, for example by having a small description length" is not true. Short != better; better is what gets me faster to the correct intuitive idea about the mathematical statement. For example, several times I have experienced understanding the formal definitions and proofs of a theory, but it's not until I form the correct intuitions (maybe months later) that I truly understand it. And it's not until I have the correct intuitions that I can successfully apply the theory to create meaningful new theorems.

Anyway, I understand that one has to start from somewhere, and the point of view of the article is more tractable and explicit.
sgt101 7 months ago
I love that "System 1" and "System 2" are tossed out as fundamental concepts of mind when they're basically labels for some semi-random ideas based on bullshit studies in a pop-sci fluff book. Is there any actual science that ties these ideas to mathematical reasoning?

Then we have:

>> By analogy, we can now state the main hypothesis proposed in this paper: a crucial component of the usefulness of a new proven theorem t (in the context of previous theorems T(S)) is how efficiently T(S) ∪ {t} compresses the set of all provable mathematical statements M. That is, T(S) ∪ {t} is a good compression of M if many provable statements, beyond those in S, can be derived from T(S) ∪ {t}, say using at most k derivation steps.

which says "Occam's Razor, that's good, isn't it?" in more words.
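To make the quoted criterion concrete, here is a minimal Python sketch (an editorial illustration, not the paper's code): a candidate theorem t is scored by how many extra statements become derivable from T(S) ∪ {t} within k derivation steps. The statements and one-step rules below are made-up stand-ins for M and the proof system.

```python
# Toy sketch of the compression criterion quoted above (not the paper's code).
# A candidate theorem scores higher if adding it makes more of the statement
# universe reachable within k derivation steps.

def closure(theorems: set, rules, k: int) -> set:
    """All statements derivable from `theorems` in at most k rule applications."""
    known = set(theorems)
    for _ in range(k):
        new = {concl for premises, concl in rules
               if all(p in known for p in premises)} - known
        if not new:
            break
        known |= new
    return known

def usefulness(t: str, T: set, rules, k: int) -> int:
    """Extra statements reachable once t is added to the theorem set T."""
    return len(closure(T | {t}, rules, k)) - len(closure(T, rules, k))

# Hypothetical derivation rules: (premises, conclusion).
rules = [
    (("A",), "B"),
    (("t1", "B"), "C"),
    (("t1", "C"), "D"),
    (("t2",), "E"),
]
T = {"A"}
for cand in ["t1", "t2"]:
    # t1 scores 3, t2 scores 2: t1 "compresses" more of M in this toy setup.
    print(cand, usefulness(cand, T, rules, k=3))
```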
marojejian 7 months ago
I found this to be an interesting read, though it's just a high-level plan, vs. breaking new ground. I like how the ideas are synthesized.

A fun detail is to be found in a footnote: "A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies." (attributed to Stefan Banach in Randrianantoanina and Randrianantoanina [2007]).

And reading this, it seems to me this implies the "ultimate" mathematician is a poet.

P.S. I initially assumed "Randrianantoanina and Randrianantoanina" was the title of some nerdy speculative fiction. It turns out this is a regular paper reference. The authors are Malagasy, no doubt. They have such cool names there.
thomasahle 7 months ago
A key aspect of AlphaProof was how to find good training problems. For AlphaProof they managed to scrape 1 million "International Mathematical Olympiad (IMO) style" problems.

Here Bengio seems to just suggest using the estimated entropy `E_{P(t|T(S))}[-log P(t|T(S))]` of a theorem to find interesting/surprising theorems to try to prove.

This reminds me of the early work on estimating LLM uncertainty/hallucinations using perplexity. It didn't work very well.

Actually generating good "exercises" for the system to "practice" on, and theorems to try to prove, still seems like the biggest roadblock for an AI mathematician. It connects to the philosophical question of why we do math in the first place.
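For concreteness, here is a rough sketch of that kind of surprisal ranking, using Hugging Face transformers with GPT-2 purely as a stand-in; in Bengio's proposal, P would be a model conditioned on the current theorem set T(S), and the candidate statements below are invented for illustration.

```python
# Rough sketch: rank candidate conjectures by surprisal -log P(t | context),
# a crude stand-in for the E_{P(t|T(S))}[-log P(t|T(S))] term mentioned above.
# GPT-2 is used only for illustration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def surprisal(statement: str, context: str) -> float:
    """Mean per-token negative log-likelihood of `statement` given `context`.

    Assumes tokenizing `context` alone yields a prefix of tokenizing
    `context + statement` (true for the plain ASCII strings used here).
    """
    n_ctx = tok(context, return_tensors="pt").input_ids.shape[1]
    ids = tok(context + statement, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)  # predicts tokens 1..L-1
    targets = ids[0, 1:]
    nll = -logprobs[torch.arange(targets.numel()), targets]
    return nll[n_ctx - 1:].mean().item()                  # statement tokens only

context = "Known theorems: addition on the naturals is commutative. Conjecture: "
candidates = [
    "Two plus two equals four.",
    "Every even number greater than 2 is the sum of two primes.",
]
for c in sorted(candidates, key=lambda s: surprisal(s, context), reverse=True):
    print(f"{surprisal(c, context):.2f}  {c}")
```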
marojejian 7 months ago
Abstract. The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning. What could be missing? Can we learn something useful about that gap from how the brains of mathematicians go about their craft? This essay builds on the idea that current deep learning mostly succeeds at system 1 abilities – which correspond to our intuition and habitual behaviors – but still lacks something important regarding system 2 abilities – which include reasoning and robust uncertainty estimation. It takes an information-theoretical posture to ask questions about what constitutes an interesting mathematical statement, which could guide future work in crafting an AI mathematician. The focus is not on proving a given theorem but on discovering new and interesting conjectures. The central hypothesis is that a desirable body of theorems better summarizes the set of all provable statements, for example by having a small description length while at the same time being close (in terms of number of derivation steps) to many provable statements.
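One possible way to formalize the abstract's last sentence (an editorial reading, not the paper's stated objective): prefer the theorem set whose description is short while leaving provable statements few derivation steps away.

```latex
% A hedged formalization of the abstract's criterion, not taken from the paper:
% prefer a theorem set T that is short to describe while keeping provable
% statements m in M reachable in few derivation steps (capped at k).
\[
  T^\star \;=\; \operatorname*{arg\,min}_{T \subseteq M}\;
      \ell(T)
      \;+\; \lambda \sum_{m \in M} \min\bigl( d(m \mid T),\, k \bigr)
\]
% where \ell(T) is the description length of T, d(m \mid T) is the number of
% derivation steps needed to reach m from T, and \lambda is a trade-off weight.
```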
ford 7 months ago
Given the pace of AI lately, it might be worth mentioning that this is from March of this year.
jlopes2 7 months ago
This feels a little too high-level for me; the leap from what LLMs do today to a provable theorem set is rough.

The community is having large debates on whether an LLM can reason outside of its training. That feels ignored here.
n00b101 7 months ago
### Formalization and Implementation

While the paper lays out a theoretical framework, its practical implementation may face significant challenges. For instance, generating meaningful mathematical conjectures is far more abstract and constrained than tasks like generating text or images. The space of potential theorems is vast, and training an AI system to navigate this space intelligently would require further breakthroughs in both theory and computational techniques.

### Compression as a Measure of Theorem Usefulness

The notion that a good theorem compresses provable statements is intriguing but may need more exploration in terms of practical utility. While compression aligns with Occam's Razor and Bayesian learning principles, it's not always clear whether the most "compressed" theorems are the most valuable, especially when considering the depth and complexity of many foundational theorems in mathematics.

### Human-AI Collaboration

The paper lightly touches on how this AI mathematician might work alongside humans, but the real power of such a system might lie in human-AI collaboration. A mathematician AI capable of generating insightful conjectures and proofs could dramatically accelerate research, but the interaction between AI and human intuition would be key.

### Computational and Theoretical Limits

There are also potential computational limits to the approach. The "compression" and "conjecture-making" frameworks proposed may be too complex to compute at scale, especially when considering the vast space of possible theorems and proofs. Developing approximation methods or heuristics that are effective in real-world applications will likely be necessary.

Here's how we can unpack this paper:

### System 1 vs. System 2 Thinking

- **System 1** refers to intuitive, fast, and automatic thinking, such as recognizing patterns or generating fluent responses based on past experience. AI systems like GPT-4 excel in this area, as they are trained to predict and generate plausible content based on large datasets (e.g., text completion, language generation).
- **System 2** refers to deliberate, logical, and slow thinking, often involving reasoning, planning, and making sense of abstract ideas, such as solving a mathematical proof, engaging in formal logic, or synthesizing novel insights. The claim that AI lacks System 2 abilities suggests that while AI can mimic certain behaviors associated with intelligence, it struggles with tasks that require structured, step-by-step reasoning and deep conceptual understanding.

### "Not so much in terms of mathematical reasoning"

The claim is **partially true**, but it must be put into context:

- **Progress in AI**: AI has made **tremendous advances** in recent years, and while it may still lack sophisticated mathematical reasoning, there is significant progress in related areas like automated theorem proving (e.g., systems like Lean or Coq; see the sketch after this comment). Specialized systems can solve well-defined, formal mathematical problems, though these systems are not general-purpose AI and operate under specific constraints.
- **Scope of Current Models**: General-purpose models like GPT-4 weren't specifically designed for deep mathematical reasoning. Their training focuses on predicting likely sequences of tokens, not on formal logic or theorem proving. However, with enough specialized training or modularity, they could improve in these domains. We've already seen AI systems make progress in proving mathematical theorems with reinforcement learning and imitation learning techniques.
- **Frontiers of AI**: As AI continues to develop, future systems might incorporate elements of both System 1 and System 2 thinking by combining pattern recognition with symbolic reasoning and logical processing (e.g., systems that integrate neural networks with formal logic solvers or reasoning engines).

### Conclusion

AI excels at tasks involving intuitive, pattern-based thinking but struggles with the deliberate, goal-oriented reasoning required for deep mathematical work. However, as research evolves, especially toward hybrid models that combine deep learning with symbolic reasoning and formal logic, these limitations may become less pronounced.

The future of AI may very well involve systems that are capable of the same level of mathematical reasoning (or better) as human experts.
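As a concrete reference point for the theorem provers mentioned above, here is a minimal, self-contained Lean 4 example (an editorial illustration, not from the paper) of the kind of statement such systems check mechanically:

```lean
-- Illustrative only (assumes a recent Lean 4 toolchain; `omega` is the
-- built-in linear-arithmetic tactic): a machine-checked proof that the
-- sum of two even natural numbers is even.
theorem even_add_even (m n : Nat)
    (hm : ∃ k, m = 2 * k) (hn : ∃ k, n = 2 * k) :
    ∃ k, m + n = 2 * k := by
  cases hm with
  | intro a ha =>
    cases hn with
    | intro b hb =>
      -- With m = 2 * a and n = 2 * b in context, the goal
      -- m + n = 2 * (a + b) is plain linear arithmetic.
      exact ⟨a + b, by omega⟩
```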