科技回声

Absolute Zero: Reinforced Self-Play Reasoning with Zero Data

3 points | by sinuhe69 | 6 days ago

2 comments

sinuhe69 · 6 days ago
Reinforcement learning with verifiable rewards (RLVR) has shown promise in enhancing the reasoning capabilities of large language models by learning directly from outcome-based rewards. Recent RLVR works that operate under the zero setting avoid supervision in labeling the reasoning process, but still depend on manually curated collections of questions and answers for training. The scarcity of high-quality, human-produced examples raises concerns about the long-term scalability of relying on human supervision, a challenge already evident in the domain of language model pretraining. Furthermore, in a hypothetical future where AI surpasses human intelligence, tasks provided by humans may offer limited learning potential for a superintelligent system. To address these concerns, we propose a new RLVR paradigm called Absolute Zero, in which a single model learns to propose tasks that maximize its own learning progress and improves reasoning by solving them, without relying on any external data. Under this paradigm, we introduce the Absolute Zero Reasoner (AZR), a system that self-evolves its training curriculum and reasoning ability by using a code executor to both validate proposed code reasoning tasks and verify answers, serving as a unified source of verifiable reward to guide open-ended yet grounded learning. Despite being trained entirely without external data, AZR achieves overall SOTA performance on coding and mathematical reasoning tasks, outperforming existing zero-setting models that rely on tens of thousands of in-domain human-curated examples. Furthermore, we demonstrate that AZR can be effectively applied across different model scales and is compatible with various model classes.
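The code-executor-as-verifier idea from the abstract can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: the `(program, function name, input, predicted output)` task format is an assumption, modeling a deduction-style task where the model must predict a program's output and the executor supplies the ground truth.

```python
def run_program(src: str, func_name: str, arg):
    """Execute a proposed program in a fresh namespace and call the
    target function. In a real system this would be sandboxed."""
    ns: dict = {}
    exec(src, ns)
    return ns[func_name](arg)

def deduction_reward(src: str, func_name: str, arg, predicted_output) -> float:
    """Binary outcome reward: 1.0 if the model's predicted output matches
    the ground truth produced by actually running the program."""
    try:
        truth = run_program(src, func_name, arg)
    except Exception:
        return 0.0  # an invalid proposed task yields no reward
    return 1.0 if predicted_output == truth else 0.0

# A proposed task and a model's prediction for it.
task_src = "def f(x):\n    return sorted(x)[::-1]"
print(deduction_reward(task_src, "f", [3, 1, 2], [3, 2, 1]))  # → 1.0
```

Because the executor either confirms or refutes the prediction, the reward is verifiable without any human-labeled answer key.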
sinuhe69 · 6 days ago
This approach aims to train reasoning models without relying on human-curated data, allowing models to learn by proposing tasks, solving them, and learning from both stages through self-play with the aid of an environment.

The core of this research is the Absolute Zero Reasoner (AZR), which focuses on proposing and solving coding tasks, utilizing a code executor for verifiable feedback.

Key findings and contributions:

- State-of-the-art performance: AZR has demonstrated state-of-the-art performance in coding and mathematical reasoning tasks, outperforming models trained on traditional human-curated datasets.
- Enhanced reasoning capabilities: the study suggests that coding capabilities developed through AZR training may amplify overall improvements in reasoning. Models trained with AZR showed stronger gains in generalized reasoning compared to those trained with expert code.
- Scalability: the performance improvements observed with AZR appear to scale with the size of the model.
- Cognitive behaviors: AZR exhibits emergent cognitive behaviors such as step-by-step reasoning and trial-and-error. The research also noted that token counts grow with training and vary depending on the type of task.

(Summarized by Gemini)
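The propose/solve self-play loop described above might look like the following skeleton. `model_propose` and `model_solve` are stubs standing in for LLM calls, and the proposer reward shown (peaking when the solver succeeds about half the time) is a simplified illustration of rewarding learnable-but-not-trivial tasks, not AZR's exact objective:

```python
import random

def model_propose(rng):
    """Stub proposer: in AZR an LLM generates a new code-reasoning task."""
    n = rng.randint(2, 5)
    return {"src": f"def f(x):\n    return x * {n}", "input": 3}

def model_solve(task):
    """Stub solver: in AZR the same LLM predicts the program's output."""
    return task["input"] * 2  # a fixed guess, correct only when n == 2

def executor_truth(task):
    """The code executor validates the task and produces ground truth."""
    ns: dict = {}
    exec(task["src"], ns)
    return ns["f"](task["input"])

def self_play_step(rng, solve_history):
    task = model_propose(rng)                 # propose role
    truth = executor_truth(task)              # executor grounds the task
    solve_reward = 1.0 if model_solve(task) == truth else 0.0
    solve_history.append(solve_reward)
    # Simplified proposer reward: highest when tasks are neither
    # trivially easy nor impossible for the current solver.
    rate = sum(solve_history) / len(solve_history)
    propose_reward = 1.0 - abs(2 * rate - 1.0)
    return solve_reward, propose_reward

rng = random.Random(0)
history = []
for _ in range(5):
    self_play_step(rng, history)
```

In the real system both roles are the same model updated by RL from these two reward signals, which is what lets the curriculum and the solver co-evolve without external data.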