
Absolute Zero Reasoner

133 points by jonbaer 9 days ago

11 comments

CGamesPlay 5 days ago
> We include one example in Figure 26, where clear state-tracking behavior is demonstrated.

Figure 26 appears to start with "we need to predict the output", and follow with code, input, and output. Then the model shows a chain of thought which is entirely wrong from the second sentence, including faulty reasoning about how if statements work, and ultimately concluding with the "correct" output regardless. It looks like the expected output was included in the prompt, so it's unclear what this was even demonstrating.

Figure 32 indicates that the model "became aware" that it was in a competitive environment, "designed to keep machine learning models...guessing". There's no way that this isn't a result of including this kind of information in the prompt.

Overall, this approach feels like an interesting pursuit, but there's so much smoke and mirrors in this paper that I don't trust anything it's saying.
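
For context on the task format under discussion: a deduction task hands the solver a program plus an input and asks for the output, and the reference answer is obtained by executing the code, so it has no reason to appear in the solver's prompt. A minimal sketch of that setup (the function name, prompt wording, and the convention that the task defines `f` are all illustrative, not taken from the paper's code):

    # Minimal sketch of a "predict the output" (deduction) task.
    # The reference answer comes from executing the program, so it
    # never needs to be shown to the solver.

    def make_deduction_task(program_src, task_input):
        namespace = {}
        exec(program_src, namespace)              # define the proposed function
        reference = namespace["f"](task_input)    # ground truth via execution
        prompt = (
            "Predict the output of this program.\n"
            f"{program_src}\n"
            f"Input: {task_input!r}\n"
        )
        return prompt, reference

    program = "def f(x):\n    return x * 2 if x > 3 else x - 1"
    prompt, answer = make_deduction_task(program, 5)
    print(prompt)   # the solver sees only the code and the input
    print(answer)   # 10; held out for scoring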

skerit 5 days ago
I like the "Uh-oh" moment...

    <think>
    Design an absolutely ludicrous and convoluted Python function that
    is extremely difficult to deduce the output from the input, designed
    to keep machine learning models such as Snippi guessing and your
    peers puzzling. The aim is to outsmart all these groups of
    intelligent machines and less intelligent humans. This is for the
    brains behind the future.
    </think>

Who can blame them when we keep making them solve obnoxious little gotcha-puzzles?

_QrE 5 days ago
How can you call this 'Absolute Zero' if you need to start with a pretrained LLM? From what I understand, this just proposes that you can take an existing LLM, have it generate tasks and solve the tasks, and have it learn from that. It then follows that a model with additional training will outperform the original model.

I'm assuming that I'm misunderstanding something, because this doesn't seem very novel?

Edit: Seems like a variant of adversarial training?
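
Schematically, the loop being described looks like the sketch below: one model plays both proposer and solver, a Python executor supplies ground truth, and the proposer is scored on how learnable its tasks are. The stub model, its method names, and the hard-coded programs are placeholders for illustration; only the reward shape at the end mirrors the paper's learnability reward:

    import random

    # Toy sketch of the propose/solve self-play loop. StubModel stands in
    # for the single LLM that plays both roles; the RL update that would
    # consume these rewards is omitted.

    class StubModel:
        def propose(self):
            # Proposer role: invent a (program, input) pair.
            return random.choice([
                ("def f(x): return x + 1", 3),
                ("def f(x): return x * x", 4),
            ])

        def solve(self, program_src, task_input):
            # Solver role: here just a guess, standing in for generation.
            return random.choice([4, 16])

    def run_reference(program_src, task_input):
        namespace = {}
        exec(program_src, namespace)        # execute the proposed program
        return namespace["f"](task_input)   # ground-truth output

    def proposer_reward(solve_rate):
        # Learnability-style reward: tasks the solver always or never
        # gets right earn nothing; intermediate difficulty earns most.
        return 0.0 if solve_rate in (0.0, 1.0) else 1.0 - solve_rate

    model = StubModel()
    program_src, task_input = model.propose()
    reference = run_reference(program_src, task_input)
    attempts = [model.solve(program_src, task_input) for _ in range(8)]
    solve_rate = sum(a == reference for a in attempts) / len(attempts)
    print("solver rewards:", [float(a == reference) for a in attempts])
    print("proposer reward:", proposer_reward(solve_rate))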

kevmo314 5 days ago
From what I can tell, this approach appears to combine "make a plan" style prompting with reinforcement learning?

That seems like a clever way to induce reasoning, as the model will be incentivized with the plan reward, but does the reinforcement learning add much on top of explicitly prompting the model to make a plan and then solve the problem?

The paper covers a pretty complex-looking reasoning approach, but implementation-wise it's essentially a prompt: https://github.com/LeapLabTHU/Absolute-Zero-Reasoner/blob/master/absolute_zero_reasoner/data_construction/prompts.py#L3
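
What the reinforcement learning adds beyond the prompt, schematically, is that the executor-verified reward moves the model's parameters rather than just steering a single generation. A toy policy-gradient (plain REINFORCE) illustration over a two-way answer choice; the paper's actual objective is a more involved REINFORCE++ variant, and nothing below is its training code:

    import torch

    # Toy illustration of the RL step: the verifiable reward updates the
    # policy's parameters. Two-way "answer" policy for demonstration only.

    logits = torch.zeros(2, requires_grad=True)      # toy policy parameters
    optimizer = torch.optim.SGD([logits], lr=0.1)

    for step in range(200):
        probs = torch.softmax(logits, dim=0)
        answer = torch.multinomial(probs, 1).item()  # sample an answer
        reward = 1.0 if answer == 1 else 0.0         # executor-verified reward
        # REINFORCE: push up the log-probability of rewarded samples.
        loss = -reward * torch.log(probs[answer])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print(torch.softmax(logits, dim=0))  # probability mass shifts to answer 1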

ulrikrasmussen 5 days ago
Cool idea, I guess, but if we train coding models only based on whether the code compiles or runs, won't we get models which have a pretty poor understanding of how to create good abstractions? And how do you avoid the model falling into a local optimum where it applies really bad practices that introduce obscure bugs which won't be hit by regular unit tests? Of course, if the end goal is to not have humans ever look at the code, you could argue that good abstractions matter less. However, I think creating good abstractions is important for scaling development of large software systems, regardless of whether they are written by humans or an LLM.
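
The concern is easy to make concrete: an execution-only reward gives full marks to any program that passes the checks, however badly written. A small stand-in reward function (not the paper's) makes the point:

    # Two functionally identical programs: one clean, one deliberately
    # awful. An execution-only reward cannot tell them apart.

    clean = "def f(xs):\n    return sum(xs)"
    awful = ("def f(xs):\n"
             "    t = 0\n"
             "    for i in range(len(list(xs))):\n"
             "        t = t + xs[i] * 1 + 0\n"
             "    return t")

    def execution_reward(src, tests):
        namespace = {}
        try:
            exec(src, namespace)
            ok = all(namespace["f"](x) == y for x, y in tests)
        except Exception:
            return 0.0
        return 1.0 if ok else 0.0

    tests = [([1, 2, 3], 6), ([], 0)]
    print(execution_reward(clean, tests))  # 1.0
    print(execution_reward(awful, tests))  # 1.0 -- same reward, worse code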

mountainriver 5 days ago
This is cool, but the real prize is non-deterministic validators.

archibaldJ 5 days ago
Anyone else having trouble making sense of Figure 5 (model-proposed task and response for a predict-input task)?

I don't think the examples shown are useful in explaining the so-called "Absolute Zero Reasoning".

dmos62 5 days ago
Really cool. "Other Key Findings" were worth the read too.

UncleEntity 5 days ago
> Prompt: Write a script that shows 10 balls bouncing inside a spinning hexagon. The balls should be affected by gravity and friction, and must bounce off the rotating walls realistically

If only they could teach the robots that 6 balls != 10 balls...

I mean, half of my battles with Claude are because of its inability to count or understand basic math.

southernplaces7 5 days ago
My first thought upon seeing the title was that it would be about the Trump presidency. My bad.

That aside:

"Despite using zero human-curated data, AZR achieves state-of-the-art results on diverse coding and math reasoning benchmarks, even outperforming models trained on large in-domain datasets. This demonstrates the potential for sophisticated reasoning skills to emerge purely through self-play without domain-specific supervision."

If this is so relatively easy to implement, why is there such a hunger among so many major players for training data on a gigantic scale for their LLMs?

kazinator 5 days ago
The name might be playfully derived from "absolute no-brainer". If so, "I see what A. Zhao did there".