
There may not be aha moment in R1-Zero-like training

78 points | by qianli_cs | 3 months ago

7 comments

vessenes, 3 months ago
I sort of flipped between "boring" and "..interesting.." and "maybe boring?" and "possibly interesting?" reading this.

The meaning of the title is simply that models from most providers can do some self-reflection when prompted to do so, without any R1-Zero-type fine-tuning.

This is put out as surprising, and I do not think it is surprising at all. We've known about Chain-of-Thought-style prompting for some time, and this is a variant of it.

They then mention that 1.5B-ish-parameter models don't seem to be *very good* at self-reflection out of the box. This does not surprise me in the slightest. Unless heavily distilled and tuned for a very specific job, 1.5B-parameter models aren't good at much.

They then note that something about the reward functions in R1-Zero's setup creates a typical pattern of shorter and shorter self-reflection until some sort of inflection point, where the reflection gets longer and correct answers become more common.

This seems pretty interesting! The so-called "aha" moment is when a model during training hits this inflection point and starts productively using and extending the self-reflection.

I think my reaction overall is that the research is worth doing, as it's trying to get at what exactly works about R1-Zero training, and why it works, and that's great. It's just a small start, though.
Vetch, 3 months ago
The essence of the article is that self-correction already exists as a nascent ability in base models (more robustly in some, like Qwen, than in others). This is highly reminiscent of Chain of Thought, which was also found to be a capability already present in base models. The effect of RL is to reinforce already-present authentic self-correction patterns and down-weight superficial self-correction.

Thoughts:

- An analogy you shouldn't zoom too close into: going from CoT to reasoning traces is like going from purely ballistic trajectories to adding navigation and thrusters. RL is for learning how to use the thrusters for adjustments, based on the model's internal encodings of rare samples† where some author fully spelled out their thought process.

- This might also explain why SFT on reasoner traces seems surprisingly effective. If it were a purely RL-mediated phenomenon, SFT for reasoning would not work nearly as well.

- DeepSeek struggled to get RL to work on smaller models. If this is replicated, it might be that larger models encode self-correction patterns both more robustly and with higher probability.

- Imitating traces is easier than pure RL for bringing such patterns to the fore in smaller models. However, we still want models to learn how to dynamically adjust their thrusters, and SFT does not provide ample opportunity for this. Further training with RL, or alternatively replacing SFT with methods like [Critique Fine-Tuning](https://arxiv.org/abs/2501.17703), is needed.

- The article incidentally reinforces that a low temperature buys consistency, not correctness. Except in high-confidence scenarios, the greedily decoded highest-probability answer is generally less likely to be among the best ones the model can give.

†Question: my first thought is blogs by people who discuss what didn't work. But I wonder how much of reasoning-model patterns and ability is shaped by Detective Conan transcripts?
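The "low temperature means consistency, not correctness" point is easy to see in a toy softmax: lowering the temperature concentrates probability mass on the argmax token without changing which token that is. A minimal stdlib-only sketch with made-up logits:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over next-token logits.
    Low temperature sharpens the distribution toward the argmax;
    temperature -> 0 approaches greedy decoding."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.5, 0.5]  # toy logits for three candidate tokens

# T=1.0: mass is spread across alternatives
print([round(p, 3) for p in softmax(logits, temperature=1.0)])  # → [0.547, 0.331, 0.122]
# T=0.1: near-greedy, almost all mass on one token
print([round(p, 3) for p in softmax(logits, temperature=0.1)])  # → [0.993, 0.007, 0.0]
```

Sampling at low temperature just re-emits the model's single most probable continuation more consistently; it does nothing to make that continuation more likely to be correct.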
Jean-Papoulos, 3 months ago
> We found Superficial Self-Reflection (SSR) from base models' responses, in which case self-reflections do not necessarily lead to correct final answers.

I must be missing something here. No one was arguing that the AI's answers are correct to begin with, just that self-reflection leads to more correct answers compared to not using the process?
littlestymaar, 3 months ago
TL;DR: Base models exhibit what the authors call "Superficial Self-Reflection," where it looks like the model is reasoning but this doesn't lead to an actual improvement in answer quality. Then, with RL, the models learn to use this reflection effectively to improve answer quality.

The whole read is interesting, but I don't think the title is really an accurate description of it…
trash_cat, 3 months ago
"...found that the increasing response length phenomenon is not due to the emergence of self-reflection, but a consequence of RL optimizing well-designed rule-based reward functions."

What is the difference?
jamiequint, 3 months ago
Some interesting discussion in the author's X thread here: https://x.com/zzlccc/status/1887557022771712308
benob, 3 months ago
This calls for controlling for the post-training instruction data of the base model: does it contain many instances of self-reflection?

Also, has anyone tried non-instruct-tuned base models?