
Does RL Incentivize Reasoning in LLMs Beyond the Base Model?

84 points by leodriesch, 23 days ago

11 comments

spwa4, 23 days ago
I don't like papers that ask a question in the title, so here's the answer:

"RL boosts sampling efficiency but reduces the reasoning capacity boundary."

Perhaps better to put it like this: Given one, or a few, attempts, RL-trained models beat non-RL models. Given many attempts, non-RL models come up with better answers.
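A toy model of that trade-off (the coverage and per-sample accuracy numbers below are made up for illustration, not taken from the paper): an RL-tuned model that reliably solves a narrower set of problems wins at pass@1, while a base model that can in principle solve more problems, but rarely samples a correct answer, overtakes it at large k.

```python
# Hypothetical numbers: "coverage" is the fraction of problems the model can
# solve at all, "per_sample_p" is the chance a single sample is correct.
def pass_at_k(coverage, per_sample_p, k):
    # P(problem is within reach) * P(at least one of k independent samples is correct)
    return coverage * (1 - (1 - per_sample_p) ** k)

for k in (1, 8, 64, 256):
    rl = pass_at_k(0.60, 0.50, k)    # narrow but reliable
    base = pass_at_k(0.90, 0.02, k)  # broad but rarely correct
    print(f"k={k:3d}  RL={rl:.2f}  base={base:.2f}")
```

With these made-up numbers the RL model wins at k=1 (0.30 vs. 0.02), but the base model pulls ahead somewhere between k=8 and k=64.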
yorwba, 23 days ago
They write "We manually inspect CoT validity to ensure correct answers stem from valid reasoning, not lucky guesses." but the example answer they show at the end only gets the correct number due to two errors canceling out. The model calculates 195+367+562+900 and gets 1924 instead of 2024, and also turns -437 - 2*234 into -805 instead of -905, but in total 1924-805 = 2024-905 = 1119 and from there the remaining steps are correct again.

It would be interesting to know how much of the sampling efficiency improvement from reinforcement learning is due to being better at basic arithmetic (something which could also be achieved by giving the model access to a calculator tool) and how much is due to choosing the correct approach for solving the problem more often.
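The two errors described above do cancel exactly, which is easy to verify:

```python
print(195 + 367 + 562 + 900)   # 2024, but the model wrote 1924
print(-437 - 2 * 234)          # -905, but the model wrote -805
print(1924 - 805, 2024 - 905)  # both 1119, so the final answer still comes out right
```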
nialv7, 22 days ago
> we uncover that RL-trained models excel at low k (e.g., pass@1) but are consistently outperformed by base models at high k (e.g., pass@256).

This is a weak argument. I think I get what they are trying to say, but let's take this to the extreme, say pass@10^10^100. Just like a group of monkeys could write Shakespeare if given enough time, a completely random model could probably outperform an RL-trained model at pass@10^10^100. Would we then say the random model can reason too?

Of course the correct reasoning trace will be in the base model's distribution, just like any other well-formed, coherent paragraph. Kind of makes me think, maybe sampling efficiency _is_ intelligence?
iceman_w, 22 days ago
RL constrains the space of possible output token sequences to what is likely to lead to the correct answer. So we are inherently making a trade-off to reduce variance. A non-RL model will have higher variance, so given enough attempts, it will come up with some correct answers that an RL model can't.
KTibow, 22 days ago
I'm a bit skeptical of this until it's proven that they're getting the right answers in the right ways. It could be that base models are just more random and when given 200 guesses out of 1000 possible answers tend to distribute them more evenly, bringing up the pass@k number.
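A rough sanity check on that scenario (the numbers are hypothetical, mirroring the comment above): a model guessing nearly uniformly over about 1000 plausible answers still hits the correct one fairly often within 200 samples, which would inflate pass@200 without any reasoning at all.

```python
# Chance that at least one of 200 near-uniform guesses over 1000 candidate
# answers is correct (hypothetical numbers).
p_hit = 1 - (1 - 1 / 1000) ** 200
print(round(p_hit, 3))  # ~0.181
```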
macleginn, 23 days ago
‘Crucially, all correct solutions from RL-trained models already exist in the base model's distribution, proving RLVR enhances sampling efficiency, not reasoning capacity, while inadvertently shrinking the solution space.’ — wouldn't any kind of RL fail to converge, or even progress at all, if the solution weren't to be found in the base model distribution? The way training is set up, the models absolutely need to be able to find right solutions in a reasonable time, otherwise there wouldn't be any training signal.
imtringued, 22 days ago
> Our key finding is that all reasoning paths in the RLVR model are already present in the base model.

This is a really good observation. It means that you don't need to RL the full model. You merely need to RL a few LoRAs or maybe a small Mamba model appended to the final layer.
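A minimal sketch of that idea, assuming a Hugging Face causal LM and the peft library (the checkpoint name and LoRA hyperparameters are placeholders, and the RL loop itself is omitted): freeze the base weights and expose only LoRA adapters to the RLVR optimizer.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B")  # placeholder checkpoint
lora_cfg = LoraConfig(
    r=8,                                  # low-rank adapter dimension (placeholder)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # adapt attention projections only
    task_type="CAUSAL_LM",
)
policy = get_peft_model(base, lora_cfg)
policy.print_trainable_parameters()  # only the adapter weights require gradients
# ...an RLVR-style loop (e.g. PPO/GRPO on verifiable rewards) would update `policy` here.
```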
imenani, 21 days ago
They fix the temperature at T=0.6 for all k and all models, even though their own Figure 10 shows that the RL model benefits from higher temperatures. I would buy the overall claim much more if they swept the temperature parameter for each k and model like they did in the Codex paper [1].

[1] https://arxiv.org/abs/2107.03374
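For reference, the pass@k estimator used in that Codex paper: sample n completions per problem, count the number c that are correct, and average 1 - C(n-c, k)/C(n, k) over problems. A small sketch of the numerically stable form given in the paper:

```python
import numpy as np

def pass_at_k(n, c, k):
    """Unbiased estimate of pass@k for one problem: n samples, c of them correct."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g. 200 samples per problem, 3 of them correct:
print(pass_at_k(200, 3, 1), pass_at_k(200, 3, 100))
```

Sweeping temperature per k, as the comment suggests, would change the correct-sample counts c that feed this estimator for each model.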
Der_Einzige, 22 days ago
This 100% tracks with my experience.

Also, fun stuff many don't know: if you run a regular model's chat template with a reasoning-tuned model, it can go back to acting like the base model, with no "thinking" process.

"Reasoning" models are not any better than non-reasoning models. It's a parlor trick, and benchmarks which claimed otherwise are bad.
kk58, 22 days ago
Reasoning models aren't really reasoners; it's basically a neural style transfer protocol where you force a model "decoder" to emit tokens in a style that appears to be reasoning, like deductive thinking.
whatshisface, 22 days ago
If you don't know the answer to a problem, you're not going to be able to repeat sampling until it is correct. Random strings will saturate all benchmarks at k=infinity if tested this way.