
Ladder: Self-improving LLMs through recursive problem decomposition

370 points | by fofoz | 2 months ago

22 comments

EMIRELADERO · 2 months ago
What the hell is going on this week?!?!? (asking positively, with a smile on my face)

I have seen at least 3 interesting/mildly promising breakthroughs in ML just these past two days! I mean, a Google research team just discovered that you can combine NNs with CLAs using digital logic gates as a medium, so you could potentially reduce many kinds of non-linear problems to a simple, efficient digital circuit! And it was on the HN front page, TODAY! [1]

I keep seeing more mind-bending stuff related to neural nets and logic/intelligence in general; my mind has been running wild with speculation about the future and just how close we could (or could not) be to truly understanding how intelligence works from first principles.

[1] https://news.ycombinator.com/item?id=43286161
isaacfrond · 2 months ago
Reminds me of a quote by the famous number theorist Hendrik Lenstra:

"For every problem you can't solve, there's a simpler problem that you also can't solve."
barteloniu · 2 months ago
Their test-time RL approach seems a bit fishy. From what I understand, TTRL works by asking a language model to generate simpler versions of the test case. Once we have the simpler problems, we run RL on them, hoping that an improvement on the simplified cases will also strengthen the model's performance on the original problem.

The issue is, they use a numerical integrator to verify the simpler problems. One could imagine a scenario where a barely simpler problem is generated, and the model is allowed to train on pretty much the test case knowing the ground truth. Seems like training on the test set.

The rest of the paper is nice though.
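The verifier barteloniu describes can be made concrete with a toy sketch: the RL loop only ever sees a pass/fail signal from a numerical integrator checking a proposed antiderivative. All function names here are illustrative, not from the paper.

```python
import math

def numeric_integral(f, a, b, n=10_000):
    # Composite trapezoid rule for the definite integral of f over [a, b].
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def reward(candidate_F, f, a=0.0, b=1.0, tol=1e-4):
    # Binary RL reward: does the proposed antiderivative F of f satisfy
    # F(b) - F(a) ≈ ∫_a^b f(x) dx, up to the verifier's tolerance?
    gap = abs((candidate_F(b) - candidate_F(a)) - numeric_integral(f, a, b))
    return 1.0 if gap < tol else 0.0

# A correct antiderivative of cos is sin (reward 1.0); cos itself is wrong (0.0).
print(reward(math.sin, math.cos))  # 1.0
print(reward(math.cos, math.cos))  # 0.0
```

The leakage concern follows directly: if a "simpler" variant is numerically indistinguishable from the original test problem, optimizing against this reward is effectively training on the test set.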
mentalgear · 2 months ago
> We demonstrate LADDER's effectiveness in the subject of mathematical integration, improving Llama 3.2 3B's accuracy from 1% to 82% on undergraduate-level problems
niemandhier · 2 months ago
Frank Herbert knew it: this is basically an implementation of the Mentats' recursive self-inspection described in Dune.
Davidzheng · 2 months ago
Test-time training/RL is definitely the right approach for math AI in the future. It is probably one of only a few ways to spend an obscene amount of compute on a given problem (think 10^5 GPUs for a few days), and it has hopes of making progress when test-time inference scaling may not at first (think of trying to do MCTS on a Go position with a bad value/policy net). AlphaProof already did this, but it's nice to see it done again. Good results!
neoneye2 · 2 months ago
Sidenote: the `Tufa Labs` team includes the `MindsAI` team of ARC-AGI fame. https://tufalabs.ai/team.html
pyryt · 2 months ago
Some names are just too tempting: https://arxiv.org/abs/1507.02672
thomasahle · 2 months ago
At the end of the paper they mention "two problems from the 2025 MIT Integration Bee qualifying exam which the system continued to answer incorrectly".

They say the questions were among the most complex questions on the exam, but the first one is just

    ∫ ∛(x · ∜(x · ∜(x · √(x · √(x · ⋯ ))))) dx

which just requires you to compute

    1/3 + 1/(3*4) + 1/(3*4*5) + ...

So hardly very advanced math.
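For the curious, thomasahle's series has a closed form: the n-th term 1/(3·4·…·n) equals 2/n!, so the sum is 2(e − 5/2) = 2e − 5. A quick numerical sanity check (my own arithmetic, not from the thread):

```python
import math

# Partial sum of 1/3 + 1/(3*4) + 1/(3*4*5) + ...
# Each term 1/(3*4*...*n) = 2!/n! = 2/n!, so the sum is 2e - 5.
s = sum(2 / math.factorial(n) for n in range(3, 25))

print(s)               # ≈ 0.436563...
print(2 * math.e - 5)  # closed form, same value
```

The integrand is therefore x^(2e−5), so the integral is just x^(2e−4)/(2e−4) + C, which supports the "hardly very advanced" verdict.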
vessenes · 2 months ago
That this works at all is pretty interesting. That it seems to work very well with math is quite interesting.

That said, this paper is part of the current move blurring the lines between training and inference: part of their method involves doing some reinforcement learning on questions they don't know the answer to, but can decompose into simpler questions, and using GRPO on those with a numerical 'checker'. This reinforced model can then answer more questions.

I like this. I think humans do this a lot; mulling on something, turning it over in their heads, analogizing, etc. Adding test-time training is a way to do a lot more thinking than adding tokens to the context for fixed inference.

Just as DeepSeek and o1/o3 show that we can increase capacity with inference-time token generation and assessment, it looks like we can increase capacity with inference-time automated fine-tuning as well.

I'd hope that as these techniques solidify we'll have a new way to talk and think about this; they are all part of the same fundamental process at some level.

Either way, super cool.
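The GRPO step vessenes mentions needs no learned value network: each reward is centered against the other rollouts in its group. A minimal sketch of that advantage computation (illustrative only, not the paper's implementation):

```python
import statistics

def grpo_advantages(rewards):
    # Group-relative advantage: z-score each rollout's reward within its group,
    # so better-than-average samples are reinforced and worse ones penalized.
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

# Four rollouts of one decomposed problem, scored 0/1 by the numerical checker.
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))  # [1.0, -1.0, 1.0, -1.0]
```

These advantages then weight the policy-gradient update in place of a critic's estimates, which is what makes the checker's binary signal enough to train on.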
mentalgear · 2 months ago
It's exciting to see approaches like RL and curriculum learning, which I always felt were the way to go for real self-improvement ~7 years ago when training in robotics (OpenAI Gym days), finally getting successfully applied to NLP/LLMs to highly boost small-model performance.

(LADDER is a sort of RL self-curriculum-learning approach.)
flakiness · 2 months ago
Off topic, but their site is lovely: https://tufalabs.ai/index.html It feels like a gold rush for sure.
cratermoon · 2 months ago
How many rungs of a ladder would you be willing to climb if you knew that each rung was made from half the previous rung?
daxfohl · 2 months ago
How much GPU would an RL setup like this need for tuning? Is the approach something someone could experiment with themselves, or is it thousands of USD in cloud costs and/or years of compute if done on a laptop GPU?
nis0s · 2 months ago
What's the difference between this and what Wolfram Alpha has been doing?

https://www.wolfram.com/artificial-intelligence/
explosion-s · 2 months ago
I would love to be able to use the actual model! If I'm understanding correctly, this makes small models as intelligent as much larger models like GPT-4o.
evjan · 2 months ago
I had NotebookLM make a 15-minute podcast about it and listened to it while walking the dogs. It was a very interesting way of trying to understand a research paper!

You need a Google account to access it, unfortunately: https://notebooklm.google.com/notebook/fbaba495-d4f2-48a3-a3c2-09cb826b351b/audio
goyel · 2 months ago
I wonder why nobody has made an NN to find the weights faster and better than gradient descent.
majordroid · 2 months ago
> We also introduce TTRL (Test-Time Reinforcement Learning), where we perform reinforcement learning on variants of test problems at inference time. TTRL enables Qwen2.5 7B Deepseek-R1 Distilled to achieve a state-of-the-art score of 90% on the MIT Integration Bee qualifying examination, surpassing OpenAI o1's performance.

That's incredible!
revskill · 2 months ago
The LLM keeps deleting my file content, which proves we still have far too many things left to do.
bloomingkales · 2 months ago
I'm kinda getting the sense this is still just prompt engineering in a loop.

"Persona-based prompting: We prompted the model to adopt different mathematical perspectives (e.g., 'think like Euler focusing on series', 'approach like Gauss looking for patterns')."

I mean ... I guess that's scientific?

Besides that, how can the model learn at test time (at inference)? It's stateless; it doesn't incorporate the last prompt into the model.
ma9o · 2 months ago
divide and conquer :)