
Learning how to think with Meta Chain-of-Thought

229 points by drcwpl 4 months ago

9 comments

drcwpl 4 months ago
I find their critique compelling, particularly their emphasis on the disconnect between CoT's algorithmic mimicry and true cognitive exploration. The authors illustrate this with examples from advanced mathematics, such as the "windmill problem" from the International Mathematical Olympiad, a puzzle whose solution eludes brute-force sequential thinking. These cases underscore the limits of a framework that relies on static datasets and rigid generative processes. CoT, as they demonstrate, falters not because it cannot generate solutions, but because it cannot conceive of them in ways that mirror human ingenuity.

As they say: "Superintelligence isn't about discovering new things; it's about discovering new ways to discover."
adampk 4 months ago
This is the big idea in the paper: basically, CoT is limited for some complex problems because there is a class of problems with no 'textbook' way to find a solution. These are novel problems that need a unique methodology. "Essentially, to start generating the solution requires that we already know the full approach. The underlying generative process of the solution is not auto-regressive from left-to-right."

Mathematical meaning:

"We can formalize this argument through the interpretation of reasoning as a latent variable process (Phan et al., 2023). In particular, classical CoT can be viewed as (equation), i.e., the probability of the final answer being produced by a marginalization over latent reasoning chains.

We claim that for complex problems, the true solution generating process should be viewed as (equation), i.e., the joint probability distribution of the solution (a, s1, . . . , s) is conditioned on the latent generative process. Notice that this argument is a meta-generalization of the prior CoT argument, hence why we will refer to the process q → z1 → . . . → z as Meta-CoT."

I think this is seminal. It is getting at the heart of some real issues. Ask o1-pro how you could make a 1550 nm laser diode operating at 1 GHz have low geometric loss without an expensive collimator, using commodity materials or novel manufacturing approaches from first-principles physics, and the illusion that o1-pro is a big deal is lost. 'Novel' engineering is out of reach because there is no textbook on how to do novel engineering, and this class of problems is "not auto-regressive from left-to-right."
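The two-level picture quoted above can be made concrete with a toy discrete example. All probabilities below are invented for illustration (they are not from the paper): classical CoT marginalizes over latent chains z, while the Meta-CoT view additionally marginalizes over the latent process that generates the chain.

```python
# Toy numerical sketch of the latent-variable view of CoT quoted above.
# All distributions are made-up numbers for a tiny discrete example.

# Classical CoT: p(a|q) = sum_z p(a|q,z) * p(z|q)
p_z_given_q = {"z1": 0.5, "z2": 0.3, "z3": 0.2}    # p(z|q): chain sampled left-to-right
p_a_given_qz = {"z1": 0.9, "z2": 0.4, "z3": 0.1}   # p(a|q,z): prob. of the correct answer

p_a_classical = sum(p_z_given_q[z] * p_a_given_qz[z] for z in p_z_given_q)

# Meta-CoT: condition the chain on a latent generative process q -> z1 -> ... -> z,
# i.e. also marginalize over *how the chain itself is produced*:
# p(a|q) = sum_m p(m|q) * sum_z p(z|q,m) * p(a|q,z)
p_proc_given_q = {"search_A": 0.6, "search_B": 0.4}   # p(m|q): latent search process
p_z_given_proc = {                                    # p(z|q,m)
    "search_A": {"z1": 0.8, "z2": 0.1, "z3": 0.1},
    "search_B": {"z1": 0.1, "z2": 0.6, "z3": 0.3},
}

p_a_meta = sum(
    p_proc_given_q[m] * p_z_given_proc[m][z] * p_a_given_qz[z]
    for m in p_proc_given_q
    for z in p_z_given_q
)

print(f"classical CoT  p(a|q) = {p_a_classical:.3f}")  # marginal over chains only
print(f"meta-CoT       p(a|q) = {p_a_meta:.3f}")       # marginal over process and chains
```

The point of the formalism is only that the outer marginal over the latent process m is a strict generalization: collapsing p(m|q) to a single process recovers the classical CoT marginal.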
erikerikson 4 months ago
> That is, language models learn the implicit meaning in text, as opposed to the early belief some researchers held that sequence-to-sequence models (including transformers) simply fit correlations between sequential words.

Is this so? Is the research community agreed on this? Are there papers discussing this topic?
YeGoblynQueenne 4 months ago
>> Behind this approach is a simple principle often abbreviated as "compression is intelligence", or the model must approximate the distribution of data and perform implicit reasoning in its activations in order to predict the next token (see Solomonoff Induction; Solomonoff 1964)

For the record, the word "intelligence" appears in the two parts of "A Formal Theory of Inductive Inference" (referenced above) a total of 0 times. The word "compression" appears a total of 0 times. The word "reasoning" appears once, in the phrase "using similar reasoning".

Unsurprisingly, Solomonoff's work was preoccupied with inductive inference. I don't know that he ever said anything about "compression is intelligence", but I believe this is an idea, and a slogan, that was developed only much later. I am not sure where it comes from originally.

It is correct that Solomonoff induction was very much about predicting the next symbol in a sequence of symbols; not necessarily linguistic tokens, either. The common claim that LLMs are "in their infancy" or similar is dead wrong. Language modelling is basically ancient (in CS terms) and we have long since crossed into the era of its technological maturity.

_______________

[1] https://raysolomonoff.com/publications/1964pt1.pdf

[2] https://raysolomonoff.com/publications/1964pt2.pdf
pama 4 months ago
Congrats to the authors on a thoughtful work! I have been thinking about and working on related ideas for a few months now, but have not yet spent commensurate compute on them and might have gone in a different direction; this work certainly helps create better baselines along the way to making better use of decoder transformer architectures. Please keep it coming!
lawlessone 4 months ago
Is "Meta" the company here, or are they using "meta" the word? Or both?
j45 4 months ago
I'm a little curious: would anyone have a way to know how often researchers study something they came up with themselves, versus something an independent developer was already doing online that then got picked up, researched, and reported on?
jpcom 4 months ago
The example in the paper using a plug-and-chug algebra equation, and the step-by-step process to solve it, reinforces the notion that LLMs can only reproduce recipes they have seen before. This is really no different from how we learn mathematics in school: the teacher shows a starting point and moves, step by step, to the end of the process. Calling this "Meta Chain-of-Thought" feels like an aggrandizement of a basic educational process to me. Next we'll be labeling the act of holding basic utensils as Layered Physical Kineticism, or something contrived like that. In school this "Meta Chain of Thought" was called "show your work." Is this really a "phenomenon" that needs explaining? It might teach us more about how we achieve logical induction (steps of reasoning), but we are pretty deep in the soup to be able to describe accurately the shape of the pot.
naasking 4 months ago
Meta's recently released Large Concept Models plus this Meta Chain of Thought sound very promising for AGI. The timeline of 2030 sounds increasingly plausible IMO.