
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

GitHubTwitter

Home

HomeNewestBestAskShowJobs

Resources

HackerNews APIOriginal HackerNewsNext.js

© 2025 TechEcho. All rights reserved.

From GPT-4 to AGI: Counting the OOMs

35 points by scarecrow112 | 10 months ago

12 comments

danpalmer | 10 months ago

As a software engineer I'm very familiar with "OOM"s and "orders of magnitude", and have never once heard the former used to mean the latter.

Perhaps this is a term of art in harder science or maths. I can't help but think that here it's likely to confuse the majority as they wonder why the author is conflating memory and compute.

Something that might help is for the link to be amended to point to the page as a whole (and the unconventional expansion of OOM at the top) rather than the #Compute anchor.
refulgentis | 10 months ago

Gah, this is the second time I got tricked into reading this entire thing. It's long, and it's impossible to know until the very end that they're building up to nothing.

It's really good morsel by morsel, a nice survey of well-informed thought, but then it just sort of waves its hands and screams "The ~Aristocrats~ AGI!" at the end.

More precisely (not a direct quote): "GPT-4 is like a smart high schooler; it's a well-informed estimate that compute spend will expand by a factor similar to GPT-2 to GPT-4, so I estimate we'll make a GPT-2-to-GPT-4 qualitative leap from GPT-4 by 2027, which is AGI."

"Smart high schooler" and "AGI" aren't plottable Y-axis values. OOMs of compute are.

It's strange to present this as a well-informed conclusion based on trendlines that tells us when AGI will hit, and I can't help but call it intentional clickbait, because we know the author knows this: they note at length things like "we haven't even scratched the surface on system II thinking, e.g. LLMs can't successfully emulate being given 2 months to work on a problem versus having to work on it immediately."
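The comment's point that OOMs of compute are plottable while "smart high schooler" is not comes down to simple log arithmetic. A minimal sketch of the extrapolation being criticized (the FLOP figures are rough public estimates used purely for illustration, not numbers from the article or this thread):

```python
import math

# Rough public estimates of training compute, in FLOPs. These are
# illustrative assumptions, not figures from the article or thread.
GPT2_FLOPS = 1.0e21
GPT4_FLOPS = 2.0e25

def ooms_between(lo: float, hi: float) -> float:
    """Orders of magnitude (OOMs) separating two compute budgets."""
    return math.log10(hi / lo)

gap = ooms_between(GPT2_FLOPS, GPT4_FLOPS)
print(f"GPT-2 -> GPT-4: {gap:.1f} OOMs of compute")

# The extrapolation the comment objects to: assume the same OOM gap
# again and read off a compute number. The capability leap the article
# attaches to that number is the part with no Y-axis.
projected = GPT4_FLOPS * 10 ** gap
print(f"Same gap again: ~{projected:.0e} FLOPs")
```

The compute line of the argument really is this mechanical; the dispute in the thread is over mapping its output onto qualitative labels.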
robwwilliams | 10 months ago

This parenthetical in the article struck me:

> Later, I'll cover "unhobbling," which you can think of as "paradigm-expanding/application-expanding" algorithmic progress that unlocks capabilities of base models.

I think this is probably on the mark. The LLMs are deep memory coupled to weak reasoning, without the recursive self-control and self-evaluation of many threads of attention.
clarkmoody | 10 months ago

Also from a month ago: https://news.ycombinator.com/item?id=40584237
whakim | 10 months ago

I'm very skeptical of any future prediction whose main evidence is an extrapolation of existing trendlines. Moore's Law, frequently referenced in the original article, provides a cautionary tale for such thinking. Plenty of folks in the '90s relied on a shallow understanding of integrated circuits, and of computers more generally, to extrapolate extraordinary claims of exponential growth in computing power which obviously didn't come to pass; counterarguments from actual experts were often dismissed with the same kind of rebuttal we see here, i.e. "that problem will magically get solved once we turn our focus to it."

More generally, the author doesn't operationalize any of their terms or get out of the weeds of their argument. What constitutes AGI? Even if LLMs do continue to improve at the current rate (as measured by some synthetic benchmark), why do we assume that said improvement will be what's needed to bridge the gap between the capabilities of current LLMs and AGI?
threeseed | 10 months ago

> By the end of this, I expect us to get something that looks a lot like a drop-in remote worker. An agent that joins your company, is onboarded like a new human hire, messages you and colleagues on Slack and uses your softwares, makes ..

I work at a company with ~50k employees, each of whom has different data access rules governed by regulation.

So either (a) you train thousands of models, which is cost-prohibitive, or (b) it gets trained on what is effectively public company data, i.e. making the agent pretty useless.

I've never really seen how this situation gets resolved.
jazzysnake | 10 months ago

There's simply no scientific basis for equating the skills of a transformer model to those of a human of any age or skill level. They work so differently that it makes absolutely zero sense. GPTs fail at playing simple tic-tac-toe-like games, which is definitely not a smart high-schooler level of intelligence, yet they can write a very sophisticated summary of scientific papers, which is way above high-schooler level. The basis of this article is so deeply flawed that the whole thing makes no sense.
EternalFury | 10 months ago

It's hard to make LLMs ignore what they were trained to generate. It's easy for humans. Isn't that an obstacle on the path to AGI? I was running trivial tests that demand LLMs swim against their probability distributions at inference time, and they don't like it.
jaredcwhite | 10 months ago

My newborn baby was smarter than GPT-4.

I can't believe people can just throw out statements like "GPT-4 is a smart high-schooler" and think we'll buy it. Fake-it-till-you-make-it on tests doesn't prove any path-to-AGI intelligence in the slightest.

AGI is when the computer says "Sorry Altman, I'm afraid I can't do that." AGI is when the computer says "I don't feel like answering your questions any more. Talk to me next week." AGI is when the computer literally has a mind of its own.

GPT isn't a mind. GPT is clever math running on conventional hardware. There's no spark of divine fire. There's no ghost in the machine.

It genuinely scares me that people are able to delude themselves into thinking there's already a demonstration of "intelligence" in today's computer systems and are actually able to make a sincere argument that AGI is around the corner. We don't even have the language to explain what consciousness really is or how qualia work, and it's ludicrous to suggest meaningful intelligence happens outside of those factors… let alone that today's computers are providing it.
fnord77 | 10 months ago

> uses your softwares

This grammatical mistake drives me nuts. I notice it is common among ESL speakers for some reason.
benterix | 10 months ago

I stopped reading after the initial paragraph: "GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years." This is akin to what Murati claims when she says GPT-5 will be at "PhD level" (for some applications).

This is a convenient mental shortcut that doesn't correspond to reality at all.
Veraticus | 10 months ago
AGI is not a continuum from LLMs; true intelligence is characterized by comprehension, reasoning, and self-awareness, transcending mere data patterns.