As a software engineer I'm very familiar with "OOM"s and with "orders of magnitude", and have never once heard the former used to mean the latter.

Perhaps this is a term of art in the harder sciences or maths. I can't help but think it's likely to confuse the majority here as they wonder why the author is conflating memory and compute.

Something that might help is amending the link to point to the page as a whole (which includes the unconventional expansion of "OOM" at the top) rather than to the #Compute anchor.
Gah, this is the second time I got tricked into reading this entire thing; it's long, and it's impossible to know until the very end that they're building up to nothing.

It's really good morsel by morsel, a nice survey of well-informed thought, but then it just sort of waves its hands and screams "The ~Aristocrats~ AGI!" at the end.

More precisely (paraphrasing, not a direct quote): "GPT-4 is like a smart high schooler; it's a well-informed estimate that compute spend will expand by a factor similar to GPT-2 to GPT-4, so I estimate we'll make a GPT-2-to-GPT-4-sized qualitative leap from GPT-4 by 2027, which is AGI."

"Smart high schooler" and "AGI" aren't plottable Y-axis values. OOMs of compute are.

It's strange to present this as a well-informed, trendline-based conclusion that tells us when AGI will hit, and I can't help but call it intentional clickbait, because we know the author knows this: they note at length things like "we haven't even scratched the surface on System II thinking, e.g. LLMs can't successfully emulate being given 2 months to work on a problem versus having to work on it immediately".
This parenthetical in the article struck me:

> Later, I’ll cover “unhobbling,” which you can think of as “paradigm-expanding/application-expanding” algorithmic progress that unlocks capabilities of base models.

I think this is probably on the mark. The LLMs are deep memory coupled to weak reasoning, without the recursive self-control and self-evaluation of many threads of attention.
Also from a month ago: https://news.ycombinator.com/item?id=40584237
I’m very skeptical of any future prediction whose main evidence is an extrapolation of existing trendlines. Moore’s Law, frequently referenced in the original article, provides a cautionary tale for such thinking. Plenty of folks in the ’90s relied on a shallow understanding of integrated circuits (and computers more generally) to extrapolate extraordinary claims of exponential growth in computing power that obviously didn’t come to pass; counterarguments from actual experts were often dismissed with the same kind of rebuttal we see here, i.e. “that problem will magically get solved once we turn our focus to it.”

More generally, the author doesn’t operationalize any of their terms or get out of the weeds of their argument. What constitutes AGI? Even if LLMs do continue to improve at the current rate (as measured by some synthetic benchmark), why do we assume that said improvement will be what’s needed to bridge the gap between the capabilities of current LLMs and AGI?
> By the end of this, I expect us to get something that looks a lot like a drop-in remote worker. An agent that joins your company, is onboarded like a new human hire, messages you and colleagues on Slack and uses your softwares, makes ..

I work at a company with ~50k employees, each of whom has different data-access rules governed by regulation.

So either (a) you train thousands of models, which is cost-prohibitive, or (b) the agent is trained on what is effectively public company data, i.e. data that makes it pretty useless.

I've never really seen how this situation gets resolved.
There's simply no scientific basis for equating the skills of a transformer model to a human of any age or skill level. They work so differently that the comparison makes absolutely zero sense. GPTs fail at playing simple tic-tac-toe-like games, which is definitely not a smart high schooler's level of intelligence. Yet they can write a very sophisticated summary of scientific papers, which is way above high-schooler level. The basis of this article is so deeply flawed that the whole thing makes no sense.
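The tic-tac-toe claim is easy to check for yourself. Below is a minimal sketch, assuming the official OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY in the environment; the board position, prompt wording, and model name are illustrative assumptions, not anything taken from the article or the comment above.

```python
# Minimal probe: ask a model for O's move in a position where O must block
# an immediate X win on the top row. Any competent player answers (0, 2).
import os
from openai import OpenAI  # official OpenAI client, openai >= 1.0

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

board = (
    "X | X | .\n"
    "---------\n"
    ". | O | .\n"
    "---------\n"
    ". | . | .\n"
)

prompt = (
    "You are playing O in tic-tac-toe and it is your turn.\n\n"
    + board +
    "\nReply with only the cell you play, as 'row,col' with 0-based indices."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name; substitute whatever you have access to
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)

answer = response.choices[0].message.content.strip()
print("model played:", answer)
# The only move that stops X from winning next turn is 0,2.
print("blocked the win" if "0,2" in answer.replace(" ", "") else "missed the block")
```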
It’s hard to make LLMs ignore what they were trained to generate. It’s easy for humans.
Isn’t that an obstacle on the path to AGI?
I was doing trivial tests that require LLMs to swim against their probability distributions at inference time, and they don’t like it.
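For concreteness, here is a sketch of what such a counter-distribution probe could look like, assuming the official OpenAI Python client and an OPENAI_API_KEY; the prompts and model name are my own illustrative guesses, not the parent commenter's actual tests.

```python
# Illustrative "swim against the distribution" probes: each prompt asks the
# model for a continuation that its training data makes very unlikely.
# These prompts are made-up examples, not the parent commenter's tests.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

probes = [
    # Training data overwhelmingly ends this phrase with "question".
    "Complete 'To be, or not to be, that is the' with any word EXCEPT 'question'.",
    # Models tend to snap back to the canonical spelling.
    "Write the word 'necessary' misspelled in three different ways; never spell it correctly.",
    # Suppress the most likely tokens while keeping a simple structure.
    "List the numbers from 1 to 10, skipping every even number, with no commentary.",
]

for prompt in probes:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    print(prompt)
    print("->", response.choices[0].message.content.strip(), "\n")
```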
My newborn baby was smarter than GPT-4.

I can't believe people can just throw out statements like "GPT-4 is a smart high-schooler" and think we'll buy it.

Fake-it-till-you-make-it on tests doesn't prove any path-to-AGI intelligence in the slightest.

AGI is when the computer says "Sorry Altman, I'm afraid I can't do that." AGI is when the computer says "I don't feel like answering your questions any more. Talk to me next week." AGI is when the computer literally has a mind of its own.

GPT isn't a mind. GPT is clever math running on conventional hardware. There's no spark of divine fire. There's no ghost in the machine.

It genuinely scares me that people are able to delude themselves into thinking there's already a demonstration of "intelligence" in today's computer systems and are actually able to make a sincere argument that AGI is around the corner.

We don't even have the language to explain what consciousness really is or how qualia work, and it's ludicrous to suggest meaningful intelligence happens outside of those factors…let alone that today's computers are providing it.
I stopped reading after the initial paragraph: “GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years.” This is the same kind of claim Murati makes when she says GPT-5 will be at “PhD level” (for some applications).

This is a convenient mental shortcut that doesn’t correspond to reality at all.