Scaling will never get us to AGI

20 points by corford about 1 year ago

11 comments

jlmorton about 1 year ago

> This is why driverless cars are still just demos

Not directly the point of the article, but is it fair to say driverless cars are still just demos when they're operating on every street, road, and freeway from San Francisco to San Jose, with tens of millions of passenger miles?

I feel like once there are paying customers sitting in the vehicles, it's not a demo, it's a reality.
mnk47 about 1 year ago

I watched a talk by the OpenAI Sora team [1] yesterday. They achieved amazing results with what they called "the GPT-1 of video", making a huge leap from the ugly, messy, low-quality GIFs we were getting before Sora. It understands basic motion and object permanence. It can simulate a Minecraft world. These impressive abilities just "emerged". How did they do it?

Scaling. That's it. They emphasized multiple times throughout the talk that this is what they achieved with the simplest, most naive approach.

[1] https://youtu.be/U3J6R9gfUhU
Strilanc about 1 year ago

Wasn't the exponential increase in data and compute always part of the scaling hypothesis? That's my memory of it from reading [1] years ago. Most of the field thought scaling would hurt, OpenAI thought you'd get logarithmic benefit from it, and OpenAI won that bet.

1: https://gwern.net/scaling-hypothesis
proc0 about 1 year ago

Definitely true. The only way this is not true is if, by some miracle, we get an emergent property at a ridiculously large scale, and even then it could be something that could be simplified, which means that scale is at best a way of stumbling upon general intelligence. However, biological brains are incredibly efficient, with very small animal brains demonstrating robust mechanisms of awareness and learning. There are severe cases of brain conditions where the majority of the brain is missing, yet these people can still show awareness and basic emotions.

We know gut bacteria affect the brain, and that emotions are linked to the state of our bodies, so I think there is a knowledge gap in our understanding of intelligence that involves the necessity for embodiment.

Our bodies are potentially doing a big part of the "computations" that make up our ability to have general intelligence. This would also explain a lot of how lower-level animals like insects are able to display complex behavior with much simpler brains. AGI might be such a hard problem because it's not just about recreating the "computations" of the brain, but rather the "computations" of an entire organism, where the brain is only doing the coordination and self-awareness.
vinni2 about 1 year ago

There are people who strongly believe data won't be a blocker, because high-quality synthetic data can fill the gap.

Dario Amodei from Anthropic, for example: https://www.dwarkeshpatel.com/p/will-scaling-work
bevekspldnw about 1 year ago

We just need infinite monkeys producing infinite training data, and reliable fusion power for infinite data centers.

Easy.
xanderlewis about 1 year ago

Gary's articles are often a fun read, but he needs to proofread better. Almost every one (and they're not exactly long) seems to have some sort of glaring typographical error.

Ironically, an LLM could probably help him out.
standapart about 1 year ago
However, scaling will get you your next round led by Nvidia... I mean Microsoft... I mean AWS...
stared about 1 year ago

Extraordinary claims require extraordinary evidence. Otherwise, it is clickbait (if done intentionally) or delusion (if not).

Well, I was shocked to see LLMs (rather than something intrinsically related to Reinforcement Learning) reach the level of GPT-3.5, not even to mention GPT-4.

For starters, he should define what AGI means. By some criteria, it does not exist (no free lunch theorem and so on). Some others say that GPT-4 already fulfils it. So, the question to the author is: can he say which AGI he means, and would he actually bet money on this claim?
xyst about 1 year ago

Yup, another signal of the end of the AI bubble.
drbig about 1 year ago

0) If there is any distinction between the training phase and the query phase, then it cannot, ever, be an AGI.

1) LLMs at their core are an auto-complete solution. An extremely good solution! But nothing more, even with all the accoutrements of prompt engineering/injection and whatever other "support systems" (_crutches_) you can think of.

I'll end with my own paraphrasing of a great reply I got in this very forum some time ago: Bugs Bunny isn't funny. Bugs Bunny doesn't exist, nor ever did. The people _writing him_ had a sense of humor. Now replace Bugs Bunny with whatever (very, extremely) flawed image of "an AI persona" you have.