> This is why driverless cars are still just demos<p>Not directly the point of the article, but is it fair to say driverless cars are still just demos when they're operating on every street, road, and freeway from San Francisco to San Jose, with tens of millions of passenger miles?<p>I feel like once there are paying customers sitting in the vehicles, it's not a demo, it's a reality.
I watched a talk by the OpenAI Sora team [1] yesterday. They achieved amazing results with what they called "the GPT-1 of video", making a huge leap from the ugly, messy, low-quality GIFs we were getting before Sora. It understands basic motion and object permanence. It can simulate a Minecraft world. These impressive abilities just "emerged". How did they do it?<p>Scaling. That's it. They emphasized multiple times throughout the talk that this is what they achieved with the simplest, most naive approach.<p>[1] <a href="https://youtu.be/U3J6R9gfUhU" rel="nofollow">https://youtu.be/U3J6R9gfUhU</a>
Wasn't the exponential increase in data and compute always part of the scaling hypothesis? That's my memory of it from reading [1] years ago. Most of the field thought scaling would plateau or even hurt; OpenAI bet you'd keep getting smooth, predictable (roughly power-law) gains from it, and OpenAI won that bet.<p>1: <a href="https://gwern.net/scaling-hypothesis" rel="nofollow">https://gwern.net/scaling-hypothesis</a>
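To make that bet concrete: the scaling-law papers fit test loss as a smooth power law in compute (and params/data), so each order of magnitude of compute buys a predictable, if shrinking, improvement rather than a plateau. A toy sketch of that functional form in Python; the constants here are made up purely for illustration, not taken from any paper:

```python
def scaling_law_loss(compute_flops, loss_floor=1.69, c=1.6e3, alpha=0.05):
    """Toy power-law scaling curve: L(C) = loss_floor + (c / C)**alpha.

    loss_floor -- irreducible loss the model can never beat
    c, alpha   -- fit constants (illustrative values, not from a real fit)
    """
    return loss_floor + (c / compute_flops) ** alpha

# Loss keeps falling smoothly as compute grows -- diminishing but predictable
# returns, which is the whole point of the scaling hypothesis.
for flops in [1e18, 1e20, 1e22, 1e24]:
    print(f"{flops:.0e} FLOPs -> loss {scaling_law_loss(flops):.3f}")
```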
Definitely true. The only way it isn't is if, by some miracle, an emergent property appears at a ridiculously large scale, and even then it might turn out to be something that could have been achieved far more simply, which means scale is at best a way of stumbling upon general intelligence. However, biological brains are incredibly efficient: very small animal brains demonstrate robust mechanisms of awareness and learning, and there are severe brain conditions where the majority of the brain is missing, yet these people can still show awareness and basic emotions.<p>We know gut bacteria affect the brain and that emotions are linked to the state of our bodies, so I think there is a knowledge gap in our understanding of intelligence around the necessity of embodiment.<p>Our bodies are potentially doing a big part of the "computations" that make up our ability to have general intelligence. This would also explain how simpler animals like insects are able to display complex behavior with much smaller brains. AGI might be such a hard problem because it's not just about recreating the "computations" of the brain, but rather the "computations" of an entire organism, where the brain only handles coordination and self-awareness.
There are people who strongly believe data won't be a blocker, because high-quality synthetic data is a real possibility.<p>Dario Amodei of Anthropic, for example: <a href="https://www.dwarkeshpatel.com/p/will-scaling-work" rel="nofollow">https://www.dwarkeshpatel.com/p/will-scaling-work</a>
Gary’s articles are often a fun read, but he needs to proofread better. Almost every one (and they’re not exactly long) seems to have some sort of glaring typographical error.<p>Ironically, an LLM could probably help him out.
Extraordinary claims require extraordinary evidence. Otherwise, it is clickbait (if done intentionally) or a delusion (if not).<p>Well, I was shocked to see LLMs (rather than something intrinsically tied to reinforcement learning) reach the level of GPT-3.5, to say nothing of GPT-4.<p>For starters, he should define what AGI means. By some criteria, it does not exist (no-free-lunch theorem and the like). Others say GPT-4 already fulfils it. So the question to the author is: can he say which AGI he means, and would he actually bet money on this claim?
0) If there is any distinction between the training phase and the query phase, then it cannot, ever, be an AGI.<p>1) LLMs at their core are an auto-complete solution. An extremely good solution! But nothing more, even with all the accoutrements of prompt engineering/injection and whatever other "support systems" (_crutches_) you can think of.<p>I'll end with my own paraphrasing of a great reply I got in this very forum some time ago: Bugs Bunny isn't funny. Bugs Bunny doesn't exist, nor ever did. The people _writing him_ had a sense of humor. Now replace Bugs Bunny with whatever (very, extremely) flawed image of """an AI persona""" you have.
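To make point 1 concrete: stripped of the chat framing, inference really is a loop that asks "which token comes next?" and appends the answer. A minimal greedy-decoding sketch using the Hugging Face transformers library, with GPT-2 chosen only as a small public stand-in for any LLM:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# The "auto-complete" core: score every possible next token, take one, repeat.
with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits            # scores over the vocabulary
        next_id = logits[0, -1].argmax()            # greedy: most likely token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Everything else (RLHF, system prompts, tool use) sits on top of that loop; whether that is a ceiling for "AGI" is exactly the argument above.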