I'm very optimistic about near-term AGI (10 years or less). Even just a few years ago most in the field would have said it's an "unknown unknown": we didn't have the theory or the models, there was no path forward, and so it was impossible to predict.<p>Now we have a fairly concrete idea of what a potential AGI might look like - an RL agent that uses a large transformer.<p>The issue is that unlike supervised training, you need to simulate the environment along with the agent, so this requires an order of magnitude more compute compared to LLMs. That's why I think it will still be large corporate labs that make the most progress in this field.
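To make the "simulate the environment along with the agent" point concrete, here is a minimal sketch of the RL setup described above. Everything here is illustrative: the environment, the policies, and the reward scheme are toy stand-ins (in the real case the policy would be a large transformer, and the simulated environment is where the extra compute goes).

```python
class ToyEnv:
    """A toy simulated environment: the agent earns a reward for
    driving its state toward a fixed target. Illustrative only."""

    def __init__(self, target=5):
        self.target = target
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action is -1 or +1; small penalty per step, payoff at the target
        self.state += action
        reward = 1.0 if self.state == self.target else -0.1
        done = self.state == self.target
        return self.state, reward, done


def run_episode(env, policy, max_steps=50):
    """Roll out one episode: the simulator (env.step) runs in lockstep
    with the agent (policy), which is what makes RL training so much
    more expensive than supervised training on a static dataset."""
    state, total = env.reset(), 0.0
    for _ in range(max_steps):
        state, reward, done = env.step(policy(state))
        total += reward
        if done:
            break
    return total


# A trivial stand-in policy; in the setup described above this would
# be a large transformer mapping observations to actions.
greedy = lambda s: 1

print(run_episode(ToyEnv(), greedy))
```

The key cost is in the loop: every training step requires advancing the simulator, not just reading the next batch from disk.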
The interview with Lex Fridman that he was referring to:<p><a href="https://www.youtube.com/watch?v=I845O57ZSy4&t=14567s" rel="nofollow">https://www.youtube.com/watch?v=I845O57ZSy4&t=14567s</a><p>The entire video is worth viewing, an impressive 5:15h!
Interesting to see how he's progressed with this. When he first announced he was getting into AI, it sounded almost like a semi-retirement thing: something that interested him, that he could do for fun and solo, without the expectation that it would go anywhere. But now he seems truly serious about it. Wonder if he's started hiring yet.
Does anyone else subscribe to the idea that AGI is impossible/unlikely without 'embodied cognition', i.e. that we cannot create a human-like 'intelligence' unless it has a similar embodiment to us, able to move around a physical environment with its own limbs, sense of touch, sight, etc.? Any arguments against the necessity of this? I feel like any AGI developed in silico without freedom of movement will be fundamentally incomprehensible to us as embodied humans.
I'm slightly scared that they'll succeed. But not in the usual "robots will kill us" way.<p>What I am afraid of is that they succeed, but it turns out similar to VR: as an inconsequential gimmick. That they use their AGIs to serve more customized ads to people, and that's where it ends.
"I could write a $20M check myself"<p>Every day, all day. Same boat here.<p>I went to the bank to ask for a mortgage. They asked for my financials. "Oh, well, knowing that other people's money is on the line engenders a greater sense of discipline and determination."
Recent Carmack YouTube interview with him saying the code for AGI will be simple:<p><a href="https://m.youtube.com/watch?v=xLi83prR5fg" rel="nofollow">https://m.youtube.com/watch?v=xLi83prR5fg</a>
That is great news. My friend Ben Goertzel has been working in AGI for decades, but I haven't yet seen anything tangible. I do like the ideas of hybrid neuro-symbolic approaches.<p>I really enjoyed John Carmack's and Lex Fridman's 5-hour talk/interview.<p>Anyway, I like the efforts toward AGI that preserves human values. But it will be a long time before we see it. I am 71 and I hope that I live to see it, if only out of intellectual curiosity.
AGI will be more dangerous than nuclear weapons.<p>People are not allowed to start a nuclear weapons company. At all.<p>Why are people allowed to casually start an AGI company?
Is it just me or does anyone else think Carmack is all hype engine now?<p>Don't get me wrong, I've read Masters of Doom and the Doom engine books like we all have, but first Oculus/Facebook and now this?<p>Maybe I'll care once they produce something amazing, but until then I'll still marvel at the tricks in the Doom/Quake code.
Does anyone have any good links/podcasts/books that explore what AGI is and how to define it? Probably the best stuff I've listened to so far are the Fridman podcasts with guests like Jeff Hawkins or Joscha Bach. But I'd love to read a book that explores this topic, if any even exist.
I hate the pop-sci, ill-defined notion of AGI. As soon as a task is defined, an AI is developed which completes the task from real-world data with superhuman success. The work of making the superhuman model usually isn't even conceptual; it's a matter of dispatching training jobs. It's quite clear that if your definition of AGI is superhuman performance at arbitrary tasks, there are no conceptual barriers right now. Everything is mere scale, efficiency, and orchestration.
I hope he gets a good domain name and some good SEO, because there are a bunch of consulting companies with the name Keen Technologies, and some of them don't look super reputable.
He doesn't seem to have purchased keen.ai yet, though it seems like it's still for sale. I just naturally went there to see the company info and saw the landing page saying it was available. If they want it, they'd better move quick. I see an arbitrage opportunity...<p>Also, several Keen Technologies domains already exist in various forms. They're probably going to get a lot of traffic today.
Personally I hope this fails because of the disaster AGI would be for low/entry level jobs.<p>The last thing the world needs is to give technocrats such power. I know it’s an interesting problem to solve but think of who will own that tech in the end…<p>I hope AGI is never figured out in my lifetime.
> This is explicitly a focusing effort for me. I could write a $20M check myself, but knowing that other people's money is on the line engenders a greater sense of discipline and determination.<p>Dude doesn't even need the money...
There's no such thing as AGI in our near future, it's a moniker, a meme, something to 'strive' for but not 'a thing'.<p>AGI will not happen in discrete solutions anyhow.<p>Siri - an interactive layer over the internet with a few other features, will exhibit AGI like features long, long before what we think of as more distinct automatonic type solutions.<p>My father already talks to Siri like it's a person.<p>'The Network Is the Computer' is the key thing to grasp here and our localized innovations collectively make up that which is the real AGI.<p>Every microservice ever in production is another addition to the global AGI incarnation.<p>Trying to isolate AGI 'instances' is something we do because humans are automatons and we like to think of 'intelligence' in that context.
I love the name.<p><a href="https://en.wikipedia.org/wiki/Commander_Keen" rel="nofollow">https://en.wikipedia.org/wiki/Commander_Keen</a>
Artificial general intelligence (AGI) — in other words, systems that could successfully perform any intellectual task that a human can.<p>Not in my lifetime, not in this century. Possibly in the year 2300.<p>Weird way to blow $20 million.
I don’t understand why you would <i>want</i> AGI. Even ignoring Terminator-esque worst case scenarios, AGI means humans are no longer the smartest entities on the planet.<p>The idea that we can control something like that is laughable.
They'll build the AGI, and right when they're ready to boot it up, the Earth will be destroyed by a Vogon Construction Fleet to make way for a hyperspace bypass.
I believe that to create AI, we first need to simulate the universe. It's the only way that makes sense to me, apart from some magical algorithm people think will be discovered. I'm doubtful we'll reach it in our lifetimes — true AI running on supercomputers — it seems like the final, end-all mission. Like switching your Minecraft world from survival to creative mode.
I am not saying the intention here is the same, but the headline doesn't inspire confidence.<p>There's something incredibly creepy and immoral about the rush to create then commercialise <i>sentient</i> beings.<p>Let's not beat about the bush - we are basically talking slavery.<p>Every "ethical" discussion on the matter has been about protecting humans, and none of it about protecting the beings we are in a rush to bring into life and use.<p>It's repugnant.