Interesting, both Karpathy and Sutskever are gone from OpenAI now. Looks like it is now the Sam Altman and Greg Brockman show.

I have to admit, of the four, Karpathy and Sutskever were the two I was most impressed with. I hope Ilya goes on to do something great.
Jan Leike has said he's leaving too: https://twitter.com/janleike/status/1790603862132596961
When walking around the U of Toronto, I often think that ~10 years ago Ilya was in a lab next to Alex trying to figure things out. I can't believe this new AI wave started there. Ilya, Karpathy, Jimmy Ba, and many more were in the right place at the right time, while Hinton was there too.
Seemed inevitable after that ouster attempt; they were probably just working out the details of the exit since then. But the day after their new feature release announcement?
Jakub Pachocki is amazing. He was in the top 20 in the Polish algorithm competition:

https://oi.edu.pl/contestants/Jakub%20Pachocki/
Why do people treat these technologists' career moves as if they were lineup changes on major league sports teams?

Are these "first name" (ugh) "influencers" smart? Sure.

Smart is not that rare. These people are technologists like most of you; they aren't notably smarter, they just got lucky in their career direction and specialization. They aren't business geniuses.

They're just people filling roles.

Do changes in leadership affect a business? Sure? I guess? About 5% as much as you'd think from the tea-spilling gossip-rag chatter around AI people.

Enough already. Attend to the technology. Attend to the actual work. The number of you who are professionally impacted by these people changing paychecks is closer to zero than to 50%.
Meta's next for him? There's lots of money being poured into their AI division, there's lots of compute, and he'd be able to do any kind of research he might want.
Does it matter that the people who dedicated the last decade to developing breakthrough work have left? It is a mistake to assume their luck streak will continue, and their departure isn't necessarily a sign of decay at OpenAI. They may as well cash in on their notoriety while it is still worth something. The odds favor other teams blazing new trails.
Not to be a conspiracy theorist, but the phrase "So long, and thanks for everything" used in the tweet reminds me of "So long, and thanks for all the fish" from the dolphins in The Hitchhiker's Guide To The Galaxy. The background there is that dolphins are secretly more intelligent than humans, and are leaving Earth without them when its destruction is imminent (something the humans don't see coming).<p>I did once leave a company with a phrase just like that :P A few people there actually got the reference and congratulated me for the burn.
I read Sam's tweet and see "I fired him cause he voted against me"...

I'm sorry, but every time I see Sam speak, or read what he has to say, all I can think is "petulant man child".

> ... Ilya is easily one of the greatest minds of our generation

> ... Jakub is also easily one of the greatest minds of our generation

I'm not calling you a liar, Sam, but I just don't believe you.
I wonder how the proposed regulations to make noncompetes unenforceable affect moves like this. Or was he sufficiently high up that his existing noncompete would have survived?
At least from this we now know GPT-5 has finished development and is now in training (I would hope that Ilya got to add all that he hoped to before leaving).

Ilya, thanks for all you have contributed within OpenAI!
Tesla also lost its top AI lead [0]. Will they both end up at Apple?

[0] https://news.ycombinator.com/item?id=40361350
Probably not related, but it's worth pointing out that Daniel Kokotajlo (https://www.lesswrong.com/users/daniel-kokotajlo) left last month.

But if it were related, then that would presumably be because people within the company (or at least two rather noteworthy people) no longer believe that OpenAI is acting in the best interests of humanity.

Which isn't too shocking really given that a decent chunk of us feel the same way, but then again, we're just nobodies making dumb comments on Hacker News. It's a little different when someone like Ilya really doesn't want to be at OpenAI.
Why does everyone here think that the guy who quit/lost his job at OpenAI because he didn't agree with their corporate shift and departure from the original non-profit vision is going to be lining up for another big corporate job building closed for-profit AI?
I wonder if he thinks LLMs are an AGI dead end and he's not interested in selling a product. There are some academic papers floating around coming to the same conclusion (no links, sorry; studying for a cert exam).
Altman's tweet (https://x.com/sama/status/1790518031640347056?s=46) makes it seem as if he wanted Ilya to stay, and Ilya disagreed and "chose" to depart. Very interesting framing.
Sam "Worldcoin" Altman regrets the loss of a friend that called him out on how OpenAi is becoming closed because the engineers realized they could make a lot of money. Doesn't seem like it is impacting the quality of the models, but it will probably impact openai's impact.
There’s a halo around Ilya Sutskever as the Albert Einstein of AI. Are there others on par with his (umm, how would you qualify it?) AI intuition, or are we idolizing him?
I hope Ilya takes care of himself. I can imagine that what happened during the past year is not good for one's mental health. I assume the publicly presented relationship with Sam Altman does not reflect reality, and the press attention surely adds a lot of pressure.
'Back in May 2023, before Ilya Sutskever started to speak at the event, I sat next to him and told him, “Ilya, I listened to all of your podcast interviews. And unlike Sam Altman, who spread the AI panic all over the place, you sound much more calm, rational, and nuanced. I think you do a really good service to your work, to what you develop, to OpenAI.” He blushed a bit, and said, “Oh, thank you. I appreciate the compliment.”

An hour and a half later, when we finished this talk, I looked at my friend and told her, “I’m taking back every single word that I said to Ilya.”

He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay area. They were leaving the room, saying, “Holy shit.”

The snapshots above cannot capture the lengthy discussion. The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.'[0]

[0] What Ilya Sutskever Really Wants: https://www.aipanic.news/p/what-ilya-sutskever-really-wants
The future of the company doesn't depend on one engineer. If he left, it's likely because he had a vision that wasn't in line with Sam's or Microsoft's. Others will take his place, and OpenAI will likely fulfill Elon Musk's recent prediction that AI will improve 100x in the next few years.
So the CEO of Amazon Web Services and the Chief Scientist of OpenAI are on the market on the same day...

I'm not saying it's a conspiracy, but it's an awfully big coincidence, especially since today is Tuesday and usually these things happen on a Friday.