Wow, I'm pretty shocked how this all went down. Given the abrupt nature of the announcement and the pointed message in it (essentially accusing Altman of lying), I was sure there must have been some big scandal, not just normal boardroom differences of opinion and game-of-thrones-style maneuvering.<p>Even if I agree with Sutskever's general position, I think his actions here were colossally stupid. He's basically managed to piss off a ton of people, and I have no doubt lots of employees will just shift to whatever Altman and Brockman end up doing, or there will be some huge splintering of OpenAI talent.
I can't find the exact quote, but I distinctly remember Sam giving founders advice along the lines of, "operate under the assumption that co-founders and investors are not going to screw you". Pretty sure it was a Startup School lecture at some point.<p>That still may be good life advice in general (even if it wasn't for Sam in this case) but what I really don't get is the fact that OpenAI's board governance was structured in a way such that this was even possible.<p>I also don't understand what is to be gained from the perspective of the remaining senior leaders at the company. This is a tremendously momentum-killing event. I cannot think of a single facet of their day-to-day operations, product roadmap, competitive position, etc. that would be improved by this decision.<p>Yesterday, when this was announced, I was bracing myself for some truly awful news about something that Sam had done in his personal life that was about to be divulged, since that is the <i>only possible rational reason</i> for the board to make the decision it did.<p>What am I missing? It's all so strange.
This memo creates more questions than answers, and it hints at the forming of internal factions in OpenAI.<p>Who's the COO talking on behalf of when he says "we have had multiple conversations with the board"?<p>Whose full support does Mira (Ilya's choice) have?
Why does it need saying?<p>The rapid escalation from "we seem to disagree on this" to "walk them off the premises" was standard practice at OpenAI, along with firing people on Friday, at noon.<p>Many people were "resigned" while they were actively discussing (or at least so they thought) any disagreements with their immediate management, or the higher echelons of OpenAI.<p>Coordinating this internally took more than a few days, and there must have been a few middle managers involved in several meetings to pull it off.<p>Details on how Ilya selected who to read in to this would reveal a lot about his most intrinsic motivations.<p>It's a pity to see brilliant minds unravel under the pressure of their own invention . . . and so publicly.
Serious wrongdoing with a smoking gun would have been the only justification for the board acting the way it did. This reflects very poorly on the OpenAI board. And it makes it more likely that this affair is far from complete.
No matter how exactly this all shakes out, I am convinced that quote from Sutskever is going to be a legendary summary of the saga:<p>> Ego is the enemy of growth<p><a href="https://twitter.com/ilyasut/status/1707752576077176907" rel="nofollow noreferrer">https://twitter.com/ilyasut/status/1707752576077176907</a><p>The real mystery that we're all trying to solve in realtime is: who was the biggest ego here? Like a good game of Clue, everyone is still a suspect...
Either the board is now lying to the employees, or the board monumentally fucked up with their public communication (vaguely indicating some serious wrongdoing in their statement yesterday). Either way, the board looks like morons. It's concerning that they're now in charge of some pretty powerful and important tech.
This looks extremely stupid from the board and validates the theory that most of the board has no clue about large-organisation governance, being full of people who have mostly never seen boardrooms in action. It will be very, very hard for OpenAI to raise money with such a maverick board in place, and I doubt MSFT would want to give them a penny more without some big changes. For the people saying they don't owe anything to MSFT: MSFT holds 49% of the LLC, and a 49% stockholder almost never gets treated like this anywhere else, whether or not the remaining 51% is held by a non-profit.<p>For people saying they shouldn't have done the MSFT deals: how else would they have gotten the money to build anything at all, considering how much GPUs cost? Their competitor Anthropic is doing the same, raising money from all of big tech, which was made possible only by the ridiculous success of ChatGPT. For others saying Ilya is the key: Google had a big lead on AI researchers, and the only reason Google is not the undisputed king is bad leadership and product development.
I think a reasonable translation of this is something like: "He didn't do anything actually illegal, or outside of the realm of what he was empowered to do as CEO, but he was doing things we didn't like," and then either didn't tell the board about those things, or told them in a way that was framed to make them less controversial.<p>So yeah, to me, really backs up the narrative that the board and sama were in disagreement about some key things.
This may be the end of OpenAI. OpenAI's big advantage over Google was in putting its research into production and making it commercially available to the public. I think Sam Altman was a big part of that push for commercialization.<p>Now there is a good chance that the "true believers" in AGI have taken over and will want to focus on just trying to achieve that. There is a good chance that true AGI is decades or more away (if ever). Without a product to push, this pure research organization will produce a lot of really cool papers (maybe) but not much else. As the talent sees that there are better economic prospects elsewhere, they will leave.
They can spin it how they want, but it'll 100% turn out to be an AI doomerist panic reaction. They didn't even inform MSFT about it beforehand, which is absolutely ridiculous.
At the risk of getting blasted, I keep seeing all these people talking about how great it is that OpenAI is being steered back to a research organization. How does one expect research to make money if it's not beholden to a product? Forget whatever your immediate reaction to that word is. I can already hear the "who needs another SV overhyped product" group, but you're ignoring the reality that people don't pay for you to sit around thinking in industry unless your thinking can generate business value. You can dislike that, but it's the system we exist in.<p>So let me frame it another way: how does research expect to be supported if it's not providing material value? Research doesn't exist in a vacuum; if you don't want to be in industry you don't have to be, we have academia for that.<p>I'm cool with OpenAI becoming more like Google Brain, but then it's Microsoft calling the shots. Except wait, the company structure is weird, so it's not really, and after seeing how Ilya handled this I don't expect him to be able to stomach that relationship long term.<p>I'm not bullish on OpenAI right now.
It seems OpenAI has ~500 employees (well, 495). This is being handled with somewhat enigmatic, ceremonial language befitting an imperial palace with competing factions. Interesting to watch, but also a bit ridiculous.
If it happens to only be miscommunication, this will be the biggest blunder in economic history caused by a misunderstanding.<p>If true, Sam would have kept his job just by communicating more and better.<p>If true, the board wouldn't have a loose CEO in the wild, and talent now possibly leaving to build a competitor…
This all revolves around what precisely it was that they claimed Altman wasn't candid about. As long as that isn't clear, it is the board that has something to explain, and in my opinion they don't have a whole lot of time and it had better be good, because OpenAI will be hemorrhaging talent and brand equity until this is resolved.
I think this will ultimately be good for everyone; however, I can imagine there's a huge contingent of OpenAI employees who were hoping this was a get-rich-quick route and are now rethinking that. I know OpenAI has a different reward structure than typical equity, but there had to be the thought that this was going to be bigger than Apple and Google combined and that there would be legions of billionaires in the making. If I were one of them, I would be seriously thinking about how to value my future at OpenAI.
I think we need Sutskever to do a speech now to tell us what _his_ vision is exactly and what is going to happen to the GPT Store especially.<p>Possibly followed by Murati, but also maybe her comments won't be that relevant if Sutskever is really as into shutting down product development as it seems.
I'm curious what recourse Microsoft has, if any. Presumably there's some clause protecting them against openAI self-immolating?<p>Interestingly, the situation is fully reversible for now:<p>1. Majority of employees sign a strong letter condemning the board and calling for their resignation.
2. MSFT threatening to activate whatever protective clause they have
3. Other investors/donors threatening lawsuits.
4. Sutskever et al. resign, a new board is appointed, and Sam and Greg come back.<p>Things could be back to business as usual by Monday morning.
To what extent is Ilya the brains behind this entire wave?<p>As in: without him, would this LLM craze, as we know it and as it has manifested, not exist today?<p>My understanding is that he is at the center of all of it.<p>NY Mag said that Sam Altman is the Oppenheimer; I think it's Ilya.
No malfeasance? Guess who would now be interested in backing Altman and his crew in whatever he chooses to do.<p>Trillion-dollar companies are not forces you should be meddling with.
Honestly at this point the theory that makes most sense to me is that OpenAI got a new internal result which was notable enough that there was a major disagreement at the board level about how to respond to it.<p>Which feels like complete science fiction, but it comes closest to explaining why the non-profit board would move so quickly and disruptively.
This seems to align with the stuff Kara Swisher was hearing, and suggests Sam was "not consistently candid" about product announcements or something similar.