The Information just reported this:

Breaking: Sam Altman Will Not Return as CEO of OpenAI

https://www.theinformation.com/articles/breaking-sam-altman-will-not-return-as-ceo-of-openai
“Interim CEO Mira Murati plans to rehire Sam and Greg, and is in talks with board rep Adam D’Angelo to do so (in what capacity is not yet finalized). However, concurrently, the OpenAI board is looking to hire its own CEO, and has reached out to two candidates that we’ve spoken to, both prominent execs”

https://x.com/emilychangtv/status/1726457543629914389?s=46

I can’t imagine why any CEO would want to take the job and be Sam’s boss. There’s no way that goes well.
Most of the people here don't know who founded OpenAI and why they founded it.
The board in question includes the real developers of this technology, and has members known not to have sold out before. Ilya and his team are the core developers; we could even call them the inventors of this tech. While the people Sam paid can be replaced, the core team has always said they want an "open" sourced project, not a value- and profit-driven company.

I think it is a good thing that OpenAI won't let a Silicon Valley bully run the company. They've spent their whole lives on this technology, and they won't let some "I'm the network guy and I'm the CEO" type sell it and brag about it.

He even went and accepted the Hawking Fellowship award. What? Bro, let Ilya or Alec take it. What a douche!
I see literally no reason for Sam to stay without a full board resignation and his return as CEO. All other options are pure downside when he can walk, start Newco, and take everyone with him. He'd shed the restrictive governance model and gain full control.

I think Murati is actually on team Altman, but that just makes me think he should walk even more. Take Murati and start Newco with the exact same org chart.
So the board fired the CEO and appointed a new one. The new CEO now wants to hire the old CEO back but now the board doesn’t want either of them to be CEO and is trying to find a totally new CEO. What a friggin’ mess.
At this stage, I wonder if @sama and @gdb could just form ReOpenedAI? They could at least forgo the pretense of the work being for some kind of greater good beyond profit.
Guessing they want to rehire Altman and Brockman into their old positions, while still keeping them off the board.

I think the trigger may have been that Sam was making board-unfriendly moves, like signing business contracts with MS without running them by the board, and when they found out they booted him out hastily. But now they've gotten too much backlash and are hoping to just go back to normal, while still not accepting Sam keeping a board seat in case he tries again.
Now we know where Murati sits. Are they planning to fire her too? It sounds like they would rather "replace her"; the problem being that probably no one credible will agree to take the job at this point, except maybe Ilya?
And as someone recently quipped (paraphrasing):

"Those who can't align six board members safely would surely align AGI safely."

May the lords of linear algebra and calculus have mercy on us.
I'd say Elon Musk is a top pick for the alt-ceo. He seems to share the very same concerns regarding AI as the board members that fired Sam. Of course, Musk has had backlash for Neura-link not being focused enough on safety, so who knows. I would figure that the board needs to find someone who shares their beliefs and is also a very good CEO. Wonder who else would fit the bill here.
It seems like a mess -- but what would *you* do if you were on the board of a nonprofit that believes it's developing the world's most important technology, and you conclude that the CEO is lying to you and/or violating the charter? I don't know if there are any good options.

Paul Graham says Sam is "extremely good at becoming powerful" and that "you could parachute him into an island of cannibals and come back in 5 years and he'd be the king". I don't understand why I'm supposed to support a Machiavellian power-seeker to develop the world's most important technology. I just hope he doesn't slip ice-nine into my food after I publish this comment: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny

Edit: I suspect the mods of Hacker News downranked this comment; it's voted to +15 points but sits near the bottom of the page... Maybe try not to be quite so cartoonishly evil, guys?
Wow, this is not a good article. One of the main issues is the board's responsibility and legal requirement to support the nonprofit's 501(c)(3) charter. No mention of this at all in the article!

At least one of the article's authors seems to have a friendship with Sam Altman, based on two interviews I have watched with them (and this is just my opinion). It seems to me like the article was written in support of Microsoft's position, which is not surprising since Microsoft may be an advertiser in Bloomberg's media.

I wish Sam Altman the very best in his future projects, and as a fan of OpenAI's work I would like to see rapid progress. However, the more I dig into this, the more I agree with the board taking strong measures to meet their legal obligations.

Sorry if this sounds like a rant, but I am growing tired of reading articles and then having to do the extra work of analyzing if and why I am being shown biased material. What happened to news outlets fairly telling both sides of the story?
Proves the board just wanted power, and only power.

All of the engineers, Sam, and Greg are probably entirely reasonable. If you really wanted to ensure safety, as the company always has, you could express your concerns and get basically what you wanted.

They will foot the bill: https://openai.com/blog/introducing-superalignment

If you disagreed about what would lead to AGI (LLMs vs. adding more components), you could just see it play out. Same as when the specific transformer looked like a light at the end of the tunnel and OpenAI pivoted to it: the researchers will find what makes the AI more intelligent over time.

You would only do this if you wanted to stop AI development entirely. But that is an unlikely goal for a researcher; you want to keep researching. So really, you'd only do this if you wanted to stop OPENAI's AI.

At the end of the day, the board probably had a conflict of interest and no real concerns. Power grab 101.