The most embarrassing thing about this isn't that the board decided to fire the CEO. That happens all the time. Many startups replace their CEO; it isn't always clear whether it's the right decision, but that's the responsibility of the board.<p>The embarrassing part is that the board decided to fire the CEO, announced their decision, refused to say why, attempted to put in place a new CEO but had to immediately demote the new CEO (Mira) after she rejected their plan, upset and alienated their core partners, along with almost all of their employees, and then publicly backtracked to undo the firing that led to this all happening.<p>Once you screw this up to such an incredible degree, how can anyone really trust that you were doing the rest of your job well?
Key passage:<p>"Altman approached other board members, trying to convince each to fire Toner. Later, some board members swapped notes on their individual discussions with Altman. The group concluded that in one discussion with a board member, Altman left a misleading perception that another member thought Toner should leave, the people said.
By this point, several of OpenAI’s then-directors already had concerns about Altman’s honesty, people familiar with their thinking said."
I think the OpenAI shenanigans are far from over, and the next act is likely still to play out. A couple of my personal opinions:<p>- As PG pointed out, Sam would find his way to the top of an island of cannibals. If you're generous to Toner's side in this article, she likely _was_ alarmed by how Sam tried to get other board members to turn against her. When she walked into the arena with Altman to try to remove him, however, he had much more experience and orchestrated an incredible counter-coup.<p>- There must be an army of attorneys representing the board members and investors, because there are huge potential legal pitfalls to destroying value or failing fiduciary duties.<p>- We don't know a lot, because anything put out in public has to be spotless legally. If you're a board member and you know the truth, the only upside to leaking more info is clout, but the downside is being sued and tied up in court, losing a lot of time and money.<p>- People seem dissatisfied with what was revealed in this article, but Toner has likely been advised by lots of counsel on exactly what can be said, and the list of those things is probably minuscule. I'd say she's doing the best she can to give the public more info, and I'm happy she's taken the effort to do so.
> Helen Toner was a relatively unknown 31-year-old academic from Australia—until she became one of the four board members who fired Sam Altman<p>> Toner graduated from the University of Melbourne, Australia, in 2014 with a degree in chemical engineering [...] In 2019, she spent nine months in Beijing studying its AI ecosystem. When she returned, Toner helped establish a research organization at Georgetown University, called the Center for Security and Emerging Technology, [...] She succeeded her former manager from Open Philanthropy, Holden Karnofsky, on the OpenAI board in 2021 after he stepped down.<p>Honest question: how do people find themselves on the board of one of the hottest startups at the tender age of 31? Are they geniuses, or is it all about connections?
It’s interesting that she was threatened with being held responsible for violating a fiduciary duty for the for-profit entity while she was sitting on the non-profit board.<p>Going fwd, I wonder why they cannot convert the structure to a traditional c-corp. Supposedly: “tax issues”.<p>Whatever problems OpenAI dealt with last week, the current non-profit structure will continue to cause future problems IMO.
Any board member who believes it is her duty to destroy the organization she pledged to serve should simply step down.<p>The folks who argue that it is a CEO's duty to serve the board do not understand governance or power.<p>Open AI would have wallowed in irrelevance had Altman not raised the billions from Microsoft to fund its research. Because he was able to do that, and built the relationship with Nadella, he had and has power. Toner's behavior seems naive in that context. But naiveté among academics does not surprise me.
It seems to me as if Toner was just in the right place at the right time to get a seat on the board of Open AI. She has an undergrad degree in chemical engineering and a master's in "security studies." Her work on AI Safety opened this door for her but seems, at least to me, to be ... superficial. I'm no expert, but I have worked in tech for a while and at a public policy think tank. So I guess my question is: is there a real scientific field of AI Safety? Are there any real experts? Are there any real insights? I dislike the idea of trusting giant tech companies with breakthrough technologies with minimal oversight / regulation. But it just seems like there is no real science regarding AI x public policy. Like the policy experts have no clue what they are talking about. And after this debacle they probably won't be lucky enough to find themselves on the boards of organizations like Open AI.
She specifically declines in the article to provide the additional context that would probably clarify everything. This is just more non-answers and only a tad more information than we had before (threats of violating a fiduciary duty).<p>Nobody involved is being transparent about what went down here. Not sure we'll find out the full story here...
AI Safety or lack thereof, to the extent that it emerges as a real threat, will be a matter for law enforcement.<p>Just like “terrorism” and “drugs” and all the other vaguely menacing things that ended up being worth a pile of money to some incumbent interest, the real story will be boring: some crooks used a fancy-ass LLM to take a bank apart or something. The bank calls the cops, this happens about twice before the cops have a division for this sort of thing, and sooner or later someone is in a courtroom.<p>It would be nauseatingly boring to get into how many times this has happened.<p>Also predictably depressing, the real story, which is yet another brick in the wall of “we’re handing the reins to people who make movie Zuck look flat fucking normal”, goes largely unreported.<p>We used to complain about dual-class share structures.<p>Sam just eviscerated the board of an ostensible charity with the full backing of MSFT and got himself installed as Ungovernable God Emperor of Arrakis for life. Still doing the eyeball scanner thing.<p>Still has never shown the public he can handle numpy to the tune of MNIST.<p>This is all in broad daylight, and the Gemini launch gets clobbered because the 4-series comparable one comes out in January.<p>Sam’s a smart guy; I’m quite sure he’d pay attention if the braintrust (this site) complained loudly.<p>But the message is: “more, not less”.
> Toner maintains that safety wasn’t the reason the board wanted to fire Altman. Rather, it was a lack of trust<p>> Toner declined to provide specific details on why she and the three others voted to fire Altman from OpenAI<p>Does she see herself as more trustworthy? She can't even be bothered to give an excuse for the firing.<p>Someone like this couldn't be trusted in literally any function, how did she get a board seat on OpenAI?
I’m glad she’s off the board. You cannot have a trigger-happy board member fire an important CEO over what seems to me a small trust issue. In fact, it seems like she was overusing her powers, willing to “hold principle” and destroy everything to achieve it. It proves she cannot think in between extremes: saying it’s “protect humanity at all costs” is one thing, but you really need a brain to execute that, and she has proven it was a massive f-up.
> Some of Altman’s backers, including OpenAI investor Vinod Khosla, publicly expressed derision specifically toward Toner and Tasha McCauley, another former OpenAI board member who voted to fire Altman and is connected to organizations that promote effective altruism.<p>> “Fancy titles like ‘Director of Strategy at Georgetown’s Center for Security and Emerging Technology’ can lead to a false sense of understanding of the complex process of entrepreneurial innovation,” Khosla wrote in an essay in tech-news publication the Information, referring to Toner and her current position.<p>Strong words from the infamous beach villain and resident asshole, Vinod Khosla...<p>Every Generation Gets the Beach Villain It Deserves (NYT)
<a href="https://archive.is/6mss9" rel="nofollow noreferrer">https://archive.is/6mss9</a>
> Toner maintains that safety wasn’t the reason the board wanted to fire Altman.<p>This is the biggest news of it all. So the Q* thing, Ilya's safety position, and the other theories were all incorrect?
Allowing the board to control AI safety is BS. It’s like ethics: the engineer is usually imbued with ethics when they work at any firm. It’s that simple. If you work at an airline manufacturer, your duty is to not cause harm and to test properly; it’s no different with AI. The engineer understands this better than any board member getting paid to do nothing.
Anyone else come to the realization that when spokespeople talk about "AI Safety" they aren't concerned with the skynet-esque enslavement of mankind or paperclip maximizing, but that controls be in place that prevent people from using the technology in a way that is misaligned with the maximum extraction of profit?
Helen Toner has proven herself to be a poor communicator. This is what will happen if you give inexperienced people, who have not earned it, board seats.
openai is where it's at because of sama & greg (& ofc contributions of employees, but funding & solid engineering leadership is a big part of its success); to think that she could just push them away without it backfiring is mind-blowing.
She didn’t think there would be employee backlash when they fired the guy who was almost finished closing a financing that would make most senior employees millionaires?<p>I respect that she and other board members didn’t like that Sam was trying to manipulate them into ousting her but their counterattack was poorly thought out.
> The board’s mandate is to “humanity,” not investors.<p>Somebody put her there, told her "you know, say it's for humanity or something" and she <i>actually</i> believed it.<p>No, it's for people to get rich.