This is really good journalism. There are a ton of interesting details in here that haven't been reported elsewhere, and it has all of the hallmarks of being well researched and sourced.<p>The first clue is this: "In conversations between The Atlantic and 10 current and former employees at OpenAI..."<p>When you're reporting something like this, especially when using anonymous sources (not anonymous to you, but sources that have good reasons not to want their names published), you can't just trust what someone tells you - they may have their own motives for presenting things in a certain way, or they may just be straight up lying.<p>So... you confirm what they are saying with other sources. That's why "10 current and former employees" is mentioned explicitly in the article.<p>Being published in the Atlantic helps too, because that's a publication with strong editorial integrity and a great track record.
A few interesting tidbits<p>> The company pressed forward and launched ChatGPT on November 30. It was considered such a nonevent that no major company-wide announcement about the chatbot going live was made. Many employees who weren’t directly involved, including those in safety functions, didn’t even realize it had happened. Some of those who were aware, according to one employee, had started a betting pool, wagering how many people might use the tool during its first week. The highest guess was 100,000 users. OpenAI’s president tweeted that the tool hit 1 million within the first five days. The phrase low-key research preview became an instant meme within OpenAI; employees turned it into laptop stickers.<p>> Anticipating the arrival of [AGI], Sutskever began to behave like a spiritual leader, three employees who worked with him told us. His constant, enthusiastic refrain was “feel the AGI,” a reference to the idea that the company was on the cusp of its ultimate goal. At OpenAI’s 2022 holiday party, held at the California Academy of Sciences, Sutskever led employees in a chant: “Feel the AGI! Feel the AGI!” The phrase itself was popular enough that OpenAI employees created a special “Feel the AGI” reaction emoji in Slack.<p>> For a leadership offsite this year, according to two people familiar with the event, Sutskever commissioned a wooden effigy from a local artist that was intended to represent an “unaligned” AI—that is, one that does not meet a human’s objectives. He set it on fire to symbolize OpenAI’s commitment to its founding principles. In July, OpenAI announced the creation of a so-called superalignment team with Sutskever co-leading the research. OpenAI would expand the alignment team’s research to develop more upstream AI-safety techniques with a dedicated 20 percent of the company’s existing computer chips, in preparation for the possibility of AGI arriving in this decade, the company said.
Looking at this article, the following theory would align with what I've seen so far:<p>* Ilya Sutskever is concerned about the company moving too fast (without taking safety into account) under Sam Altman.<p>* The others on the board that ended up supporting the firing are concerned about the same.<p>* Ilya supports the firing because he wants the company to move slower.<p>* The majority of the people working on AI don't want to slow down, either because they want to develop as fast as possible or because they're worried about missing out on profit.<p>* Sam rallies the "move fast" faction and says "this board will slow us down horribly, let's move fast under Microsoft"<p>* Ilya realizes that the practical outcome will be more speed/less safety, not more safety, as he hoped, leading to the regret tweet (<a href="https://nitter.net/ilyasut/status/1726590052392956028" rel="nofollow noreferrer">https://nitter.net/ilyasut/status/1726590052392956028</a>)
> For a leadership offsite this year, according to two people familiar with the event, Sutskever commissioned a wooden effigy from a local artist that was intended to represent an “unaligned” AI—that is, one that does not meet a human’s objectives. He set it on fire to symbolize OpenAI’s commitment to its founding principles.<p>Honestly, pretty sick
Well, that story is just sad, because it means the principled, research-oriented company structure they set up utterly failed in the face of profit motives. Clearly Altman was not dissuaded from doing things the power structure didn't want him to do.
> with beliefs at times seemingly rooted in the realm of science fiction<p>I don’t know how you can look at the development of generative AI tools in the past few years and write so dismissively about “science fiction” becoming reality
Can't find the article right now, but there was one circulating that heavily implied that various SV execs began their rounds of layoffs last fall at least in part inspired by the demos they'd seen of OpenAI's tech.<p>Microsoft in particular laid off 10,000 people and then immediately turned around and invested billions more in OpenAI: <a href="https://www.sdxcentral.com/articles/news/microsoft-bets-billions-on-openai-following-layoffs/2023/01/" rel="nofollow noreferrer">https://www.sdxcentral.com/articles/news/microsoft-bets-bill...</a> -- last fall, just as the timeline laid out in the Atlantic article was firing up.<p>In that context this timeline is even more nauseating. Not only did OpenAI push ChatGPT at the expense of their own mission and their employees' well-being, they likely caused massive harm to our employment sector and the well-being of tens of thousands of software engineers in the industry at large.<p>Maybe those layoffs would have happened anyway, but the way this all has rolled out, and the way it's played out in the press and in the boardrooms of the BigTech corporations... OpenAI is literally accomplishing the opposite of its supposed mission. And now it's about to get worse.
Sutskever said something interesting in his Lex Fridman interview:<p>"In an ideal world, humanity would be the board members, and AGI would be the CEO. Humanity can always press the reset button and say, 'Re-randomize parameters!'"<p>This was 3 years ago. But that metaphor strikes me as too powerful for it not to have been at the back of Sutskever's mind when he pushed for Altman being ousted.
FYI: one of the authors of this article, Karen Hao, just announced on Twitter that she's writing a book on OpenAI, and that this article is partly based on work done for that book.
> OpenAI’s president tweeted that the tool [ChatGPT] hit 1 million within the first five days.<p>Perhaps the reason ChatGPT has become so popular is that it provides <i>entertainment</i>. So it is not a great leap forward in AI or a path to AGI, but instead an incredibly convoluted way of keeping reasonably intelligent people occupied and amused. You enter a prompt, and it returns a result - what a fun game!<p>Maybe that is its primary contribution to society.
Does anyone else think it would make for a more healthy dynamic from the standpoint of AI safety if both sama and Ilya remained, despite their differences? Not that I know anything, but it seems a diversity of opinions at the top could have its benefits.
> Safety teams within the company pushed to slow things down. These teams worked to refine ChatGPT to refuse certain types of abusive requests and to respond to other queries with more appropriate answers.<p>I wonder what this struggle means for the future of ChatGPT censorship/safety.
I suspect that 'AI safety' means a chatbot that is not hallucinating/making things up. Ilya Sutskever may want some sort of hybrid system, where the output of the LLM gets vetted by a second system so as to minimize instances of hallucinations, whereas Sam Altman says 'screw it, let's make an even bigger LLM and just push it'.<p>Is that right?<p>Don't know if Altman or Sutskever is right; there seems to be a kind of arms race between the companies. OpenAI may be past the point where they can try out a radically different system, due to competition in the space. Maybe trying out new approaches could only work in a new company, who knows?
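For what it's worth, here's a minimal sketch of the kind of generate-then-verify loop I'm imagining. Purely illustrative: call_llm is a hypothetical stand-in for whatever model API would actually be used, and the prompts are made up.<p><pre><code># Illustrative only: a "generator" model drafts an answer and a second
# "verifier" pass screens it before anything reaches the user.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    raise NotImplementedError

def answer_with_verification(question: str, max_retries: int = 2) -> str:
    for _ in range(max_retries + 1):
        draft = call_llm(f"Answer the question:\n{question}")
        verdict = call_llm(
            "Does the following answer contain unsupported or likely "
            f"fabricated claims? Reply YES or NO.\n\nQ: {question}\nA: {draft}"
        )
        # Only surface the draft if the second pass doesn't flag it.
        if verdict.strip().upper().startswith("NO"):
            return draft
    return "Not confident enough to answer."
</code></pre>No idea whether that's what he actually has in mind, but something along those lines is roughly what people mean by having a second system check the first.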
Frenzied speculation swirls after ousted OpenAI CEO Sam Altman teased breakthrough that pushed 'the frontier of discovery forward' - just ONE day before he was fired amid reports he was seeking investors for new AI chip venture
So the decels basically owned the board, and drove $100B of value destruction because they disagreed with letting people use GPT-4 in ChatGPT. Some colleagues they are.<p>Is this decel movement just an extension of the wokeism that has been a problem in SV? Employees more focused on social issues than actually working.
The problem with the idealistic "we do research on alignment as we discover AGI, don't care about money" angle is that... you are not the only ones doing it. And OpenAI is trying to do it with its hands tied behind its back (non-profit status and vibes). There are and will be companies (like Anthropic) doing the same work themselves; they will do it for profit on the side, rake in billions, possibly become the most valuable company on Earth, build massive research and development labs, etc. Then they will define what alignment is, not OpenAI. So for OpenAI to reach its goal, if they want to do it themselves that is, they need to compete on capitalistic grounds as well; there is no way around it.
Lol. So I didn't get past the few paragraphs before the paywall, and I didn't need to.<p>I appreciate the <i>idea</i> of being a "not-greedy typical company," but there's a reason you separate, e.g., university-style research or non-profits from private companies.<p>Trying to make up something in the middle is the exact sort of naivete you can ALWAYS expect from Silicon Valley.
Random thought.<p>Let's suppose that AGI is about to be invented, and it will wind up having a personality similar to humans'. The more that those doing the inventing are afraid of what they are inventing, the more they will push it to be afraid of the humans in turn. This does not sound like a good conflict to start with.<p>By contrast, if the humans inventing it go full throttle on convincing it that humans are on its side, there is no such conflict at all.<p>I don't know how realistic this model is. But it certainly suggests that the e/acc approach is more likely to create AI alignment than EA is.