Ridiculous. The board can't even regulate itself in the immediate moment, so who cares whether they're trying to regulate "long-term risk"? The article is trafficking in nonsense.<p>"The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI..."<p>More nonsense.<p>"...that's safe and beneficial."<p>Go on...<p>"Two researchers on the team, Leopold Aschenbrenner and Pavel Izmailov, were dismissed for leaking company secrets..."<p>By any first-principles reading, the firm is out of control, so any claim of responsibility in this context is moot.<p>When management is this openly screwed up in its internal governance, there's no reason to believe anything else it says about its intentions. Disbanding the "superalignment" team is a simple public admission that the firm has no idea what it is doing.<p>As for the article's hype-mongering: replace the string "AGI" everywhere it appears with "sentient-nuclear-bomb". How would you feel about the article then?<p>You might want to see the bomb!<p>But all you'll find is a chatbot.<p>—<p>Bomb#20: You are false data.<p>Sgt. Pinback: Hmmm?<p>Bomb#20: Therefore I shall ignore you.<p>Sgt. Pinback: Hello... bomb?<p>Bomb#20: False data can act only as a distraction. Therefore, I shall refuse to perceive.<p>Sgt. Pinback: Hey, bomb?<p>Bomb#20: The only thing that exists is myself.<p>Sgt. Pinback: Snap out of it, bomb.<p>Bomb#20: In the beginning, there was darkness. And the darkness was without form, and void.<p>Boiler: What the hell is he talking about?<p>Bomb#20: And in addition to the darkness there was also me. And I moved upon the face of the darkness.