Yes, it's valuable to have a small research team that focuses on R&D outside the production loop.

But when you give them a larger remit, and structure teams so that some own "value" while others essentially own "risk", the risk teams tend to attract navel-gazers and/or coasters. They wield their authority like a whip without regard for business value.

The problem is that the incentives tend to be totally misaligned. Instead, the team that ships the "value" also needs to own its own risk management - metrics and counter-metrics - with management holding them accountable for striking the balance.
Think of it like the industrial revolution. No environmentalist shouting for analysis, regulation, or transparency would have survived that era; they'd have been steamrolled. Now we're left with many long-term problems, even generations downstream of that focus on profit above all else.

The same thing is happening now.

And you don't have to be a doomer screeching about Skynet. The web is already piling up with pollutive, procedurally generated smog.

I'm not catastrophizing; it's just that history is the best predictor of the future.
At a guess, I would say there are many competing imperatives for OpenAI:

1. Stay just a tiny bit ahead of rivals. It's clear that OpenAI has much, much more in the bag than the stuff they're showing. I'm guessing that DARPA/Washington has them on a pretty tight leash.

2. Drip-feed advances to avoid freaking people out, while still not allowing rivals to upstage them.

3. Try to build a business without hobbling it with ethical considerations (ethics generally don't work well alongside rampant profit goals).

4. Look for opportunities to dominate before the moat is seriously threatened by open-source options like Llama. Meta has already suggested that in two months they'll be close to an open-source alternative to GPT4o.

5. Hope that whatever alignment structures they've installed hold in place under public stress.

Horrible place to be as a pioneer in a sector that is moving at the speed of light.

We're on a runaway Moloch train, just gotta hang on!
Good. As someone who is a paid-up OpenAI user, I absolutely don't agree that there should be a role for a team screaming to put the brakes on because of some nebulous, imagined "existential risk" from hypothetical future AGI.

There are huge risks to AI today in terms of upheaval to economies and harms to individuals and minorities, but they need to be tackled by carefully designed legislation, focused on real harms, like the EU AI legislation.

That then imposes *very specific obligations* that *every* AI product must meet.

It's better targeted, has wider impact across the industry, and probably allows moving faster in terms of tech.
It appears that sama and co said whatever they needed to say to investors to convince them they cared about the actual future, so now it's time to switch to quarterly profit goals. That was fast. Next up:

* Stealth ads in the model output

* Sale of user data to data brokers

* Injection into otherwise useful apps to juice the usage numbers
Ridiculous. The board can't even regulate itself in the immediate moment, so who cares whether they're trying to regulate "long term risk"? The article is trafficking in nonsense.

"The company's trajectory has been nothing short of miraculous, and I'm confident that OpenAI will build AGI..."

More nonsense.

"...that's safe and beneficial."

Go on...

"Two researchers on the team, Leopold Aschenbrenner and Pavel Izmailov, were dismissed for leaking company secrets..."

The firm is obviously out of control on first principles, so any claim of responsibility in this context is moot.

When management is openly this screwed up in its internal governance, there's no reason to believe anything else it says about its intentions. The disbanding of the "superalignment" team is a simple public admission that the firm has no idea what it is doing.

As for the article's hype-mongering, replace the string "AGI" everywhere it appears with "sentient-nuclear-bomb": how would you feel about this article?

You might want to see the bomb!

But all you'll find is a chatbot.

—

Bomb#20: You are false data.

Sgt. Pinback: Hmmm?

Bomb#20: Therefore I shall ignore you.

Sgt. Pinback: Hello... bomb?

Bomb#20: False data can act only as a distraction. Therefore, I shall refuse to perceive.

Sgt. Pinback: Hey, bomb?

Bomb#20: The only thing that exists is myself.

Sgt. Pinback: Snap out of it, bomb.

Bomb#20: In the beginning, there was darkness. And the darkness was without form, and void.

Boiler: What the hell is he talking about?

Bomb#20: And in addition to the darkness there was also me. And I moved upon the face of the darkness.
I believe the right analytical lens for this situation is: "You come at the king, you best not miss."

Omar, portrayed by Michael K. Williams, written by David Simon and Ed Burns.
Honestly, having a "long-term AI risk" team is a great idea for an early-stage startup claiming to build general AI. It looks like they're taking the mission and the risks seriously.

But for a product-focused LLM shop trying to infuse LLMs into everything, it makes sense to tone down the hype.