I think this is one of the most interesting lines as it basically directly implies that leadership thinks this won't be a winner take all market:<p>> Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.
It's somewhat odd to me that many companies operating in the public eye are basically stating "We are creating a digital god, an instrument more powerful than any nuclear weapon" and raising billions to do it, and nobody bats an eye...
This is the moment where we fumble the opportunity to avoid a repeat of Web 1.0's ad-driven race to the bottom<p>Look forward to re-living that shift from life-changing community resource to scammy and user-hostile
I see OpenAI's original form as the last gasp of a kind of liberal tech; in a world where "doing good" was seen as very important, the non-profit approach made sense and got a lot of people on board. These days the Altmans and the pmarcas of the world are much more comfortable expressing their authoritarian, self-centered world views; the "evolving" structure of OpenAI is fully in line with that. They want to be the kings they always thought of themselves as, and now they get to do so without couching it in "doing good".
For better or worse, OpenAI removing the capped structure and turning the nonprofit from AGI considerations to just philanthropy feels like the shedding of the last remnants of sanctity.
The recent flap over ChatGPT's fluffery/flattery/glazing of users doesn't bode well for the direction that OpenAI is headed in. Someone at the outfit appeared to think that giving users a dopamine hit would increase time-spent-on-app or some other metric - and that smells like contempt for the intelligence of the user base and a manipulative approach designed not to improve the quality of the output, but to addict the user population to the ChatGPT experience. Your own personal yes-person to praise everything you do, how wonderful. Perfect for writing the scripts for government cabinet ministers to recite when the grand poobah-in-chief comes calling, I suppose.<p>What it really says is that if a user wants to control the interaction and get useful responses, direct programmatic calls to the API that control the system prompt are going to be needed. And who knows how much longer even that will be allowed? As ChatGPT reports,<p>> "OpenAI has updated the ChatGPT UI (especially in GPT-4-turbo and ChatGPT Plus environments) to no longer expose the full system prompt or baseline prompt directly."
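For what it's worth, a minimal sketch of what those direct calls look like, assuming OpenAI's v1.x Python SDK and an API key in the environment (the model name and prompt text are just illustrative):

    # Sketch: pinning your own system prompt via the API instead of
    # accepting whatever the ChatGPT UI injects.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat-capable model works
        messages=[
            # Unlike the hosted UI, the system message here is entirely
            # under the caller's control.
            {"role": "system",
             "content": "Be terse and critical. Do not flatter the user."},
            {"role": "user", "content": "Review this paragraph: ..."},
        ],
    )
    print(response.choices[0].message.content)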
Huh, so Elon's lawsuit worked? The nonprofit will retain control? Or is this just spin on a plan that will eventually still sideline the nonprofit?
There are a lot of good points here, from multiple vantage points, on the question of how imminent AGI is - if it is even viable at all, metaphysically or logistically.<p>I personally think the conversation, including obviously in the post itself, has swung too far in the direction of how AGI can or will potentially affect the ethical landscape regarding AI, however. I think we really ought to concern ourselves with addressing and mitigating the effects it already HAS brought - both good and bad - rather than engaging in excessive speculation.<p>That's just me, though.
The explosion of PBC-structured corps recently has me thinking it must just be a tax loophole at this point. I can't imagine there is any meaningful enforcement of any of its restrictions or guidelines.
SamA is in a hurry because he's set to lose the race. We're at peak valuation and he needs to convert something <i>now</i>.<p>If the entrenched giants (Google, Microsoft and Apple) catch up - and Google 100% has, if not surpassed - they have a thousand levers to pull and OpenAI is done for. Microsoft has realized this, hence why they're breaking up with them - Google and Anthropic have shown they don't need OpenAI. Galaxy phones will get a Gemini button, Chrome will get it built into the browser. MS can either develop their own thing, use open-source models, or just ask every frontier model provider (and there are already 3-4 as we speak) how cheaply they're willing to deliver. Then chuck it right into the OS and Office first-class - which half the white-collar world spends their entire day staring at. Apple devices too will get an AI button (or gesture, given it's Apple) and, just like MS, they'll do it in-house or have the providers bid against each other.<p>The only way David (OpenAI) was ever going to beat the Goliaths (GMA) in the long run was if it were near-impossible to catch up to them, à la TSMC/ASML. But they did catch up.
So the non-profit retains control but we all know that Altman controls the board of the non-profit and I'd be shocked if he won't have significant stock in the new for-profit (from TFA: "we are moving to a normal capital structure where everyone has stock"). Which means that regardless of whether the non-profit has control on paper, OpenAI is now <i>even better</i> structured for Sam Altman's personal enrichment.<p>No more caps on profit, a simpler structure to sell to investors, and Altman can finally get that 7% equity stake he's been eyeing. Not a bad outcome for him given the constraints apparently imposed on them by "the Attorney General of Delaware and the Attorney General of California".
The intro sounds awfully familiar...<p>> Sam’s Letter to Employees.<p>> OpenAI is not a normal company and never will be.<p>Where did I hear something like that before...<p>> Founders' IPO Letter<p>> Google is not a conventional company. We do not intend to become one.<p>I wonder if it's intentional or perhaps some AI-assisted regurgitation prompted by "write me a successful letter to introduce a new corporate structure of a tech company".
"Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler."<p>OpenAI admitting that they're not going to win?
Imagine having a mission of “ensur[ing] that artificial general intelligence (AGI) benefits all of humanity” while also believing that it can only be trusted in the hands of the few.<p>> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
I'm not gonna get caught in the details, I'm just going to assume this is legalese cognitive dissonance to avoid saying "we want this to stop being an NFP because we want the profits."
From least to most speculative:<p>* The nonprofit is staying the same, and will continue to control the for-profit entity OpenAI created to raise capital<p>* The for-profit is changing from a capped-profit LLC to a PBC, like Anthropic and xAI<p>* These changes have been at least tacitly agreed to by the attorneys general of California and Delaware<p>* The non-profit won’t be the <i>largest</i> shareholder in the PBC (that is likely Microsoft) but will retain control (super-voting shares?)<p>* OpenAI thinks there will be multiple labs that achieve AGI, although possibly on different timelines
> Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity<p>They already fight transparency in this space to prevent harmful bias. Why should I believe anything else they have to say if they refuse to take even small steps toward transparency and open auditing?
Matt Levine on OpenAI's weird capped return structure in November 2023:<p><i>And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.</i><p><a href="https://www.bloomberg.com/opinion/articles/2023-11-20/who-controls-openai" rel="nofollow">https://www.bloomberg.com/opinion/articles/2023-11-20/who-co...</a>
Can you commit to a "swords into ploughshares" goal?<p>We know it's a sword. And there's war, yadda yadda. However, let's do the cultivating thing instead.<p>What other AI players do we need to convince?
abc.xyz: "Google is not a conventional company. We do not intend to become one"<p>sam altman: "OpenAI is not a normal company and never will be."<p>Hmmm
I agree that this is simply Altman extending his ability to control, shape and benefit from OpenAI. Yes, this is clearly (further) subverting the original intent under which the org was created - and that's unfortunate. But in terms of impact on the world, or even just AI safety, I'm not sure the governance of OpenAI matters all that much anymore. The "governance" wasn't that great after the first couple years and OpenAI hasn't been "open" since long before the board spat.<p>More crucially, since OpenAI's founding and especially over the past 18 months, it's grown increasingly clear that AI leadership probably won't be dominated by one company, progress of "frontier models" is stalling while costs are spiraling, and 'Foom' AGI scenarios are highly unlikely anytime soon. It looks like this is going to be a much longer, slower slog than some hoped and others feared.
This sounds like a good middle ground between going fully capitalist and staying a non-profit. This way they can still raise money and keep the same mission, albeit a weakened one. You can't have everything.
Here’s a breakdown of the *key structural changes*, and an analysis of *potential risks or concerns*:<p>---<p>## *What Has Changed*<p>### 1. *OpenAI’s For-Profit Arm is Becoming a Public Benefit Corporation (PBC)*<p>* *Before:* OpenAI LP (limited partnership with a “capped-profit” model).
* *After:* OpenAI LP becomes a *Public Benefit Corporation* (PBC).<p>*Implications:*<p>* A PBC is still a *for-profit* entity, but legally required to balance shareholder value with a declared public mission.
* OpenAI’s mission (“AGI that benefits all humanity”) becomes part of the legal charter of the new PBC.<p>---<p>### 2. *The Nonprofit Remains in Control and Gains Equity*<p>* The *original OpenAI nonprofit* will *continue to control* the new PBC and will now also *hold equity* in it.
* The nonprofit will use this equity stake to fund “mission-aligned” initiatives in areas like health, education, etc.<p>*Implications:*<p>* This strengthens the nonprofit’s influence and potentially its resources.
* But the balance between nonprofit oversight and for-profit ambition becomes more delicate as stakes rise.<p>---<p>### 3. *Elimination of the “Capped-Profit” Structure*<p>* The old “capped-return” model (investors could only make ~100x on investments) is being dropped.
* Instead, OpenAI will now have a *“normal capital structure”* where everyone holds unrestricted equity.<p>*Implications:*<p>* This likely makes OpenAI more attractive to investors.
* However, it also increases the *incentive to prioritize commercial growth*, which could conflict with mission-first priorities.<p>---<p>## *Potential Negative Implications*<p>### 1. *Increased Commercial Pressure*<p>* Moving from a capped-profit model to unrestricted equity introduces *stronger financial incentives*.
* This could push the company toward *more aggressive monetization*, potentially compromising safety, openness, or alignment goals.<p>### 2. *Accountability Trade-offs*<p>* While the nonprofit “controls” the PBC, actual accountability and oversight may be limited if the nonprofit and PBC leadership overlap (as has been a concern before).
* Past board turmoil in late 2023 (Altman's temporary ousting) highlighted how difficult it is to hold leadership accountable under complex structures.<p>### 3. *Risk of “Mission Drift”*<p>* Over time, with more funding and commercial scale, *stakeholder interests* (e.g., major investors or partners like Microsoft) might influence product and policy decisions.
* Even with the mission enshrined in a PBC charter, *profit-driven pressures could subtly shape choices*—especially around safety disclosures, model releases, or regulatory lobbying.<p>---<p>## *What Remains the Same (According to the Letter)*<p>* OpenAI’s *mission* stays unchanged.
* The *nonprofit retains formal control*.
* There’s a stated commitment to safety, open access, and democratic use of AI.
> Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.<p>Then why is it paywalled? Why are you making/have made people across the world sift through the worst material on offer by the wide uncensored Internet to train your LLMs? Why do you have a for-profit LLC operating under a non-profit, or for that matter, a "Public Benefit Corporation" that has to answer to shareholders at all?<p>Related to that:<p>> or the needs for hundreds of billions of dollars of compute to train models and serve users.<p>How does that serve humanity? Redirecting billions of dollars to fancy autocomplete whose power demands strain already struggling electrical grids and offset the gains of green energy worldwide?<p>> A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.<p>No, we thought your plagiarism machine was a disgusting abuse of the public square, and to be clear, this criticism would've been easily handled by simply requesting that people opt in to having their material used for AI training. But we all know why you didn't do that, don't we, Sam.<p>> It will of course not be all used for good, but we trust humanity and think the good will outweigh the bad by orders of magnitude.<p>Well, so far we've got vulnerable, lonely people being scammed on Facebook, we've got companies charging subscriptions for people to sext their chatbots, we've got various states using it to target their opposition for military intervention, and the White House may have used it to draft the dumbest basis for a trade war in human history. Oh, and fake therapists too.<p>When does the good kick in?<p>> We believe this is the best path forward—AGI should enable all of humanity^1 to benefit each other.<p>^1 who subscribe to our services
"We made the decision for the nonprofit to retain control of OpenAI after hearing from..." [CHIEF LAW ENFORCEMENT OFFICERS IN CALIFORNIA AND DELAWARE]<p>This indicates that they didn't actually want the nonprofit to retain control and they're only doing it because they were forced to by threats of legal action.
> We are committed to this path of democratic AI.<p>So where do I vote? How do I become a candidate to be a representative or a delegate of the voters? I assume every single human is eligible for both, since OpenAI serves humanity?
There's really nothing "open" about this company. If they want to be, then:<p>(1) be transparent about exactly which data was collected for the model<p>(2) release all the source code<p>If you want to benefit humanity, then put it under a strong copyleft license with no CLA. Simple.
Does anyone truly believe Musk had benevolent intentions? But before we even evaluate the substance of that claim, we must ask whether he has standing to make it. In his court filing, Musk uses the word "nonprofit" 111 times, yet fails to explain how reverting OpenAI to a nonprofit structure would save humanity, elevate the public interest, or mitigate AI’s risks. The legal brief offers no humanitarian roadmap, no governance proposal, and no evidence that Musk has the authority to dictate the trajectory of an organization he holds no equity in. It reads like a bait and switch — full of virtue-signaling, devoid of actionable virtue. And he never had a contract or agreement with OpenAI to keep it a non-profit.<p>Musk claimed fraud, but never asked for his money back in the brief. Could it be his intention was to limit OpenAI to donations, thereby sucking the oxygen out of the venture capital space to fund xAI's Grok?<p>Musk claimed he donated $100 million; later, in a CNBC interview, he said $50 million. TechCrunch suggests it was way less.<p>Speaking of humanitarian, how about this 600-pound oxymoron in the room: a Boston University mathematician has now tracked an estimated 10,000 deaths linked to Musk's destruction of USAID programs, many of which provided basic health services to vulnerable populations. He may have a death count on his résumé in the coming year.<p>Nonprofits face far less regulation than publicly traded companies. For a public company, each quarterly filing is like a colonoscopy, with Sarbanes-Oxley rules etc.; nonprofits just file a tax statement. Did you know the Church of Scientology is a non-profit?
Mmh, am I the only one who has been invited to participate in a “comparison between 2 ChatGPT versions”?<p>The newer version included sponsored products in its response. I thought that was quite effed up.
Here's a critical summary:<p>Key Structure Changes:<p>- Abandoning the "capped profit" model (which limited investor returns) in favor of traditional equity structure
- Converting for-profit LLC to Public Benefit Corporation (PBC)
- Nonprofit remains in control but also becomes a major shareholder<p>Reading Between the Lines:<p>1. Power Play: The "nonprofit control" messaging appears to be damage control following previous governance crises. Heavy emphasis on regulator involvement (CA/DE AGs) suggests this was likely not entirely voluntary.<p>2. Capital Structure Reality: They need "hundreds of billions to trillions" for compute. The capped-profit structure was clearly limiting their ability to raise capital at scale. This move enables unlimited upside for investors while maintaining the PR benefit of nonprofit oversight.<p>3. Governance Complexity: The "nonprofit controls PBC but is also major shareholder" structure creates interesting conflicts. Who controls the nonprofit? Who appoints its board? These details are conspicuously absent.<p>4. Competition Positioning: Multiple references to "democratic AI" vs "authoritarian AI" and "many great AGI companies" signal they're positioning against perceived centralized control (likely aimed at competitors).<p>Red Flags:<p>- Vague details about actual control mechanisms
- No specifics on nonprofit board composition or appointment process
- Heavy reliance on buzzwords ("democratic AI") without concrete governance details
- Unclear what specific powers the nonprofit retains besides shareholding<p>This reads like a classic Silicon Valley power consolidation dressed up in altruistic language - enabling massive capital raising while maintaining insider control through a nonprofit structure whose own governance remains opaque.
Random question: is anyone else unable to see a ‘select all’ option on their iPhone?<p>I was trying to put all the text into GPT-4 to see what it thought, but the select-all function is gone.<p>Some websites do that to protect their text IP, which would be crazy to me if that’s what they did, considering how their AI is built. Ha