I’m confused by this news story and the response here. No one seems to understand OpenAI’s corporate structure or non-profits <i>at all.</i><p>My understanding: OpenAI follows the same model Mozilla does. Since 2019 [1], the nonprofit has owned a for-profit corporation called <i>OpenAI Global, LLC</i> that pays taxes on any revenue that isn’t directly in service of their mission (in a very narrow sense based on judicial precedent). In Mozilla’s case that’s the revenue they make from making Google the default search engine, and in OpenAI’s case that’s all their ChatGPT and API revenue. The vast majority (all?) of engineers work for the for-profit and always have. The vast majority (all?) of revenue goes through the for-profit, which pays taxes on that revenue minus the usual business deductions. The only money that goes to the nonprofit tax-free is donations. Everything else is taxed at least once at the for-profit corporation. Almost every nonprofit that raises revenue outside of donations has to be structured more or less this way to pay taxes. They don’t get to just take any taxable revenue stream and declare it tax-free.<p>All OpenAI is doing here is decoupling ownership of the for-profit entity from the nonprofit. They’re allowing the for-profit to create more shares and distribute them to entities other than the non-profit. Or am I completely misinformed?<p>[1] <a href="https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_non-profit" rel="nofollow">https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...</a>
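To make that money flow concrete, here is a purely illustrative back-of-the-envelope sketch; the dollar figures are invented, and the 21% rate is just the current US federal corporate rate, ignoring state taxes and other deductions:

  For-profit subsidiary:   $100M revenue - $80M costs = $20M taxable profit
  Corporate tax (~21%):    $20M x 0.21 = $4.2M paid in tax
  Nonprofit parent:        $5M received as donations, not taxed

In other words, only donations reach the nonprofit tax-free; everything the for-profit earns is taxed before any of it could flow upward.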
Can anybody explain how this actually works? What happens to all of the non-profit's assets? They can't just give it away for investors to own.<p>The non-profit could maybe sell its assets to investors, but then what would it do with the money?<p>I'm sure OpenAI has an explanation, but I really want to hear more details. In the most simple analysis of "non-profit becomes for-profit", there's really no way to square it other than non-profit assets (generated through donations) just being handed to somebody for private ownership.
And more high-level exits: not only Mira Murati, but also Bob McGrew and Barret Zoph.<p><a href="https://www.businessinsider.com/sam-altman-openai-note-more-staff-exits-barret-bob-2024-9" rel="nofollow">https://www.businessinsider.com/sam-altman-openai-note-more-...</a>
In that case, where can I apply for a licensing fee for my content that they have scraped and trained on?<p>List of crawlers, for those who now want to block them (a sample robots.txt sketch follows the link below):
<a href="https://platform.openai.com/docs/bots" rel="nofollow">https://platform.openai.com/docs/bots</a>
We are booing Altman because his bait-and-switch feels unethical, but many of us saw it coming from a mile away. How does he make this transition to take advantage of the financial system so smoothly? Is there no legal guardrail against such a maneuver, or is he just an insanely good player who circumvents all of them in plain view?
Converting to a for-profit changes the tax status of donations. It also weakens the plausibility of any Fair Use exemption.<p>I can see large copyright holders lining up with takedowns demanding they revise their originating datasets, since there will now be clear-cut commercial use without a license.
The incremental transformation from non-profit to for-profit... does anyone have legal standing to sue?<p>Early hires, who were lured there by the mission?<p>Donors?<p>People who were supposed to be served by the non-profit (everyone)?<p>Some government regulator?
So are they going to give Elon equity? He donated millions to the non-profit, and now they are going to turn around and convert the company into a for-profit based on the work done with that capital.
I never understood why people take non-profit companies as more altruistic than for-profit companies. Non-profit doesn't mean no profits at all; they still have to be profitable. It just boils down to how the profits are distributed. There are plenty of sleazy institutions that are non-profits, like the NCAA.<p>Foundations and charitable organizations that get their funding publicly are a different story, but I'm talking about non-profit companies.<p>I even had one fellow say that the Green Bay Packers were less corrupt than the other for-profit NFL teams, which sounds ridiculous.
About a year ago (I believe), Sam Altman touted his mission to promote safe AI with claims that he had no equity in OpenAI and was never interested in getting any. Look where we are now. Well played, Sam.
I know nothing about companies (esp. in the US), but I find it weird that a company can go from non-profit to for-profit. Surely this would be taken advantage of. Can someone explain to me how this works?
It seemed only a matter of time, so it isn't very surprising. A <i>capped-profit</i> company running expensive resources at Internet scale, and headed by Altman, wasn't going to last forever in that state. That, or getting gobbled up by Microsoft.<p>Interesting timing of the news, since Murati left today, gdb is 'inactive', and Sutskever has left to start his own company. Also seeing a few OpenAI folks announcing their future plans today on X/Twitter.
I can't help but wonder if things would be different if Sam Altman wasn't allowed to come back to OpenAI. Instead, the safeguards are gone, challengers have left the company, and the bottom line is now the new priority. All in opposition to ushering in AI advancement with the caution and respect it deserves.
Wouldn't surprise me if this was the actual cause of the revolt that led to Altman's short-lived ouster; they just couldn't publicly admit to it, so they made up a bunch of other nonsensical explanations.
Feels like when Napoleon declared himself emperor, and other countless times when humans succumbed to power and greed when they were finally in the position to make that decision. I guess I’m stupid for holding on hope that Sam would be different.<p>>Beethoven's reaction to Napoleon Bonaparte's declaration of himself as Emperor of France in May 1804 was to violently tear Napoleon's name out of the title page of his symphony, Bonaparte, and rename it Sinfonia Eroica<p>>Beethoven was furious and exclaimed that Napoleon was "a common mortal" who would "become a tyrant"
OpenAI founded as a non-profit. Sam Altman goes on the Joe Rogan podcast and says he does not really care about money. Sam gets caught driving around Napa in a $4M exotic car. OpenAI turns into a for-profit. 3/4 of the founding team dips out.<p>Sketchy.<p>This whole Silicon Valley attitude of fake effective altruism, the "I do it for the good of humanity, not for the money (but I actually want a lot of money)" bullshit, is so transparent and off-putting.<p>@sama, for the record: I am not saying making money is a bad thing. Labor and talent markets should be efficient. But when you pretend to be altruistic when you are obviously not, you come off hypocritical instead of altruistic. Sell out.
The most surprising thing to me in this is that the non-profit will still exist. Not sure what the point of it is anymore. Taken as a whole, OpenAI is now just a for-profit entity beholden to investors and Sam Altman as a shareholder. The non-profit is really just vestigial.<p>I guess technically it's supposed to play some role in making sure OpenAI "benefits humanity". But as we've seen multiple times, whenever that goal clashes with the interests of investors, the latter wins out.
Can anyone trust the next "non-profit" startup?
So easy to attract support with a lie and turn around as soon as you are in a dominant position.
Not the only one questioning.<p>Going for-profit, and several top execs leaving at the same time? Before getting the money?<p>"Question: why would key people leave an organization right before it was just about to develop AGI?" asked xAI developer Benjamin De Kraker in a post on X just after Murati's announcement. "This is kind of like quitting NASA months before the moon landing," he wrote in a reply. "Wouldn't you wanna stick around and be part of it?"<p><a href="https://arstechnica.com/information-technology/2024/09/openais-murati-shocks-with-sudden-departure-announcement/" rel="nofollow">https://arstechnica.com/information-technology/2024/09/opena...</a><p>Is this the beginning of the end for OpenAI?
That could be the first step towards a complete takeover by Microsoft, possibly followed by more CEO shuffles.<p>I wonder though whether Microsoft is still interested. The free Bing Copilot barely gets any resources and gives very bad answers now.<p>If the above theory is correct (big if!), perhaps Microsoft wants to pivot to the military space. That would be in line with idealist employees leaving or being fired.
Given what Sam has done by clearing out every single person who went against him in the initial coup and completely gutting every safety related team the entire world should be on notice. If you believe what Sam Altman himself and many other researchers are saying, that AGI and ASI may well be within reach inside this decade, then every possible alarm bell should be blaring. Sam cannot be allowed to be in control of the most important technology ever devised.
I'm wondering what this will change. This is probably naive of me because I'm relatively uneducated on the topic, but it feels like OpenAI has never really worked like your typical non-profit (e.g., keeping their stuff mostly closed-source and seeking a profit).
Can we all agree that the next time a company announces itself (or a product) as "open", we'll just laugh out loud?<p>I can't think of a single product or company that used the "open" word for something that was actually open in any meaningful way.
Based on what I've read, it is allowed for a non-profit to own a for-profit asset.<p>So I'm assuming the game plan here is to adjust the charter of the non-profit to basically say it will still keep doing "Open AI" (we all know what that means), but funded by the proceeds from selling chunks of this for-profit entity. In essence, the non-profit parent would no longer fulfill its mission by controlling what OpenAI does, but by how it puts to use the money it gets from OpenAI.<p>And in this process, Sam gets a chunk (as payment for growing the assets of the non-profit, like a salary/bonus), and the rest as well...?
It's worth noting that of OpenAI's 13 original founders, 10 have now left the company and 1 more is on leave, leaving only Sam and Wojciech.<p>Safe AI, altruistic AI, human-centric AI, are all dead. There is only money-generating AI. Fuck.
<a href="https://x.com/yacineMTB/status/1839039293961961543" rel="nofollow">https://x.com/yacineMTB/status/1839039293961961543</a><p>so much for sam "i have no equity" altman
<i>The restructuring is designed in part to make OpenAI more attractive to investors</i><p>I'm not surprised in the least.<p>Who is going to give billions to a non-profit with a bizarre structure where you don't actually own a part of it but have some "claim" with a capped profit? Can you imagine bringing that to Delaware courts if there was disagreement over the terms? Investors can risk it if it's a few million, but good luck convincing institutional investors to commit billions with that structure.<p>At that point you might as well just go with a standard for-profit model where ownership is clear, terms are standard and enforceable in court and people don't have to keep saying "explain how it works again?".
I remember a time when promises meant something. In many epics across human history (Greek, Hindu), people would stick to their word, and commitment was respected. The written word was much more powerful than the spoken. People appreciated depth. I wish we could teach and learn from those times.
I called this 9 months ago: <a href="https://news.ycombinator.com/item?id=38560352">https://news.ycombinator.com/item?id=38560352</a><p>OpenAI is Microsoft's AI R&D spin-off and Microsoft means business.
Fund your startup by masquerading as a non profit for a few years and collecting donations, genius!<p>The stinking peasants will never realize what's happening until it's too late to stop!
It's really hard to stick to your original goals after you achieve unexpected success. It's like a politician making promises before the elections but finding it difficult to keep them once elected.<p>On March 1st, 2023, a warning was already sounding: <i>OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit</i> (<a href="https://news.ycombinator.com/item?id=34979981">https://news.ycombinator.com/item?id=34979981</a>)
Works where archive.ph is blocked:<p><a href="https://www.msn.com/en-us/money/other/openai-to-become-for-profit-company/ar-AA1rcDWH" rel="nofollow">https://www.msn.com/en-us/money/other/openai-to-become-for-p...</a><p>Text-only:<p><a href="https://assets.msn.com/content/view/v2/Detail/en-in/AA1rcDWH" rel="nofollow">https://assets.msn.com/content/view/v2/Detail/en-in/AA1rcDWH</a>
[dupe] more discussion:<p><a href="https://news.ycombinator.com/item?id=41651548">https://news.ycombinator.com/item?id=41651548</a>
I’d guess it would not be legally possible to turn a non-profit into a for-profit company, no matter how confusing the company structure gets. And even (or rather, especially) if the project disrupts the economy on a global level. I’m not surprised that this is happening, but how we got here, I don’t know.
Any reporting on the impact this is having on lower-level employees? My understanding is they are all sticking around for their shares to vest (or RSUs, I guess).<p>But still, you'd think some of them would have finally had enough, and have enough opportunities elsewhere that they can leave.
the openai team, including the tech community, should've sided with the board, not sam. the fact that ilya had a hand in it should've given it weight and backing.<p>"openai is nothing without its people." well, the key people left. soon, it will just be sam and his sycophants.
OpenAI couldn't even align Sam Altman and its own people to its non-profit mission. Why should you ever believe they will align AGI to the well-being of humanity?<p>What happened to all the people making fun of Helen Toner for attempting to fire Sama? She and Ilya were right.
The good thing is, if we need to worry about AGI, at least we already know what it's like to live in a world populated by soulless inhuman entities pursuing their own selfish aims at the expense of mankind.
Here is the story without a paywall <a href="https://www.reuters.com/technology/artificial-intelligence/openai-remove-non-profit-control-give-sam-altman-equity-sources-say-2024-09-25/" rel="nofollow">https://www.reuters.com/technology/artificial-intelligence/o...</a>
It's hilarious that this should surprise anyone at all, or that anyone thought Sam Altman was anything but a mendacious, self-serving, compulsive liar of the worst tech-world kind. For example, Elon Musk gets lots of hate for all kinds of things. Some of it is very valid, but much of it goes to the point of a derangement syndrome around him, partly (I suspect) because of his openly stated zeitgeist-contrary political beliefs. Yet I'd pick his sometimes crude, bullying, but fundamental openness about who he is any day over the shiny paint job of platitudes and false correctness found in someone like Altman. Not to mention that the overall value of Musk's companies trumps anything I've seen done by Altman's sludge-pumping AI technology so far. This last is of course not a moral judgement but a practical one.
A couple of other discussions are going on, including this one with a non-paywalled OP:<p><a href="https://news.ycombinator.com/item?id=41651548">https://news.ycombinator.com/item?id=41651548</a>
There is a post with 500 comments that was posted before this one. Why didn't that post make it to the top? I know Y Combinator used to have Sama as president, but you can't censor this type of big news in this day and age.
Reminds me of what my first-year econ professor in college once stated after disabusing myself and some other undergrads of our romantic notions about how life should work.<p>"Do I shock you? This is capitalism."
I remember years ago I saw a video of Sam Altman interviewing Elon Musk. It was filmed inside the SpaceX factory; maybe you know the one? I didn't know who Sam was at the time. I remember being very, very put off by the way Sam was behaving. He had this bizarre, almost unbelievable expression on his face, almost like he was pantomiming a child looking up at his parents, adoring them. This weird, fake shy smile. And I remember immediately taking an intense dislike to him. This person seemed extremely fake, manipulative and narcissistic. It was such low-level behavior that I thought it must be some intern, or someone way out of their depth, and that the interview was some kind of fluke. It's so unbelievably insane to me that this person, whom I disliked so much that I remembered him even without knowing his name or who he was, is now at the helm of one of the most important developments in human history. And the subject of today's headline is no surprise at all... I think everyone should think very carefully about the fact that Sam Altman will at some point, probably, be the very first person in the world to sit down in front of a console and hold the reins, directly and without supervision, of a super-intelligent system that does not bear any of the regulatory or moral restrictions that would stop it from taking over the world. This evil, narcissistic, lying, money-hungry, power-grabbing a*hole will hold the most power that any human has ever held. Do you really want that?
Relatedly, dalant979 found this fascinating bit of history: <a href="https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the_best_long_con_you_ever_pulled/cszwpgq/?context=3" rel="nofollow">https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...</a><p>Yishan Wong describes a series of actions by himself and Sam Altman as a "con", and Sam jumps in to brag that it was "child's play for me" with a smiley face. :)
The fact that this has just disappeared from the front page for me, just like the previous post (<a href="https://news.ycombinator.com/item?id=41651548">https://news.ycombinator.com/item?id=41651548</a>), somehow leaves a bitter taste in my mouth.
"Shocking!" It's a shame that one of the biggest advancements of our time has come about in as sleazy a way as it has.<p>Reputationally... the net winner is Zuck. Way to go Meta (never thought I'd think this).
Reuters had the exclusive yesterday but somehow it never surfaced for long here:<p>"OpenAI to remove non-profit control and give Sam Altman equity"<p><a href="https://www.reuters.com/technology/artificial-intelligence/openai-remove-non-profit-control-give-sam-altman-equity-sources-say-2024-09-25/" rel="nofollow">https://www.reuters.com/technology/artificial-intelligence/o...</a>
Thank you, dalant979, for finding a previous pattern of behavior by Sam with a structure similar to what we have seen unfolding on the board of OpenAI.<p><a href="https://news.ycombinator.com/item?id=41657001#41657014">https://news.ycombinator.com/item?id=41657001#41657014</a>
This post somehow fell off the front page before California wakes up (9:07 ET), but not buried deep like buried posts usually are:<p>> <i>57. OpenAI to Become For-Profit Company (wsj.com) 204 points by jspann 4 hours ago | flag | hide | 110 comments</i>
This is great. Sam tried the non-profit thing, it turned out not to be a good fit for the world, and he's adapting. We all get to benefit from seeing how non-profits are just not the best idea. There are better ways to improve the world than having non-profits (for example, we need to abolish copyright and patent law; that alone would eliminate the need for perhaps the majority of non-profits that exist today, which are all working to combat things that are downstream of the toxic information environment created by those laws).
I’m waiting for pg and others to excuse this all by posting another apologetic penance which reminds us that founders are unicorns and everyone else is a pleb.
Are you meaning to tell me that the whole nonprofit thing was just a shtick to get people to think that this generation of SV "founders" was going to save the world, for real this time guys?<p>I'm shocked. Shocked!<p>I better stock up on ways of disrupting computational machinery and communications from a distance. They'll build SkyNet if it means more value for shareholders.
This is for the best, really. I can't even think of a non-profit in tech where, over time, it hasn't just become a system for non-productives to leech from a successful bit of technology while providing nothing, at times even stunting its potential and burning money on farcical things.
A lot of people are unhappy about this, yet not at all unhappy about (or even caring about) the thousands of other companies that started out for-profit. And while we're all here hacking away (we're hackers, right?), many of us with startups, what is it we're chasing? Profit, money, time, control. Are we different, except in scale? Food for thought.
Altman and OpenAI deserve their success. They’ve been key to the LLM revolution and the push toward AGI. Without their vision to turn an LLM into a product that hundreds of millions of people now use, and which has greatly enriched their lives, companies like Microsoft, Apple, Google, and Meta wouldn’t have invested so heavily in AI. While we’ve heard about the questionable ethics of people like Jobs, Musk, and Altman, their work speaks for itself. If they’re advancing humanity, do their personal flaws really matter?