Wow. Screw non-profit, we want to get rich.<p>Sorry guys, but before this you were probably able to attract talent that isn't (primarily) motivated by money. Now you are just another AI startup.
If the cap were 2x, it could still make sense. But 100x? That's laughable! And the split board, made up of friends and closely connected people, smells like "greenwashing" as well.
Don't get me wrong, it's totally ok to be an AI startup. You just shouldn't pretend to be a non-profit then...
I wouldn't be surprised if OpenAI had some crazy acquisition in its future by one of the tech giants. The press release says 'We believe the best way to develop AGI is by joining forces with X and are excited to use it to sell you better ads. We have also turned the profits we would have paid taxes on over to a non-profit that pays us salaries for researching the quality of sand in the Bahamas.'
I was buying it until he said that profit is “capped” at 100x the initial investment.<p>So someone who invests $10 million has their return “capped” at $1 billion. Lol. Basically unlimited, unless the company grows to a FAANG-scale market value.
Really neat corporate structure! We'd looked into becoming a B-Corp, but the advice that we'd gotten was that it was an almost strictly inferior vehicle for us, both for achieving impact and for potentially achieving commercial success. I'm obviously not a lawyer, but it's great to see OpenAI contributing new, interesting structures for solving hard, global-scale problems.<p>I wonder if the profit-cap multiple is going to end up being a significant signalling risk for them. A down-round is such a negative event in the valley; I can imagine an "increasing profit multiple" would have to be treated the same way.<p>One other question for the folks at OpenAI: How would equity grants work here? You get X fraction of an LP that gets capped at Y dollars of profit? Are the fractional partnerships transferable once earned into?<p>Would you folks think about publishing your docs?
They were able to attract talent and PR in the name of altruism and here they are now trying to flip the switch as quietly as possible. If the partner gets a vote/profit then a "charter" or "mission" won't change anything. You will never be able to explicitly prove that a vote had a "for profit" motive.<p>Elon was irritated that he was behind in the AI intellectual property race and this narrative created a perfect opportunity. Not surprised in the end. Tesla effectively did the same thing - "come help me save the planet" with overpriced cars. [Edit: Apparently Elon has left OpenAI but I don't believe for a second that he will not participate in this LP]
First not publishing the GPT-2 model, now this... Hopefully I'm wrong, but it looks like they are heading towards being a closed-off, proprietary AI money-making machine. This further incentivizes them to be less transparent and not open-source their research. :(
OpenAI's mission statement is to ensure that AGI "benefits all of humanity", and its charter rephrases this as "used for the benefit of all".<p>But without a more concrete and specific definition, "benefit of all" is meaningless. For most projects, one can construct a claim that it has the potential to benefit most or all of a large group of people at some point.<p>So, what does that commitment mean?<p>If an application benefits some people and harms others, is it unacceptable?
What if it harms some people now in exchange for the promise of a larger benefit at some point in the future?<p>Must it benefit everyone it touches and harm no one? What if it harms no one but the vast majority of its benefits accrue to only the top 1% of humanity?<p>What is the line?
Greg, you seem to be answering questions here, so I have one for you:<p>This change seems to be about ease of raising money and retaining talent. My question is: are you having difficulty doing those things today, and do you project having difficulty doing so in the foreseeable future?<p>I'll admit I'm skeptical of these changes. Creating a 100x profit cap significantly (I might even say categorically) changes the mission and value of what you folks are doing. Basically, this seems like a pretty drastic change, and I'm wondering if the situation is dire enough to warrant it. There's no question it will be helpful in raising money and retaining talent; I'm just wondering if it's worth it.
Yes or no: will you remain a registered non-profit organization (a 501(c)(3)-type org or similar), or were you ever one? It's fine to call yourself a non-profit, but if you don't have to abide by the rules governing them, then you aren't one, period.<p>I think all of us here are tired of "altruistic" tech companies which are really profit mongers in disguise. The burden is on you all to prove this is not the case (and this doesn't really help your case).
They are looking to raise billions and cap returns at 100x? That means returns will be capped in the trillions. So if they raise $5bn, they'd need to generate $500bn before money starts flowing to the non-profit organization.<p>More like: if we make enough money to own the whole world, we'll give you some food so you don't starve.
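For concreteness, here's a toy sketch of that capped-return arithmetic (my own simplification; the actual waterfall terms aren't public, and the numbers are hypothetical). Returns up to the cap go to investors; only the excess flows to the non-profit:

    # Toy model of a capped-profit split (hypothetical, simplified terms).
    def split_returns(raised, total_returns, cap_multiple=100):
        """Return (investor_share, nonprofit_share) under a hard return cap."""
        cap = raised * cap_multiple
        investor_share = min(total_returns, cap)       # investors keep everything up to the cap
        nonprofit_share = max(total_returns - cap, 0)  # only the excess reaches the non-profit
        return investor_share, nonprofit_share

    # $5B raised at a 100x cap: the non-profit sees nothing until returns pass $500B.
    print(split_returns(5e9, 400e9))  # (400000000000.0, 0)
    print(split_returns(5e9, 600e9))  # (500000000000.0, 100000000000.0)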
Reactions on Reddit seem different from here - <a href="https://redd.it/azvbmn" rel="nofollow">https://redd.it/azvbmn</a>
Genuine question: is this restructuring for the purpose of taking government military contracts? I don't see how investors would be getting 100x returns otherwise, and my understanding was that employee salaries were already competitive with the big tech companies. Curious where OpenAI feels there's money to be made.
OpenAI is slowly but surely turning into just another for-profit AI company. They are slowly killing all the original ideals that made OpenAI unique among the hundreds of AI startups. They should just rebrand.<p>And they are unironically talking about creating AGI. AGI would be awesome, of course, but maybe that is a tiny little bit overconfident?
OK, so when OpenAI was still a straight non-profit, the Charter made sense in that context and there wasn't much need to specify it any further.<p>Now, with OpenAI leaving the non-profit path, the Charter's content, fuzzy as it is, is 100% up for interpretation. It does not specify what "benefit of all" or "undue concentration of power" means concretely.<p>So at this point the trust I can put in this Charter is about the same as the trust I can put in Google's "Don't be evil"...
Will investing be open to all accredited investors or just a handpicked selection? Opening a crowdsourced investment opportunity would be in line with your vision to democratize the use of AI. The more people that have a non-operational ownership stake in Open AI the better.
Is there something about the mystical nature of AGI that attracts sketchiness and flim-flammery? I remember the "Singularity Institute for Artificial General Intelligence" trying to pull similar scams a decade ago.
They say they have started this new form of company because there is no suitable "pre-existing legal structure."<p>But there is precedent for investing billions of dollars into blue-sky technologies and still spreading the wealth and knowledge gained: government investment in science. It has built silicon chips, battery technologies, and... well, quite a lot.<p>Is this company planning on "fundamental" research (anti-adversarial, "explainable" outcomes?), and why do we think government investment is not good enough?<p>Or, worryingly, are the major tech leaders now so rich that they can honestly take on previously governmental roles (with only the barest of nods to accountability and any legal obligation to return value to the commons)?<p>I am a bit scared that it's the latter, and even then this is too expensive for any one firm alone.
I was very much behind the mission; now I’m not so sure. If it was this easy for OpenAI to start down this path, think of what Amazon or Facebook will do - people with no moral compass whatsoever. It’s probably not too early to start thinking about government regulation.
Presumably OpenAI created a lot of IP with donor dollars under the original nonprofit entity. Who owns that IP now? I imagine it got appraised and sold by the original nonprofit to the new OpenAI LP. That seems like a difficult process, given no one really knows what this type of IP is worth. If this is what happened, who did that appraisal and how was it done?<p>If no IP was sold to the new OpenAI LP because some or all of the IP created under the original nonprofit OpenAI was open sourced, will the new OpenAI LP continue that practice?
Very cool idea. Like some others here, I really appreciate attempts to create new structures for bringing ideas to the world.<p>Since I'm not a lawyer, can you help me understand the theoretical limits of the LP's "lock-in" to the Charter? In a cynical scenario, what would it take to completely capture OpenAI's work for profit?<p>If the Nonprofit's board were 60% people who want to break the Charter, would they be capable of voting to do so?
To the OpenAI team: this is not right, but it's very well played.<p>You guys raised free money in the form of grants, acquired the best talent in the name of a non-profit whose purpose is saving humanity, staged publicity stunts that are actually hurting science and the AI community, and took the first steps against reproducibility by not releasing GPT-2 so you can further commercialize your future models.<p>Also, you claim that the non-profit board retains full control, but it seems the same seven white men on that board are also on the board of your for-profit company and have a strong influence there.<p>Call it what you want, but I think this was planned out from day one. Now you guys have won the game. It's just a matter of time before you dominate the AI game, keep manipulating us, and appear on the Forbes list.<p>Also, I expect you guys will dislike this comment instead of having an actual dialogue and discussion.
> We are traveling a hard and uncertain path, but we have designed our structure to help us positively affect the world should we succeed in creating AGI—which we think will have as broad impact as the computer itself and improve healthcare<p>Grammar--would change to: <i>as broad an impact</i>
Could some random regular person who is an accredited investor under US rules (e.g., a non-US person) invest, say, $10,000 in this venture as a minor investor/contributor? Or is OpenAI LP only interested in much larger investment amounts?
OTOH, it is exciting to see people who are not Google/Facebook/Uber entering the for-profit race. Perhaps they'll feel some competition over real products now. (But the "100x cap" thing is just childish.)
One object lesson in how this can go wrong: REI.<p>This "cooperative" ostensibly elects its board. In reality, nomination by the existing REI board is the only way to stand for election by the REI membership, and when you vote you can only mark "For" the nominated candidates (there's no information on how to vote against, though at another time they indicated that the alternative was "Withhold vote"). While the board members don't earn much, there is a nice path from board member to REI executive... which can pay as much as $2M/year for the CEO position.
Interesting; I'm super tempted to apply to that Mechanical Engineer opening. How exactly does OpenAI make money, though? Is it sponsored, or is there external investment (can you invest in a non-profit?)?
I don't think we can guarantee AGI will benefit all humanity, open-sourcing it may help but not necessarily. My heart actually sinks when I read that mission statement on this page, it's like in the movies where the guy has a gun to someone's head and gets them to give up the information they know before blowing their brains out.
Is there any indication of what avenues OpenAI will be (or would consider) using to generate revenue? A lot of the most financially lucrative opportunities for AI (surveillance/tracking, military) are morally ambiguous at best.
Without an obviously stated business model to satisfy the investor returns, it’s hard to take the values platitudes seriously. Do you plan to make your pitch deck public? That’d help.
I admire the attempt to create a sustainable project that's primarily about creating a positive impact!<p>For those (including myself) who wonder whether a 100x cap will really change an organization from being profit-driven to being positive-impact-driven:<p>How could we improve on this?<p>One idea is to not allow investors on the board. Investors are profit-driven. If they're on the board, you'll likely get pressure to do things that optimize for profit rather than for positive impact.<p>Another idea is to make monetary compensation based on some measure of positive impact. That's one explicit way to optimize for positive impact rather than money.
Why the Limited Partnership at all? What can the nonprofit "Inc" do through the for-profit "LP" shell that it could not do in its own right?
Hey Greg,<p>Since you seem to be answering questions in this thread, here's one:<p>How does OpenAI LP's structure differ from that of an L3C (low-profit limited liability company)?
There are a couple of comments on the theme that this takes a non-profit into a for-profit company, and that that is a bad thing.<p>I'd like to offer an alternate opinion: non-profit operating models are generally ineffective compared to for-profit operating models.<p>There are many examples.<p>* Bill Gates is an easy one: make squillions as a merciless capitalist, then turn that into a very productive program of disease elimination, and apparently energy security nowadays.<p>* Open source is another good one, in my opinion: even when they literally give the software away, many of the projects leading their fields (e.g., Google Chrome, Android, PostgreSQL, the Linux kernel) draw heavily on sponsorship by for-profit companies using them to further their profits, even if the steering committee is nominally non-profit.<p>* I have examples outside software, but they are all a bit complicated to type up. Things like China's rise.<p>It isn't that there isn't a place for researchers who are personally motivated to do things; there is just a high correlation between something making a profit and it getting done to a high standard.
Mission-oriented for-profit companies are an oxymoron. Profit comes from competing in markets, and markets determine what you end up doing. That's why I'm always skeptical of "Don't Be Evil" types of missions: when you're starting out, you can't even imagine what market pressure will end up making you do.<p>Between the market pressures from investors, employees, and competitors, to what extent can a company really stay true to its mission and deny potential profit that conflicts with it?<p>Also, it's hard to root for specific for-profit companies (although I'm rooting for capitalism per se).
Is it just me, or are there not many African Americans working in AI research and industry? I don't have stats to back me up, but that's my personal observation. People in the field, what are your thoughts?
And Khosla Ventures is one of their key investors.<p>Let's not forget that Khosla himself does not exactly care about public interest or existing laws <a href="https://www.google.com/amp/s/www.nytimes.com/2018/10/01/technology/california-beach-access-khosla.amp.html" rel="nofollow">https://www.google.com/amp/s/www.nytimes.com/2018/10/01/tech...</a>