As smart as the people at OpenAI are, they sure do some incredibly stupid things.<p>I guess it proves that this "company" is run by scientists and not business people, which I guess is reassuring?
OpenAI has been bleeding money for a while [0]. Why would it be so crazy that the board wants heads to roll? The craziest part really is that he was literally allowed to give a keynote like a week ago (a good one), but of course, had they sacked him beforehand... imagine what kind of message that would have sent. Instead, he got sacked after ANNOUNCEMENTS WERE MADE!<p>0: <a href="https://futurism.com/the-byte/openai-losing-money-chatgpt" rel="nofollow noreferrer">https://futurism.com/the-byte/openai-losing-money-chatgpt</a>
If this was a division over the safety of models, the safest thing they could do is release older models as soon as they release a new one.<p>If they release GPT-5 (closed), then they should release GPT-3.5 with its weights, stack, and documentation.<p>Real research could be done to see what safety lessons can be learned from it, since the successors are quite similar. For example, GPT-3.5 was still being updated with features as of Dev Day, and runs on a tech stack quite similar to the newest models'.<p>--> What no one on Earth wants to bet on is a few people in San Francisco figuring safety out, when the rest of the world could be working on it. <--<p>If a model is leaked to the public, you have no time to prepare for safety. The damage is done.<p>If a model is kept forever behind closed doors, you are not following your mission of safe AGI. This week only proves that overnight moves can be made that destabilize who has control over the most powerful models. At least an open release of older models shows your dedication to both innovation and safety.<p>I'd bet that since Microsoft has a copy of the stack and weights, and since the US govt has its own Azure cloud instance, the US already has a copy of OpenAI's technology. So it's not going to die if OpenAI crumbles. It just ends up in two different sets of hands.<p>---<p>And this is just a bare-minimum step. That it wasn't even considered suggests there is a larger problem than ideological division. It looks more like a power grab.<p>*As much as people think that releasing models is a safety risk, it might just be the best thing you can do.*<p>---<p>Economical steps for a release that satisfies both safety and investment (don't give away your moat):<p>Current model: GPT-4; Faster/cheaper option: GPT-3.5; Fully release GPT-3<p>Current model: GPT-5; Faster/cheaper option: GPT-4; Fully release GPT-3.5<p>Current model: GPT-6; Faster/cheaper option: GPT-5; Fully release GPT-4
So why the spectacle? Why the melodramatic press release and thoughtless communication with investors?<p>I will not shed a tear for Altman or Microsoft, but this fantastic Friday massacre doesn't seem to line up with the stated cause.