If this were a division over the safety of models, the safest thing they could do is release older models as soon as a new one ships.

If they release GPT-5 (closed), then they should release GPT-3.5 with its weights, stack, and documentation.

Real safety research could then be done on it, since the successors are quite similar: GPT-3.5 was still receiving feature updates as of Dev Day, and it shares much of its tech stack with the newest models.

--> What no one on Earth wants to bet on is a few people in San Francisco figuring out safety, when the rest of the world could be working on it. <--

If a model is leaked to the public, you have no time to prepare for safety. The damage is done.

If a model is kept forever behind closed doors, you are not following your mission of safe AGI. This only proves that overnight moves can be made that destabilize who controls the most powerful models. An open release of older models at least demonstrates a commitment to both innovation and safety.

I'd bet that since Microsoft has a copy of the stack and weights, and the US government has its own Azure cloud instance, the US already has a copy of OpenAI's technology. So it's not going to die if OpenAI crumbles; it just ends up in two different sets of hands.

---

And this is just a bare-minimum step. Refusing even to consider it suggests a problem larger than ideological division. It looks more like a power grab.

*As much as people think that releasing models is a safety risk, it might just be the best thing you can do.*

---

Economical steps for a release that satisfies both safety and investment (without giving away your moat):

- Current model: GPT-4; faster/cheaper option: GPT-3.5; fully release GPT-3
- Current model: GPT-5; faster/cheaper option: GPT-4; fully release GPT-3.5
- Current model: GPT-6; faster/cheaper option: GPT-5; fully release GPT-4