On their "OpenAI Charter", they list several basic principles they'll use to achieve the goal of safe AGI, including this one, which I find pretty interesting:<p>>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”<p>If I'm reading that correctly, it means that later on if/when some company is obviously on the cusp of AGI, OpenAI will drop what they're doing and start helping this other company so that there isn't a haphazard race to being the first one, which could result in unsafe AGI. That sounds like a well-intentioned idea that <i>could</i> cause more problems in practice. For instance, if there are multiple companies with almost equal footing, then combining forces with one of them would give a sense of even stricter deadline to the other ones, possibly making the development even less safe.<p>Also, they only mention assisting "value-aligned, safety-conscious" projects, which seems pretty vague. Just seems like they should give (and perhaps have given) more thought into that principle.