"When OpenAI launched, it characterized the nature of the risks - and the most appropriate strategies for reducing them - in a way that we disagreed with. In particular, it emphasized the importance of distributing AI broadly; our current view is that this may turn out to be a promising strategy for reducing potential risks, but that the opposite may also turn out to be true (for example, if it ends up being important for institutions to keep some major breakthroughs secure to prevent misuse and/or to prevent accidents). Since then, OpenAI has put out more recent content consistent with the latter view, and we are no longer aware of any clear disagreements."

Really, really happy to see this being carefully considered. Good job to the Open Philanthropy folks!

EDIT: That Slate Star link is amazing: "Both sides here keep talking about who is going to “use” the superhuman intelligence a billion times more powerful than humanity, as if it were a microwave or something."