The important parts from the Twitter thread:

> I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point.

> I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.

> These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there.

Damn. This pretty much confirms everyone's fears that Altman is going full steam ahead without much care for safety anymore.

I'm not sure what to do about it, though.