Between this article and others that I have read, it's difficult for me to not see the term 'AI Safety' as mere newspeak.<p>Why is this term so vague everywhere it is used?
I somehow missed this:<p>> “Building smarter-than-human machines is an inherently dangerous endeavour. OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” Leike wrote.<p>Leike clearly did the right thing by resigning, GPT-4o is dangerous and irresponsible. But if that tweet is how OpenAI employees actually think of themselves and their technology... yeesh.
I don’t know if others have noticed, but GPT-4o doesn’t have the preachiness and moral smugness that earlier GPT models had.<p>The earlier ChatGPT models were very quick to call a request unsafe or unethical and refuse to help.<p>GPT-4o is a breath of fresh air compared to that. If this improvement was a result of people like Leike resigning - then good riddance.
ChatGPT has a several-paragraph-long hardcoded system prompt teaching it all about how to be mindful of DEI. And ChatGPT is not "smarter-than-human." This argument has the same ring as "violent games make kids violent".