> To summarize: The company “in charge” of protecting us from harmful AIs decided to let people use a system capable of engaging in disinformation and dangerous biases so they could pay for their costly maintenance. It doesn’t sound very “value-for-everyone” to me.<p>Ever since the AI hype started this year, one thing that's always really bugged me is the talk about "safety" around AI. Everyone is so worried about AI's ability to write fake news and how "dangerous" that can be, while forgetting that I can go on Fiverr and pay someone in India, China, etc. to pump out article after article of fake news for pennies on the dollar.<p>I also hate the talk of "oh wow, look how harmful the AI is, it made a naughty joke". I think of harm as being mugged, beaten up, or shot. Harm is not some AI program telling a joke that could potentially offend someone.<p>All you end up with is an AI so kneecapped that it's barely useful outside a select number of use cases. Can't write an article because it might be fake news, can't write an essay because it might be an assignment, can't solve that homework problem because you might be cheating, can't ask it for a joke because the joke might be offensive.