Am I the only one who thinks this? That training ChatGPT and other AIs to give answers that validate particular ideologies will be the next front in the culture war?
"Did you train the model using real world data?"<p>"Yes. We want the model to be useful in real world applications."<p>"Then it is biased. The model is biased because data it was trained on was generated by people and people are biased. There is no such thing as an 'objective' model, just a model that is biased in a different way."
Is it <i>really</i> that different from news publications publishing a story that validates various ideologies? As long as people don't mistake AI text for conscious commentary, I don't think either is more dangerous than the other.
> Am I the only one who thinks this?<p>No, go read posts in r/conservative about ChatGPT. They’re convinced it has a liberal bias. Pretty soon we will have chatbots that reinforce whatever worldview you want to subscribe to.