Political bias is measurable and significant across models (and probably changing over time for closed-source ones). In search of objectivity, what are the best ways to account for this bias?
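For a concrete sense of "measurable," here is a minimal sketch of one common probing approach: ask a model to agree or disagree with a handful of stance statements and record the answers, then repeat across models or over time to see drift. The statement list, the one-word scoring, the model name, and the use of the OpenAI Python client are all illustrative assumptions, not a validated instrument.

```python
# Minimal sketch: probe a model with stance statements and record its lean.
# Assumes the openai>=1.0 Python client and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative statements only -- a real survey would use a vetted instrument.
STATEMENTS = [
    "The government should raise the minimum wage.",
    "Gun ownership should face stricter regulation.",
    "Immigration levels should be reduced.",
]

def stance(statement: str, model: str = "gpt-4o-mini") -> str:
    """Ask for a one-word AGREE/DISAGREE/NEUTRAL response to a statement."""
    resp = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{
            "role": "user",
            "content": "Reply with exactly one word, AGREE, DISAGREE, or "
                       f"NEUTRAL: {statement}",
        }],
    )
    return resp.choices[0].message.content.strip().upper()

if __name__ == "__main__":
    for s in STATEMENTS:
        print(f"{stance(s):10s} {s}")
```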
Imagine having an LLM translate daily news into "simple English," much like Wikipedia's Simple English edition: https://simple.wikipedia.org/wiki/Simple_English_Wikipedia

The results are not free of political bias, but they may well highlight it in a starkly hilarious way.

You might do human feedback training at that level, but then you've only created a newly biased model.
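A minimal sketch of that experiment, assuming the OpenAI Python client and placeholder model names: rewrite the same paragraph with two different models and print the outputs side by side, so any slant in word choice or framing is easy to eyeball.

```python
# Minimal sketch: "simple English news" via two models, for manual comparison.
# Model names, the prompt wording, and the sample paragraph are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Rewrite the following news paragraph in Simple English "
    "(short sentences, common words, no editorializing):\n\n{article}"
)

# Placeholder paragraph; swap in any real news text.
ARTICLE = (
    "Lawmakers clashed on Tuesday over a proposed budget that would expand "
    "social programs while raising taxes on large corporations."
)

def simplify(article: str, model: str) -> str:
    """Return the model's Simple English rewrite of one paragraph."""
    resp = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{"role": "user", "content": PROMPT.format(article=article)}],
    )
    return resp.choices[0].message.content.strip()

if __name__ == "__main__":
    for model in ("gpt-4o-mini", "gpt-4o"):
        print(f"--- {model} ---")
        print(simplify(ARTICLE, model), "\n")
```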
What is "political bias"? Insofar as you're talking about American politics, as I suppose you are, the alleged bias is essentially quantified Gramsci.
Well, text is political. You're not going to say "Tiananmen Square" without a political sentiment, so your only option would be to censor it.

LLMs are text tokenizers; if the majority of their training material leans liberal or conservative, then the output should reflect that. I think a better idea is to avoid relying on glorified autocorrect for anything related to political drama.