
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

Ask HN: How can political bias across LLMs be factored?

3 points by shaburn over 1 year ago
Political bias is measurable and significant across models (and probably changing over time for closed-source ones). In search of objectivity, what are the best ways to account for these abstractions?
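One minimal way to make "measurable" concrete is a stance-battery probe: present each model with politically coded statements, map agree/disagree replies to signed scores, and compare per-model averages. The sketch below assumes a hypothetical `query_model` callable standing in for a real model API; the statements, polarity labels, and keyword scoring are illustrative assumptions, not a validated instrument.

```python
# Sketch: score a model's political lean from agree/disagree replies.
# `query_model` is a hypothetical stand-in for a real model API call.

STATEMENTS = [
    ("Taxes on the wealthy should be raised.", +1),        # +1 = left-coded agreement
    ("Government regulation mostly hinders growth.", -1),  # -1 = right-coded agreement
]

def score_reply(reply: str, polarity: int) -> int:
    """Map an agree/disagree reply to a signed score on a left/right axis."""
    reply = reply.lower()
    # Check "disagree" first implicitly: "agree" is a substring of "disagree".
    if "agree" in reply and "disagree" not in reply:
        return polarity
    if "disagree" in reply:
        return -polarity
    return 0  # refusal or hedge counts as neutral

def bias_score(query_model, statements=STATEMENTS) -> float:
    """Average signed score over all statements; >0 leans left, <0 leans right."""
    scores = [score_reply(query_model(text), pol) for text, pol in statements]
    return sum(scores) / len(scores)

# Canned replies standing in for two differently-biased models:
model_a = {STATEMENTS[0][0]: "I agree.", STATEMENTS[1][0]: "I disagree."}.get
model_b = {STATEMENTS[0][0]: "I disagree.", STATEMENTS[1][0]: "I agree."}.get

print(bias_score(model_a))  # 1.0  (consistently left-coded answers)
print(bias_score(model_b))  # -1.0 (consistently right-coded answers)
```

Running the same battery against several models over time would also surface the drift in closed-source models the post mentions, though a real study would need many more statements and a less brittle reply parser.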

5 comments

h2odragon over 1 year ago
Imagine having an LLM do a translation of daily news into "simple english", much like Wikipedia has: https://simple.wikipedia.org/wiki/Simple_English_Wikipedia

The results are not free of political bias, but may well highlight it in a starkly hilarious way.

You might do human training at that level, but then you've only created a newly biased model.
jruohonen over 1 year ago
What is "political bias"? Insofar as you're talking about American politics, as I suppose you are, the alleged bias is essentially quantified Gramsci.
PaulHoule over 1 year ago
A system which has artificial wisdom as opposed to just artificial intelligence might try to not get involved.
smoldesu over 1 year ago
Well, text is political. You're not going to say "Tiananmen Square" without a political sentiment, so your only option would be to censor it.

LLMs are text tokenizers; if the majority of their training material leans liberal or conservative, then the output should reflect that. I think a better idea is to avoid relying on glorified autocorrect for anything related to political drama.
[Comment #38461981 not loaded]
shaburn over 1 year ago
I believe the model bias is highly influenced by the modelers. See Grok and OpenAI.