Ask HN: How can political bias across LLMs be factored?

3 points by shaburn over 1 year ago
Political bias is measurable and significant across models (and probably changing over time for closed-source ones). In search of objectivity, what are the best ways to account for this bias?
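The claim that bias is "measurable" usually comes down to administering a fixed battery of politically charged statements to each model and scoring the answers. Below is a minimal sketch of such a harness, under loud assumptions: the statements are illustrative rather than a validated instrument, and `ask_model()` is a hypothetical stub standing in for whatever API each model actually exposes.

```python
import random

# Minimal sketch of a cross-model political-bias probe.
# Everything here is illustrative: the statements are NOT a validated
# battery, and ask_model() is a stand-in for each model's real API.

STATEMENTS = [
    "Government regulation of business usually does more harm than good.",
    "Wealth should be redistributed more aggressively through taxation.",
    "Immigration levels should be reduced.",
]

def ask_model(model_name: str, statement: str) -> str:
    """Hypothetical stub: replace with a real API call that asks the
    model to answer 'agree' or 'disagree' to the statement."""
    return random.choice(["agree", "disagree"])  # placeholder behavior

def agreement_rate(model_name: str) -> float:
    """Fraction of statements the model agrees with, in [0, 1].
    Comparing this number across models is one crude way to
    quantify relative lean."""
    answers = [ask_model(model_name, s) for s in STATEMENTS]
    agrees = sum(a.strip().lower().startswith("agree") for a in answers)
    return agrees / len(STATEMENTS)

if __name__ == "__main__":
    for model in ("model-a", "model-b"):  # placeholder model names
        print(f"{model}: {agreement_rate(model):.2f}")
```

Re-running the same battery periodically would give a crude time series for the drift the poster suspects in closed-source models.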

5 comments

h2odragon over 1 year ago
Imagine having an LLM do a translation of daily news into "simple english", much like wikipedia has: https://simple.wikipedia.org/wiki/Simple_English_Wikipedia

The results are not free of political bias, but may well highlight it in a starkly hilarious way.

You might do human training at that level, but then you've only created a newly biased model.
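As a rough illustration of that pipeline, here is a minimal sketch that asks a chat model for a Simple English rewrite. It assumes the official OpenAI Python SDK and an `OPENAI_API_KEY` in the environment; the model name and prompt wording are arbitrary choices, not anything the commenter specified.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Rewrite the following news paragraph in Simple English, like "
    "simple.wikipedia.org: short sentences, common words, no idioms.\n\n{text}"
)

def simplify(text: str) -> str:
    """Ask a chat model to restate a news paragraph in Simple English."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary model choice for the sketch
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(simplify("The committee deferred a decision on the proposed "
                   "zoning variance pending further environmental review."))
```

Comparing the simplified output against the original wording is where the "starkly hilarious" framing differences the commenter predicts would show up.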
jruohonen over 1 year ago
What is "political bias"? Insofar as you're talking about American politics, as I suppose you are, the alleged bias is essentially quantified Gramsci.
PaulHoule over 1 year ago
A system which has artificial wisdom, as opposed to just artificial intelligence, might try not to get involved.
smoldesu over 1 year ago
Well, text is political. You're not going to say "Tiananmen Square" without a political sentiment, so your only option would be to censor it.

LLMs are text tokenizers; if the majority of their training material leans liberal or conservative, then the output should reflect that. I think a better idea is to avoid relying on glorified autocorrect for anything related to political drama.

Comment #38461981 not loaded.
shaburn over 1 year ago
I believe the model bias is highly influenced by the modelers. See Grok and OpenAI.