科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global tech news and discussion.

Will training AI be the next front in the “culture war”?

3 points · by tus666 · about 2 years ago
Am I the only one who thinks this? That training ChatGPT and other AIs to give answers to certain questions that validates various ideologies will be the next front in the culture war?

3 comments

thesuperbigfrog · about 2 years ago

"Did you train the model using real-world data?"

"Yes. We want the model to be useful in real-world applications."

"Then it is biased. The model is biased because the data it was trained on was generated by people, and people are biased. There is no such thing as an 'objective' model, just a model that is biased in a different way."
smoldesu · about 2 years ago

Is it *really* that different from news publications publishing a story that validates various ideologies? As long as people don't mistake AI text for conscious commentary, I don't think either is more dangerous than the other.

Reply #35682602 not loaded
breckenedge · about 2 years ago

> Am I the only one who thinks this?

No, go read posts in r/conservative about ChatGPT. They're convinced it has a liberal bias. Pretty soon we will have chatbots that reinforce whatever worldview you want to subscribe to.

Reply #35682875 not loaded
Reply #35682950 not loaded