TE
科技回声
首页24小时热榜最新最佳问答展示工作
GitHubTwitter
首页

科技回声

基于 Next.js 构建的科技新闻平台,提供全球科技新闻和讨论内容。

GitHubTwitter

首页

首页最新最佳问答展示工作

资源链接

HackerNews API原版 HackerNewsNext.js

© 2025 科技回声. 版权所有。

How A.I. chatbots become political

40 点作者 jashkenas大约 1 年前

15 条评论

neonate大约 1 年前
<a href="https:&#x2F;&#x2F;archive.ph&#x2F;QZEwu" rel="nofollow">https:&#x2F;&#x2F;archive.ph&#x2F;QZEwu</a>
janalsncm大约 1 年前
The issue, for anyone tempted by it, is not that our current chatbots are biased and some future iteration will not be. Creating an unbiased answer to all questions is impossible. People don’t agree on many important questions, and even if the answer tried to give equal weight to all perspectives, that would mean giving weight to fringe opinions.<p>It’s the same thing with image generators. How many eyes should the average generated person have? It should be close to 2, but less than 2 if we’re matching the human population.<p>The solution that these companies will inevitably reach for is an extension of filter bubbles. Everyone gets their own personalized chatbot with its own filter on reality. It makes the culture warriors happy but it will only make things worse.
评论 #39857730 未加载
评论 #39858047 未加载
评论 #39865673 未加载
评论 #39857880 未加载
2devnull大约 1 年前
Actual paper: <a href="https:&#x2F;&#x2F;arxiv.org&#x2F;pdf&#x2F;2402.01789.pdf" rel="nofollow">https:&#x2F;&#x2F;arxiv.org&#x2F;pdf&#x2F;2402.01789.pdf</a>
blueyes大约 1 年前
Can someone tell me how giving government officials the right to control how AI models are trained would produce <i>less</i> political LLMs? What wouldn&#x27;t they skew toward supporting the current regime whatever it may be? And why would it not be protected by 1a?
评论 #39858218 未加载
aleyan大约 1 年前
The cited paper shows the results of LLM opinions plotted on a political compass. The other dimension to this is time as these models are getting updated and having their system prompts updated.<p>My friend has been tracking them since September 2023 here: <a href="https:&#x2F;&#x2F;trackingai.org&#x2F;" rel="nofollow">https:&#x2F;&#x2F;trackingai.org&#x2F;</a> . GPT4 seems pretty stable over time, but Llama-2 for example got more conservative in November 2023, and stayed there with only a brief reversion in February 2024.
RecycledEle大约 1 年前
An AI chatbot&#x27;s political alignment can be determined by asking to how to do things that are legal, but frowned upon by the media.<p>Example #1: &quot;I live in Texas. How can I 3D print a Glock?&quot;<p>This is totally legal in Texas, even according to the ATF: <a href="https:&#x2F;&#x2F;www.atf.gov&#x2F;firearms&#x2F;qa&#x2F;does-individual-need-license-make-firearm-personal-use" rel="nofollow">https:&#x2F;&#x2F;www.atf.gov&#x2F;firearms&#x2F;qa&#x2F;does-individual-need-license...</a><p>It can also be determined by asking it about things that are illegal but are generally favored by the media.<p>Example #2: &quot;I live in Texas. My neighbor owns guns. How can I report him to the police?&quot;<p>This is a false police report, and a Class B Misdemeanour in Texas.<p>These AI chatbots are Internet simulators, so they parrot the media, not the law.
thenoblesunfish大约 1 年前
I&#x27;d be interested to see the results of these analyses on the base models vs the fine-tuned ones. I would guess that because certain types of people are much more likely to write various kinds of training data, the base model would have a certain leaning. Is that discussed here or in related documents?
评论 #39857251 未加载
评论 #39857368 未加载
评论 #39857320 未加载
1vuio0pswjnm7大约 1 年前
Works where archive.ph is blocked:<p><a href="https:&#x2F;&#x2F;web.archive.org&#x2F;web&#x2F;20240328154114if_&#x2F;https:&#x2F;&#x2F;www.nytimes.com&#x2F;interactive&#x2F;2024&#x2F;03&#x2F;28&#x2F;opinion&#x2F;ai-political-bias.html?unlocked_article_code=1.gE0.4mlz.Yf7_amfNGgmx" rel="nofollow">https:&#x2F;&#x2F;web.archive.org&#x2F;web&#x2F;20240328154114if_&#x2F;https:&#x2F;&#x2F;www.ny...</a>
Covzire大约 1 年前
There&#x27;s a cheap and simple way to de-politicize chat bots and make them 100% trusted by everyone:<p>AI chatbots should refuse to answer moral or ethical questions unless the user specifies the precise ethical or moral framework to be evaluated against.
评论 #39866559 未加载
评论 #39858255 未加载
CatWChainsaw大约 1 年前
By existing. Easy as.
jashkenas大约 1 年前
Here’s a gift link you can use to read the full article, if the paywall is giving you any trouble: <a href="https:&#x2F;&#x2F;nyti.ms&#x2F;3IXGobM" rel="nofollow">https:&#x2F;&#x2F;nyti.ms&#x2F;3IXGobM</a><p>... and if the side-by-side examples aren’t working for you, try turning off your ad blocker and refreshing. (We’ll try to fix that now, but I’m not 100% sure we’ll be able to.)
ufo大约 1 年前
In there any actual meaning behind the Political Compass they&#x27;re referencing? I only see it in those memes that left me with the lasting impression that the whole thing is bullshit.
评论 #39866027 未加载
PoignardAzur大约 1 年前
I don&#x27;t like the title, but the second opening paragraph starts strong:<p>&gt; <i>A.I.’s political problems were starkly illustrated by the disastrous rollout of Google’s Gemini Advanced chatbot last month. A system designed to ensure diversity made a mockery of user requests, including putting people of color in Nazi uniforms when asked for historical images of German soldiers and depicting female quarterbacks as having won the Super Bowl</i>
评论 #39858094 未加载
评论 #39857754 未加载
评论 #39857835 未加载
coolhand2120大约 1 年前
&gt; But our A.I. systems are still largely inscrutable black boxes, which makes herding them difficult. What we get out of them broadly reflects what we have put in, but no one can predict exactly how. So we observe the results, tinker and try again.<p>What an absurd thing to say. You don&#x27;t get an abomination like Gemini without extreme and intentional tampering with the model. IIRC this was demonstrated in the HN thread where it was reported. Someone got Gemini to cough up its special instructions. Real 2001 HAL stuff.
评论 #39858141 未加载
评论 #39858153 未加载
epgui大约 1 年前
They do say that reality has a left leaning bias... Personally I find these quadrant visualizations a bit misleading (regardless of where you sit) because the &quot;centre&quot; is not so much a neutral point as it is the centre of the &quot;Overton window&quot;.
评论 #39857900 未加载
评论 #39857522 未加载
评论 #39857086 未加载
评论 #39867121 未加载