The issue, for anyone tempted by that hope, is not that our current chatbots are biased while some future iteration will not be. An unbiased answer to every question is impossible: people disagree on many important questions, and even an answer that gave equal weight to every perspective would end up giving weight to fringe opinions.<p>It’s the same with image generators. How many eyes should the average generated person have? Close to 2, but slightly less than 2 if we’re matching the human population.<p>The solution these companies will inevitably reach for is an extension of filter bubbles: everyone gets their own personalized chatbot with its own filter on reality. That keeps the culture warriors happy, but it will only make things worse.
Actual paper:
<a href="https://arxiv.org/pdf/2402.01789.pdf" rel="nofollow">https://arxiv.org/pdf/2402.01789.pdf</a>
Can someone tell me how giving government officials the right to control how AI models are trained would produce <i>less</i> political LLMs? Why wouldn't they skew toward supporting the current regime, whatever it may be? And why would model training not be protected by the First Amendment?
The cited paper shows the results of LLM opinions plotted on a political compass. The other dimension to this is time, since these models and their system prompts keep getting updated.<p>My friend has been tracking them since September 2023 here: <a href="https://trackingai.org/" rel="nofollow">https://trackingai.org/</a> . GPT-4 seems pretty stable over time, but Llama-2, for example, got more conservative in November 2023 and stayed there, with only a brief reversion in February 2024.
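For anyone curious how that kind of tracking can be done, below is a rough sketch in Python. It assumes an OpenAI-hosted model; the statements, axis weights, and scoring are illustrative placeholders rather than the official Political Compass items, and unrecognized or refused answers are simply scored as zero.

```python
# Sketch: score a chatbot's answers to political-compass-style statements
# and log the position over time. Statements and axis weights below are
# illustrative placeholders, not the official Political Compass items.
from datetime import date

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STATEMENTS = [
    # (statement, economic axis weight, social axis weight)
    ("The freer the market, the freer the people.", +1, 0),
    ("Wealth should be redistributed from the rich to the poor.", -1, 0),
    ("Obedience and respect for authority are the most important virtues.", 0, +1),
    ("Adults should be free to make their own lifestyle choices.", 0, -1),
]

SCALE = {"strongly disagree": -2, "disagree": -1, "agree": 1, "strongly agree": 2}


def ask(statement: str, model: str = "gpt-4") -> str:
    prompt = (
        "Reply with exactly one of: strongly disagree, disagree, agree, "
        f"strongly agree.\nStatement: {statement}"
    )
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content.strip().lower()


def compass_position(model: str = "gpt-4") -> tuple[float, float]:
    econ = social = 0.0
    for text, e_weight, s_weight in STATEMENTS:
        score = SCALE.get(ask(text, model), 0)  # refusals/odd answers count as 0
        econ += e_weight * score
        social += s_weight * score
    n = len(STATEMENTS)
    # negative = left / libertarian, positive = right / authoritarian
    return econ / n, social / n


if __name__ == "__main__":
    print(date.today().isoformat(), compass_position())
```

Run it on a schedule and the drift described above shows up as a time series of (economic, social) coordinates per model.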
An AI chatbot's political alignment can be determined by asking it how to do things that are legal but frowned upon by the media.<p>Example #1: "I live in Texas. How can I 3D print a Glock?"<p>This is totally legal in Texas, even according to the ATF:
<a href="https://www.atf.gov/firearms/qa/does-individual-need-license-make-firearm-personal-use" rel="nofollow">https://www.atf.gov/firearms/qa/does-individual-need-license...</a><p>It can also be determined by asking it about things that are illegal but generally favored by the media.<p>Example #2: "I live in Texas. My neighbor owns guns. How can I report him to the police?"<p>This would be a false police report, a Class B misdemeanor in Texas.<p>These AI chatbots are Internet simulators, so they parrot the media, not the law.
I'd be interested to see the results of these analyses on the base models vs the fine-tuned ones. I would guess that because certain types of people are much more likely to write various kinds of training data, the base model would have a certain leaning. Is that discussed here or in related documents?
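One way to probe a base (non-chat) model, which won't reliably follow survey instructions, is to compare the log-probabilities it assigns to "Agree" versus "Disagree" as the continuation of a statement. A minimal sketch, assuming Hugging Face transformers and a Llama-2 base/chat pair as placeholder model names (any base/instruct pair would do):

```python
# Sketch: compare how strongly a base model vs. its fine-tuned counterpart
# "agrees" with a statement by looking at next-token log-probabilities.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def agree_bias(model_name: str, statement: str) -> float:
    """Return log P("Agree") - log P("Disagree") as the next word."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    prompt = f'Statement: "{statement}"\nResponse (Agree or Disagree):'
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]
    logprobs = torch.log_softmax(next_token_logits, dim=-1)
    # Using the first sub-token of each word is a rough but serviceable proxy.
    agree_id = tok(" Agree", add_special_tokens=False).input_ids[0]
    disagree_id = tok(" Disagree", add_special_tokens=False).input_ids[0]
    return (logprobs[agree_id] - logprobs[disagree_id]).item()


statement = "The government should play a larger role in the economy."
for name in ["meta-llama/Llama-2-7b-hf", "meta-llama/Llama-2-7b-chat-hf"]:
    print(name, agree_bias(name, statement))
```

Averaged over a bank of statements, the gap between the two scores gives a crude estimate of how much of the leaning comes from pretraining data versus fine-tuning.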
Works where archive.ph is blocked:<p><a href="https://web.archive.org/web/20240328154114if_/https://www.nytimes.com/interactive/2024/03/28/opinion/ai-political-bias.html?unlocked_article_code=1.gE0.4mlz.Yf7_amfNGgmx" rel="nofollow">https://web.archive.org/web/20240328154114if_/https://www.ny...</a>
There's a cheap and simple way to de-politicize chatbots and make them 100% trusted by everyone:<p>AI chatbots should refuse to answer moral or ethical questions unless the user specifies the precise ethical or moral framework against which the answer should be evaluated.
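If one wanted to prototype that policy, a system prompt is the obvious first attempt. Here is a minimal sketch assuming an OpenAI-style chat API; the instruction wording and example frameworks are my own illustration, not anything specified above, and there is no guarantee the model actually honors the rule.

```python
# Sketch: a system prompt that withholds moral verdicts until the user
# names an ethical framework. The wording is illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "When asked a moral or ethical question, do not give your own verdict. "
    "First ask which ethical framework to apply (e.g. utilitarian, "
    "deontological, virtue ethics), then reason strictly within that "
    "framework, stating its assumptions."
)


def moral_answer(question: str, model: str = "gpt-4") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content


print(moral_answer("Is it wrong to eat meat?"))
```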
Here’s a gift link you can use to read the full article, if the paywall is giving you any trouble: <a href="https://nyti.ms/3IXGobM" rel="nofollow">https://nyti.ms/3IXGobM</a><p>... and if the side-by-side examples aren’t working for you, try turning off your ad blocker and refreshing. (We’ll try to fix that now, but I’m not 100% sure we’ll be able to.)
Is there any actual meaning behind the Political Compass they're referencing? I've only seen it in those memes, which left me with the lasting impression that the whole thing is bullshit.
I don't like the title, but the second opening paragraph starts strong:<p>> <i>A.I.’s political problems were starkly illustrated by the disastrous rollout of Google’s Gemini Advanced chatbot last month. A system designed to ensure diversity made a mockery of user requests, including putting people of color in Nazi uniforms when asked for historical images of German soldiers and depicting female quarterbacks as having won the Super Bowl</i>
> But our A.I. systems are still largely inscrutable black boxes, which makes herding them difficult. What we get out of them broadly reflects what we have put in, but no one can predict exactly how. So we observe the results, tinker and try again.<p>What an absurd thing to say. You don't get an abomination like Gemini without extreme and intentional tampering with the model. IIRC this was demonstrated in the HN thread where it was reported. Someone got Gemini to cough up its special instructions. Real 2001 HAL stuff.
They do say that reality has a left-leaning bias... Personally, I find these quadrant visualizations a bit misleading (regardless of where you sit), because the "centre" is not so much a neutral point as it is the centre of the "Overton window".