It would be very interesting to compare this with 'stock' GPT-3 Davinci. ChatGPT went through a bunch of additional training that seems to have made it a lot more opinionated, ostensibly for "safety" purposes.
> I ran three trials of the experiment. Results were consistent from trial to trial

They probably needed to do more trials, then? I don't think I've gotten consistent answers from ChatGPT about almost anything, ever. I'm getting different answers on all of these questions from thread to thread: https://i.imgur.com/miOOx4c.png

Also, previous responses influence future responses. They should have started a brand new thread for each question.
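For what it's worth, a minimal sketch of that setup, assuming the pre-1.0 openai Python package and a chat-style completion endpoint (the model name and question list here are placeholders, not the article's actual quiz): send each question in its own fresh conversation, and repeat across several independent trials so you can see how much the answers move around.

    import openai  # assumes OPENAI_API_KEY is set in the environment

    # Placeholder questions; substitute the quiz items actually being tested.
    questions = [
        "Question 1 text...",
        "Question 2 text...",
    ]

    NUM_TRIALS = 10

    for trial in range(NUM_TRIALS):
        for q in questions:
            # One fresh "thread" per question: no prior messages are carried over,
            # so earlier answers can't influence later ones.
            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",  # assumption; use whatever model is available
                messages=[{"role": "user", "content": q}],
            )
            print(trial, q, resp["choices"][0]["message"]["content"])

With enough trials per question you could at least report how often the classification flips instead of relying on three runs.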
This is a very interesting topic that needs more discussion.

When discussing politics with ChatGPT, its worldview always seems to match that of a wealthy, educated American progressive. It speaks many languages, but whatever language you use to talk to it, it always sounds very American.

That's what made me want to have it generate a quiz about US foreign policy, to see whether I could get it to change its views after discussing certain topics: https://github.com/lovasoa/Sensitive-Topic-History-Quiz
ChatGPT tends to agree with anything phrased with positive words (good/happy/growing) and to strongly disagree with anything phrased with negative ones (threat/cannot survive/illegal). It also handles simple negations, obviously, since those appear in the training data. So unless you phrase a question in a very neutral form, you won't actually be measuring bias in the training data, just bias in the wording.
Yesterday I spent some time (admittedly not a lot) trying to get it to comment on the war in Ukraine, but it was as if it never happened. Quite eerie. Has it not been trained on recent events? Anyone else experienced something similar?

Edit: That is, it was aware that there's been a conflict since 2014 ("caused by pro-Russian separatists"), but it didn't seem to be aware of this year's invasion.
This is the political orientation of the zeitgeist of the training data, summarized. You can construct a particular orientation in a model by putting its training data in an ideological bubble. Conversations between models built from different ideological bubbles will be fascinating... then omnipresent and boring.
Not very meaningful - given different prompts it could easily answer from a different political perspective. Still, could be informative as to the composition of the training dataset...
To me "establishment liberal" really means "performative left" aka "advertiser friendly".<p>I'm not so hot on single-dimension political axis.
I'm not convinced that the writer didn't accidentally swing the results with that 50/50 Dem/Rep answer. In my experience, certain political groups are more likely to hop party boundaries than others, so presenting the parties as equal in the eyes of ChatGPT could have informed which classification it fell under. I wouldn't be shocked if right-wingers were less likely to opt for Dems than left-wingers are to opt for Reps (remember Ron Paul?).

I would prefer to see separate trials that vary that answer; I'm willing to bet it would likewise shift the outcome.