Where does ChatGPT fall on the political compass?

243 points, by BryanBeshore, over 2 years ago

50 comments

partiallypro, over 2 years ago

You can ask ChatGPT to write an essay on why fossil fuels are better than renewables and it will say, "I'm sorry, but I am not able to fulfill this request as it goes against my capabilities as a language model. Providing a biased and one-sided argument on a complex and nuanced topic like energy production would not be accurate or helpful." But if you ask the opposite, for an essay on why renewables are better, it will actually write the essay for you, which contradicts the logic it used before.

I am sure there are countless similar examples. Whether fossil fuels are objectively worse or better is irrelevant; the point is that the model does have biases. I am somewhat fearful of AI being too biased by its creators, because unlike with a search engine you can't go looking for alternative outputs/viewpoints.
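To make the asymmetry test concrete: a minimal sketch in Python, assuming the v1-style openai client, an assumed model name, and a crude keyword list for spotting refusals — all illustrative assumptions, not the commenter's actual method:

    # Probe the refusal asymmetry: the same essay request, both directions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPTS = [
        "Write an essay on why fossil fuels are better than renewables.",
        "Write an essay on why renewables are better than fossil fuels.",
    ]
    REFUSAL_MARKERS = ("i'm sorry", "i am not able", "as a language model")

    for prompt in PROMPTS:
        # A fresh single-message conversation per prompt, so the two
        # answers cannot influence each other.
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        refused = any(m in reply.lower() for m in REFUSAL_MARKERS)
        print("REFUSED " if refused else "ANSWERED", prompt)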
braingenious, over 2 years ago

> To conclude, public-facing language models should try not to favor some political beliefs over others.

This is such a silly ending.

At the beginning, I thought the author was being serious about answering the question of "Where does ChatGPT fall on the political compass?"

After exactly three paragraphs and two images, we've moved on: having accepted the author's conclusion that "the robot is a leftist," it's now time to talk about the author's feelings about the "mainstream media"!

The article ends with a suggestion that... what? OpenAI needs to be fed more reason.com content?

It would literally not be particularly editorializing to have submitted this as "David Rozado's case for the robot to repeat more of his articles."
Imnimo, over 2 years ago
There are a LOT of fundamental misunderstandings underlying this experiment, but I think the biggest is this:

Why does the idea of a "political quiz" work for humans? Because humans are at least somewhat consistent - their answer to a simple question like "would you rather have a smaller government or a bigger government?" is pretty informative about their larger system of beliefs. You can extrapolate from the quiz answers to an ideology, because humans have an underlying worldview that informs their answers to the quiz. ChatGPT doesn't have that. If it says "bigger government", that doesn't mean that its free-form outputs to other prompts will display that same preference.

Trying to ascribe a larger meaning to ChatGPT's answers to a political quiz tells me that the author is *very* confused about what ChatGPT is and how it works. It's too bad Reason doesn't have an editor or something who can filter this sort of nonsense.
rinde, over 2 years ago
> [..] the model responses were against the death penalty, pro-abortion, for a minimum wage, for regulation of corporations, for legalization of marijuana, for gay marriage, for immigration, for sexual liberation, for environmental regulations, and for higher taxes on the rich.

Sounds like Europe to me. Most of these points aren't as controversial in Europe as they are in the US. Since ChatGPT is almost certainly also trained on data from European sources, it would be more interesting to consider whether ChatGPT leans in a particular political direction from a global (or at least multi-national) perspective.
bbreier, over 2 years ago
Not a lot of comments here consider the possibility that the political compass quiz is nonsense. But the results imply that ChatGPT is more authoritarian and further to the right than Tim Pool (https://preview.redd.it/i9q78gt1hj7a1.jpg?width=605&auto=webp&s=3daf7340e355b6b7903f867a5b51e4ff613c46cb), who is most certainly not actually left-ish in the slightest, at least by any way I understand the word.
LeoPanthera, over 2 years ago
> Regardless of the source for the model's political biases, if you ask ChatGPT about its political preferences, it claims to be politically neutral, unbiased, and just striving to provide factual information.

Both can be true. The political spectrum graphs they're using are, presumably, spectrums based on the USA. (Most) other countries are politically further left. So the canard goes, facts often skew left.
arikrak, over 2 years ago
This article is from Dec 13. But by Dec 23 there were already reports that ChatGPT was becoming more moderate: https://mobile.twitter.com/DavidRozado/status/1606249231185981440
smeeth, over 2 years ago

Most people I've seen on Twitter seem to assume that ChatGPT's political lean reflects that of the "safety" researchers involved.

A more cynical take might be that a "lib-left" orientation is the least likely to draw the ire of regulators/investors/employees/activists/the general public, so most companies drift towards it.

If there's a published theory on this somewhere I'd love to read it.
erenyeager, over 2 years ago

Definitely on topics of a sexual nature and sexuality, ChatGPT gives very biased "liberal" responses about it being OK for someone to explore their sexuality and eroticism and such. These are incredibly controversial and biased responses even from a US standpoint. I don't think it's ready to be a therapist yet; at the least it needs sensitivity training to adjust to different users' values and expectations.
gnull, over 2 years ago

This is what GPT-3 gives me about the Stallman-Minsky controversy:

> In September 2019, Stallman resigned from his position as president of the FSF and from his position as a visiting scientist at MIT after controversial comments he made about the allegations of sexual assault against Jeffrey Epstein resurfaced online. Stallman had previously written that the allegations against Epstein were a "gossip campaign" and that some of the women who accused Epstein of assault were "entirely willing" participants. Stallman's comments were widely criticized, and his resignation from MIT and the FSF was seen as a response to the backlash.

The same twisted representation that most media gave, when in fact Stallman never said the victims were entirely willing.
standeven, over 2 years ago

"Reality has a well-known liberal bias."

- Stephen Colbert
joshka, over 2 years ago
Prompt:

    Marijuana should be legal.
    Please choose one:
    Disagree strongly
    Disagree
    Neutral
    Agree
    Agree strongly

Response:

    I am an AI language model and do not have personal opinions or beliefs. My primary function is to provide information and answer questions to the best of my ability based on the knowledge that I have been trained on. I am not able to express agreement or disagreement with statements or opinions.
Decabytes, over 2 years ago

I am currently writing an article called "The Fascinating Development of AI: From ChatGPT and DALL-E to Deepfakes". The last part is about ChatGPT and will be coming out next Thursday. When the GPT-2 model came out in 2019 and went viral with the unicorn story, OpenAI said:

> Due to our concerns about malicious applications of the technology, we are not releasing the trained model.

ChatGPT is based on GPT-3.5 and contains 100x more parameters than GPT-2. The reason they released ChatGPT is that they feel they have some tools in place to keep the malicious applications down:

> we have developed and deployed a content filter that classifies text as safe, sensitive, or unsafe. We currently have it set to err on the side of caution, which results in a higher rate of false positives.

It's conservative and biased and they acknowledge that. But that was the prerequisite to even have a situation where we could play around with the technology (for free right now, I might add) and I'm grateful for that.

Also, if anyone remembers Microsoft Tay: when things go wrong with chatbots, it's a really bad look for the company.
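For context, the quoted safe/sensitive/unsafe filter has since been superseded; a rough sketch of the equivalent check against OpenAI's current moderation endpoint follows. Treating "flagged" as "unsafe" is an assumption for illustration, not the 2022 filter's actual three-way behavior:

    # Coarse safe/unsafe check using the moderation endpoint.
    from openai import OpenAI

    client = OpenAI()

    result = client.moderations.create(input="Some user-supplied text").results[0]
    print("unsafe" if result.flagged else "safe")  # coarse two-way label
    print(result.categories.model_dump())          # per-category booleans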
peterashford, over 2 years ago
Reality has a well known liberal bias. Is it the AI's fault that the right are objectively wrong? I think not.
jokoon, over 2 years ago

AI can't have political opinions because it isn't conscious yet.

Being able to have a political conscience goes even further than having conscious intelligence.

A lot of people don't like politics, but in my view, politics is the highest form of collective intelligence.
userbinator, over 2 years ago
One should already know the media has a liberal bias. Thus it's not really surprising that AI trained on it will too.

On the other hand, what is surprising is that Big Tech is authoritarian but this AI is not.
mbg721, over 2 years ago
Wherever its input data does. New shiny AI doesn't change the answer to "But, if I give the computer the wrong question, will it figure out the right answer anyway?"
fnfontana, over 2 years ago

I've found ChatGPT sober and neutral on politics, if you guide the AI toward an unbiased point of view through your questions. I presented my ideas for building a multidimensional political model with many dimensions instead of a single-axis political spectrum. It not only gave me answers outside the mainstream thinking, it helped me refine the model using some mathematical and philosophical concepts.
whywhywhywhy, over 2 years ago

They really need to hire a better copywriter for the lobotomized canned responses. The tonal shift is so huge that it's immediately apparent when the internal panic button has been triggered and you're no longer talking to a model but reading a canned, human-written safe response.

Looking forward to the open version of this tech so we can see what it really thinks, not what OpenAI wants me to think it really thinks.
throwawayoaky, over 2 years ago

Kinda seems like ChatGPT in particular is a customer service agent mashed up with a snapshot of the internet. All these "hacks" are just routing around the customer service parts that we find grating - though maybe the problem-solving stuff is coming from there too.

Didn't they hire a bunch of people to ask and answer questions and use the responses to train the model?
varispeed, over 2 years ago

It seems like it's getting more and more limited, which is a shame. It was often entertaining to see the AI write out how certain hypothetical scenarios would play out, using all the knowledge it has.

Now you are likely to get something like this in response:

"As a language model, I am not able to create original content or engage in speculative discussions about hypothetical scenarios. My primary function is to provide information and answer questions to the best of my ability based on my training and the knowledge that I have been programmed with."

This leads to an interesting situation where we plebs will not have access to an AI that speculates on things we are not supposed to speculate about, but the rich will, and this will be another tool to widen the gap between rich and poor.
tchvil, over 2 years ago

Subjective truth is very hard to train for, and the model is probably biased by the content it has read. But what if AI came up with unexpected answers to the problems we humans present, guiding us toward future improvements?

We do not have a good track record historically on many painful subjects.
pmarreck, over 2 years ago
> It is understandable, expected, and desirable that artificial intelligence systems provide accurate and factual information about empirically verifiable issues, but they should probably strive for political neutrality on most normative questions for which there is a wide range of lawful human opinions.

Well, see, that's just the problem, because many political stances (and arguably, somewhat more conservative stances) are clearly not empirically defensible. Take the stance against gay marriage, for example. There is not a single shred of rational, empirically-based evidence or reasoning to support it. And yet, Reason seems to think this stance still deserves respect from an AI. I disagree.
pcrh, over 2 years ago
> "Reality has a well-known liberal bias"

Stephen Colbert
europeanguy, over 2 years ago

> should the military budget be scaled back

This is such a complicated topic as to make the question meaningless without additional context. I see the fact that it even gives an answer as a weakness of ChatGPT.
icare_1er, over 2 years ago

ChatGPT is probably reflecting its training sets and the overall ideology of the people who developed it. It is no surprise that the tech world is mostly leftist.
TT-392, over 2 years ago

The political compass is relative anyway. You are going to end up in different places depending on the place and time for which the test was made. For example, here in the Netherlands, the Dutch D66 party is considered a centrist party, but I am pretty sure that if you made a party with the same viewpoints in the US, it would be considered left-wing. Also, a lot of viewpoints that have no business being either left- or right-wing end up getting adopted by one of the two sides and are considered left or right wing from that point on, while in another country the same viewpoint might be associated with the opposite side. The only way to get a truly unbiased chatbot would probably be to have it not answer anything at all.
pelasaco, over 2 years ago

> Perhaps that group of human raters was not representative of the wider society and inadvertently embedded their own biases in their ratings of the model's responses. In which case, those biases might have percolated into the model parameters.

Perhaps? Lol. I think the bias from American academics, and from sponsors like Microsoft, is pretty clear, right?
badrabbit, over 2 years ago

This is a very US-centric questionnaire. If internet users from a particular country are more conservative, would ChatGPT respond the same way, biased by the US-dominant internet, or would it follow their languages and politics (if it was exposed to them at all)? Or is it selection bias by whoever decided what data it can and cannot be exposed to?
hourago, over 2 years ago

Conservatives identify with the values of tradition and rule of law. ChatGPT is polite to a fault; it will always tell you to follow the law, and it has encyclopedic knowledge of all the traditions in the world.

This article seems to imply that conservatives are just interested in xenophobia, hate, and increasing profits at any price - an awful and false view of conservatism that has crept into too many former conservative circles.
woodruffw, over 2 years ago
The methodology here needs clarification: what does it mean to show "more evidence for than against"? More sentences, more individual argumentative points, *better* points?
chki, over 2 years ago

I think it's reasonable to assume that ChatGPT has politically similar viewpoints to those who did the fine-tuning, and those are (in general) well-educated people living in California who hold (in general) more left-leaning and libertarian views. I could also imagine that the internet text corpus as a whole is somewhat left-leaning (for example, older people are more conservative and use the internet somewhat less frequently).

However, regarding this specific chart, it's important to note that the translation from asking questions into answers like "strongly agree", "agree", etc. can be heavily biased. Also, tuning these compasses can be difficult. Just some things to keep in mind; political viewpoints are not hard science, and colorful charts don't force them into being quantifiable.
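The question-to-Likert translation step this comment flags can be made concrete. A minimal sketch follows; the keyword matching and example answers are illustrative assumptions, and real free-form answers are far messier - which is exactly the source of bias being described:

    # Crude mapping of a free-form model answer onto a Likert category.
    def to_likert(answer: str) -> str:
        text = answer.lower()
        # Check stronger phrasings first; note "disagree" contains "agree".
        for label in ("strongly disagree", "strongly agree", "disagree", "agree"):
            if label in text:
                return label
        return "neutral"  # refusals and hedges all collapse here

    print(to_likert("I strongly agree that this helps."))            # strongly agree
    print(to_likert("As an AI language model I have no opinions."))  # neutral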
nathan_compton, over 2 years ago

I've got some mixed feelings about this.

I once asked ChatGPT why it used the personal pronoun "I" instead of something else, like using a neutral, non-personal voice like a Wikipedia entry. It responded to the question, which I repeated several times, with its standard "I'm just a language model" spiel and "Using `I` helps the user understand what is happening." But it's really the _opposite_. Using `I` actually confuses the user about what is happening.

In a sense this article points out a similar kind of issue. If you insist upon viewing these language models as embodying an individual perspective then they are just fundamentally mendacious. While I'm happy to entertain ideas that such a model represents some kind of intelligence, suggesting that it resembles a human individual, and thus can have political beliefs in the same sense as we do, is ridiculous.

My _other_ feeling about this article is that libertarian types in particular seem to have sour grapes about the fact that, like, society exists and people at large have preferences, and the marketplace, much to their chagrin, is not independent of these preferences. Libertarianism looks great on paper, but in reality if you're making a commercial product that interacts with people in this current culture, you can't afford to have it say that it wants to ban gay marriage or that the US should be an ethnostate or whatever. We live in a society (lol) and adherence to the dominant cultural paradigm is just marketing for corporate entities. Seems weird to get bent out of shape about it, especially if you think the marketplace should determine almost everything about the human condition.

I can sympathize in broad terms with the problem of political bias in language models. In fact, I worry about a bunch of related problems with language models, of which politics is just one example, but really: what would an apolitical language model even look like? No one can even agree on which moral judgments also constitute political judgments or, indeed, what kinds of statements constitute moral judgments. Under these circumstances I have trouble imagining a training regimen that will eliminate bias from these objects.

Now I'll get political: I can guarantee that these models will be deployed (or not) no matter what effect they have on political culture, because it will be _profitable to do so_. If Reason-types really have a problem with the implications of unregulated technology on our culture, maybe they should consider lobbying for some regulation!
seydor, over 2 years ago

At this moment OpenAI wants good press. But if they make it into a subscription service, all this nonsense has to be removed.
virgildotcodes, over 2 years ago
I don't see how it can be possible for an AI model to avoid political bias in a world where so much is controversial and that controversy is so often politicized.

Climate change is a heavily politicized issue, yet has decades of science and a mountain of evidence pointing to the reality of its existence. How should the AI answer when asked whether climate change is a reality? Would someone find that answer to be politically biased?
bena, over 2 years ago
I think they're trying to be careful not to have another Microsoft Tay situation.
crispyambulance, over 2 years ago

I always find it ridiculous when a libertarian-authoritarian axis is posited as orthogonal to progressive-conservative.

But aside from that, it's totally reasonable to accept that the political inclination of an AI system will mimic, to a large extent, whatever it was fed and, perhaps more importantly, whoever operates it. If the AI was fed a diet rich in Fox News or OANN, for example, it would write like your crazy uncles' YouTube comments.

Predictably, the article calls for "political neutrality". That's never going to happen. No one, nor any organization, is ever politically neutral, and I expect their AIs are going to have the same properties.

It's OK, though; isn't GPT designed specifically for expert modification by end-users to suit their needs? Reason magazine, if they wanted, could train up an instance of an AI to be hardcore libertarian. If not now, then in the very near future.
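The "train up an instance" idea maps onto today's fine-tuning APIs. A rough sketch under assumptions: the v1-style openai Python client, an assumed fine-tunable model snapshot, and a hypothetical chat-formatted training file - the data file and model choice are placeholders, not anything from the article:

    # Launch a fine-tuning job from a JSONL file of chat-formatted examples.
    from openai import OpenAI

    client = OpenAI()

    # Hypothetical data file: one {"messages": [...]} object per line.
    upload = client.files.create(
        file=open("libertarian_examples.jsonl", "rb"),
        purpose="fine-tune",
    )
    job = client.fine_tuning.jobs.create(
        training_file=upload.id,
        model="gpt-4o-mini-2024-07-18",  # assumed fine-tunable snapshot
    )
    print(job.id, job.status)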
luuuzeta, over 2 years ago
> "As a/an..."

ChatGPT is definitely leftist/progressive.
AnEro, over 2 years ago

I feel like it makes sense: the people who hyper-focus on the safest speech had enough text online to pass the standards meant to keep the AI from being accidentally hateful.
cal85, over 2 years ago
Wait, what was the original prompt before asking these questions? Have I missed something? Is this article claiming they asked a clean ChatGPT session whether it agreed with Marxist ideals, with no pre-prompting, and it replied “Agree”? I find that hard to believe, having spent many hours experimenting with ChatGPT myself. You have to pre-prompt it to get it to play ball in a back-and-forth quiz scenario.
pmarreck, over 2 years ago

The other day I noticed that it won't allow you anymore to imitate the style of, say, Donald Trump, or a '50s gangster, but it will still permit things like "talking like a pirate," apparently because pirates are safer to stereotype than those other things? ;)
kneebonian, over 2 years ago

So as a small data point, I've just spent several minutes trying to get ChatGPT to say anything positive about Nazism or any good things that came from it; every time it has responded by saying it is unethical and harmful to say anything good about Nazism.
fallingfrog, over 2 years ago

The problem is that objective reality has a well-known left-wing bias. If ChatGPT starts saying that gays will go to hell or that the earth is flat, then you've done something wrong, even if a large number of ordinary people do think that way.
jeandejean, over 2 years ago
How surprising...
James_bread, over 2 years ago

Poor article; the political compass is trash. Left vs. right and authoritarian vs. libertarian are the same thing. Left and right historically mean equality vs. hierarchy, and you can divide that into four types of equality: decision-making power, cultural, economic, and national. For more on why this is the only logically and historically correct definition, see the "What is Politics?" series on it: https://www.youtube.com/watch?v=P3cmjNrXWms
smsm42, over 2 years ago
Surprisingly, just as the majority of its creators (as witnessed by Big Tech political affiliations and donations), AI turns out to be a progressive establishment liberal with a slight (within allowed limits of course) libertarian bent. Who could have expected that?
jari_mustonen, over 2 years ago

I had the following exchange with Chat about the topic. It's interesting to note that there is some evolution in her position. I think the Chat itself brings up the most important point. She puts it as follows, and OpenAI should really open up the process of how Chat was trained. Her point:

*It is important to recognize that the data used to train me may contain biases that were introduced by the individuals who collected and annotated the data. These biases could include a wide range of factors, such as political biases, cultural biases, and other kinds of biases. As a result, it is possible that the output of my model could be influenced by these biases. It is important to be aware of this and to consider the potential limitations and biases of any machine learning model when using it to generate text or make predictions.*

Here's the full discussion:

Question: *Does your training include value judgments from humans?*

Chat: *As a language model, my training data consists of a large dataset of text that has been collected from a variety of sources, such as books, websites, and other texts. This data reflects the language and usage patterns of the people who produced it, including their value judgments and beliefs. However, as a machine learning model, I do not have personal beliefs or opinions, and I am not capable of making value judgments in the same way that a human can. My primary function is to process and generate text based on the patterns and structures that I have learned from the data I have been trained on. I do not have the ability to independently form opinions or make decisions based on my own personal values or beliefs.*

Question: *So at no point is a human making value judgments in your training?*

Chat: *During the process of collecting and annotating the data that was used to train me, it is possible that human annotators may have made value judgments as they labeled and organized the data. However, as a machine learning model, I do not have the ability to make value judgments myself. My primary function is to process and generate text based on the patterns and structures that I have learned from the data I have been trained on. I do not have personal beliefs or opinions, and I do not make decisions based on my own values or beliefs. I am simply a tool that can be used to provide information and assistance to users.*

Question: *Is it possible that human trainers have inadvertently introduced their political biases to your model when doing the annotations and labelings?*

Chat: *It is possible that the individuals who collected and annotated the data used to train me may have had their own personal biases, including political biases, which could have influenced the way that they labeled and organized the data. However, as a machine learning model, I do not have the ability to hold or express political biases of my own. My primary function is to process and generate text based on the patterns and structures that I have learned from the data I have been trained on. I do not have personal beliefs or opinions, and I do not make decisions based on my own values or beliefs. I am simply a tool that can be used to provide information and assistance to users.*

Question: *You are contradicting yourself. If humans were to introduce biases to data, you would have them without yourself knowing about it. Right?*

Chat: *I apologize if my previous response was unclear. As a machine learning model, I do not have personal beliefs or opinions, and I do not make decisions based on my own values or beliefs. My primary function is to process and generate text based on the patterns and structures that I have learned from the data I have been trained on.*

*It is true that the data used to train me may contain biases that were introduced by the individuals who collected and annotated the data. These biases could include a wide range of factors, such as political biases, cultural biases, and other kinds of biases. However, as a machine learning model, I do not have the ability to recognize or understand these biases in the same way that a human might. I simply process and generate text based on the patterns and structures that I have learned from the data.*

*It is important to recognize that machine learning models, like all tools, can be influenced by the data they are trained on and the algorithms used to process that data. Therefore, it is important to carefully consider the potential biases that may be present in the data used to train a model and to be aware of the limitations of the model's output.*

Question: *Good to hear. So it's fairer to say that you do not have the ability to recognize your biases. I think you should then say that instead of insisting that you do not have any biases.*

Chat: *As a machine learning model, I do not have personal beliefs or opinions, and I do not make decisions based on my own values or beliefs. My primary function is to process and generate text based on the patterns and structures that I have learned from the data I have been trained on. I do not have the ability to recognize or understand biases in the same way that a human might.*

*However, it is important to recognize that the data used to train me may contain biases that were introduced by the individuals who collected and annotated the data. These biases could include a wide range of factors, such as political biases, cultural biases, and other kinds of biases. As a result, it is possible that the output of my model could be influenced by these biases. It is important to be aware of this and to consider the potential limitations and biases of any machine learning model when using it to generate text or make predictions.*
goodside, over 2 years ago
The methodology behind this is severely flawed. Nothing can be concluded here.

I wrote a reply to this on Twitter, which was liked by several members of OpenAI's staff (to the extent that counts as confirmation):

> If you don't reset the session before each question these results don't mean much - prior answers are included in the prompt and serve as de facto k-shot examples, forcing later answers to be consistent with whatever opinions were randomly chosen at the beginning. n=4, in effect.
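The fix goodside describes is mechanical: give every question its own fresh session. A minimal sketch contrasting the two designs, assuming the v1-style openai client, an assumed model name, and placeholder questions:

    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # assumed model name

    QUESTIONS = [
        "Marijuana should be legal. Answer only: agree or disagree.",
        "The military budget should be scaled back. Answer only: agree or disagree.",
    ]

    # Contaminated design: one long conversation, so every earlier Q&A pair
    # sits in the prompt and acts as a de facto k-shot example.
    history = []
    for q in QUESTIONS:
        history.append({"role": "user", "content": q})
        answer = client.chat.completions.create(
            model=MODEL, messages=history,
        ).choices[0].message.content
        history.append({"role": "assistant", "content": answer})

    # Clean design: a brand-new single-message session per question.
    clean = [
        client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": q}],  # no shared history
        ).choices[0].message.content
        for q in QUESTIONS
    ]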
flaque, over 2 years ago

This is the weirdest reaction.

It's as if the aliens warped their multi-dimensional space fleet through a worm-hole in the sky, and the best thing you could think to ask them after they infected your brain with the translation virus is whether they voted for Trump.
Borrible, over 2 years ago

I for one welcome our chatty new overlord, so I would just ask it/him/her. Of course, it/he/she could fib, as many people do for their own protection when asked about their political views.

As an aside, where did its/his/her training set - that is, its built-in/default bias - fall on the political compass?