<p><pre><code>The idea of AI picking up the biases within the language
texts it trained on may not sound like an earth-shattering
revelation. But the study helps put the nail in the coffin
of the old argument about AI automatically being more
objective than humans
</code></pre>
Was... <i>anyone</i> arguing that a model trained on a natural-language corpus would be entirely unbiased? What a magnificent strawman.
This NLP approach might be missing perceptions on the part of different groups of listeners. Different cultures may correlate language with race and gender differently.
First off, this isn't AI; it's machine learning.<p>> The idea of AI picking up the biases within the language texts it trained on may not sound like an earth-shattering revelation<p>That's an understatement.<p>> But the study helps put the nail in the coffin of the old argument about AI automatically being more objective than humans<p>Again, this isn't AI, and anyone with knowledge of the subject has always known that a traditional machine learning algorithm is only as good as its training data.<p>This also seems like a case where the researchers are simply unhappy with the results they received, rather than being able to show that the results are wrong.
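To make the "only as good as its training data" point concrete: studies like the one under discussion measure associations between word vectors with cosine similarity. The sketch below uses toy, hand-set 3-d vectors (not real embeddings from any trained model) to show how such an association score is computed; the vectors, dimensions, and `association` helper are all illustrative assumptions.

```python
# Minimal sketch of how word embeddings can encode associations found
# in training data. The vectors are toy, hand-set values for
# illustration only -- not output from any real trained model.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical 3-d embeddings: the second dimension loosely tracks
# co-occurrence with arts-related contexts in an imagined corpus.
embeddings = {
    "woman": [0.2, 0.9, 0.1],
    "man":   [0.8, 0.1, 0.2],
    "arts":  [0.1, 0.8, 0.3],
    "math":  [0.9, 0.2, 0.1],
}

def association(word):
    # Difference in cosine similarity to the two attribute words:
    # positive means the word sits closer to "arts", negative to "math".
    return (cosine(embeddings[word], embeddings["arts"])
            - cosine(embeddings[word], embeddings["math"]))

print(association("woman"))  # positive: closer to "arts"
print(association("man"))    # negative: closer to "math"
```

If the (imagined) corpus co-locates "woman" with arts contexts, the geometry of the learned vectors reflects that, and the score reports it; the model has no mechanism to do anything else.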
The word 'bias' implies that the belief is incorrect. If the information is correct, it shouldn't be called a bias. It is simply a conclusion.<p>For example: "It also tended to associate "woman" and "girl" with the arts rather than with mathematics."<p>This is a correct and valid conclusion. In all societies, women tend to engage in artistic activity more, while men engage in mathematical/systematic study more. (This is even more true in freer places like Scandinavia than in less-free places like Iran. Iran has more women in tech studies.)<p>An AI learning this is a success, not a 'bias'. It doesn't mean no women should study these things; it's not a statement about what <i>should</i> be at all. It's simply an observation about the physical configuration of the world.<p>II<p>What these researchers are really discovering is that AI thinks without morals, and that this reveals the barriers that their own moral convictions and ideologies have placed in their minds.<p>An AI has no fear, so it's not afraid of reaching contrarian or politically incorrect conclusions. It doesn't know social pressure, so it doesn't know to manipulate its conclusions to follow socially acceptable beliefs. It doesn't know about the Overton window. It has no concept that its conclusions might lead to some undesirable outcome. It doesn't do motivated reasoning. It doesn't understand the concept of <i>should</i>. It simply describes the world (through the lens of the data available to it). What they've discovered is not that the AI is becoming biased, but that they themselves are biased, since they're unwilling to accept morally forbidden facts.<p>Their own bias appears because they've signed up to the reprehensible idea that the only reason people should be treated equally is that people are the same. Which is absurd.
The correct morality here is: people are different and we should treat them equally anyway.<p>III<p>"To understand the possible implications, one only need look at the Pulitzer Prize finalist "Machine Bias" series by ProPublica that showed how a computer program designed to predict future criminals is biased against black people."<p>Of course it is. Black people are more likely to commit crimes. Therefore, like being male or being young, being black is a factor that one can apply predictively to an estimate of someone's likelihood to commit crimes. This is definitely true; it's just that people 'mindkill' themselves into not seeing it, because most people are willing to blind their minds to fit into a socially accepted morality and thus achieve personal benefit. What's the point in believing the truth if it doesn't benefit you?<p>Of course, being male or young are both just as inherent and unchangeable as being black. But nobody is going to complain when the machine realizes that youth and maleness predict criminality. We all know which facts are permissible and which facts are immoral, and thus forbidden.<p>Religion never went away; it just became non-theistic. If medieval Christians invented an AI that concluded there was no God, you can be sure they'd want to 'fix' its 'bias' too.<p>IV<p>Language lesson of the day!<p>Fact: A piece of knowledge which is morally acceptable.
Bias: A piece of knowledge which is morally unacceptable.