
How a Machine Learns Prejudice

37 points by matthberg over 8 years ago

7 comments

KODeKarnage over 8 years ago
> In June 2015, for example, Google’s photo categorization system identified two African Americans as “gorillas.”

> Law enforcement officials have already been criticized, for example, for using computer algorithms that allegedly tag black defendants as more likely to commit a future crime, even though the program was not designed to explicitly consider race.

Consider these two examples from the article. One is when the machine failed in a particular undesirable fashion. The other is when the machine "worked^" but in an undesirable fashion. (^That is, overall it might have worked well, but the bias in errors wasn't sufficiently explored.)

Modelling "undesirable fashion", which is a dynamic, subjective social construct, is far more difficult than either of the tasks originally set in the examples.

In the first example, how many images of people were confused with non-humans? I can't believe this was the only failed result. The only reason this particular example was problematic was that these specific people were specifically identified as gorillas.

No problem if they were identified as a car, or a chess piece, or a satellite. Gorillas, though, that's a special failure. We all know why. And we know why the error occurred. If the machine sees predominantly white people and gorillas, then "people" are white and "almost people" but with darker pixels is "gorilla". It is human prejudice in interpreting the results that made the error an issue.
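A toy sketch of that failure mode, assuming a one-feature nearest-centroid classifier and made-up "brightness" numbers (none of this comes from the article or any real system):

    # Nearest-centroid classifier on a single synthetic brightness feature.
    # Training data is skewed: "person" examples are overwhelmingly light-skinned.
    import numpy as np

    rng = np.random.default_rng(0)
    person_train = rng.normal(loc=0.8, scale=0.05, size=1000)   # mostly light pixels
    gorilla_train = rng.normal(loc=0.3, scale=0.05, size=1000)  # dark pixels

    centroids = {
        "person": person_train.mean(),
        "gorilla": gorilla_train.mean(),
    }

    def classify(brightness):
        # Assign whichever class centroid is nearest in feature space.
        return min(centroids, key=lambda label: abs(centroids[label] - brightness))

    print(classify(0.45))  # darker photo of a person -> "gorilla" (misclassified)
    print(classify(0.75))  # lighter photo -> "person"

The model isn't malicious; its notion of "person" just collapsed onto the part of feature space the training set happened to cover.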
andrewclunn over 8 years ago
There seems to be a bit of explaining away all bias as the result of human creators here. If an algorithm trained on real-world data displayed some traditional bias, would we assume such a cause and dismiss it? One could just as easily make this argument to dismiss all correlation-based evidence. Yes, the code/methodology matters, but we'd best get ready for deep learning algorithms to confront us with some uncomfortable truths.
Comment #13306447 not loaded
h4nkoslo over 8 years ago
There is literally no support in the article for the contention that "artificial intelligence picks up bias from human creators" as opposed to making correct inferences from reality. All of the examples they provide, modulo blurry tank pix, are of the latter.
randyrand over 8 years ago
Oh no! The computer is making uncomfortable findings that we don't like. The computer must be wrong!
Comment #13306672 not loaded
Comment #13306740 not loaded
Comment #13306491 not loaded
dominotw over 8 years ago
What's with the shitty Scientific American stories on HN today?
Comment #13306703 not loaded
kahrkunne over 8 years ago
"Racist computers" is the funniest trend since "sexist babies".

It should be obvious that if an ML algorithm is trained on sufficient real-world data, it won't be racist - it'll just not be politically correct.
Comment #13306663 not loaded
h4nkoslo over 8 years ago
The interesting thing is that literally all of the inferences we are "worried" about computers making were social consensuses not too long ago, with ample first-order evidence to support them even in the absence of fancy machine learning algorithms.