> In June 2015, for example, Google’s photo categorization system identified two African Americans as “gorillas.”

> Law enforcement officials have already been criticized, for example, for using computer algorithms that allegedly tag black defendants as more likely to commit a future crime, even though the program was not designed to explicitly consider race.

Consider these two examples from the article. One is a case where the machine failed in a particularly undesirable fashion. The other is a case where the machine "worked^" but in an undesirable fashion. (^That is, overall it might have worked well, but the bias in its errors wasn't sufficiently explored.)

Modelling "undesirable fashion", which is a dynamic, subjective social construct, is far more difficult than either of the tasks originally set in these examples.

In the first example, how many images of people were confused with non-humans? I can't believe this was the only failed result. The reason this particular example was problematic is that these specific people were identified as gorillas.

There would have been no problem if they had been identified as a car, or a chess piece, or a satellite. Gorillas, though, is a special failure, and we all know why. We also know why the error occurred: if the machine sees predominantly white people and gorillas, then "people" are white, and "almost people" with darker pixels is "gorilla". It is human prejudice in interpreting the results that made the error an issue.
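To make that class-imbalance explanation concrete, here is a minimal toy sketch. Everything in it is invented for illustration (the two features, the group sizes, the thresholds); it has nothing to do with the actual Google system, and it assumes numpy and scikit-learn are available. The only point is that when one group is barely represented in training, the classifier's errors concentrate on that group.

```python
# Toy sketch (not the real Google Photos pipeline): a classifier trained on
# "person" examples that are overwhelmingly light-pixeled, versus dark-pixeled
# "gorilla" examples, ends up leaning on brightness as a proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, brightness, shape):
    """Draw n examples around invented 'brightness' and 'shape' feature means."""
    return np.column_stack([
        rng.normal(brightness, 0.10, n),
        rng.normal(shape, 0.50, n),
    ])

# Training set: "person" examples are overwhelmingly light-pixeled,
# "gorilla" examples are dark-pixeled.
X_train = np.vstack([
    sample(950, 0.80, 1.0),   # light-skinned people
    sample(50,  0.30, 1.0),   # dark-skinned people (underrepresented)
    sample(1000, 0.20, 0.0),  # gorillas
])
y_train = np.array([1] * 1000 + [0] * 1000)   # 1 = person, 0 = gorilla

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Test on equal numbers of light- and dark-pixeled people.
light = sample(500, 0.80, 1.0)
dark = sample(500, 0.30, 1.0)

print("people labelled 'gorilla' (light-pixeled):", (clf.predict(light) == 0).mean())
print("people labelled 'gorilla' (dark-pixeled): ", (clf.predict(dark) == 0).mean())
# The exact numbers are incidental; the point is that the mislabels pile up
# on the group the training data barely covered.
```

Run it with different seeds and the exact rates move around, but the asymmetry stays: the light-pixeled test images sit far from the learned boundary, while the underrepresented group lands right next to it.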
There seems to be a tendency here to explain away all bias as the product of the algorithms' human creators. If an algorithm trained on real-world data displayed some traditional bias, would we assume such a cause and dismiss it? One could just as easily use this argument to dismiss all correlation-based evidence. Yes, the code and methodology matter, but we'd best get ready for deep learning algorithms to confront us with some uncomfortable truths.
There is literally no support in the article for the contention that "artificial intelligence picks up bias from human creators" as opposed to making correct inferences from reality. All of the examples they provide, modulo the blurry tank pics, are of the latter.
"racist computers" is the funniest trend since "sexist babies"<p>It should be obvious that if an ML algorithm is trained on sufficient real-world data, it won't be racist - it'll just not be politically correct
The interesting thing is that literally all of the inferences we are "worried" about computers making were social consensuses not too long ago, with ample first-order evidence to support them even in the absence of fancy machine learning algorithms.