Previous discussion from a couple of days ago:

https://news.ycombinator.com/item?id=28415582
This happened to Google Photos and Flickr too, which makes it an inexcusable mistake to repeat in 2021 - how are you not testing for this?

Google Photos in 2015: https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/

Flickr in 2015: https://www.independent.co.uk/life-style/gadgets-and-tech/news/flickr-s-auto-tagging-feature-goes-awry-accidentally-tags-black-people-apes-10264144.html
I've been trying to avoid controversy lately, but hey, here's one to downvote.

Have we considered that AI and ML as a general brain replacement is a failed idea? That we humans feel we are so smart we can recreate or exceed millions of years of evolution of the human brain?

I'd never call AI a waste; it's not. But getting it to do human things just may be.

Even a child can tell the difference between a human of any color and an ape. How many billions have been spent trying, and failing, to exceed the bar set by the thoughts of a human child?
Is that the result of a skewed training set, or are people genuinely hard to tell apart from gorillas when there are no obvious tells, like a large difference in brightness between different areas of the face?
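One way to tease those two explanations apart - purely a sketch with a hypothetical record schema, not data from the article - is to break test error down by subgroup and compare it with how much training data each subgroup had. Errors that track under-representation point to a skewed training set; uniformly high errors suggest the classes are genuinely hard to separate.

    from collections import Counter

    def error_breakdown(records):
        """records: dicts with 'split', 'subgroup', 'label', 'prediction' (hypothetical fields).
        Prints per-subgroup training counts next to per-subgroup test error rates."""
        train_counts = Counter(r["subgroup"] for r in records if r["split"] == "train")
        errors, totals = Counter(), Counter()
        for r in records:
            if r["split"] != "test":
                continue
            totals[r["subgroup"]] += 1
            if r["prediction"] != r["label"]:
                errors[r["subgroup"]] += 1
        for group, n in totals.items():
            print(f"{group}: train examples={train_counts[group]}, test error={errors[group] / n:.1%}")

If error rates climb as training examples per subgroup fall, data skew is implicated; if they are high across the board, the distinction itself is hard for the model.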
The video features white and black men. It seems like concluding the algorithm is calling black men primates is the same kind of error people are accusing the algorithm/Facebook of - i.e., the reason you think it's racist is that you assume it's talking about black people specifically, which suggests you think the word is more apt to describe black people.

Primates and humans are similar labels. This was almost certainly not intentional. Video classifiers are going to make mistakes - sometimes crude or offensive ones. I don't get the outrage over labeling errors like this. Facebook should fix the issue - but they shouldn't apologize. It only encourages grievance seekers.
This happens because there are no black people of consequence in the ML pipeline. At my previous company, every time we built a new model, a bunch of us would test it. Being the only black person in the company, I often found some very odd things, and we would correct them before shipping.

I understand that fb operates at a much bigger scale, but that's all the more reason to have a much more diverse set of eyes test their models before they go live.

If you want to avoid this, hire more black people. Seriously.
I worked for another computer vision company, Clarifai, that had the same issue. One of the employees noticed it, and we retrained the model before it became public.
I think the negative reaction is reasonable. Clearly, if a human did this it would be a problem, so why should it be acceptable for an automated system to do the same thing? The fact that it is unintentional doesn't negate the fact that it's an embarrassing mistake.

On the other hand, imagine a world where these labels were applied by a massive team of humans instead of a deep learning algorithm. At Facebook's scale, would the photos end up with more or fewer racist labels on average over time? My guess is that the model does a better job, but this is just another example of why we should be wary about trusting ML systems with important work.
AI is not the problem here. AI just notices stuff. It's the lack of even amateur-hour emotional intelligence in the product managers who deploy systems like this, IMO.
I don't like these stories. They always trend toward the most inflammatory arguments - inherent bias and unconscious racism projected onto our technology. Real issues in those topics aside, are articles like this doing anything but feeding flames and generating ad revenue?

Instead, I want to talk about pareidolia. Humans are social creatures. We have evolved to identify others of our kind and read their expressions. This was important to us, as we evolved alongside gorilla analogues as well, and the few of us who couldn't discern one face from another didn't usually last long.

I think we're placing too much of a human expectation onto these machines. Human features and primate features are strikingly similar, and it's our specialized brains that let us discern them so easily. Yes, with enough data and training we could have more accurate models, but we can't cry foul every time an algorithm doesn't behave like a human does.

Reference: https://www.reddit.com/r/Pareidolia/
I wonder if AIs are good at distinguishing individual gorillas, etc. I'd never really thought about the problem of classification being harder (perhaps) than identification if you see what I mean.
I feel that zero-failure-rate expectations from technology will keep us from progressing as a species.

Facebook disabled Thai-to-English translation back in April because it translated the queen as “slut”, and it's been disabled ever since.

Maybe we should learn to accept non-fatal errors from applications instead of forcing things to stop entirely.

I find it ridiculous that my Photos app suggests I change “monkey” to “lemur” while I have plenty of photos of monkeys and zero of lemurs.
Who takes the fall when an AI screws up?

If you shine enough light on it, apparently the brand does. If a human were to do this, the company would immediately fire the employee and cut all ties with them. But as the article points out, 'fixing' an AI mistake isn't really a fix at all:

> [Google] said it was "appalled and genuinely sorry", though its fix, Wired reported in 2018, was simply to censor photo searches and tags for the word "gorilla".
The AI is very honest and innocent; it doesn't know what political correctness is. I've heard stories of parents whose kids would also mislabel a black human as a gorilla.
I don't really think the world needs AI right now. One can argue, as people here are, that the AI is making an innocent mistake and that calling an AI or ML model (or its improper training, however that works) "racist" is overblown rhetoric, but I think all of that sidesteps the actual issue. The problem is that AI and ML are primarily used for decision making, as in recommendation engines. These little gadgets that provide recommendations may be fairly low-stakes, but they are theoretically proofs of concept for future applications like policing, counterterrorism, or fighting human trafficking. If you get it wrong there, the consequences are devastating. If people don't raise the flag about how wildly wrong the AI is now, then there will inevitably be false confidence in using it for the aforementioned applications (and there are plenty of examples of how this has already happened).
Maybe the algorithm or the training set or something else was racist, maybe it wasn't. But if you code something that labels people with slurs, you've messed something up. You need to be 99.999999% sure you're not throwing out slurs, or your whole project is failing spectacularly. And then you still have to apologize to the remaining 0.000001%, which works out to dozens of people if half the planet uses your site.

How do you get there? I don't know. I guess it'd help if you could be 99.999999% sure you weren't looking at a human face before using another label - that is, bias towards humans in a big, big way. Heck, if you're Facebook, the pre-test probability that your algorithm is looking at a person is probably much higher than the one implied by your training set. Or maybe you drop primates from your training set entirely. In that case you'll misidentify some primates as people - which is technically the flip side of the same problem, but oh so much more acceptable.
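To make that thresholding idea concrete, here is a minimal sketch - hypothetical label names and cutoffs, not anything Facebook's system actually does - of biasing the final decision heavily toward people and refusing to emit primate-type labels unless the model is essentially certain and sees no person signal at all.

    import numpy as np

    # Hypothetical label set; "person" is the class we bias toward.
    LABELS = ["person", "gorilla", "chimpanzee", "dog", "cat"]
    SENSITIVE = {"gorilla", "chimpanzee"}   # never apply these near a possible person

    def choose_label(probs, person_floor=1e-4, sensitive_margin=0.999):
        """Pick a label from softmax probabilities, biased heavily toward 'person'.

        Refuse to emit a sensitive label if the model gives 'person' any
        non-negligible probability, or if it isn't essentially certain.
        Returning None (no label at all) is the safe default."""
        probs = np.asarray(probs, dtype=float)
        best = LABELS[int(np.argmax(probs))]
        p_person = probs[LABELS.index("person")]

        if best in SENSITIVE:
            if p_person >= person_floor or probs.max() < sensitive_margin:
                return None   # saying nothing beats risking a slur-adjacent label
        return best

    # A confident-looking but risky prediction gets suppressed; a person sails through.
    print(choose_label([0.02, 0.97, 0.005, 0.004, 0.001]))   # None
    print(choose_label([0.97, 0.02, 0.005, 0.004, 0.001]))   # person

The trade-off is exactly the one the comment names: with cutoffs like these you will under-label actual gorilla photos, which is by far the more acceptable failure mode.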