TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


Facebook apology as AI labels black men 'primates'

169 points by lindenstark over 3 years ago

21 comments

TOMDM over 3 years ago
Previous discussion from a couple days ago: https://news.ycombinator.com/item?id=28415582
simonw over 3 years ago
This happened to both Google Photos and Flickr too. Which makes it an inexcusable mistake to make in 2021 - how are you not testing for this?

Google Photos in 2015: https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/

Flickr in 2015: https://www.independent.co.uk/life-style/gadgets-and-tech/news/flickr-s-auto-tagging-feature-goes-awry-accidentally-tags-black-people-apes-10264144.html
silisili over 3 years ago
I've been trying to avoid controversy lately, but hey, here's one to downvote.

Have we considered that AI and ML as a general brain replacement is a failed idea? That we humans feel we are so smart we can recreate or exceed millions of years of evolution of the human brain?

I'd never call AI a waste; it's not. But getting it to do human things just may be.

Even a child can tell the difference between a human of any color and an ape. How many billions have been spent trying, and failing, to exceed the bar of the thoughts of a human child?
scotty79 over 3 years ago
Is that a result of a skewed training set or are people really hard to tell apart from gorillas if there are no obvious tells like large difference in brightness of different areas of the face?
ALittleLight over 3 years ago
The video features white and black men. It seems like concluding the algorithm is calling black men primates is the same kind of error people are accusing the algorithm/Facebook of. I.e., the reason you think it's racist is because you assume it's talking about black people specifically, suggesting you think the word is more apt to describe black people.

Primates and humans are similar labels. This was almost certainly not intentional. Video classifiers are going to make mistakes - sometimes crude or offensive ones. I don't get outrage over labeling errors like this. Facebook should fix the issue - but they shouldn't apologize. It only encourages grievance seekers.
firefoxd over 3 years ago
This happens because there are no black people of consequence in the ML pipeline. At my previous company, every time we built a new model, a bunch of us would test it. Being the only black person in the company, I often found some very odd things, and we would correct them before shipping.

I understand that fb is at a much bigger scale, but that's all the more reason to have a much more diverse set of eyes test their models before they go live.

If you want to avoid this, hire more black people, seriously.
yanlezeiler over 3 years ago
I worked for another computer vision company, Clarifai, that had the same issue. One of the employees noticed it and we retrained the model before it became public.
root_axis over 3 years ago
I think the negative reaction is reasonable. Clearly, if a human did this it would be a problem, so why should it be acceptable for an automated system to do the same thing? The fact that it is unintentional doesn't negate the fact that it's an embarrassing mistake.

On the other hand, imagine a world where these labels were applied by a massive team of humans instead of a deep learning algorithm. At Facebook's scale, would the photos end up with more or less racist labels on average over time? My guess is that the model does a better job, but this is just another example of why we should be wary about trusting ML systems with important work.
varelse over 3 years ago
AI is not the problem here. AI just notices stuff. It's the lack of even amateur-hour emotional intelligence in the product managers who deploy systems like this, IMO.
Cycl0ps over 3 years ago
I don't like these stories. It always trends towards the most inflammatory arguments, those being inherent bias and unconscious racism put upon our technology. Real issues in those topics aside, are any articles like this doing anything but feeding flames and generating ad revenue?

Instead, I want to talk about pareidolia. Humans are social creatures. We have evolved to identify others of our kind and read their expressions. This was important to us, as we evolved alongside gorilla analogues as well, and the few of us that couldn't discern one face from another didn't usually last long.

I think we're trying to place too much of a human expectation onto these machines. I think that human features and primate features are strikingly similar, and it's our specialized brains that let us so easily discern them. Yes, with enough data and training we could have more accurate models, but we can't cry foul every time an algorithm doesn't behave like a human does.

Reference: https://www.reddit.com/r/Pareidolia/
dd444fgdfg over 3 years ago
Humans are primates. The AI is correct. Does it classify white men and Asians as primates too? If not, that's a bug.
OneEyedRobot over 3 years ago
I wonder if AIs are good at distinguishing individual gorillas, etc. I'd never really thought about the problem of classification being harder (perhaps) than identification, if you see what I mean.
istillwritecode over 3 years ago
Google: hold my beer
Paraesthetic over 3 years ago
ooof, that's uncomfortable
jimjimjim over 3 years ago
nothing important in the world should RELY on AI/NN/ML.
sonicggg over 3 years ago
Wow, what else? Did it also label them as "Homo sapiens"?
q-rews over 3 years ago
I feel that 0-failure-rate expectations from technology will keep us from progressing as a species.

Facebook disabled Thai-to-English translation back in April because it translated the queen as "slut", and it's been disabled since.

Maybe we should learn to accept non-fatal errors from applications instead of forcing things to stop entirely.

I find it ridiculous that my Photos app suggests I change "monkey" to "lemur" while I have plenty of photos of monkeys and zero of lemurs.
smoldesu over 3 years ago
Who takes the fall when an AI screws up?

If you shine enough light on it, apparently the brand does. If a human were to do this, the company would immediately fire the employee and cut all ties with them. But as the article points out, 'fixing' an AI mistake isn't really a fix at all:

> [Google] said it was "appalled and genuinely sorry", though its fix, Wired reported in 2018, was simply to censor photo searches and tags for the word "gorilla".
userbinator over 3 years ago
The AI is very honest and innocent; it doesn't know what political correctness is. I've heard stories of parents whose kids would also mislabel a black human as a gorilla.
kodah over 3 years ago
I don't really think the world needs AI right now. One can argue that the AI is making an innocent mistake, and that calling an AI or ML model (or its improper training, however that works) "racist" is overblown rhetoric, as people are doing here, but I think all of that eschews the actual issue. The problem is that AI and ML are primarily used for decision making, as in recommendation engines. These little gadgets that provide recommendations may be fairly low-stakes, but they are theoretically proofs-of-concept for future applications like policing, fighting terrorism, or combating human trafficking. If you get it wrong there, the consequences are devastating. If people don't raise the flag about how wildly wrong the AI is now, then there will inevitably be a false confidence to use it for the aforementioned applications (and there are plenty of examples of how this has already happened).
throwthere over 3 years ago
Maybe the algo or the training set or something else was racist, maybe it wasn't. But if you code something that labels people slurs, you've messed something up. You need to be 99.999999% sure you're not throwing out slurs, or your whole project is failing spectacularly. And then you have to apologize to the 0.0000001%, which is still probably like 10 people if half the planet uses your site.

How do you get there? I don't know. I guess it'd help if you could be 99.999999% sure you weren't looking at a human face before using another label. Like, bias towards humans in a big, big way. Heck, the pre-test probability that your algo is looking at a person is probably much higher than the one from your training set if you're Facebook. Or maybe you drop primates from your training set. I guess in that case you'll misidentify some primates as people - which is kind of the flipside of the same problem technically, but oh so much more acceptable.
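The "bias towards humans in a big way" idea above can be sketched as an asymmetric post-processing step on classifier scores. Everything here - label names, thresholds, the score-dict shape - is a hypothetical illustration, not any real pipeline:

```python
# Hypothetical post-filter: suppress sensitive labels unless we are
# extremely confident no person is in the frame. The cost of a false
# "primate" on a person vastly outweighs a missed primate label.
SENSITIVE_LABELS = {"primate", "gorilla", "chimpanzee"}

def filter_labels(scores: dict[str, float],
                  person_margin: float = 1e-6,
                  accept: float = 0.5) -> list[str]:
    """Return labels to surface, biased heavily toward 'person'.

    scores: classifier confidences per label, e.g. {"person": 0.4, ...}.
    If 'person' has any non-negligible score at all, drop sensitive
    labels entirely rather than risk a crude misclassification.
    """
    person_score = scores.get("person", 0.0)
    kept = []
    for label, score in scores.items():
        if label in SENSITIVE_LABELS and person_score > person_margin:
            continue  # asymmetric cost makes suppression the safe default
        if score >= accept:  # ordinary acceptance threshold
            kept.append(label)
    return kept
```

Note the deliberate failure mode: given `{"person": 0.4, "primate": 0.55}` this returns no labels at all - silence instead of a slur - which is exactly the trade-off the comment argues for.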