I'm glad to have kept reading to the author's conclusion:

> As a hybrid approach, you could produce a large number of inferred sentiments for words, and have a human annotator patiently look through them, making a list of exceptions whose sentiment should be set to 0. The downside of this is that it’s extra work; the upside is that you take the time to actually see what your data is doing. And that’s something that I think should happen more often in machine learning anyway.

Couldn't agree more. Annotating ML data for quality control seems essential both for making it work and for building human trust.

> There is no trade-off. Note that the accuracy of sentiment prediction went up when we switched to ConceptNet Numberbatch. Some people expect that fighting algorithmic racism is going to come with some sort of trade-off. There’s no trade-off here. You can have data that’s better and less racist. You can have data that’s better because it’s less racist. There was never anything “accurate” about the overt racism that word2vec and GloVe learned.

The big conclusion here, after all that code buildup, does not logically follow. All it shows is that one new word embedding, trained by completely different people for different purposes with different methods on different data using much fancier semantic structures, outperforms (by a small and likely non-statistically-significant margin) an older word embedding (which is apparently not even the best embedding from its batch, given the choice not to use the 840B version). It is entirely possible that the new embedding, trained the same way minus the anti-bias tweaks, would have given still better results.

> Some people expect that fighting algorithmic racism is going to come with some sort of trade-off.

Um, that's because we know it does come with trade-offs once you already have an optimal algorithm. See for instance https://arxiv.org/pdf/1610.02413.pdf. If your best-performing algorithm is "racist" (for some definition of "racist"), you are mathematically forced to make trade-offs if you want to eliminate that "racism".

Of course, defining "racism" itself gets extremely tricky, because many definitions of racism are mutually contradictory (https://arxiv.org/pdf/1609.05807.pdf).

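Roughly, in code, a toy illustration with made-up numbers (not from either paper) of why base rates matter: if two groups have different base rates, a classifier with identical precision and recall in both groups cannot also have identical false positive rates.

    # Toy numbers of my own choosing, not taken from the cited papers.
    def false_positive_rate(base_rate, precision, recall):
        """FPR implied by a given base rate, precision and recall.

        Per unit of population: TP = recall * base_rate,
        FP = TP * (1 - precision) / precision, negatives = 1 - base_rate.
        """
        tp = recall * base_rate
        fp = tp * (1 - precision) / precision
        return fp / (1 - base_rate)

    same_precision, same_recall = 0.8, 0.7
    for group, base_rate in [("group A", 0.10), ("group B", 0.30)]:
        fpr = false_positive_rate(base_rate, same_precision, same_recall)
        print(f"{group}: base rate {base_rate:.2f} -> FPR {fpr:.3f}")

    # group A: base rate 0.10 -> FPR 0.019
    # group B: base rate 0.30 -> FPR 0.075

Equalizing the false positive rates then necessarily breaks precision or recall parity, which is the shape of the trade-off those papers formalize.
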
To oversimplify, I think the training set is something like:

Italian restaurant is good.
Chinese restaurant is good.
Chinese government is bad.
Mexican restaurant is good.
Mexican drug dealers are bad.
Mexican illegal immigrants are bad.

And hence the word vector works as expected and the sentiment result follows.

Update:

To confirm my suspicion, I tried out an online demo to check distance between words in a trained word embedding model using word2vec:

http://bionlp-www.utu.fi/wv_demo/

Here is an example output I got with the Finnish 4B model (probably a bad choice since it is not English):

italian, bad: 0.18492977
chinese, bad: 0.5144626
mexican, bad: 0.3288326

Same pairs with the Google News model:

italian, bad: 0.09307841
chinese, bad: 0.19638279
mexican, bad: 0.16298543

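For anyone who wants to reproduce this locally, a minimal sketch using gensim (it assumes the ~1.6 GB "word2vec-google-news-300" model from gensim-data; the numbers will differ a bit from the web demo):

    import gensim.downloader as api

    # Downloads the pretrained Google News vectors on first use (~1.6 GB).
    model = api.load("word2vec-google-news-300")

    for word in ["italian", "chinese", "mexican"]:
        if word in model:  # the Google News vocabulary is case-sensitive
            print(word, "bad:", model.similarity(word, "bad"))
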
Just thinking out loud here...

It seems to me that if you wanted to root out sentiment bias in this type of algorithm, you would need to adjust your baseline word-embedding dataset until the sentiment scores for the words "Italian", "British", "Chinese", "Mexican", "African", etc. are roughly equal, without changing the sentiment scores of all other words. That being said, I have no idea how you'd approach such a task...

I don't think you could ever get equal sentiment scores for "black" and "white" without biasing the dataset in such a manner that it would be rendered invalid for other scenarios (e.g., giving a "dark black alley" a higher sentiment than it would otherwise have). "Black" and "white" is a more difficult situation because the words have other meanings outside of race/ethnicity.

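The most naive version I can think of, assuming the sentiment model is just a linear score w·v + b over word vectors (a sketch, not something I've validated; identity_words, vectors, w, b are hypothetical inputs, not from the article):

    import numpy as np

    def equalize_sentiment(vectors, w, b, identity_words):
        """Shift each listed word's vector along w so its predicted sentiment
        equals the group mean; vectors of all other words are untouched."""
        scores = {word: float(w @ vectors[word] + b) for word in identity_words}
        target = np.mean(list(scores.values()))
        for word in identity_words:
            # Moving along w by (score - target) / ||w||^2 changes the linear
            # prediction by exactly (target - score), i.e. sets it to the mean.
            vectors[word] = vectors[word] - (scores[word] - target) / (w @ w) * w
        return vectors

Of course this only patches the listed words, and the "black"/"white" polysemy problem is exactly the case it can't handle.
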
Does this mean the text examples the AI learns from are biased, and as such it learns to be biased too?

So it's not giving us objective decisions, but a mirror. Not so bad either.

I think that the bias problem they are highlighting is very important. That said, I'm wondering if they really didn't try (like the title suggests) or if they chose this approach on purpose because it highlights the problem.

To explain what happened here: they trained a classifier to predict word sentiment based on a sentiment lexicon. The lexicon would mostly contain words such as adjectives (like awesome, great, ...). They use this to generalize to all words using word vectors.

The way word vectors work is that words that frequently occur together end up closer in vector space. So what they have essentially shown is that in Common Crawl and Google News, names of people with certain ethnicities are more likely to occur near words with negative sentiment.

However, the sentiment analysis approach they are using amplifies the problem in the worst possible way. They are asking their machine learning model to generalize from training data with emotional words to people's names.

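Condensed, the setup looks roughly like this (not the article's exact code; the tiny lexicon and the name list here are stand-ins for the real ones):

    import gensim.downloader as api
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Pretrained lowercase GloVe vectors from gensim-data (~375 MB download).
    vectors = api.load("glove-wiki-gigaword-300")

    # Toy stand-in for a sentiment lexicon: mostly emotional adjectives/verbs.
    positive = ["awesome", "great", "wonderful", "excellent", "love"]
    negative = ["awful", "terrible", "horrible", "nasty", "hate"]

    X = np.array([vectors[w] for w in positive + negative])
    y = np.array([1] * len(positive) + [0] * len(negative))

    clf = LogisticRegression().fit(X, y)

    # The model happily assigns a "sentiment" to tokens that carry none,
    # such as people's first names.
    for name in ["emily", "shaniqua", "miguel", "heather"]:
        if name in vectors:
            prob_positive = clf.predict_proba(vectors[name].reshape(1, -1))[0, 1]
            print(name, round(prob_positive, 3))

The only thing the classifier can do with a name is look at which lexicon words its vector is close to, which is exactly how the co-occurrence bias leaks through.
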
It would be interesting to use the Uber/Lyft dataset of driver and passenger ratings to do an analysis like this.

For any such analysis there are a great many confounds, both blatant and subtle. Finding racism everywhere could be because overt racism is everywhere, or it could be confirmation bias. It could even be both! That's the tricky thing about confirmation bias: one never knows when one is experiencing it, at least not at the time.

I've heard a lot about racism in AI, but looking at the distributions of sentiment score by name, a member of any race would rationally be more worried about simply having the wrong name. Has there been any work done on that?
> fighting algorithmic racism

Reminds me of how Google Photos couldn't differentiate between a black person and a monkey, so they excluded that term from search altogether.

While the endeavour itself is good, the fixes are sometimes hilariously bad, or themselves biased (untrue).

Maybe, you know, humans are simply not Chinese rooms.

Recently there was an article about recognition of bullshit: https://news.ycombinator.com/item?id=17764348

To me the article brought a great insight: I realized that humans do not just pattern match. They also seek understanding, which I would define as an ability to give a representative example.

It is possible to give somebody a set described by arbitrarily complex conditions while the set itself is empty. Take any satisfiability (SAT) problem with no solution: it is a set of conditions on variables, yet there is no assignment that satisfies all of them.

So if you were a Chinese room and I trained you on SAT problems by pure pattern matching, you would happily give "solutions" to unsolvable instances. It is only when you actually understand the meaning behind the conditions that you can recognize that these arbitrarily complex inputs are in fact just empty sets.

So perhaps that's the flaw with our algorithms: there is no notion of "I understand the input". Perhaps that is understandable, because understanding (per above) might as well be NP-hard.

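A tiny concrete version of the empty-set point (my own toy formula): the conditions below look like any other set of conditions, but exhaustively checking every assignment shows they describe nothing.

    from itertools import product

    # (x or y) and (x or not y) and (not x or y) and (not x or not y)
    clauses = [
        lambda x, y: x or y,
        lambda x, y: x or not y,
        lambda x, y: not x or y,
        lambda x, y: not x or not y,
    ]

    solutions = [
        (x, y)
        for x, y in product([False, True], repeat=2)
        if all(clause(x, y) for clause in clauses)
    ]
    print(solutions)  # [] -- the described set is empty
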
This is an interesting result:

> Note that the accuracy of sentiment prediction went up when we switched to ConceptNet Numberbatch.

> Some people expect that fighting algorithmic racism is going to come with some sort of trade-off. There’s no trade-off here. You can have data that’s better and less racist. You can have data that’s better because it’s less racist. There was never anything “accurate” about the overt racism that word2vec and GloVe learned.

I wonder if this could be extended to individual names that have strong connotations because of the fame of some particular person, like "Barack", "Hillary", "Donald", "Vladimir", or "Adolf", or if removing that sort of bias is just too much to expect from a sentiment analysis algorithm.

Where I grew up, there is a majority group with fair skin, later (possibly incorrectly) attributed to the fact that they worked in the fields less. The minority group is darker skinned. If you train any reasonable machine learning model on any financial data, it will pick up on the discrepancy. If it did not, I would say it is a flawed model. But that is more a sign that people should avoid such models.

How to make a program that does what you asked it to do, and then add arbitrary fudge factors as the notion strikes you to "correct" for the bogeyman of bias.

Suppose sentiment for the name Tyrel was better than for Adolf. Would that indicate anti-white bias? Suppose the name Osama has really poor sentiment. What fudge factor do you add there to correct for possible anti-Muslim bias? Suppose Little Richard and Elton John don't have equal sentiment. Is the lower one because Little Richard is black, or because Elton John is gay?

What we have been seeing lately is an effort to take unmeasurable bias that is simply assumed to exist and to be unjust, and replace it with real bias, encoded in our laws and practices, or in this case, in actual code.

Setting aside blatant shock behaviors... If the other side, the audience, were less sensitive and not looking for the next micro-outrage, wouldn't ML chatbots evolve more pro-social values through positive reinforcement?

It takes two to tango... the average audience's behavior isn't blameless for the impact of its responses. Also, how an AI decides to interpret an ambiguous response as desirable or not is really interesting.