Making a racist AI without really trying (2017)

229 points by spatten, over 6 years ago

16 comments

asploder, over 6 years ago

I'm glad to have kept reading to the author's conclusion:

> As a hybrid approach, you could produce a large number of inferred sentiments for words, and have a human annotator patiently look through them, making a list of exceptions whose sentiment should be set to 0. The downside of this is that it's extra work; the upside is that you take the time to actually see what your data is doing. And that's something that I think should happen more often in machine learning anyway.

Couldn't agree more. Annotating ML data for quality control seems essential both for making it work and for building human trust.
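The review loop quoted above is straightforward to sketch. A minimal, hypothetical version (not the article's code): rank words by the magnitude of their inferred sentiment, write them out for an annotator, then zero out whatever gets flagged. The `inferred_sentiment` dict and the `overrides` set are placeholders.

```python
import csv

# Hypothetical output of a sentiment model over the embedding vocabulary.
inferred_sentiment = {"mexican": -1.8, "sushi": 0.9, "awful": -3.2}  # placeholder

# Rank by |sentiment| so the annotator sees the most extreme words first.
ranked = sorted(inferred_sentiment.items(), key=lambda kv: -abs(kv[1]))

with open("sentiment_review.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["word", "inferred_sentiment", "force_to_zero"])
    for word, score in ranked:
        writer.writerow([word, f"{score:.2f}", ""])  # annotator fills the last column

# After review, apply the annotator's exception list.
overrides = {"mexican"}  # placeholder for the reviewed exceptions
adjusted = {w: (0.0 if w in overrides else s) for w, s in inferred_sentiment.items()}
print(adjusted)
```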
gwern, over 6 years ago

> There is no trade-off. Note that the accuracy of sentiment prediction went up when we switched to ConceptNet Numberbatch. Some people expect that fighting algorithmic racism is going to come with some sort of trade-off. There's no trade-off here. You can have data that's better and less racist. You can have data that's better because it's less racist. There was never anything "accurate" about the overt racism that word2vec and GloVe learned.

The big conclusion here, after all that code buildup, does not logically follow. All it shows is that one new word embedding, trained by completely different people for different purposes with different methods on different data using much fancier semantic structures, outperforms (by a small and likely non-statistically-significant degree) an older word embedding (which is apparently not even the best such embedding from its batch, given the choice not to use 840B). It is entirely possible that the new word embedding, trained the same way minus the anti-bias tweaks, would have had still better results.
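Whether that small gap is statistically meaningful is at least checkable, for example with a paired bootstrap over the test set. A sketch using made-up placeholder correctness arrays (not the article's actual results):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder per-example correctness of the two systems on the same test items.
correct_old = rng.random(2000) < 0.885   # e.g. the GloVe-based model
correct_new = rng.random(2000) < 0.900   # e.g. the Numberbatch-based model

n = len(correct_old)
gaps = []
for _ in range(10_000):
    idx = rng.integers(0, n, n)          # resample test items with replacement
    gaps.append(correct_new[idx].mean() - correct_old[idx].mean())

gaps = np.array(gaps)
print("observed accuracy gap:", correct_new.mean() - correct_old.mean())
print("95% bootstrap CI for the gap:", np.percentile(gaps, [2.5, 97.5]))
```

If the confidence interval comfortably includes zero, the observed improvement could plausibly be noise.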
lalaland1125, over 6 years ago

> Some people expect that fighting algorithmic racism is going to come with some sort of trade-off.

Um, that's because we know it comes with trade-offs once you have the most optimal algorithm. See for instance https://arxiv.org/pdf/1610.02413.pdf. If your best-performing algorithm is "racist" (for some definition of "racist"), you are mathematically forced to make trade-offs if you want to eliminate that "racism".

Of course, defining "racism" itself gets extremely tricky, because many definitions of racism are mutually contradictory (https://arxiv.org/pdf/1609.05807.pdf).
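A toy simulation (mine, not from either linked paper) of the kind of trade-off being referenced: when two groups have different score distributions, the accuracy-optimal thresholds generally produce unequal error rates, and forcing the error rates to match costs accuracy for at least one group.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, base_rate, separation):
    """Binary labels plus a noisy score; `separation` controls how well the
    score distinguishes positives from negatives in this group."""
    y = (rng.random(n) < base_rate).astype(int)
    s = rng.normal(np.where(y == 1, separation, 0.0), 1.0)
    return y, s

def best_threshold(y, s, thresholds):
    """Threshold maximizing accuracy for one group."""
    return max(thresholds, key=lambda t: ((s >= t) == y).mean())

def acc(y, s, t):
    return ((s >= t) == y).mean()

def fpr(y, s, t):
    return (s[y == 0] >= t).mean()

groups = {
    "A": make_group(50_000, base_rate=0.5, separation=2.0),
    "B": make_group(50_000, base_rate=0.2, separation=1.0),
}
thresholds = np.linspace(-2, 4, 601)

# 1. Accuracy-optimal per-group thresholds: the error rates come out unequal.
for name, (y, s) in groups.items():
    t = best_threshold(y, s, thresholds)
    print(f"group {name}: best acc {acc(y, s, t):.3f} at FPR {fpr(y, s, t):.3f}")

# 2. Force both groups to the same false-positive rate: accuracy drops for at
# least one group relative to its unconstrained optimum.
target_fpr = 0.05
for name, (y, s) in groups.items():
    neg = np.sort(s[y == 0])
    t = neg[int((1 - target_fpr) * len(neg))]
    print(f"group {name} at shared FPR {target_fpr}: acc {acc(y, s, t):.3f}")
```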
paradite, over 6 years ago

To oversimplify, I think the training set is something like:

Italian restaurant is good.
Chinese restaurant is good.
Chinese government is bad.
Mexican restaurant is good.
Mexican drug dealers are bad.
Mexican illegal immigrants are bad.

And hence the word vector works as expected and the sentiment result follows.

Update:

To confirm my suspicion, I tried out an online demo that checks the distance between words in a trained word2vec embedding model:

http://bionlp-www.utu.fi/wv_demo/

Here is an example output I got with the Finnish 4B model (probably a bad choice since it is not English):

italian, bad: 0.18492977
chinese, bad: 0.5144626
mexican, bad: 0.3288326

Same pairs with the Google News model:

italian, bad: 0.09307841
chinese, bad: 0.19638279
mexican, bad: 0.16298543
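Roughly the same check can be run locally with gensim and the pretrained Google News vectors instead of the web demo. A sketch, assuming the standard distribution file is on disk (the path is an assumption; this model's vocabulary is case-sensitive, so the capitalized forms are used):

```python
from gensim.models import KeyedVectors

# Pretrained word2vec vectors trained on Google News (large file).
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

for word in ["Italian", "Chinese", "Mexican"]:
    # Cosine similarity between the nationality term and a negative word.
    print(word, "bad:", round(float(vectors.similarity(word, "bad")), 3))
```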
EB66, over 6 years ago

Just thinking out loud here...

It seems to me that if you wanted to root out sentiment bias in this type of algorithm, you would need to adjust your baseline word-embedding dataset until the sentiment scores for words like "Italian", "British", "Chinese", "Mexican", "African", etc. are roughly equal, without changing the sentiment scores of all other words. That said, I have no idea how you'd approach such a task...

I don't think you could ever get equal sentiment scores for "black" and "white" without biasing the dataset in such a way that it would be rendered invalid for other scenarios (e.g., giving a "dark black alley" a higher sentiment than it would otherwise have). "Black" and "white" is a more difficult case because the words have meanings outside of race/ethnicity.
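One crude way to quantify the gap described above before trying to close it (a sketch, mine): score each nationality term against a positive and a negative anchor word and look at the spread across terms, which a debiasing pass would try to shrink without disturbing unrelated words. The anchor words and file path are assumptions.

```python
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

terms = ["Italian", "British", "Chinese", "Mexican", "African"]
# Similarity to a positive anchor minus similarity to a negative anchor,
# as a stand-in for a per-term "sentiment" score.
score = {t: float(vectors.similarity(t, "good") - vectors.similarity(t, "bad"))
         for t in terms if t in vectors}

print({t: round(s, 3) for t, s in score.items()})
print("spread across terms:", round(max(score.values()) - min(score.values()), 3))
```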
k__, over 6 years ago

Does this mean the text examples the AI learns from are biased, and as such it learns to be biased too?

So it's not giving us objective decisions, but a mirror. Not so bad either.
ma2rten, over 6 years ago

I think the bias problem they are highlighting is very important. That said, I'm wondering if they really didn't try (as the title suggests) or if they chose this approach on purpose because it highlights the problem.

To explain what happened here: they trained a classifier to predict word sentiment based on a sentiment lexicon. The lexicon would mostly contain words such as adjectives (awesome, great, ...). They use word vectors to generalize this to all words.

The way word vectors work is that words that frequently occur together end up closer in vector space. So what they have essentially shown is that in Common Crawl and Google News, names of people with certain ethnicities are more likely to occur near words with negative sentiment.

However, the sentiment analysis approach they are using amplifies the problem in the worst possible way. They are asking their machine learning model to generalize from training data of emotional words to people's names.
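A minimal sketch of the pipeline described above (not the article's exact code): fit a classifier from word vectors to a tiny stand-in sentiment lexicon, then apply it to given names, which is where the unwanted generalization shows up. The lexicon, the two names (the article's own examples), and the file path are placeholders.

```python
import numpy as np
from gensim.models import KeyedVectors
from sklearn.linear_model import SGDClassifier

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Tiny stand-in for a sentiment lexicon: word -> +1 (positive) / -1 (negative).
lexicon = {"awesome": 1, "great": 1, "wonderful": 1, "delicious": 1,
           "terrible": -1, "awful": -1, "horrible": -1, "disgusting": -1}

words = [w for w in lexicon if w in vectors]
X = np.stack([vectors[w] for w in words])
y = np.array([lexicon[w] for w in words])

clf = SGDClassifier(loss="log_loss", max_iter=1000).fit(X, y)

# The problematic generalization step: the model was trained on emotional
# adjectives but is now asked to score people's names.
for name in ["Emily", "Shaniqua"]:
    if name in vectors:
        p_positive = clf.predict_proba(vectors[name].reshape(1, -1))[0, 1]
        print(name, round(float(p_positive), 3))
```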
User23, over 6 years ago

It would be interesting to use the Uber/Lyft dataset of driver and passenger ratings to do an analysis like this.

For any such analysis there are a great many confounds, both blatant and subtle. Finding racism everywhere could be because overt racism is everywhere, or it could be confirmation bias. It could even be both! That's the tricky thing about confirmation bias: one never knows when one is experiencing it, at least not at the time.
travisoneill1, over 6 years ago

I've heard a lot about racism in AI, but looking at the distributions of sentiment score by name, a member of any race would rationally be more worried about simply having the wrong name. Has there been any work done on that?
practice9, over 6 years ago

> fighting algorithmic racism

Reminds me of how Google Photos couldn't differentiate between a black person and a monkey, so they excluded that term from search altogether.

While the endeavour itself is good, the fixes are sometimes hilariously bad or biased (untrue).
js8, over 6 years ago

Maybe, you know, humans are simply not Chinese rooms.

Recently there was an article about recognition of bullshit: https://news.ycombinator.com/item?id=17764348

To me the article brought great insight: I realized that humans do not just pattern match. They also seek understanding, which I would define as the ability to give a representative example.

It is possible to describe a set with arbitrarily complex conditions while the set itself is empty. Take any satisfiability (SAT) problem with no solution: it is a set of conditions on variables, yet there is no global solution to them.

So if you were a Chinese room and I trained you on SAT problems by pure pattern matching, you would be willing to give solutions to unsolvable instances. Only when you actually understand the meaning behind the conditions can you recognize that these arbitrarily complex inputs are in fact just empty sets.

So perhaps that's the flaw with our algorithms: there is no notion of "I understand the input". Perhaps that is understandable, because understanding (per the above) might as well be NP-hard.
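A concrete instance of the kind of unsatisfiable condition set described above, for readers who want one:

$$ (x \lor y) \;\land\; (x \lor \lnot y) \;\land\; (\lnot x \lor y) \;\land\; (\lnot x \lor \lnot y) $$

Each clause on its own is easy to satisfy, yet every one of the four assignments to $(x, y)$ violates some clause, so the formula describes an empty set. A pure pattern matcher trained on satisfiable instances has no obvious way to notice this without reasoning about the clauses jointly, which is the gap the comment is pointing at.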
elihu, over 6 years ago

This is an interesting result:

> Note that the accuracy of sentiment prediction went up when we switched to ConceptNet Numberbatch.

> Some people expect that fighting algorithmic racism is going to come with some sort of trade-off. There's no trade-off here. You can have data that's better and less racist. You can have data that's better because it's less racist. There was never anything "accurate" about the overt racism that word2vec and GloVe learned.

I wonder if this could be extended to individual names that have strong connotations because of the fame of some particular person, like "Barack", "Hillary", "Donald", "Vladimir", or "Adolf", or if removing that sort of bias is just too much to expect from a sentiment analysis algorithm.
abenedic, over 6 years ago

Where I grew up, there is a majority group with fair skin, later (possibly incorrectly) attributed to the fact that they worked in the fields less. The minority group is darker-skinned. If you train any reasonable machine learning model on any financial data, it will pick up on the discrepancy. If it did not, I would say it is a flawed model. But that is more a sign that people should avoid such models.
gumby, over 6 years ago

Please add 2017 to the title.
b6, over 6 years ago

How to make a program that does what you asked it to do, and then add arbitrary fudge factors as the notion strikes you to "correct" for the bogeyman of bias.

Suppose sentiment for the name Tyrel were better than for Adolf. Would that indicate anti-white bias? Suppose the name Osama has really poor sentiment. What fudge factor do you add there to correct for possible anti-Muslim bias? Suppose Little Richard and Elton John don't have equal sentiment. Is the lower one because Little Richard is black, or because Elton John is gay?

What we have been seeing lately is an effort to take unmeasurable bias that is simply assumed to exist and to be unjust, and replace it with real bias, encoded in our laws and practices, or in this case, in actual code.
swingline-747, over 6 years ago

Setting aside blatant shock behaviors... if the other side, the audience, were less sensitive and not looking for the next micro-outrage, wouldn't ML chatbots evolve more pro-social values through positive reinforcement?

*It takes two to tango*... the average audience's behavior isn't blameless for the impact of its response. Also, how an AI decides to interpret an ambiguous response as desirable or not is really interesting.