Approaching fairness in machine learning

65 points by optimali over 8 years ago

9 comments

Eridrus over 8 years ago
The biggest issues of bias/fairness in ML are not to do with the algorithms or results, but with the underlying data.

A trivial example: what if you trained a classifier to predict whether a person would be re-arrested before they went to trial? Some communities are policed more heavily, so you would tend toward reinforcing the bias that already exists and providing more ammunition to those arguing for further bias in the system; a feedback loop, if you will.

Or what if some protected group needs a higher down payment because the group is not well enough understood for you to distinguish between those who will repay your loans and those who won't? Maybe educational achievement is a really good predictor for one group, but less effective for another. Is it fair to use the protected class (or any information correlated with it) when it is essentially machine-enabled stereotyping?

Recently it has been noted that NLP systems trained on large corpora of text tend to exhibit society's biases: they assume that nurses are women and programmers are men. From a statistical perspective this correlation is there, but we tend to be more careful than a machine about how we use this information. We wouldn't want to constrain our search for people to hire to just those who fulfil our stereotypes, but a machine would. This paper has some details on such issues: http://arxiv.org/abs/1606.06121

I don't think there are any easy solutions here, but I think it's important to be aware that data is only a proxy for reality, and fitting the data perfectly doesn't mean you have achieved fair outcomes.
Comment #12448796 not loaded
Comment #12447862 not loaded
Comment #12449788 not loaded
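For readers who want to poke at the embedding-bias effect Eridrus mentions, here is a minimal sketch (not taken from the cited paper) that projects a few occupation words onto a crude she/he direction. It assumes gensim is installed and that the pretrained "glove-wiki-gigaword-50" vectors can be downloaded; the word list and the single-pair gender direction are deliberate simplifications.

```python
# Sketch only: probe occupation words for a gender direction in
# pretrained word embeddings. Assumes gensim and download access to the
# "glove-wiki-gigaword-50" vectors.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Crude gender direction: difference of the "she" and "he" vectors.
gender_direction = vectors["she"] - vectors["he"]

for word in ["nurse", "programmer", "engineer", "teacher"]:
    # Positive projection leans toward "she", negative toward "he".
    print(word, round(cos(vectors[word], gender_direction), 3))
```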
tlb over 8 years ago
Another recent paper on this topic: http://arxiv.org/pdf/1606.08813v3.pdf. It shows how naive lending algorithms can skew against minority groups simply because there is less data available about them, even if their expected repayment rate is the same.

It can be self-reinforcing. Imagine some new demographic group of customers appears, and without any data you make some loans to them. The actual repayment rate will be low, not because that group has a worse distribution than other groups, but simply because you couldn't identify the lowest-risk members. A simplistic ML model would conclude that the new group is riskier.

Of course, smart lenders understand that in order to develop a new customer demographic they need to experiment by lending, with the expectation that their first loans will have high losses, but that in the long run learning how to identify the low-risk people from that demographic is worthwhile. And they correct for the fact that the first cohort was accepted blind when estimating overall risk for the group.
Comment #12446856 not loaded
Comment #12447935 not loaded
Comment #12448027 not loaded
Comment #12447663 not loaded
Comment #12447211 not loaded
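To make the data-scarcity effect tlb describes concrete, here is a toy simulation (my own sketch, not from the linked paper): both groups have identical repayment distributions, but only group A has an informative score, so the loans accepted from new group B repay at roughly the population average and look riskier. Only numpy is assumed, and every number is made up.

```python
# Toy simulation: two groups with the SAME underlying repayment
# probabilities. Group A is selected using an informative score; group B
# is accepted blind, so its observed repayment rate is lower.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Identical latent repayment-probability distributions (mean ~0.8).
p_repay_a = rng.beta(8, 2, n)
p_repay_b = rng.beta(8, 2, n)

# Group A: a noisy but informative score lets us accept the safest 30%.
score_a = p_repay_a + rng.normal(0, 0.05, n)
accepted_a = score_a >= np.quantile(score_a, 0.7)

# Group B: no usable score, so the 30% we accept are effectively random.
accepted_b = rng.random(n) < 0.3

repaid_a = rng.random(n) < p_repay_a
repaid_b = rng.random(n) < p_repay_b

# Group A's accepted loans repay well above the ~0.80 population mean;
# group B's accepted loans repay at roughly the population mean.
print("observed repayment, group A:", repaid_a[accepted_a].mean())
print("observed repayment, group B:", repaid_b[accepted_b].mean())
```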
fatdog over 8 years ago
What is fairness but political accountability?

There is an old joke about how people use statistics like a drunk uses a lamp post: for support and not for illumination. Given this, we can expect people to use AI like everything else in statistics: to support the agenda of whoever is operating it while deflecting personal accountability for the results, because "artificial intelligence." It's just an obfuscated and sophisticated version of "Computer says no."

The alternative is the near-future headline, "AI confirms racists, sexists are on to something."
Comment #12448045 not loaded
wyager over 8 years ago
Everyone suggesting that we ought to legislate that machines must be illogical/suboptimal is missing the point.

If machine learning algorithms are unfairly discriminating against some group, then they are making sub-optimal decisions and costing their users money. This is a self-righting problem.

However, a good machine learning algorithm may uncover statistical relationships *that people don't like*; for example, perhaps some nationalities have higher loan repayment rates. In these cases, the algorithm is not at odds with reality; the angsty humans are. If some people want to force machines to be irrational, they should at least be honest about their motivations and stop pretending it has anything to do with "fairness".
Comment #12448388 not loaded
Comment #12448942 not loaded
Comment #12448670 not loaded
yummyfajitas over 8 years ago
After studying this issue, and learning a lot more about learning and optimization, I've come to the conclusion that the best solution [1] is probably explicit racial/sexual/other special-interest-group quotas.

Specifically, we should train a classifier on non-Asian minorities. We should train a different classifier on everyone else. Then we should fill our quotas from the non-Asian minority pool and draw from the primary pool for the rest of the students.

As this blog post describes, no matter what you do you'll reduce accuracy. But every other fairness method I've seen reduces accuracy both *across* special interest groups and also *within* them. Quotas at least give you the best non-Asian minorities and also the best white/Asian students.

Quotas also have the benefit of being simple and transparent: any average Joe can figure out exactly what "fair" means, and it's also pretty transparent that some groups won't perform as well as others and why. In contrast, most of the more complex solutions obscure this fact.

[1] Here "best" is within the framework of requiring a corporatist spoils system. I don't actually favor such a system, but I'm taking the existence of such a spoils system as given.
Comment #12448767 not loaded
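A minimal sketch of the per-group-classifier-plus-quota mechanics described in the comment above, using synthetic data, scikit-learn, and an arbitrary 20% quota; it illustrates the bookkeeping only and takes no position on whether quotas are a good idea.

```python
# Sketch: one classifier per group, quota filled from each group's own
# ranking. All data is synthetic and the quota size is arbitrary.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n):
    X = rng.normal(size=(n, 3))
    y = (X @ np.array([1.0, -0.5, 0.3]) + rng.normal(0, 1, n)) > 0
    return X, y.astype(int)

# Train one classifier per group on that group's own historical data.
X_a, y_a = make_group(5_000)
X_b, y_b = make_group(1_000)
clf_a = LogisticRegression().fit(X_a, y_a)
clf_b = LogisticRegression().fit(X_b, y_b)

# New applicant pools, each scored only by its own group's model.
pool_a, _ = make_group(2_000)
pool_b, _ = make_group(400)
scores_a = clf_a.predict_proba(pool_a)[:, 1]
scores_b = clf_b.predict_proba(pool_b)[:, 1]

seats, quota_b = 100, 20            # 20% of seats reserved for group B
top_b = np.argsort(scores_b)[::-1][:quota_b]
top_a = np.argsort(scores_a)[::-1][:seats - quota_b]
print(f"admitted {len(top_a)} from group A and {len(top_b)} from group B")
```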
drivingmenuts over 8 years ago
Either you allow an algorithm to be ruthlessly fair, or you introduce bias and never get the problem solved correctly, because someone, somewhere will still find a way to gripe about the amount of bias when, inevitably, it goes against them, or is perceived to go against them due to lack of knowledge. Then you wind up bikeshedding over the bias and not the actual problem.
rubyfan over 8 years ago
I am actually optimistic about Big Data's effect on equality.

Small data is actually kind of the problem. When you have limited ability to process data, or limited data density, then your segmentation ability is limited to coarse attributes like state, county, zip code, credit score, whether you own a home, etc.

Big data processing, big bad ML algorithms, and the ubiquity of data are making advanced segmentation available, which arguably allows for more equitable outcomes.
drpgq over 8 years ago
Bayes and discrimination law don't seem like good partners.
denzil_correa over 8 years ago
> As a result, the advertiser might have a much better understanding of who to target in the majority group, while essentially random guessing within the minority.

If this is the case, then it should be detected, and ML should NOT be used for the minority class. There are many classifiers out there which work on one-class problems.
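One hedged way to read that suggestion in code: check whether the supervised model beats chance within the minority group, and if it doesn't, fit a one-class model on the observed positives instead. The synthetic data, the 0.55 AUC threshold, and the choice of scikit-learn's OneClassSVM are all illustrative assumptions, not a prescription.

```python
# Sketch: fall back to a one-class model when the supervised classifier
# is no better than chance within a group. Data and threshold are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)

# Minority group: features carry almost no signal about the label, so a
# supervised model degenerates to guessing.
X_min = rng.normal(size=(300, 4))
y_min = rng.integers(0, 2, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X_min, y_min, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

if auc < 0.55:  # essentially random guessing within the minority
    # Model only the positive examples we have actually observed.
    occ = OneClassSVM(gamma="auto").fit(X_tr[y_tr == 1])
    flagged = occ.predict(X_te) == 1  # +1 means "resembles known positives"
    print(f"supervised AUC {auc:.2f}: fell back to one-class model, "
          f"flagged {flagged.sum()} of {len(X_te)} test points")
else:
    print(f"supervised AUC {auc:.2f}: supervised model is usable")
```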