
Bias detectives: the researchers striving to make algorithms fair

57 points | by onuralp | almost 7 years ago

8 comments

local_yokel · almost 7 years ago
It's worth pointing out that the original ProPublica investigation was conducted by journalists unskilled in statistics and machine learning. There was a convincing rebuttal posted by the actual scientists involved, which is of course ignored, since "racist AI" is the kind of headline that's just too golden to abandon.

http://www.uscourts.gov/sites/default/files/80_2_6_0.pdf
kyleperik · almost 7 years ago
The purpose of machine learning is to generalize on a large scale. I like to think of it as the equivalent of someone with years of experience in a particular area: they have seen so much that they can size up a situation in an instant from a few clues and generalizations. It wouldn't be fast if it weren't generalizing.

If you want to claim you know what fair is in any given situation, then go and hardcode your own fairness rules, because you aren't going to find "fairness" in machine learning.
Sol- · almost 7 years ago
When I was reading through the algorithmic-fairness literature some time ago, I came away a bit frustrated because, as the article mentions, the fairness definitions are mutually incompatible (though some seem more plausible than others), and it isn't a problem that can be fully solved on a technical level. The one flicker of hope is that a perfect classifier can, by some definitions, be considered fair, so at least you have something to work with: if your classifier discriminates by gender or other attributes, you should at least make it good enough to back up its bias with perfect accuracy (at which point you can investigate why inherent differences between groups seem to exist).

It's good that some computer-science researchers are willing to work in such politicized fields; it's definitely necessary. I find it admirable because I personally wouldn't enjoy those discussions.
kgwgk · almost 7 years ago
In summary: “You can’t have it all. If you want to be fair in one way, you might necessarily be unfair in another definition that also sounds reasonable.”
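The incompatibility summarized here can be made concrete with a back-of-the-envelope calculation (the numbers below are invented for illustration; this is the arithmetic behind the Kleinberg et al. and Chouldechova impossibility results, not code from the article). A classifier with the *same* true positive rate and precision in two groups still produces different false positive rates whenever the groups' base rates differ:

```python
def fpr(base_rate, tpr, precision, n=1000):
    """False positive rate of a classifier with the given true positive
    rate (recall) and precision, applied to a group of n people with
    the given base rate of actual positives."""
    positives = base_rate * n
    negatives = n - positives
    tp = tpr * positives
    # precision = tp / (tp + fp)  =>  fp = tp * (1 - precision) / precision
    fp = tp * (1 - precision) / precision
    return fp / negatives

# Identical classifier behaviour (equal recall and precision) in both
# groups, but different base rates of the underlying outcome:
print(fpr(base_rate=0.5, tpr=0.8, precision=0.7))  # group A
print(fpr(base_rate=0.3, tpr=0.8, precision=0.7))  # group B
```

Group A ends up with more than twice the false positive rate of group B, even though the classifier treats individuals with the same score identically. Equalizing the false positive rates instead would necessarily break the equal-precision (calibration-style) property, unless the base rates are equal or the classifier is perfect.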
aldanor · almost 7 years ago
Note that there's not only a potential selection-bias problem but a feedback issue as well. For instance, if the algorithm is biased toward assigning higher criminal-activity risk to black people, black people will be more likely to be checked and, as a consequence, future versions of the algorithm will be even more biased in the same direction. Debiasing in such situations is a very tough endeavour.
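The feedback loop described here can be sketched with a tiny deterministic simulation (everything below — the function, the parameter values, the update rule — is a hypothetical toy model, not from the article). Two groups have the *same* true offence rate, but the model starts out believing one group is twice as risky; checks are allocated in proportion to estimated risk, and the estimate is then re-fit only to *recorded* offences, i.e. offences among people who were actually checked:

```python
def run_feedback_loop(rounds=10, belief=(0.10, 0.20),
                      true_rate=0.10, budget=1000):
    """Both groups (1000 residents each) offend at the same true rate,
    but group B starts with double the estimated risk. Each round, a
    fixed budget of checks is split in proportion to the current risk
    estimates, and each estimate is naively re-fit as recorded
    offences per 1000 residents."""
    a, b = belief
    for _ in range(rounds):
        checks_a = budget * a / (a + b)
        checks_b = budget * b / (a + b)
        # Recorded offences scale with how hard you look,
        # not with any real difference between the groups.
        recorded_a = checks_a * true_rate
        recorded_b = checks_b * true_rate
        a, b = recorded_a / 1000, recorded_b / 1000
    return a, b

a, b = run_feedback_loop()
print(a, b, b / a)
```

Even with identical true rates, the estimated-risk ratio stays locked at 2:1 forever: the initial bias is perfectly self-sustaining, because the data the model learns from is generated by its own allocation of attention. With a more aggressive update rule the gap can widen rather than merely persist.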
thisisit · almost 7 years ago
Isn't the whole point of machine-learning algorithms to find the best spot between variance and bias? In which case, every algorithm will have some bias.

IMO, the focus should instead be on not overselling algorithms as infallible: they will have some bias and need overriding from time to time. If a system is fully automated without checks and balances we might have serious problems. A good non-ML example was discussed a couple of days ago on HN, where a person was terminated by a machine without much oversight:

https://idiallo.com/blog/when-a-machine-fired-me
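The bias-variance trade-off mentioned in the comment above can be seen in a quick toy experiment (an invented illustration, not from the thread): fitting polynomials of increasing degree to noisy samples of a sine wave, a too-simple model underfits (high bias) while a too-flexible one chases the noise (high variance), and an intermediate degree does best on noise-free test points:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # noisy training data
x_test = np.linspace(0, 1, 200)
y_true = np.sin(2 * np.pi * x_test)                     # noise-free target

mse = {}
for degree in (1, 5, 15):
    fit = Polynomial.fit(x, y, degree)                  # least-squares fit
    mse[degree] = float(np.mean((fit(x_test) - y_true) ** 2))
    print(f"degree {degree:2d}: test MSE = {mse[degree]:.3f}")
```

Degree 1 cannot represent the sine at all (pure bias), while degree 15 has enough freedom to fit the noise (variance); the sweet spot in between still carries *some* bias, which is the commenter's point.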
andrewlee224 · almost 7 years ago
Aren't the algorithms already reasonably fair? The researchers are just trying to get them to be politically correct?
paulus_magnus2 · almost 7 years ago
This will be interesting. Most actions we people take, and ALL actions corporations take, are optimised for maximal self / personal gain, not for justice (which is hard or impossible to define). This is the basis of neoliberalism. It will be interesting to see where the pressure points emerge, who applies them, and how the negotiations progress.