
Bias detectives: the researchers striving to make algorithms fair

57 points, posted by onuralp almost 7 years ago

8 comments

local_yokel, almost 7 years ago
It's worth pointing out that the original ProPublica investigation was conducted by journalists unskilled in statistics and machine learning. There was a convincing rebuttal posted by the actual scientists involved, which is of course ignored, since "racist AI" is the kind of headline that's just too golden to abandon.

http://www.uscourts.gov/sites/default/files/80_2_6_0.pdf
kyleperik, almost 7 years ago
The purpose of machine learning is to generalize on a large scale. I like to think of it as the equivalent of someone with years of experience in a particular area: they have seen so much that they can size up a situation in an instant from clues and generalizations. It wouldn't be fast if it weren't generalizing.

If you want to claim you know what fair is in any given situation, then go and hardcode your own fairness rules, because you aren't going to find "fairness" in machine learning.
Sol-, almost 7 years ago
When I was reading through some of the algorithmic-fairness literature a while ago, I came away a bit frustrated because, as the article mentions, the fairness definitions are mutually incompatible (though some seem more plausible than others), and it isn't really a problem that can be fully solved on a technical level. The one flicker of hope was that a perfect classifier can, by some definitions, be considered fair, so at least you have something to work with: if your classifier discriminates by gender or other attributes, you should at least make it good enough to back up its bias with perfect accuracy (at which point you can investigate why inherent differences between groups seem to exist).

It's good that some computer-science researchers are willing to work in such politicized fields, though; it's definitely necessary. I find it admirable, because I personally wouldn't enjoy those discussions.
kgwgk, almost 7 years ago
In summary: "You can't have it all. If you want to be fair in one way, you might necessarily be unfair in another definition that also sounds reasonable."
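The trade-off in that quote can be made concrete with a small sketch. The numbers below are entirely invented, but they illustrate the well-known tension between two reasonable-sounding definitions: a score that is calibrated within each group (a score of 0.8 means an 80% chance of being positive, for every group) cannot also equalize false-positive rates when the groups' base rates differ.

```python
# Toy illustration (all numbers invented): calibration vs. equal
# false-positive rates, for two groups with different base rates.

def false_positive_rate(scores, labels, threshold=0.5):
    """Fraction of actual negatives that the score flags as positive."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives

# Two score bins, 0.2 and 0.8. Within each bin, the fraction of actual
# positives equals the score, so the score is calibrated for BOTH groups.
group_a_scores = [0.2] * 10 + [0.8] * 10
group_a_labels = [1] * 2 + [0] * 8 + [1] * 8 + [0] * 2    # base rate 0.50
group_b_scores = [0.2] * 10 + [0.8] * 30
group_b_labels = [1] * 2 + [0] * 8 + [1] * 24 + [0] * 6   # base rate 0.65

print(false_positive_rate(group_a_scores, group_a_labels))  # 0.2
print(false_positive_rate(group_b_scores, group_b_labels))  # ~0.43
```

Both groups see a perfectly calibrated score, yet innocent members of the higher-base-rate group are flagged more than twice as often; forcing the two rates to match would necessarily break calibration.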
aldanor, almost 7 years ago
Note that there's not only a potential selection-bias problem but a feedback issue as well. For instance, if the algorithm is biased toward assigning higher criminal-activity risk to black people, black people will be more likely to be checked, and, as a consequence, future versions of the algorithm will be even more biased in the same direction. Debiasing in such situations is a very tough endeavour.
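The dynamic described above can be sketched in a few lines. All numbers here are hypothetical: two groups with identical true offence rates, where the algorithm starts with a slightly higher score for one group and is naively "retrained" on the records that its own check allocation generates.

```python
# Toy feedback-loop sketch (hypothetical numbers): equal true rates,
# unequal initial scores, retraining on self-generated records.

TRUE_RATE = 0.10                  # identical true offence rate for both groups
risk = {"A": 0.50, "B": 0.60}     # initial scores: B starts slightly higher

for generation in range(5):
    # Expected records per group: higher perceived risk -> more checks -> more records.
    recorded = {g: 1000 * risk[g] * TRUE_RATE for g in risk}
    total = sum(recorded.values())
    # Naive "retraining": the new risk score is each group's share of the records.
    risk = {g: recorded[g] / total for g in risk}

print(risk)  # {'A': 0.4545..., 'B': 0.5454...}
```

Even though the true rates are identical, the initial disparity is frozen into the data: every retraining round reproduces roughly the same 45/55 split, and if checks were allocated super-linearly in the score, the gap would widen rather than merely persist.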
thisisit, almost 7 years ago
Isn't the whole point of machine-learning algorithms to find the best spot between variance and bias? In which case, every algorithm will have some bias.

IMO, the focus should instead be on not overselling algorithms as infallible: they will have some bias and need overriding from time to time. If a system is fully automated, without checks and balances, we might have serious problems. A good non-ML example was discussed a couple of days ago on HN, where a person was terminated by a machine without much oversight:

https://idiallo.com/blog/when-a-machine-fired-me
andrewlee224, almost 7 years ago
Aren't the algorithms already reasonably fair? Are the researchers just trying to get them to be politically correct?
paulus_magnus2, almost 7 years ago
This will be interesting. Most actions we as individuals take, and all actions corporations take, are optimised for maximal self or personal gain, not for justice (which is hard or impossible to define). This is the basis of neoliberalism. It will be interesting to see where the pressure points emerge, and who negotiates and how those negotiations progress.