A Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud

92 points by JeanMarcS, about 4 years ago

9 comments

Tarq0n, about 4 years ago
The article is a lot better than the headline lets on. In particular:

> Crucially, she said, we should place blame on the human individuals behind the creation and use of the algorithm rather than reify the technology as being the main driver.

> “Systems and algorithms are human-made, and do exactly what they’ve been instructed to do,” she said. “They can act as an easy and sort of cowardly way to take the blame off yourself. What’s important to understand is that often algorithms can use historical data without any proper interpretation of the context surrounding that data. With that pattern, you can only expect social problems like institutional racism to increase. The result is a sort of feedback loop that increases social problems that already exist.”

As someone in an adjacent institution, I have found this affair frustrating to watch. In my opinion, black-box algorithms have no place in government, and for this reason it's critical for the government to develop the expertise to build and understand models itself, so it isn't reliant on the trade secrets of contractors.

Interestingly, though the administration resigned over this affair, it does not look like they'll suffer much over it in the upcoming election this Wednesday. The responsible prime minister is likely to get a fifth term if things go as the polls are predicting.
urtie, about 4 years ago
Just to give you an idea of how wrongheaded this entire thing was, I have a personal anecdote relating to this shitstorm.

Back in autumn 2013 I got a letter stating that I had received too much in child care benefits, with a request to pay it back. On the back of said letter was the text 'If you are still entitled to child care benefits and want to settle this amount against upcoming entitlements, you do not have to do anything'. As I was still entitled to benefits, I proceeded to do nothing. A few weeks later I got a reminder, once again stating that if I wanted to settle, I wouldn't have to do a thing. The reminder was followed by a final notice with an opportunity to pay, once again with the exact same text on the back of the letter. So, obviously, I once again did nothing.

Then we got a 'Dwangbevel in naam des konings', i.e. a writ of execution in the name of the king. Red envelope. No added text. So I had to pay up. Now, as you may well guess, the tax service had already also started settling against the benefits that I was still entitled to! This prompted me to write a strongly worded letter, asking the tax service in no uncertain terms what I should have done differently, and requesting that they not dock me for that particular money twice. Literally the only reaction to that letter from the tax service was to silently restitute the double payment to me.

Now, I'm a reasonably well-paid and well-educated software developer who just happens to hold two nationalities. I am perfectly sure that I would not even have gotten a reminder had I held only the Dutch nationality.

I could have paid in the first place and not settled against upcoming benefits; I was just lazy. However, imagine you're scraping by on minimum wage and are put in such a situation. There is no other way you could have acted than I did, but you also would not have been able to pay up when the writ of execution arrived. And there would have been no recourse!

Add to that the fact that in many cases the people hadn't even actually been paid too much, as I had, but had merely been tagged as possibly fraudulent and put into the system for reclaiming paid benefits, so the government could take a stance of being hard on benefits fraud while still figuring out whether any fraud had taken place in the first place.
gambiting, about 4 years ago
This paragraph, holy shit. I have to wonder whether whoever wrote it (the algorithm) can look at themselves in the mirror, or whether they just think it's all for the greater good and it's mostly targeting the unwashed masses, so it's all fine.

"In one of the more egregious examples of the lack of humanity in the authorities’ approach, a report from Trouw revealed that the tax office had baselessly applied the mathematical Pareto principle to their punishments, assuming without evidence that 80 percent of the parents investigated for fraud were guilty and 20 percent were innocent."
Glavnokoman, about 4 years ago
"Inherently discriminatory because they took variables such as whether someone had a second nationality into account." Which is not necessarily wrong. If people with a second nationality are statistically more prone to cheating the system, this is a valid variable for estimating the probability that they actually cheated rather than made a mistake. What would be wrong, though, is to blindly take that probability as fact. But it is really hard to say what is actually going on there, because the article is full of emotion and carries close to zero information.
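To make the "probability is not fact" point concrete, here is a minimal sketch with assumed, illustrative numbers (the group sizes and fraud rates are hypothetical, not taken from the article): even if one group's true fraud rate is twice the other's, treating group membership as evidence of guilt still wrongly accuses the overwhelming majority of the people flagged.

    # Illustrative numbers only -- hypothetical groups and rates, not data from the article.
    population = {"single nationality": 100_000, "second nationality": 10_000}
    fraud_rate = {"single nationality": 0.02, "second nationality": 0.04}

    for group, size in population.items():
        cheats = size * fraud_rate[group]
        innocent = size - cheats
        print(f"{group}: flagging everyone as high risk accuses "
              f"{innocent:,.0f} innocent people to catch {cheats:,.0f} cheats "
              f"({innocent / size:.0%} of those flagged are innocent).")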
34679, about 4 years ago
"Henk and Ingrid pay for Mohammed and Fatima."

I really hope that at some point, the world starts talking about how Mohammed and Fatima ended up there. This is a part of what happens when you spend two decades bombing villages.
bobdupneu, about 4 years ago
For decades the Netherlands has been helping corporations and the super-rich evade taxes, at a cost of several hundred billion euros to the global economy per year.

How petty to now attack the poor in such a cowardly manner...
LorenPechtel, about 4 years ago
It sounds to me like the real problem is what appears to be a screening tool being used as evidence of guilt.

Other examples: field drug test kits really only say "this might be drugs," but prosecutors use jail to try to get people to plead guilty rather than promptly doing a proper test.

Also, the ubiquitous breathalyzer: it's actually coded into law as proof in many places, but it isn't. There is an inherent biological flaw that makes the readings inaccurate.
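To see why a screening result is not proof, here is a small sketch with assumed, illustrative numbers (they do not describe any real test or device): even a test that is right 95% of the time produces mostly false accusations when the thing being screened for is rare.

    # Illustrative numbers only: a hypothetical screening test, not a real one.
    base_rate   = 0.01   # 1% of the people screened actually have the condition
    sensitivity = 0.95   # P(positive result | condition present)
    specificity = 0.95   # P(negative result | condition absent)

    # Bayes' rule: share of positive results that are true positives.
    p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    ppv = sensitivity * base_rate / p_positive
    print(f"Share of positives that are genuine: {ppv:.0%}")  # about 16%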
FpUser, about 4 years ago
This will only stop when people in the government are actually punished for the suffering they've caused. Which is mostly never.
JangoSteve, about 4 years ago
Adding this to my running list of algorithms encoding and amplifying systemic bias, like:

* A hospital AI algorithm discriminating against Black people when providing additional healthcare outreach, amplifying racism already in the system. https://www.nature.com/articles/d41586-019-03228-6

* Misdiagnosing people of African descent with genomic variants misclassified as pathogenic, due to most of our reference data coming from European/white males. https://www.nejm.org/doi/full/10.1056/NEJMsa1507092

* When the dangers of ML in diagnosing melanoma exacerbate healthcare disparities for darker-skinned people. https://jamanetwork.com/journals/jamadermatology/article-abstract/2688587

* When Google's hate-speech-detecting AI inadvertently censored anyone who used the vernacular referred to in this article as "African American English". https://fortune.com/2019/08/16/google-jigsaw-perspective-racial-bias/

* When Amazon's AI recruiting tool inadvertently filtered out resumes from women. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

* When AI criminal-risk prediction software, used by judges in deciding the severity of punishment for those convicted, predicts a higher chance of future offense for a young Black first-time offender than for an older white repeat felon. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

And here's some good news though:

* When police wrongfully arrested a person based on a faulty facial recognition match using grainy security camera footage, without any due diligence, asking for an alibi, or any other investigation. https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig

* When the above is compounded for people of color, according to studies which show that facial recognition systems misidentify dark-skinned women 40x more often than light-skinned men (http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212). Another study showed false positives can be 10x to 100x more frequent for Asian and African American faces compared to Caucasian faces. https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software

* When an algorithm blocked kidney transplants for Black patients. https://www.wired.com/story/how-algorithm-blocked-kidney-transplants-black-patients/

* When clinical algorithms include “corrections” for race which directly raise the bar for the need for interventions in people of color, such that they then receive less clinical screening, less surveillance, fewer diagnoses, and less treatment for everything, including cancer, organ transplants, birth interventions, and urinary, blood, bone, and heart disease. https://www.nejm.org/doi/10.1056/NEJMms2004740