This scandal and the issues at the Dutch tax office run much deeper than just this one set of problems: they also maintained illegal lists of 'potentially fraudulent' people, on which hundreds of thousands of people ended up who had done nothing wrong, for reasons such as having the 'wrong' surname.<p>It's rotten to the core, and to date there have been zero consequences for the perps while the damage continues to pile up. Our government has fallen over this and - surprise - the exact same players were re-elected without taking any responsibility for any of it.
From reading the article, it seems the algorithm in the title is just a scapegoat.
This is racial discrimination dating way back, purposely built into the algorithm and sustained for six years.<p>> Having dual nationality was marked as a big risk indicator, as was a low income.<p>> In 2020, Trouw and another Dutch news outlet, RTL Nieuws, revealed that the tax authorities also kept secret blacklists of people for two decades, which tracked both credible and unsubstantiated “signals” of potential fraud. Citizens had no way of finding out why they were on the list or defending themselves.<p>> An audit showed that the tax authorities focused on people with “a non-Western appearance,” while having Turkish or Moroccan nationality was a particular focus. Being on the blacklist also led to a higher risk score in the child care benefits system.
> Having dual nationality was marked as a big risk indicator<p>Dual nationality indeed has its ups and downs. FWIW, our children have dual nationality and, on top of that, are all bilingual.<p>When we enrolled them in their first school, the school insisted on tweaking the enrollment data we'd submitted to list the locally spoken language second, putting the non-local language first as "mother tongue" (and yes, that's exactly how it's described round here <sigh>). This meant the school would get extra money from the government, from a pot intended to support migrant kids with language issues.<p>The first time this happened we pushed straight back and asked to get it corrected (all our kids were born locally and speak the local language and dialect like everyone else round here), but the school wasn't having it. They couldn't understand why we didn't want to support the school in getting extra money.<p>What did we learn from this? If the incentives and/or the system are broken by design, it can get very hard for anyone who wants the system corrected later.
> The Dutch tax authorities now face a new €3.7 million fine from the country's privacy regulator.<p>How does this work? Wouldn't the government be fining the government, paying itself with citizens' tax revenue?<p>Also, this doesn't seem to serve as a warning at all. Unless all the politicians and civil servants involved are removed from their jobs, barred from public positions, and given hefty fines, this is actually a clear signal encouraging more of this behavior from governments. Until there are clear consequences enacted by the populace, governments will continue to behave this way.
In the documentary <i>Alone Against the State</i>, five mothers who became victims of the Dutch childcare benefits scandal share the chilling stories of how their lives and those of their children were destroyed by a state system deprived of humanity.<p>It is a Dutch documentary, unfortunately without English subtitles:<p><a href="https://www.2doc.nl/documentaires/series/2doc/2021/alleen-tegen-de-staat.html" rel="nofollow">https://www.2doc.nl/documentaires/series/2doc/2021/alleen-te...</a>
This really isn't about "algorithms" so much as it is about xenophobia and ethnic discrimination. The "algorithm" is just another plausible-deniability cover for institutionalized discrimination against minorities and the poor, who are least able to fight back.<p>Those in charge know exactly what they are doing when they create their "blacklists" and "targets". They know the names, they hold the biases and prejudices, and they want to target those minority groups. The poorer and less able to fight back, the better. They know the kind of damage they are doing to those people's lives, and sadistically enjoy inflicting it.<p>It's only when they get caught that they start running around pulling excuses from their orifices, and "it was the algorithm" is just one of them.
Reminds me of this: <a href="https://www.bbc.co.uk/news/business-56718036" rel="nofollow">https://www.bbc.co.uk/news/business-56718036</a>
700 Post Office franchisees were made into criminals by faulty accounting software.
> Authorities penalized families over a mere suspicion of fraud based on the system’s risk indicators<p>This is the real story here. The algorithm is incidental.
I'm Dutch, and this really is disgusting. But you have to understand that the Dutch population has no real problem with this: the exact same political parties were elected a few months after this scandal. It's just democracy, really.
Around here we have a law where any decision taken by the public sector with the help of a computer program must (upon request) be explained in plain language, with the calculations detailed step by step.<p>Of course, this indirectly prohibits the use of the kind of program that seems to have been used in this case, since instead of an explainable algorithm they use a neural network, which is a black box even to its trainer (or architect).<p>Sadly, I hear that our own tax authorities have started to use them <i>anyway</i>, but then they have long had this tendency of considering themselves "special", as if the laws didn't apply as much to them...
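For illustration, a minimal sketch of what a step-by-step explainable scorer could look like; the rules, weights, and field names here are invented for the example, not taken from any real tax system:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    score: float = 0.0
    steps: list = field(default_factory=list)  # plain-language audit trail

    def apply(self, reason: str, points: float) -> None:
        self.score += points
        self.steps.append(f"{reason}: +{points} (running total: {self.score})")

def assess(applicant: dict) -> Decision:
    """Score an application with explicit rules so every point can be explained.
    The rules and weights below are hypothetical."""
    d = Decision()
    if applicant.get("income", 0) < 20_000:
        d.apply("income below 20,000", 1.0)
    if applicant.get("late_filings", 0) > 2:
        d.apply("more than two late filings", 2.0)
    return d

decision = assess({"income": 15_000, "late_filings": 3})
print(decision.score)             # 3.0
print("\n".join(decision.steps))  # the step-by-step explanation the law requires
```

A neural network gives you the score but not the steps, which is exactly why it can't satisfy a law like this.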
Another issue that contributed to this scandal is that the rules for benefits (not just social benefits) are often such that you have to request them up front at the beginning of the year, but if at the end of the year it turns out that your situation changed midway and you no longer had the (full) right to the benefit, you have to pay it back along with a fine. It's really easy to be unaware of this.
I don't understand the article.<p>Suppose they had an algorithm for computing a risk score. Ok, so someone is determined to have a "high risk score". That doesn't mean that they've done anything wrong; at most, it means prioritizing them for closer scrutiny. That could still be discriminatory (e.g. focus on Turkish/Moroccan immigrants), but in itself - it's not supposed to lose anyone their benefits.<p>Also, the article says:<p>> Citizens had no way of ... defending themselves.<p>Why could they not sue the tax authorities in court, asking the court to order the tax authorities to withdraw their payment demands? If there was no evidence of them having evaded taxes, I mean.
This is utterly bonkers. The discrimination sounds completely blatant and is it actually saying any case flagged by the algorithm was considered and treated as proven fraud rather than just raised for investigation?!? That’s absurd.
That's not really an algorithm-usage issue but an administrative one. Really. The problem does not arise from an ineffective or wrong system but from the absence of quick, powerful, and effective human backup.<p>A small example: well-designed automated procedures tend to be really effective *on average*, which means they are good. BUT they tend to have hard-to-spot (or even obvious, but nobody cared up front) loopholes and corner cases that make not the procedures themselves but their use a nightmare. In those cases the solution is simple: a human who picks up every issue not handled automatically (see the sketch below).<p>The *administrative* issue is this: those who choose automation must know it can fail, and so must be prepared with normal, regular, and effective backup.<p>Personally, if one day my country's tax agency asks me for 100k euros due to a clear error, I see no particular scandal in that; anyone can make errors. But I expect to have a phone number to call where someone picks up quickly, and in little time we can sort out the issue. It's not a life-or-death thing, the system is complex, issues happen, and that's it. If I can't call because no one answers, or they answer but are powerless, or there is no quick solution, then the scandal is not the erroneous claim but the absence of means to sort it out rapidly.
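A sketch of that backup pattern, assuming the automated check can express its own confidence; the names and thresholds are invented:

```python
from enum import Enum

class Outcome(Enum):
    APPROVE = "approve"
    DENY = "deny"
    HUMAN_REVIEW = "human_review"  # the backup path described above

# Hypothetical thresholds: act automatically only at the confident extremes;
# everything in the grey zone goes to a human case worker.
AUTO_APPROVE_BELOW = 0.2
AUTO_DENY_ABOVE = 0.95

def route(risk_score: float) -> Outcome:
    if risk_score < AUTO_APPROVE_BELOW:
        return Outcome.APPROVE
    if risk_score > AUTO_DENY_ABOVE:
        # Even here, a real system should arguably notify a human
        # before issuing a repayment demand on its own.
        return Outcome.DENY
    return Outcome.HUMAN_REVIEW

print(route(0.5))  # Outcome.HUMAN_REVIEW
```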
The perils of false positives and making decisions based on probability. These governments are punishing innocents and hoping the guilty are among them. Horrifying.
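To put numbers on this (illustrative figures, not from the article): with a low base rate of fraud, even an accurate-sounding classifier flags mostly innocent people.

```python
# Base-rate arithmetic: 1% of applicants commit fraud, the detector
# catches 90% of them, and falsely flags 5% of the innocent.
population = 100_000
fraud_rate = 0.01
true_positive_rate = 0.90
false_positive_rate = 0.05

fraudsters = population * fraud_rate              # 1,000
caught = fraudsters * true_positive_rate          # 900
innocent = population - fraudsters                # 99,000
falsely_flagged = innocent * false_positive_rate  # 4,950

precision = caught / (caught + falsely_flagged)
print(f"{falsely_flagged:.0f} innocents flagged vs {caught:.0f} fraudsters")
print(f"Only {precision:.0%} of flagged people are actually guilty")  # ~15%
```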
Sounds a bit like Australia's illegal Robodebt scheme:<p><a href="https://en.wikipedia.org/wiki/Robodebt_scheme" rel="nofollow">https://en.wikipedia.org/wiki/Robodebt_scheme</a><p>The Australian system was simpler than the Dutch one: it compared social benefits paid against annual income reported to the ATO, averaged across the year. That was a stupidly wrong calculation; for people with inconsistent income, the averaged figure is much higher than what they actually earned while on benefits, so people were hounded by debt collectors for nonexistent debts. A$1.2 billion in false debts for a nation of 25 million people, and a good number of associated suicides.<p>Of course, there was no real accountability, the system having been introduced by the previous government, which is now in opposition. Money has been paid back, but everybody involved still has their job.
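A toy illustration of the averaging flaw, with invented numbers: someone who earned well for half the year and nothing while on benefits looks, after annual averaging, as if they had income during the benefit period.

```python
# Hypothetical worker: earns A$2,000 per fortnight for half the year,
# then nothing while on benefits for the other half.
fortnights_worked, fortnights_on_benefits = 13, 13
income_while_working = 2000

annual_income = fortnights_worked * income_while_working  # A$26,000
averaged = annual_income / (fortnights_worked + fortnights_on_benefits)  # A$1,000

# Robodebt-style logic attributed that average to the benefit period too,
# even though the person truthfully declared A$0 while unemployed.
apparent_undeclared = averaged * fortnights_on_benefits
print(f"Phantom 'undeclared' income: A${apparent_undeclared:,.0f}")  # A$13,000
```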
> "An audit showed that the tax authorities focused on people with “a non-Western appearance,” while having Turkish or Moroccan nationality was a particular focus".<p>This has nothing to do with algorithms, mates. It had to do with how it wasn't tested and monitored. It appears that it was just rolled out live.<p>> "Fraud prediction and predictive policing based on profiling should just be banned"<p>And here are the usual luddists pandering on the confused using wrong arguments. Sigh.
From what the article spells out, using algorithms isn't precisely the risk; using them in a horrendously unpracticed and uninformed way seems to be.
> Chermaine Leysner’s life changed in 2012, when she received a letter from the Dutch tax authority demanding she pay back her child care allowance going back to 2008. Leysner, then a student studying social work, had three children under the age of 6. The tax bill was over €100,000.<p>A student can get €100,000 for child care?<p>That is amazing.<p>For me, that is the most surprising thing in the entire article.
Born and raised in the EU's south, I grew up looking up to Northern Europe with regard to its efficiency, values, etc.<p>This is news I'd expect to hear from some backwater country I'd never heard of. But I'm afraid it is just one more data point showing how wrong I was and how bad things have gotten in the heart of the EU, and maybe the whole Western world.
I'm on welfare in France, and sometimes I wonder if the government is going to find some excuse to stop giving me money.<p>Could be "not applying to enough job offers", while most employers are always complaining about candidates not being skilled enough, which is why unemployment is so high.
When are we going to bite the bullet and require governmental licensing for algorithms? I know HN hates the idea because we're all too busy using algorithms to ruin people's lives for profit, but it needs to happen and it will happen sooner or later.
If I calculate correctly, the Dutch child care allowance in that case works out to around €2,000 per month, which seems generous. What is the average child care allowance in other EU countries?
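For what it's worth, a back-of-envelope check against the article's own figures (a €100,000 demand received in 2012, reportedly covering the period from 2008, for three children):

```python
total_demand_eur = 100_000
months = 4 * 12  # roughly 2008 through 2012
children = 3

per_month = total_demand_eur / months  # ~2,083 eur/month in total
per_child = per_month / children       # ~694 eur/month per child
print(f"~EUR {per_month:,.0f}/month total, ~EUR {per_child:,.0f}/month per child")
```

So the headline figure is closer to €700 per child per month, before asking whether the demand itself was even justified.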
"Risks of using algorithms"? Clearly the Enlightenment has ended, because this is a headline that would only make sense in the age of the Inquisition.
Nadine Dorries (UK Secretary of State for Digital, Culture, Media and Sport) was right, we should simply get rid of algorithms. They are clearly harmful /s<p>[0] - <a href="https://www.huffingtonpost.co.uk/entry/nadine-dorries-microsoft-algorithm-twitter_uk_62331aaee4b0d39357c37f9c" rel="nofollow">https://www.huffingtonpost.co.uk/entry/nadine-dorries-micros...</a>
> That’s not good enough, argues Renske Leijten, a Socialist member of the Dutch parliament<p>Exactly, and especially in the public sector at the EU level this could be vastly more damaging. The warnings were loud enough and people didn't listen or care, so many will probably have to feel the consequences themselves before anything changes. This case is probably too small in scale for that, even though thousands were affected.
I hate how the correlation between income/country of origin and fraud probability is dismissed as "racist" by at least Amnesty International.<p>Maybe we should first do a proper correlation and causality analysis and see what it yields, instead of closing our eyes for the sake of political correctness?