
The Suspicion Machine

29 points by dthal over 1 year ago

8 comments

denton-scratch over 1 year ago
There's something schizophrenic about constructing a system for choosing who to investigate, while simultaneously trying to avoid discrimination.

The entire purpose of the chooser system is to discriminate between people; they want to investigate only those people likely to be cheating. If they really want to avoid discrimination, then they should be choosing who to investigate using a straw-poll.

They have laws against certain kinds of discrimination, e.g. on the basis of race or gender. If those facts are used as input to the chooser, then race- and gender-discrimination is inevitable. There's not usually any protection against discrimination for e.g. being short, or having red hair, or speaking with a regional accent; I have no idea how such characteristics are correlated with cheating on welfare claims.
rnk over 1 year ago
This is a real problem. These algorithms are a way for us in the West to experience social-credit-type scores like we read about from China. I'm sure there's someone here who was unfortunate enough to have a name that overlapped in some way with an identified "terrorist". Don't forget that when you buy an airplane ticket, there's that always slightly worrisome option to "add your special ID number if you are incorrectly listed as a terrorist", whatever they call that. The inability to sue to identify the problem or correct it is a real loss of autonomy and freedom. I've always wondered what the impact would be if I ran into that. And also, how come the "terrorist" can't just find out someone's excuse-me number? I put terrorist in quotes not because there aren't any real terrorists, but because it is such a fraught identification, and subjective; there must be mistakes.
0wis over 1 year ago
I am not sure the data model is the problem here. I feel like the journalist tried really hard to make a case against scoring (which I do not like either), but overlooked the fact that the whole system in which it's embedded is bad. The case should not be against the technology but against its usage.

It's already looking like a bad piece of journalism in the first part:

"Being flagged for investigation can ruin someone's life, and the opacity of the system makes it nearly impossible to challenge being selected for an investigation, let alone stop one that's already underway. One mother put under investigation in Rotterdam faced a raid from fraud controllers who rifled through her laundry, counted toothbrushes, and asked intimate questions about her life in front of her children."

Here the problem is not the algorithm, it's the investigators.

Another ethical problem for me: the flagging system as a whole relied partly on anonymous tips from neighbors. I am not an expert, but I feel more at ease with a system that relies on a selection algorithm + randomness than with denunciation.

I think the problem was the processes around the algorithm, not its existence itself. The journalist seems to assume throughout the piece that the algorithm will become the main/only way to identify fraudsters. If that's the case, it's terribly wrong, because how are you training your algorithm then?

Most of the time, the piece tries to put the reader in an emotional state of fear and anger and does no real analysis, while faking it with a lot of numbers and graphs.

Sorry for the long rant, but I am surprised that this came from Wired, which I consider quite good on tech topics, and that it's on HN's 2nd page.

I am against government scoring and algorithms for legal/police cases precisely because they can be badly used by powerful people.

Am I the only one who feels this is not a good article?
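
The "selection algorithm + randomness" idea above is straightforward to sketch. Below is a minimal Python illustration, with every name (claims, model_score, budget, random_share) made up for the example rather than taken from the article: reserving a random slice of the audit budget caps how far a biased score can skew who gets investigated, and it keeps producing unbiased labels for retraining, which also answers the "how are you training your algorithm then?" question.

    import random

    def select_for_audit(claims, model_score, budget, random_share=0.2):
        """Pick `budget` claims to investigate: a fixed share chosen
        uniformly at random, the rest by descending model score.
        The random slice limits the model's influence and yields
        unbiased outcome labels for retraining."""
        n_random = int(budget * random_share)
        indices = list(range(len(claims)))
        random_idx = set(random.sample(indices, n_random))
        rest = sorted((i for i in indices if i not in random_idx),
                      key=lambda i: model_score(claims[i]), reverse=True)
        scored_idx = rest[:budget - n_random]
        return [claims[i] for i in scored_idx], [claims[i] for i in random_idx]

    # e.g. select_for_audit(all_claims, lambda c: c["risk"], budget=100)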
0xDEAFBEAD over 1 year ago
The algorithm described in this article seems very bad. But I would argue that ML risk scores can, in principle, be better than human judgment.

Humans seem more subject to bias than algorithms are. Algorithms only look at data, but humans are additionally vulnerable to stereotypes and prejudices from society.

Furthermore, using an algorithm gives voters an opportunity to have a debate regarding how best to approach a problem like welfare fraud.

Human judgment relies on bureaucrats who are often biased and unaccountable. It's infeasible for voters to audit every decision made by a human bureaucrat. Replacing the bureaucrat with an algorithm and inviting voters to audit the algorithm seems a heck of a lot more feasible.

I give the city of Rotterdam a lot of credit for the level of transparency they demonstrated in this article. If they want to be successful with algorithmic risk scores, I think they should increase the level of transparency even further. Run an open contest to develop algorithms for spotting welfare fraud. Give citizens or representatives information about the performance characteristics of various algorithms, and let them vote for the algorithm they want.

In the same way politicians periodically come up for re-election, algorithms should periodically come up for re-election too. Inform voters how the current algorithm has been performing, and give them the option to switch to something different.
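
One concrete form the "audit the algorithm" proposal could take: publish, for each candidate algorithm, its flag rate broken down by group. A minimal Python sketch of such a report, with all function and parameter names invented for the example:

    from collections import defaultdict

    def flag_rates_by_group(cases, is_flagged, group_of):
        """Fraction of each group the model flags for investigation.
        Publishing the gaps between groups gives voters something
        concrete to debate before 're-electing' an algorithm."""
        flagged = defaultdict(int)
        total = defaultdict(int)
        for case in cases:
            g = group_of(case)
            total[g] += 1
            flagged[g] += bool(is_flagged(case))
        return {g: flagged[g] / total[g] for g in total}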
lozenge over 1 year ago
Isn't the Dutch language requirement, which is codified as an eligibility criterion, already intended to create an underclass of residents?

I think it is morally justifiable as a residency requirement, but not justifiable to let people live there without being able to receive government support.

I think it's a situation where the government wants to be racist, or at least xenophobic, the citizens agree, but the law prevents them. Accenture was drafted in to get around the law.
friend_and_foe over 1 year ago
https://archive.ph/9Ibjn
croes over 1 year ago
Seems like it uses a simple equation:

Poor = suspicious
nicbou over 1 year ago
This terrifies me.

Algorithms give the rank and file the option to defer all accountability to a machine. The algorithms make mistakes. No one gets blamed or fired for trusting them in the first place.