The dangers of letting algorithms make decisions in law enforcement

87 points by smil about 10 years ago

16 comments

okasaki about 10 years ago
> Employees at the center referred her to the online system. Uncomfortable with the technology, she asked for help with the online forms and was refused.

Seems like it was humans that failed her. I'm not sure what algorithms have to do with this.
irl_zebra about 10 years ago
I take from this that there needs to be some flexibility in how the results of the algorithms are applied. I also find the first example in the article unconvincing as a real mistake. It states that Robert McDaniel had:

> a misdemeanor conviction and several arrests on a variety of offenses—drug possession, gambling, domestic violence

then it seems like it's calling it a mistake that the algorithm

> branded Robert McDaniel a likely criminal

Maybe I'm just sheltered, but a history of arrests, drug possession, and domestic violence tells me that the person is probably a criminal (though whether that rises to the level of being one of Chicago's top 420 criminals I can't say).
jqm about 10 years ago
How does the number of people negatively affected by an algorithm compare to the number of people who would be negatively affected by a human processor?

I mean, someone could probably write hundreds of similar articles about negative interactions with callous or incompetent human officials. Having dealt with at least an average number of DMV-type officials over the years, I can't see that machines could do a whole lot worse.

I do agree with several points of the article, though. Let the algorithms be open to public critique. This is democracy and it should lead to improvement (eventually). And of course there should always be recourse to human intervention.
DanielBMarkham about 10 years ago
I'm not sure if most folks really understand the nightmare we're setting ourselves up for. It's the domestic policy equivalent of drone warfare.

The western legal system was built and functions inherently on the precondition that it's people who use, administer, and maintain it. There's a lot of slack and human interpretation built into the process, and no laws are constructed such that they are enforced in a mechanical fashion. In addition, there's the premise that the folks doing the work of enforcing the laws are virtually the same as those being policed. Finally, severely unjust or unpopular laws are many times ignored by both the population and the enforcers.

All of that goes away with machine application of criminal/administrative law. The system was not built for this.
pdkl95 about 10 years ago
"Decision-making algorithms are politics played out at a distance, generating a troubling amount of emotional remove."

This is absolutely key. Adding distance[1] between the point where a decision is made and the point where its consequences are realized makes it harder for any feedback from those consequences to affect the person making the decision. This makes the decisions worse (from lack of information) *and* the implementation worse (an error must be much larger before the feedback from that error reaches the decision maker).

You see this effect in many areas. An obvious example is the law enforcement mentioned in the article (or the military), where "just following orders", or the modern variant of "just following an algorithm", ends up causing problems.

A more interesting example might be the existence of the derivatives market and the invention of increasingly exotic financial instruments. A bank giving someone a loan has some fairly well-known possible behaviors, and is (probably) close enough to allow feedback between the parties for things like capitalism to work (if you don't like the bank's behavior, you let them know that isn't acceptable by refinancing at a different bank). On the other hand, bad decisions bundled up and hidden in collateralized debt obligations sheltered those bad decisions until the problem blew up and introduced the world to the phrase "too big to fail".

A very interesting discussion of this problem, focused on how this kind of distance relates to human *honesty* (and rationalization), is this RSA Animate featuring Dan Ariely: https://www.youtube.com/watch?v=XBmJay_qdNc

[1] measured in either number-of-hops or time
Maken about 10 years ago
Using algorithms to support decision making and putting a bad UI barrier between the users and the managers are two different things.

Anyway, public administrations should indeed publish how their algorithms work in order to ensure they reflect the official policies.
bayesianhorse about 10 years ago
This is not a problem of "algorithms" but rather of stupid policies. A programming manager at Google would have been fired for putting such obvious errors into PageRank (or whatever they call it these days).

Algorithms and data can only improve the effectiveness of these systems and agencies. However, their use has been combined with drastic funding cuts. These cuts and the resulting malfunctions aren't exactly a fundamental problem with data science.
brohoolio about 10 years ago
I've been called and asked to take a survey about my interactions with an employee at a company I do business with. I could tell that the survey as constructed would not capture my actual concerns with the business processes and would instead reflect poorly on the employee I did business with. The failure of the system would end up being used to mark the employee down even though he did a good job within the constraints he had.

It's unfortunate that these sorts of automated processes end up targeting edge cases, like things that should be covered by the ADA.
j2kun about 10 years ago
In the CS community a new (sub)subfield has emerged called "Fairness, Accountability, and Transparency in Machine Learning" (FATML). It's a young research topic, but I find it quite interesting.

http://www.fatml.org/
nitwit005 about 10 years ago
This seems to rest on the false premise that you need computers to make decisions algorithmically. If someone writes out a set of hard rules as to who can apply for a welfare program, the result will be the same whether a human or a machine makes the determination.

Long before computers existed, people complained about "rigid bureaucracy", which is effectively a complaint that government or business employees stuck to a process (an algorithm) that had some problems.
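[Editor's note: a minimal sketch of the point above. The rules, thresholds, and field names are invented for illustration and do not come from any real welfare program; the point is only that a fully written-out rule set leaves nothing to judgment, whoever executes it.]

```python
# Hypothetical eligibility rules, for illustration only.
from dataclasses import dataclass

@dataclass
class Applicant:
    monthly_income: float
    household_size: int
    has_valid_id: bool

def eligible(a: Applicant) -> bool:
    """Apply the written rules exactly as stated, with no discretion.

    Whether a clerk walks through this checklist by hand or a computer
    executes it, the determination is identical, because the rules
    leave nothing to judgment.
    """
    income_limit = 800 + 300 * a.household_size  # hard cutoff per rulebook
    if not a.has_valid_id:
        return False  # rule 1: no ID, no application
    if a.monthly_income > income_limit:
        return False  # rule 2: one dollar over the line is a denial
    return True

# An applicant a dollar over the cutoff is denied either way; the
# rigidity lives in the rules, not in who executes them.
print(eligible(Applicant(monthly_income=1101, household_size=1, has_valid_id=True)))  # False
```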
6d0debc071 about 10 years ago
I have sat across from a call centre in a government office and listened to the workers running through the written version of algorithms. I've spoken to people who worked there and heard how crushing it was to know that someone was getting screwed but to be totally unable to do anything about it because the policy dictated their reaction. And I've worked with charities and listened to the other end of those phone calls; people screaming that their kids are going to be taken away because their benefits have been delayed and they can't afford to feed them.

The underlying assumption of this piece seems to be that turning decision making over to algorithms reduces positive discretion. But the humans in these situations frequently have no more discretion than the machine does, and inefficiency also has a human cost. It seems false to me to pretend that what these algorithms are doing, at least in terms of the majority of their immediate effect, is qualitatively different.

What you're losing when you encode something as an algorithm is the insight that you get from having humans in the loop. Intuition; the things that people haven't thought to measure yet. That's the weakness in any statistical technique - you need a human to lend numbers relevance; to say what it is important to know the relationships of; otherwise they're just a sequence of events.

But you need to start off with a system that leverages human strengths in order for that criticism to make sense. Human judgement only has an advantage in a system designed to use the different sorts of value that it offers. If your call centre worker is not truly responsible for the outcome of the call, and if you don't regularly attempt to get feedback from them to inform policy decisions, then it makes no difference if they are replaced by a machine. They were being treated as one to begin with, and the value that they added to the organisation by virtue of being human - of having professional judgement - was being thrown away anyway.

All this does, in a lot of cases, is make existing flaws more obvious.

The exception I can think of to this is the criminal justice system, where there are examples of positive discretion. However, there are also examples of negative discretion there. There are many stupid laws on the books, and selectively enforcing those laws allows you to screw, more or less, whoever you want. It's not surprising that a system that would mechanically implement those laws would produce undesirable outputs; it's just that they're finally being applied to people who have the power to say something about it (and, perhaps, have their concerns taken seriously enough to alter policy).

For all that there is a loss in the case of the criminal justice system, there is also a gain: encoding something as an algorithm makes the flaws in the process more apparent.
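[Editor's note: a hypothetical sketch of that last point. The offence and the records below are invented; the idea is that once a selectively-enforced rule is written down as code, its over-breadth stops being deniable.]

```python
# Hypothetical: a "stupid law" enforced mechanically rather than selectively.
records = [
    {"name": "A", "crossed_mid_block": True},
    {"name": "B", "crossed_mid_block": True},
    {"name": "C", "crossed_mid_block": False},
]

# Human enforcement: an officer cites whoever they choose to notice.
# Mechanical enforcement: the rule applies to everyone, every time.
cited = [r["name"] for r in records if r["crossed_mid_block"]]

# When nearly everyone is technically in violation, the citation list
# itself becomes the argument against the law.
print(f"{len(cited)} of {len(records)} people cited: {cited}")
```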
EdwardCoffin about 10 years ago
This reminds me of the terrifying epistolary short story Computers Don't Argue [1] by Gordon R. Dickson

[1] online here: http://www.dave.rainey.net/calendars/dystopias/process3.html
godisdad about 10 years ago
See also: http://en.wikipedia.org/wiki/Therac-25
zby about 10 years ago
Compare and contrast this with Tim O’Reilly’s essay proposing Algorithmic Regulation: http://beyondtransparency.org/chapters/part-5/open-data-and-algorithmic-regulation/
davidgerard about 10 years ago
This sort of thing is why Smart Contracts are actually the worst idea.
soup10 about 10 years ago
I'll take the algorithms any day of the week. The sooner we remove assholes from administering the law, the better.