This is a pattern we see time and time again: companies hiding behind "we can't tell you how it works, it's a trade secret, you just have to trust us that it does." And these companies land exclusive government contracts. Police drug tests, electronic voting machines, and now police face recognition.

We need to demand transparency from any company that receives this kind of special treatment, and require them, at a minimum, to disclose a statistical analysis of their solution. How often is it wrong? How was it verified? If they can't do that, then no deal, and no taxpayer-funded boondoggle.
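To be concrete about what "statistical analysis at a minimum" could mean, here's a rough Python sketch (the function and the labels are hypothetical, not any vendor's actual data): publish the false positive and false negative rates measured on an independent test set, so the public can see how often the system is wrong in each direction.

    # Illustrative only: the sort of minimal error-rate disclosure a vendor
    # could be required to publish (made-up labels, not real data).

    def error_rates(true_labels, predicted_labels):
        tp = fp = tn = fn = 0
        for truth, pred in zip(true_labels, predicted_labels):
            if truth and pred:
                tp += 1
            elif not truth and pred:
                fp += 1
            elif not truth and not pred:
                tn += 1
            else:
                fn += 1
        return {
            # how often does it flag someone who shouldn't be flagged?
            "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
            # how often does it miss someone it should have flagged?
            "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
        }

    print(error_rates([1, 0, 0, 1, 0], [1, 1, 0, 0, 0]))

Nothing exotic: a confusion matrix on held-out data, published alongside a description of how that data was collected.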
Three of the cases cited are incidents where state organizations are buying AI which then benefits that state organization at the expense of its citizens: people were arrested; others got fewer benefits.

These systems are black boxes. The software companies have a financial incentive to sell them. The programmers have a financial incentive to give the customer what they *want*, not what is honest and true. If this software meant the customer would have to pay out *more* benefits, how many states would buy it?

The same thing happens when the product is *not* AI. AI is a product. Manufacturers of the product are liable. The product should be open to investigation.

When Ford Pintos were killing people, the Ford Pinto could be examined. The 737 MAX can be examined.

AI can't be examined. A lot of the time, the decisions it makes can't even be *explained*.

Companies are using AI as a shield. Someone here said the other day that they think people are actually making a lot of the decisions Google makes, and saying it was algorithmic absolves them of the responsibility to explain those decisions.

This is not a good way forward. You can't say, "I don't know *why* the machine is hurting people."

It's hurting people. Shut it off.
"Algorithmic Decision System" is not a good term. Some of the systems that are described this way do not make decisions, strictly speaking. For example, facial recognition systems don't make decisions- they make <i>identifications</i>. They are classifiers, yes? Planners, game-playing algorithms, decision trees and decision lists, etc, those are systems that are commonly thought of as making "decisions"- but those are very rarely the subject of scrutiny of AI systems these days.<p>Take for instance a system that is used to determine whether a person is in risk of recidivism. The system will cough up some number, probably a float from 0 to 1. The number will be _interpreted_ as a probability that the person will recidivate. Then, based on this _interpetation_ a decision will be made by the person or persons using the system, whether to treat the person as having a high risk of recidivism or not. The system hasn't decided anything at that point- it's the person using the system that has made a decision.<p>The matter is complicated somewhat by the existence of systems that <i>incorporate</i> AI algorithms in a more general automated decision process. For example, self-driving cars use image recognition algos to identify objects in their path but navigation decisions are not taken by the image recognition algos! However I'd wager that those kinds of integrated systems are not what most people think of when they speak of "algorithms" making "decisions". But I may well be wrong.
The title is misleading -- few or none of the examples given in this article, as far as I can tell, use AI or machine learning. They're just "automated systems" in the sense of computer systems that execute regular business rules. For example, the system at issue in the K.W. v. Armstrong case discussed in the article wasn't an AI system; it was just a pretty amateurish, ad-hoc Excel spreadsheet.

The report quoted in the article (the 2019 AI Now "Litigating Algorithms" report) shares the same basic problem, making no serious attempt to distinguish between AI and non-AI systems.
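For what it's worth, by "regular business rules" I mean something like the following (a made-up eligibility formula, not the actual Idaho spreadsheet): hand-written arithmetic and thresholds, with no learning or statistical model anywhere.

    # Hypothetical example of "regular business rules" automation -- fixed
    # formulas written by a person, no training data or model involved.
    # (Made-up rates and caps; not the real spreadsheet at issue in the case.)

    def weekly_care_hours(assessed_need_score: int, lives_alone: bool) -> float:
        hours = assessed_need_score * 1.5   # fixed conversion rate
        if lives_alone:
            hours += 4                      # fixed adjustment
        return min(hours, 40)               # hard cap set by policy

    print(weekly_care_hours(assessed_need_score=18, lives_alone=True))  # 31.0

Opaque and harmful when done badly, sure -- but calling it "AI" muddies what actually went wrong.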