One of my big concerns is whether these algorithms are institutionalizing racism, backed by legal decisions that make them impossible to challenge.

After all, the algorithm has been trained on recidivism data collected in a world where racism skews arrest rates, conviction rates, and sentencing.[1] That means the algorithm is almost certainly baking in a racial bias. Now, I'm sure they aren't foolish enough to include "race" as one of the input factors, but other correlated factors will let the algorithm keep enforcing this racism, only now with legal immunity.

[1] Do I really need to footnote this? http://www.huffingtonpost.com/kim-farbota/black-crime-rates-your-st_b_8078586.html is one source that addresses all of these, but there are many, many others.
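To make the proxy problem concrete, here's a minimal sketch on synthetic data: race is deliberately excluded from the model's inputs, but a correlated feature (a made-up zip-code flag, here) lets the learned risk scores differ by race anyway. All the feature names and numbers are invented for illustration.

```python
# Minimal sketch (synthetic data, hypothetical feature names) showing how a
# model can learn race through a correlated proxy even when "race" is dropped.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

race = rng.integers(0, 2, n)                         # protected attribute, NOT a model input
zip_code = (race + rng.random(n) > 0.8).astype(int)  # proxy: strongly correlated with race
prior_arrests = rng.poisson(1 + race, n)             # biased label source: skewed arrest rates

# "Recidivism" label is derived from arrest data, so it inherits the skew.
rearrested = (prior_arrests + rng.random(n) > 2).astype(int)

X = np.column_stack([zip_code, rng.random(n)])       # race itself is excluded
model = LogisticRegression().fit(X, rearrested)

# Average predicted risk still differs by race, via the proxy feature.
for group in (0, 1):
    print(group, model.predict_proba(X[race == group])[:, 1].mean())
```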
> These algorithmic outputs inform decisions about bail, sentencing, and parole. Each tool aspires to improve on the accuracy of human decision-making that allows for a better allocation of finite resources.

It's really not clear to me that much is gained from making very precise decisions about bail and sentencing. Trying to predict the future is a fool's errand, whether a judge does it or a computer. It'd be better to just set fair, uniform standards (particularly for bail, which should be granted presumptively unless unique circumstances are present).

Unfortunately, using machine learning for sentencing is just the tip of the iceberg. "Scientism" is rife in the criminal justice system. The U.S. Sentencing Guidelines, for example, are utter gibberish. Sentences are calculated to the month using complex formulas: http://www.ussc.gov/guidelines/2016-guidelines-manual/2016-chapter-4

> The total points from subsections (a) through (e) determine the criminal history category in the Sentencing Table in Chapter Five, Part A.

> (a) Add 3 points for each prior sentence of imprisonment exceeding one year and one month.

> (b) Add 2 points for each prior sentence of imprisonment of at least sixty days not counted in (a).

> (c) Add 1 point for each prior sentence not counted in (a) or (b), up to a total of 4 points for this subsection.

> (d) Add 2 points if the defendant committed the instant offense while under any criminal justice sentence, including probation, parole, supervised release, imprisonment, work release, or escape status.

> (e) Add 1 point for each prior sentence resulting from a conviction of a crime of violence that did not receive any points under (a), (b), or (c) above because such sentence was treated as a single sentence, up to a total of 3 points for this subsection.

But it's not like this is based on an empirical statistical model correlating sentences with recidivism or deterrence effects. It's classic scientism: believing that an algorithmic sentence based on completely arbitrary rules is somehow better than an arbitrary sentence handed out by human judgment.
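For what it's worth, the quoted subsections really are just a point tally. Here's a rough transcription in Python to show how mechanical the "formula" is (simplified; the real guideline has many more qualifications, and the field names are my own):

```python
# Rough, simplified transcription of the quoted subsections (a)-(e).
from dataclasses import dataclass

@dataclass
class PriorSentence:
    months_imprisonment: int
    counted_single: bool = False   # treated as a "single sentence" with another prior
    crime_of_violence: bool = False

def criminal_history_points(priors, under_criminal_justice_sentence):
    points, c_points, e_points = 0, 0, 0
    for p in priors:
        if p.counted_single:                   # got no (a)/(b)/(c) points itself...
            if p.crime_of_violence:
                e_points = min(e_points + 1, 3)  # (e) violent priors, capped at 3
        elif p.months_imprisonment > 13:
            points += 3                          # (a) exceeding one year and one month
        elif p.months_imprisonment >= 2:
            points += 2                          # (b) at least sixty days (~2 months)
        else:
            c_points = min(c_points + 1, 4)      # (c) everything else, capped at 4
    if under_criminal_justice_sentence:
        points += 2                              # (d) offense while under sentence
    return points + c_points + e_points

# e.g. one 18-month prior, one non-custodial prior, offense while on probation:
print(criminal_history_points([PriorSentence(18), PriorSentence(0)], True))  # 3 + 1 + 2 = 6
```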
> While this can be the fastest route, the GPS's algorithm does not concern itself with factors important to truckers carrying a heavy load, such as the 43's 1,300-foot elevation drop over four miles with two sharp turns.

I know this is somewhat off topic, but the lack of advanced options for GPS routing is such a PITA. It would be trivial to add check-boxes for things like:

"I'm towing a trailer, don't make me take dumb lefts across multiple lanes, avoid clusterfawks, and don't make me take unnecessary turns"

"Yes, I'm wealthy enough to afford an iPhone; that doesn't mean I want you to send me through a $5 bridge toll"

"I'm taking a road trip, send me on a route that uses ten fewer roads even if it takes twenty more minutes; I don't want to have to look for a turn every 10 minutes"

I know tons of options aren't good for the UI, but just hiding all that stuff behind an "advanced preferences" menu or something would be nice.

Even just a simple tie-in to a weather API that increases the cost of route features that are a PITA in snow would be nice (no, I don't want to stop on a downhill to take a >90-degree left across 40mph traffic in snow, thank you very much).
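Most of these preferences boil down to adjusting edge costs before running the shortest-path search. A minimal sketch, with made-up penalty values and edge attributes:

```python
# Minimal sketch: routing preferences as cost adjustments on road-graph edges.
# Penalty values and edge attributes are made up for illustration.
import heapq

def edge_cost(edge, prefs, snowing=False):
    cost = edge["travel_minutes"]
    if prefs.get("towing") and edge.get("unprotected_left"):
        cost *= 5                                # strongly discourage, don't forbid
    if prefs.get("avoid_tolls"):
        cost += edge.get("toll_usd", 0) * 10     # a toll dollar "costs" ten minutes
    if prefs.get("road_trip"):
        cost += 2                                # flat per-edge tax favors fewer roads
    if snowing and edge.get("steep_turn"):
        cost *= 3                                # weather API bumps risky features
    return cost

def route(graph, start, goal, prefs, snowing=False):
    # Plain Dijkstra over the preference-adjusted costs.
    heap, seen = [(0, start, [start])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, edge in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(heap, (cost + edge_cost(edge, prefs, snowing), nxt, path + [nxt]))
    return None
```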
I wonder if the algorithm is evidence-based and learns from the results of prior decisions.

If so, it might eventually discover that in the USA incarceration is very strongly linked to recidivism, in which case the algorithm might refuse to incarcerate many convicts.

Which is arguably exactly what the algorithm should do, namely what politicians will not or cannot: employ evidence to advance the methods and improve the outcomes of the criminal justice system.
Playing devil's advocate, I'd say that the main problem with employing AI is that it will expose just how bad the decisions humans make are. This is exactly the kind of case where computers, relying only on hard data, would do much better than humans.

The article complains that the AI is a black box to the defendant. How is that any different from a judge's brain? You can't peek into his mind to figure out what is behind the decision. A judge can give some justifications, but you won't know if those are the real reasons or if the decision is mostly based on the defendant's skin color, socioeconomic background, or clothing.
Based on some studies I've seen, human judges are terrible due to basic human nature, so I'd like to see something happen to make things more objective. Of course, the algo would have to be open, as opposed to the proprietary software being used now.

For example, just having your sentencing fall before lunch or at the end of the day results in harsher punishment, simply because the judge is hungry or tired.

The same goes for other basic things, like women getting lighter sentences, minorities harsher ones, etc.
Instead of asking that its use be stopped, why not ask that it be "supervised" until its results beat those of the average judge in a given area of law?
AI can only perform as it has learned from its data set. In other words, it preserves the status quo within a few percent of an intended target, unless it's given the clear go-ahead to just keep learning, at which point it seems to fail at random.
I wonder whether it would be possible to get a copy of the software so as to figure out the best possible responses to a presentencing interview, in order to get the most favorable computation.

Like: Yes, I'm very social. I play bridge, take my kids to soccer, I'm a member of the PTA. Yes, I exercise. I have a weight set at home, ride my bicycle, play racquetball at the gym. No sir, I don't do drugs, never touched them. No sir, I don't drink either. Yes sir, I do have a degree, two in fact!

Just in case you might need it at a presentencing interview, of course.
Using traditional machine learning techniques for this purpose is a non-starter and completely unacceptable. Neural networks are just a black box; they don't produce an inspectable justification or reasoning. The best you can say is that the model correctly predicts recidivism in X% of cases for some sample.

I'm not opposed to using other algorithmic methods, but the algorithm needs to be transparent, though that would be difficult to achieve outside of some pretty tightly controlled parameters. We can't currently build a system that can take into account arbitrary facts about a case and weigh ethical implications.
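As a contrast, here's roughly what a transparent alternative could look like: a points-based score where every factor, weight, and per-factor contribution is published and auditable. The factors and weights below are made up for illustration, not a real instrument:

```python
# Sketch of a "transparent" risk score: every factor, weight, and contribution
# is visible, so a decision can be inspected and challenged.
WEIGHTS = {
    "prior_convictions": 2,
    "age_under_25": 1,
    "currently_employed": -1,
}

def score(defendant):
    # Return the total AND the per-factor breakdown as an audit trail.
    contributions = {k: WEIGHTS[k] * defendant.get(k, 0) for k in WEIGHTS}
    return sum(contributions.values()), contributions

total, breakdown = score({"prior_convictions": 2, "currently_employed": 1})
print(total)      # 3
print(breakdown)  # {'prior_convictions': 4, 'age_under_25': 0, 'currently_employed': -1}
```

Unlike a neural network, a defendant could look at the breakdown and argue that a specific factor was recorded wrongly or weighted unjustly.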
Even if the software were statistically perfect and somehow immune to our cultural biases, it still should not be used. An individual stands before the judge, not a statistical demographic group. The recidivism probability of an imaginary statistical individual is irrelevant; what is relevant is the current state of that actual individual. Relying on statistical models is both lazy and unfair.
"Simple rules for complex decisions" Use machine learning techniques to allow humans to make complex decisions: <a href="https://arxiv.org/abs/1702.04690" rel="nofollow">https://arxiv.org/abs/1702.04690</a>
It could also lead to a kind of self-fulfilling prophecy: by ignoring the individual in the decision-making process, the groups being targeted will learn over time that their personal efforts to reform are a waste of time.