
Training Computers to Find Future Criminals

90 points by nigrioid, almost 9 years ago

23 comments

Houshalter, almost 9 years ago
I could not disagree more with these comments. Psychologists are just now starting to study the phenomenon of "algorithm aversion", where people irrationally trust human judgement far more than algorithms, even after watching an algorithm do far better across many examples.

The reality is that humans are far worse. We are biased by all sorts of things. Unattractive people were found to get sentences twice as long as attractive ones. Judges were found to hand down much harsher sentences right before lunch, when they were hungry. Conducting interviews was found to decrease the performance of human judges in domains like hiring and parole decisions, as opposed to just looking at the facts.

Even very simple statistical algorithms far outperform humans in almost every domain. As early as 1928, a simple statistical rule predicted recidivism better than prison psychologists. Such rules predict the success of college students, job applicants, outcomes of medical treatment, etc., far better than human experts. Human experts never even beat the most basic statistical baseline.

You should never simply trust human judges. They are neither fair nor accurate. In a domain as important as this, where better predictions reduce both the time people spend in prison and crime, there is no excuse not to use algorithms. Anything that gets low-risk people out of prison is good.

I believe that any rules that apply to algorithms should apply to humans too; we are algorithms as well, after all. If algorithms have to be blind to race and gender, so should human judges. If economic information is bad to use, humans should be blind to it also. If we have a right to see why an algorithm made a decision the way it did, we should be able to inspect human brains too. Perhaps put judges and parole officers in an MRI.
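A minimal sketch of what a 1928-era "simple statistical rule" looks like, assuming Burgess-style scoring (the usual reference for that date), which just sums binary risk factors against a cut-off; the factor names and threshold here are hypothetical:

```python
# Burgess-style actuarial scoring: count unfavorable binary factors and
# flag anyone at or above a cut-off. Factor names and the cut-off value
# are invented, for illustration only.
RISK_FACTORS = ["prior_offense", "no_steady_employment", "young_at_first_arrest"]

def risk_score(record: dict) -> int:
    """Number of unfavorable factors present in an offender's record."""
    return sum(bool(record.get(f)) for f in RISK_FACTORS)

def predicted_to_reoffend(record: dict, cutoff: int = 2) -> bool:
    """The entire 'model': a factor count compared against a threshold."""
    return risk_score(record) >= cutoff

print(predicted_to_reoffend({"prior_offense": True, "young_at_first_arrest": True}))  # True
```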
imh, almost 9 years ago
I think the whole idea here is frightening and unjust. We are supposed to give all people equal rights; what people *might* do is irrelevant. A person whose demographic or conditional expectation of criminality is high should be given an equal opportunity to rise above it, or else they might see that the system is rigged against them and turn it into a self-fulfilling prophecy.
moconnor, almost 9 years ago
"Between 29 percent and 38 percent of predictions about whether someone is low-risk end up being wrong."

You wouldn't win a Kaggle contest with that error rate. What's not disclosed is the percentage of predictions about whether someone is high-risk that end up being wrong. Those are the ones society should be worried about.

And those are the ones that are, if such a system is put into practice, impossible to track, because all the high-risk people are locked up. The socio-political fallout of randomly letting some high-risk people free to validate the algorithm makes this inevitable.

This leaves us in a situation where political pressure is *always* towards reducing the number of people classified as low-risk who then re-offend. Statistical competence is not prevalent enough in the general population to prevent this.

TL;DR: our society is either not well-educated enough or is improperly structured to correctly apply algorithms for criminal justice.
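A small simulation of the evaluation gap described above, under the assumption that everyone flagged high-risk is detained; all rates are invented, and the outcome field exists only because this is a simulation:

```python
# Sketch of the selective-labels problem: once everyone classified as
# high-risk is detained, their would-be outcomes are never observed, so
# the error rate on high-risk predictions cannot be measured from
# deployment data.
import random

random.seed(0)
population = [
    {"flagged_high_risk": random.random() < 0.3,
     "would_reoffend": random.random() < 0.2}   # counterfactual: unknowable in reality
    for _ in range(10_000)
]

# Outcomes are only ever observed for people who were released.
released = [p for p in population if not p["flagged_high_risk"]]
low_risk_error = sum(p["would_reoffend"] for p in released) / len(released)
print(f"measurable: low-risk predictions wrong {low_risk_error:.1%} of the time")

detained = [p for p in population if p["flagged_high_risk"]]
print(f"not measurable: {len(detained)} detained people whose outcomes are never observed")
```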
brillenfux, almost 9 years ago
The nonchalance of these people is what really terrifies me.

They just laugh any valid criticism off and start using the references "ironically" themselves.

I don't understand how they can do that. Do they not have a moral compass? Are they psychopaths?
sevenless, almost 9 years ago
The entire concept of using statistical algorithms to 'predict crime' is wrong. It's just a kind of stereotyping.

What needs to happen is a consideration of the social-justice outcomes if 'profiling algorithms' become widely used. Just as in any complicated system, you cannot simply assume reasonable-looking rules will translate to desirable emergent properties.

It is ethically imperative to aim to eliminate disparities and social inequalities between races, even if, and this is what is usually left unsaid, *judgments become less accurate in the process*.

Facts becoming common knowledge can harm people, even if they are true. Increasingly accurate profiling will have bad effects at the macro scale, and keep marginalized higher-crime groups permanently marginalized. If it were legal to use all the information to hand, it would be totally rational for employers to discriminate against certain groups on the basis of a higher group risk of crime, and that would result in those groups being marginalized even further. We should avoid this kind of societal positive feedback loop.

If you accept that government should want to avoid a segregated society, where some groups of people form a permanent underclass, you should avoid any algorithm that results in an increased differential arrest rate for those groups, *even if that arrest rate is warranted by actual crimes committed*.

"The social norm against stereotyping, including the opposition to profiling, has been highly beneficial in creating a more civilized and more equal society. It is useful to remember, however, that *neglecting valid stereotypes inevitably results in suboptimal judgments*. Resistance to stereotyping is a laudable moral position, but the simplistic idea that the resistance is costless is wrong. The costs are worth paying to achieve a better society, but denying that the costs exist, while satisfying to the soul and politically correct, is not scientifically defensible. Reliance on the affect heuristic is common in politically charged arguments. The positions we favor have no cost and those we oppose have no benefits. We should be able to do better."

– Daniel Kahneman, Nobel laureate, in Thinking, Fast and Slow, chapter 16
andrewaylett, almost 9 years ago
I like the proposal from the EU that automated decisions with a material impact must first come with a justification -- so the system must be able to tell you *why* it came out with the answer it gave -- and must carry the right of appeal to a human.

The implementation is the difficult bit, of course, but as a principle, I appreciate the ability to sanity-check outputs that currently lack transparency.
Smerity, almost 9 years ago
As someone who does machine learning, this absolutely terrifies me. The "capstone project" of determining someone's probability of committing a crime by their 18th birthday is beyond ridiculous. Either the author of the article hyped it to the extreme (for the love of everything that's holy, stop freaking hyping machine learning) or the statistician is stark raving mad.

The fact that he does this for free is also concerning, primarily because I doubt it has any level of auditing behind it. The only thing I agree with him on is that black-box models are even worse, as they have even worse audit issues. Given the complexity of making these predictions and the potentially lifelong impact they might have, there is a desperately strong need for these systems to have audit guarantees. It's noted that he supposedly shares the code for his systems; if so, I'd love to see it. Is it just shared with the relevant governmental departments, which likely have no ability to audit such models? Has it been audited?

Would you trust mission-critical code that didn't have some level of unit testing? Some level of code review? No? Then why would you potentially destructively change someone's life based on that same level of quality?

> "[How risk scores are impacted by race] has not been analyzed yet," she said. "However, it needs to be noted that parole is very different than sentencing. The board is not determining guilt or innocence. We are looking at risk."

What? Seriously? Not analyzed? The other worrying assumption is that it isn't used in sentencing. People have a tendency to seek out and misuse information even when they're told not to. This was specifically noted in another article on the misuse of Compas, the black-box system. Deciding on parole also doesn't mean you can avoid analyzing bias: if you're denying parole to specific people algorithmically, that can still be insanely destructive.

> Berk readily acknowledges this as a concern, then quickly dismisses it. Race isn't an input in any of his systems, and he says his own research has shown his algorithms produce similar risk scores regardless of race.

There are so many proxies for race within the feature set. It's touched on lightly in the article (location, number of arrests, etc.), but it gets even more complex when you allow a sufficiently complex machine learning model access to "innocuous" features. Specific ML systems ("deep") can infer hidden variables such as race. Even location is a brilliant proxy for race, as seen in redlining [1]. It does appear from his publications that these are shallow models, namely random forests, logistic regression, and boosting [2][3][4].

FOR THE LOVE OF EVERYTHING THAT'S HOLY, STOP THROWING MACHINE LEARNING AT EVERYTHING. Think it through. Please. Please please please. I am a big believer that machine learning can enable wonderful things, but it could also enable a destructive feedback loop in so many systems.

Resume screening, credit card applications, parole risk classification... this is just the tip of the iceberg of potential misuses for machine learning.

Edit: I am literally physically feeling ill. He uses logistic regression, random forests, boosting: standard machine learning algorithms. Fine. Okay... but you now think the algorithms that might get you okay results in Kaggle competitions can be used to predict a child's future crimes?!?! Anyone who even knows the hello world of machine learning would laugh at this if the person saying it weren't literally supplying information to governmental agencies right now.

I wrote an article last week on "It's ML, not magic" [5], but I didn't think I'd need to cover this level of stupidity.

[1]: https://en.wikipedia.org/wiki/Redlining
[2]: https://books.google.com/books/about/Criminal_Justice_Forecasts_of_Risk.html?id=Jrlb6Or8YisC&printsec=frontcover&source=kp_read_button&hl=en#v=onepage&q&f=false
[3]: https://www.semanticscholar.org/paper/Developing-a-Practical-Forecasting-Screener-for-Berk-He/6999981067428dafadd10aa736e4b5c293f89823
[4]: https://www.semanticscholar.org/paper/Algorithmic-criminology-Berk/226defcf96d30cf0a17c6caafd60457c9411f458
[5]: http://smerity.com/articles/2016/ml_not_magic.html
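A hedged sketch of the proxy problem described above: even with race removed from the features, a correlated feature such as location can largely reconstruct it. Synthetic data; assumes scikit-learn and numpy are available:

```python
# A model trained only on "race-blind" features recovers race anyway,
# because residential segregation makes location a strong proxy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5_000
race = rng.integers(0, 2, n)                      # binary stand-in, for illustration
zip_code = np.where(rng.random(n) < 0.9, race, 1 - race)  # segregated neighborhoods
n_arrests = rng.poisson(1 + race)                 # arrest counts correlated with race

X = np.column_stack([zip_code, n_arrests])        # note: race itself is NOT a feature
proxy_model = RandomForestClassifier(random_state=0).fit(X, race)
print(f"race recovered from 'race-blind' features: {proxy_model.score(X, race):.0%} accuracy")
```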
ccvannorman, almost 9 years ago
> Risk scores, generated by algorithms, are an increasingly common factor in sentencing. Computers crunch data—arrests, type of crime committed, and demographic information—and a risk rating is generated. The idea is to create a guide that's less likely to be subject to unconscious biases, the mood of a judge, or other human shortcomings. Similar tools are used to decide which blocks police officers should patrol, where to put inmates in prison, and who to let out on parole.

So, eventually a robot police officer will arrest someone for having the wrong profile.

> Berk wants to predict at the moment of birth whether people will commit a crime by their 18th birthday, based on factors such as environment and the history of a new child's parents. This would be almost impossible in the U.S., given that much of a person's biographical information is spread out across many agencies and subject to many restrictions. He's not sure if it's possible in Norway, either, and he acknowledges he also hasn't completely thought through how best to use such information.

So, we're not sure how dangerous this will be, or how Minority Report thoughtcrime will work, but we're damned sure we want it, because it's the future and careers will be made?

This is a very scary trend in the U.S. Eventually, if you're born poor or into a bad childhood, you will have even *less* of a chance of making it.
kriro, almost 9 years ago
Predictive policing is quite the buzzword these days. IBM (via SPSS) is one of the big players in the field. The most common use case is burglary, I suspect because it's somewhat easy (and also directly actionable). You rarely find other use cases in academic papers (though I have only browsed the literature a couple of times while preparing for related projects).

The basic idea is sending more police patrols to areas that are identified as high threat, thus using your available resources more efficiently. The focus in this area is more on objects and areas than on individuals, so you don't try to predict who is a criminal but rather where they'll strike. It sounds like a good enough idea in theory, but at least in Germany I know that research projects on predictive policing will be scaled down due to privacy concerns, even if the prediction is only area-based and not person-based (it is noteworthy that this is usually mentioned by the police as a reason why they won't participate in the research). I'm not completely sure, and have only talked to a couple of state police research people, but quite often the data also involves social media in some way, and from what I can tell that's the major problem.
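A minimal sketch of that area-based idea, assuming a simple grid model that weights past burglaries by recency and ranks cells for patrol; the events and the decay constant are hypothetical:

```python
# Rank grid cells by recency-weighted counts of past burglaries, so
# patrols target places rather than people.
import math
from collections import defaultdict

# (grid_cell, days_ago) pairs for past burglaries -- invented data
events = [((3, 4), 2), ((3, 4), 10), ((1, 2), 1), ((3, 5), 30), ((3, 4), 5)]

scores = defaultdict(float)
for cell, days_ago in events:
    scores[cell] += math.exp(-days_ago / 14)  # older incidents count for less

patrol_order = sorted(scores, key=scores.get, reverse=True)
print(f"highest-priority cells: {patrol_order[:2]}")  # [(3, 4), (1, 2)]
```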
peterbonney, almost 9 years ago
Here's something I really dislike about all the coverage I've seen of these "risk assessment algorithms": there is absolutely no discussion of the magnitude of the distinctions between classifications. Is "low risk" supposed to be (say) a 0.01% likelihood of committing another crime and "high risk" (say) 90%? Or is "low risk" (say) 1% vs. "high risk" (say) 3%?

Having worked on some predictive modeling of "bad" human events (loan defaults), my gut says it's more like the latter than the former, because prediction of low-frequency human events is *really* hard, and, well, they're by definition infrequent. If that suspicion is right, then the signal-to-noise ratio is probably too poor to even consider using these models in sentencing, and that's *without* considering the issues of bias in the training data, etc.

But there is never enough detail provided (on either side of the debate) for me to make an informed assessment. It's just a lot of optimism on one side and pessimism on the other. I'd really love to see some concrete, testable claims without having to dive down a rabbit hole to find them.
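A quick worked example of why the base rate dominates here, with entirely hypothetical numbers: even a classifier with respectable-sounding sensitivity can leave a "high risk" flag meaning only a few percent in absolute terms.

```python
# Bayes' rule on hypothetical numbers: with a rare outcome, a "high risk"
# flag can correspond to a small absolute probability.
base_rate = 0.02            # 2% of the population commits the event
sensitivity = 0.70          # fraction of true positives the model flags
false_positive_rate = 0.20  # fraction of negatives wrongly flagged

# P(event | flagged) via Bayes' rule
ppv = (sensitivity * base_rate) / (
    sensitivity * base_rate + false_positive_rate * (1 - base_rate)
)
print(f"P(event | flagged high-risk) = {ppv:.1%}")  # about 6.7%
```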
conjectures, almost 9 years ago
What is Berk's model? How well does it do across different risk bands? What variables are fed into it in the states where it is used? How does prediction success vary across types of crime, versus demographics within crime?

This article treats ML like a magic wand, which it isn't. There's not enough information to make a judgement on whether the tools are performing well or not, or whether that performance, or lack of it, is based on discrimination.

Where we do have information, it is worrying:

"Race isn't an input in any of his systems, and he says his own research has shown his algorithms produce similar risk scores regardless of race."

What?!? The appropriate approach would be to include race as a variable, fit the model, and then marginalise out race when providing risk predictions. Confounding is mentioned, but no explanation is given of how it is dealt with without doing the above; just a (most likely false) reassurance.
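A hedged sketch of the fit-then-marginalise approach the comment describes, on synthetic data; assumes scikit-learn and numpy, and uses a binary race variable purely to keep the illustration small:

```python
# Include race when fitting (so other coefficients aren't confounded),
# then average predictions over the race distribution so the score
# returned for an individual does not depend on their race.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
race = rng.integers(0, 2, n)                 # binary stand-in, for illustration
priors = rng.poisson(1.5 + 0.5 * race)       # feature confounded with race
y = (rng.random(n) < 0.10 + 0.05 * priors).astype(int)

X = np.column_stack([priors, race])
model = LogisticRegression().fit(X, y)       # race included at fit time

def marginal_risk(priors_count: int) -> float:
    """Average the fitted model's prediction over the empirical race
    distribution, removing race from the delivered score."""
    probs = [model.predict_proba([[priors_count, r]])[0, 1] for r in (0, 1)]
    weights = [float(np.mean(race == r)) for r in (0, 1)]
    return float(np.dot(probs, weights))

print(f"marginalised risk at 3 priors: {marginal_risk(3):.2f}")
```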
anupshinde, almost 9 years ago
This is like machine-introduced bias/racism/casteism... we need a new term for it. And it's based on statistically induced pseudosciences, many times similar to astrology. This is the kind of AI everyone should be afraid of.
acd, almost 9 years ago
Would the following be common risk factors for a child becoming a future criminal? Would it not be cheaper for society to invest in these at-risk children early on, rather than dealing with their actions as adults? Minority Report. What are your observations on risk factors? Have there been any social-science interviews of prisoners whose backgrounds were fed into classification engines?

Classification ideas:
* Bad parents not raising their child
* Living in a poor neighbourhood with lots of crime
* Going to a bad school
* Parents who are workaholics
* Single parent
* Parent who is in jail
nl, almost 9 years ago
For those who haven't read it, the ProPublica article on this is even better (and scarier): https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
phazelift, almost 9 years ago
It might be a better idea to first train computers to define criminality objectively, because most people cannot.
Digit-Al, almost 9 years ago
I find this really interesting. I think what most people seem to be missing is the wider social context. Think about this: if you exclude white-collar financial crime, premeditated murder, and organised crime, most other crimes are committed by the socially disadvantaged. So, if the algorithm identifies an area where crime is more likely to be committed, instead of being narrow-minded and just putting more police there to arrest people, why not institute programs to raise the socioeconomic status of the area?

People are just concentrating on the crime aspect, but most crime is just a symptom of social inequality.
mc32, almost 9 years ago
The main question, as with autonomous vehicles, should be: does this system perform better than people (however you want to qualify that)? If so, it's better than what we have.

Second, even if it's proven better (fewer false positives, less unduly biased results), it can be improved continuously.

There is a danger that people may not like the results: if we take this and diffuse it, it has the potential to shape people's behavior in unintended ways (gaming). On the other hand, this system has the potential for objectivity when identifying white-collar crime, that is, surfacing it better.
justaaron, almost 9 years ago
Gee, what could possibly go wrong, Mr. Phrenologist?

SOMEONE seems to have viewed Minority Report as a Utopia rather than a Dystopia, I'm afraid.
DisgustingRobot, almost 9 years ago
I'm curious how good an algorithm would be at identifying future white-collar criminals. What would the risk factors be for things like insider trading, political corruption, or other common crimes?
liberal_arts, almost 9 years ago
Consider the (fictional) possibility that an AI will be "actively measuring the populace's mental states, personalities, and the probability that individuals will commit crimes" (https://en.wikipedia.org/wiki/Psycho-Pass). AI may be worth the trade-off if violent crime can be almost eliminated.

Or consider the non-fictional case of body-language and facial detection at airports: what if they actually start catching terrorists?
jamesrom, almost 9 years ago
What is Bloomberg's MO with these nearly unreadable articles?
niels_olson, almost 9 years ago
Can someone just go ahead and inject a blink tag so we can get the full 1994 experience? Oh, my retina...
Dr_tldr, almost 9 years ago
I know, it's almost as if they don't consider you the sole and undisputed arbiter of the limits of technology in creating social policy. What a bunch of psychopaths!