They literally named it Skynet. They have an evil sense of humor.

Actually, using machine learning to detect terrorists isn't a terrible idea. But you are going to get an error rate, and probably a high one in the noisy real world. Maybe only 50% of the people you detect are actually terrorists. Maybe it's even worse than that. We can't even test it, because there is no validation set and the labels are unreliable.

The reasonable thing to do with that information would be to surveil them further, search their house, or arrest them. Not assassinate them without a trial.

And the more I read the details, the more alarmed I am. The 50% figure I used above may have been way too high: the base rate of terrorists is very low, and they have very little data to begin with.
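To make the base-rate problem concrete, here's a back-of-the-envelope sketch in Python. Every number in it is made up for illustration; nothing here comes from the slides:

```python
# Hypothetical numbers only: how precision collapses at a low base rate.
population = 55_000_000      # assumed size of the monitored population
base_rate = 1 / 20_000       # assume 1 in 20,000 people is an actual terrorist
true_positive_rate = 0.99    # assume an optimistic 99% detection rate
false_positive_rate = 0.005  # assume an optimistic 0.5% false-alarm rate

actual_positives = population * base_rate
actual_negatives = population - actual_positives

true_positives = actual_positives * true_positive_rate
false_positives = actual_negatives * false_positive_rate

# Precision: of everyone flagged, how many are real positives?
precision = true_positives / (true_positives + false_positives)
print(f"people flagged: {true_positives + false_positives:,.0f}")
print(f"precision: {precision:.1%}")  # under 1%, even with generous assumptions
```

Even with detection rates no real classifier could hope for, the flagged list is overwhelmingly innocent people.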
Big data analysis + mass surveillance is a frightening prospect. Of course you can train software to look for 'terrorists', but you could also train it to look for:

- whistleblowers
- minority groups (e.g. gay people, particular religious beliefs, political affiliations)
- political dissidents
- journalists whose behaviour changes
- personal vulnerabilities (affairs, mental health issues, etc.)

Think what authoritarian governments could do (and are doing) with this capability.
This story is stupid.

I'm sorry. I'm no fan of the NSA, but the premise behind it is completely ridiculous.

There is zero evidence for the repeatedly asserted idea that the list this tool generates is any kind of kill list.

It's a tool that generates indicators of people who may be worth looking at when trying to find couriers. That's a very *specific* subgroup of terrorists, and I find it entirely unsurprising that a journalist would be falsely flagged, as journalists have statistically unusual travel habits (clearly labeling him a "member of Al Qaida" is unjustified by this evidence, though).

Also, criticizing the NSA on their knowledge of statistics seems unwise. The NSA is many things, but "bad at math" isn't one of them.

Read the information yourself, and come to your own conclusions.
The thing that stood out for me is "Somewhere between 2,500 and 4,000 people have been killed by drone strikes in Pakistan since 2004".

WHAT!? That is so wrong! This stinks, and we're making a fuss about mathematics. Just read that sentence again.
This story is hugely important. It gets to a deep question everyone here should ask themselves -- beyond immediate concerns of salary, equity, and learning value, is my specific work making the world a better place or a worse place?

This story goes to the core of ethics in engineering.

It's all of 9 hours old and has 341 points -- yet it's already off the front page, where nobody will see it. You could check HN literally every day and still easily miss this story.

Meanwhile, the front page is full of unimportant links to obscure tech trivia, many of which have fewer than 20 points.

We know that HN automatically penalizes submissions containing certain words, including "NSA", in the title. Certain prolific HN users have also said that they "automatically flag" submissions they consider "political".

But I really think HN would be a better place if the front page cycled out a little slower, if stories like this were not suppressed, if they got at least one day's worth of attention and discussion.
"Obliterated a wedding because people where celebrating by shooting in the air."<p>This is the definition of evil.<p>How and why is the Pakistani government allowing this program to operate within their territories?
Oh come on, "machine learning algorithm may be killing thousands..": just insert a "may be" and your BS assertion becomes more plausible? Reading this, people get the idea that attack drones (it is rich that the article puts a picture of one at the beginning) are loaded with such software and kill based on the output of a classifier. Or that some supercomputer gives you a name and says "exterminate".

Apparently these tools only give operators some clues and save them time. If it is a false positive, they probably just ignore it. Of course, obtaining the information is a different story.
I was wondering exactly how the NSA trained models to detect terrorists, and was surprised by the level of detail in this article. So the NSA performs classification using a random forest and about 80 input features? Huh. That actually sounds a little too similar to a Kaggle contest for my liking, but I wasn't expecting that much quasi-technical information anyway.

It seems a little silly to write this whole article based on a few PowerPoint slides from years ago. Even as an amateur practitioner, I can see several obvious things that could be changed about the presented methods. I'm sure what the NSA is doing now in this regard is much further along than what was portrayed in those documents. Whether or not they should be doing it is a discussion for others to have. But it is interesting to learn a little more about the details of truly high-stakes machine learning.
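For anyone curious what that shape of pipeline looks like, here's a minimal sketch in scikit-learn. Only the rough shape (a random forest over ~80 features) comes from the slides; the data, row count, and label count below are entirely synthetic:

```python
# Illustrative sketch only: a random forest over ~80 features.
# The data here is random noise; the real features and labels are not public.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_records, n_features = 10_000, 80           # ~80 features per the slides; row count is arbitrary
X = rng.normal(size=(n_records, n_features))
y = np.zeros(n_records, dtype=int)
y[:10] = 1                                   # a hypothetical handful of labeled positives

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X, y)

# Rank everyone by predicted risk -- the "top N" list such a tool would produce.
risk = clf.predict_proba(X)[:, 1]
top_suspects = np.argsort(risk)[::-1][:20]
```

Note that scoring the same records you trained on, as this sketch does for brevity, is exactly the kind of methodological shortcut the article's critics worry about.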
Reminds me of this dialog from Spectre (http://www.imdb.com/title/tt2379713/):

> Have you ever had to kill a man, Max? Have you? To pull that trigger, you have to be sure. Yes, you investigate, analyze, assess, target. And then you have to look him in the eye. And you make the call. And all the drones, bugs, cameras, transcripts, all the surveillance in the world can't tell you what to do next.
> ... possibly resulting in their untimely demise.

I get increasingly annoyed by these kinds of euphemisms in articles discussing US actions. As if it were something humorous.

It should be something like: "... possibly resulting in them being assassinated by the US military."
We're killing people in a country we're not even at war with, over statistics. Not evidence, but statistics. People who statistically might have something against us. Since when has this been grounds for killing someone? This is utterly terrifying.

Just imagine if someone decided to do this to us.
How does Pakistan feel about all this?

This is approaching Auschwitz levels of evil.
They don't actually care whether the person is innocent. The point is to intimidate the population. They use this metadata to make it fit within the system of law, so they can cover their asses.
The argument was that in the US, they "only" collected metadata. No real phone calls. No big deal...

The power of their use of "only" metadata in Pakistan is frightening...
The founding fathers put due process into the Constitution / Bill of Rights because it's a fundamental check on government bureaucracy run amok, and it prevents physical harm from coming to people as the result of a capricious executive whim.

Now we learn that the executive has excused itself from following this process (which applies to Americans, some of whom these victims may be, even when they are not in America).

The farther open we pull the NSA lid, the more revolting the discoveries we find.

The individuals behind these decisions should be identified and tried in a court of law.
"Under the random selection of a tiny subset of less than 0.1 percent of the total population, the density of the social graph of the citizens is massively reduced, while the "terrorist" cluster remains strongly interconnected."<p>There is no indication in the slides that the feature calculations are done on the smaller subgraph. The above comment doesn't hold if the calculations are done on the entire graph and then the 100k are sampled for training.
I see a lot of doubt in these comments about the ridiculous nature of the program described in this article. People saying it's not possible.

Are you sure? https://theintercept.com/2014/02/10/the-nsas-secret-role/
The US and UK response to terrorism is so disproportionate, odious, and sinister that one wonders whether this is in fact a 'response' or something far more evil cooking below the surface.

This kind of technology necessitates mass surveillance, and it offers little more than the faint possibility that crime can be predicted. This is bogus science.

If you let your mind entertain the idea that you can predict crime, then you are already at the threshold of a stifling surveillance state. And why are we even assassinating people? What happened to due process? One by one, all these fundamental principles are set aside, massive and dubious mass surveillance infrastructures are being built, and doublespeak is comically rampant.

One can begin to understand Snowden's urgent need to act, but what about the moral compass of all the people in the NSA and of those who support the US and UK security apparatus? People need to act. Surely they cannot suddenly subscribe to values we have aggressively demonized for more than 100 years.

Who would have thought that the US and UK would now be the rogue states, and that other countries, far more secure and sure about their politics, would need to begin to isolate themselves from these dangerous totalitarian instincts.
The more I read about US activities abroad, the more I realize Chomsky is right. Are sovereign countries supposed to tolerate a certain level of drone strikes by the USA (2,500 to 4,000 people killed in Pakistan over the last decade) based on -- as now revealed -- machine learning hunches?
There are two issues here: first, some dramatic overfitting of the model; and second, the all-too-familiar garbage-in, garbage-out problem, which matters even more when the data is this sparse and poorly labeled.
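On the overfitting point, here is a minimal sketch of the failure mode (arbitrary synthetic numbers):

```python
# On pure noise, a flexible model can look perfect on its training data
# and be worthless on anything new.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_train, X_test = rng.normal(size=(500, 80)), rng.normal(size=(500, 80))
y_train, y_test = rng.integers(0, 2, 500), rng.integers(0, 2, 500)

clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_train, y_train)
print(clf.score(X_train, y_train))  # ~1.0: memorized noise
print(clf.score(X_test, y_test))    # ~0.5: no better than a coin flip
```

If nobody holds out a real validation set (and, as noted upthread, there may be no trustworthy labels to build one from), the training-set numbers are all you ever see.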
They automated the decision to kill people and called it Skynet? That sounds awfully familiar. When can we expect assistance from the future to take the NSA people out?
I was infuriated to read that they named it Skynet.
They are clearly the ones creating terror. It's so absurd!
I can't believe that they do not understand that this very act is what propagates wars and keeps them going forever.
Since it cannot pass as ignorance, I take the people who feed this system to be malicious.
A related approach that identified suspects based on their banking history was outlined in SuperFreakonomics. The chapter is reproduced in this review:

https://www.goodreads.com/review/show/90622011
Hmm:

> Behavior-based Analytics:
> - Low [cell phone] use, incoming calls only
> - Frequent handset changes
> - Frequent detach / power-down
>
> Visits to other countries
> - overnight trips
> - permanent move

I guess that makes me an extremist militant, also.
Even before reading TFA, that headline stands out like a poke in the eye. An algorithm has no agency, moral or otherwise. Algorithms don't kill people; people kill people.
If that system kills enough terrorists to make them scared and limits the inflow of new recruits, then why not? Obviously the rest of the population of Pakistan pays the price, but what do they expect if they can't (or really, don't want to) shut down terrorist activity in their own country themselves?
It's ignorant to assume that targets flagged by this program wouldn't be reviewed by a human intelligence analyst before anyone risks millions of dollars on a Predator strike, plus the risk of acting on bad information. Enough said.
So yes, a state would not be much of a state without some form of military. But why do we need separate states?

Edit: this is a genuine question about the practicality of a concept (the "nation-state") that was invented in the era of the Gutenberg printing press, and whether it has become impractical in the age of instantaneous international interconnectivity and economic globalization.

Downvoters: how are arbitrary divisions drawn on a map anything but counterproductive to synergy and efficiency in the age of globalization?