I don't really buy the comparison that what CERT did is similar to a university-sponsored DDoS. I think a better parallel is the Dan Egerstad case. He ran a Tor exit node and analyzed all the plaintext traffic leaving it, and ended up collecting a ton of sensitive usernames and passwords. He tried to contact some of the affected people by e-mail, but they ignored him, so he posted a bunch of the passwords on his blog. He was promptly arrested (and eventually released). At the time, the security community was outraged that an obviously well-intentioned researcher was being harassed by the police for doing his job. The response is a lot different now, for reasons I don't really understand.<p>I do wish both sides would acknowledge this is a tricky issue. On the one hand, if I run a Tor exit node or relay, it is my node and it seems like I'm allowed to do with it as I please. At the same time, it also seems obviously unethical (maybe illegal?) to harvest passwords off an exit node or to dole out vigilante justice to Tor users I don't like.<p>One other thing to keep in mind here is that SEI is a DoD-funded center. It may be nominally affiliated with CMU, but all its money comes either from the DoD or from external grants awarded to the researchers at SEI. So CMU the private research university and SEI the DoD-funded research center have very different obligations to the public. It's important not to conflate the two.<p>The big question is this: what are our responsibilities as security researchers, especially when we're working on "live" software systems? Green seems to be suggesting some form of review board that pre-approves experiments on live targets. Maybe this is what we need, but be careful what you wish for: the bad guys don't have review boards.
<i>> But there's also a view that computer security research can't really hurt people, so there's no real reason for sort of ethical oversight machinery in the first place.</i><p>Worse: there's a view that people who get owned "deserved it." Our industry, and its academic attachments, have a really strange vindictive streak toward those it should be looking out for. (Which is not to say that it should be looking out for people swapping child porn--but what about the thousands and thousands of people who were <i>not</i>?)
It would have been more ethical if the university had not blocked the "researchers" from disclosing the vulnerability at Black Hat. (Though even then they were not following responsible disclosure practices). The fact that Tor had to guess what the vulnerability was and the "researchers" still have not released their paper is unethical and probably illegal.
Seems like more research needs to go into preventing traffic confirmation attacks: <a href="https://blog.torproject.org/blog/tor-security-advisory-relay-early-traffic-confirmation-attack/" rel="nofollow">https://blog.torproject.org/blog/tor-security-advisory-relay...</a><p>"A traffic confirmation attack is possible when the attacker controls or observes the relays on both ends of a Tor circuit and then compares traffic timing, volume, or other characteristics to conclude that the two relays are indeed on the same circuit. If the first relay in the circuit (called the "entry guard") knows the IP address of the user, and the last relay in the circuit knows the resource or destination she is accessing, then together they can deanonymize her."<p>Interesting technical problem. They patched this particular vector, obviously, but similar attacks are still possible; the post itself said, when it was published, that more research needed to be done. The specific method the attackers used to send signals from one end of the circuit to the other no longer works, but statistical methods presumably still do. Sort of like this:<p><a href="https://mice.cs.columbia.edu/getTechreport.php?techreportID=556&format=pdf" rel="nofollow">https://mice.cs.columbia.edu/getTechreport.php?techreportID=...</a><p>Seems like a very difficult problem to solve.
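To make the "statistical methods" point concrete, here is a toy sketch of what a volume-based confirmation check could look like. Everything here (function names, window sizes, the 0.8 threshold) is illustrative and invented for this sketch, not taken from any real attack tooling: an observer at the guard and an observer at the exit each record packet timestamps, bin them into fixed time windows, and correlate the resulting volume series.

```python
def volume_series(timestamps, window=0.5, duration=60.0):
    """Count packets per fixed-size time window (seconds)."""
    n_bins = int(duration / window)
    bins = [0] * n_bins
    for t in timestamps:
        i = int(t / window)
        if 0 <= i < n_bins:
            bins[i] += 1
    return bins

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy) ** 0.5

def likely_same_circuit(guard_ts, exit_ts, threshold=0.8):
    """Flag two observed flows as correlated when their traffic
    volume patterns over time match closely."""
    return pearson(volume_series(guard_ts), volume_series(exit_ts)) > threshold
```

A real adversary would also have to handle latency offsets, cell padding, and many concurrent flows, but the core idea is just this: bursty traffic shapes survive the circuit, so no protocol bug is needed.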
What is so surprising here? The DoD is the largest funder of research grants in the US.
Pretty much every university is doing research for a US agency, from cyber security to lasers for missile defense.
I find it very hard to believe that this is the first time a university has conducted computer security research on live targets.
From their website, I get that CMU/SEI/CERT works with both DHS and DoD.[0] Although I don't see anything specific about the FBI, it's not too much of a stretch. As DHS has grown and evolved since 9/11, distinctions between police and military have weakened. A decade ago, CERT's involvement would have been carefully shielded through parallel construction.<p>In my opinion, this is a wake-up call for the Tor Project. The attack would have been obvious if they'd been tracking the requisite circuit parameters. Ironically enough, it strikes me that the Tor network needs something like CERT for detecting attacks.<p>[0] <a href="https://www.cert.org/about/" rel="nofollow">https://www.cert.org/about/</a>
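On "tracking the requisite circuit parameters": the Tor advisory notes the attack was injected via RELAY_EARLY cells traveling in the wrong direction, i.e. back toward the client, which never happens in normal operation. A detector for that specific anomaly is almost trivial. This is a hypothetical sketch, not real Tor code; the cell representation is invented for illustration:

```python
def circuit_is_tagged(cells):
    """cells: iterable of (cell_type, direction) tuples observed on one
    circuit, where direction is "inbound" (toward the client) or
    "outbound" (away from it).

    RELAY_EARLY cells are only ever supposed to travel outbound, so any
    inbound RELAY_EARLY cell marks the circuit as suspect."""
    return any(cell_type == "RELAY_EARLY" and direction == "inbound"
               for cell_type, direction in cells)
```

The hard part, of course, is not this check but having monitoring in place on enough relays to run checks like it at all, which is the commenter's point.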
The article raises the issue for computer security, but computer science is used in many other fields where it could have ethical implications. Self-driving cars are the first that come to mind, but surely other applications have issues too. So I agree with his point and think it should be extended.
I think we have to assume that if a government can hack it, it will try. Perhaps it's sad that a university would help them, but it's also to be assumed that they're going to try it in some way regardless.
I'm willing to bet that the NSA has already hooked into the Tor network and added its own nodes to monitor traffic--unless it's simply not possible to snoop on the data.
The response by Patio11, arguing that this was acceptable penetration testing, was beyond stupid.<p>Being a university researcher does not mean you can take money, attack some random company, and then say LOL JK, just doing "research". Universities have enormous computing power and resources available for research. Just because I have access to a thousand-node cluster does not mean I can launch a DDoS attack against some company and then claim "research". This is equivalent to those YouTube videos that justify assault and other egregious behaviour at the end by claiming "social experiment" or "prank".
I still remember the researchers who worked with Facebook on that social science experiment, and the discussion about the guy who tweeted about airplane security, so the response from HN in this case baffles me somewhat.
None of this should be much of a surprise.<p>There has always been the possibility of bad actors being involved with Tor. In addition, the Tor software is complicated enough that there undoubtedly <i>will</i> be bugs in it.<p>This is "you bet your life" serious. For anonymity to hold, both the architecture and the implementation of the software must be <i>perfect</i>. It's pretty easy for one bug to mean "game over".<p>People using Tor just don't have a chance when it comes to dealing with the NSA, FSB, GCHQ or any similar state actor. Even allowing for inevitable government bureaucracy and incompetence, the disparity in resources can be staggering. A big agency can easily, easily afford to devote 100 full-time people to one high-value target. Those are not odds I'd like to bet against.<p>In the bigger picture, the NSA doesn't give a rat's ass about either Silk Road or child pornography (at least I hope it doesn't). Which is why an "academic institution" was enlisted to help out the FBI with this.<p>But if I were a dissident or protester in Turkey, Syria, Russia, or any of a large number of authoritarian countries, I certainly wouldn't use Tor. Not if my life and the life of my family were at risk.