We live in shitty knee-jerk reactionary times, but did anyone else see his tweet at the time? At best, it seemed in poor taste. At worst, the outcome seems depressingly predictable.<p>I don't know what I'm trying to contribute here, except that whilst I have no problem with the EFF working on this, their article seems nearly as shrill and over-reactionary as the airline's own shrill, over-reactionary response to what (admittedly, in hindsight) could easily have been interpreted as a threat by an over-zealous corporate drone blind to smiley-face emoticons.
The tweet in question: <a href="https://twitter.com/Sidragon1/status/588433855184375808" rel="nofollow">https://twitter.com/Sidragon1/status/588433855184375808</a>
I have a problem with the often-used phrase "legitimate researchers", because it suggests that certain freedoms should only apply to certain people.<p>"Legitimate researcher" is not a specific job; researching is an activity any citizen can and should be free to conduct within the confines of the law, and all of that is "legitimate".<p>The whole "legitimate researcher" framing creates a huge loophole through which the powers that be could create some kind of registered-researcher status, with the obvious consequences for everyone else.
Reminds me of this 2012 story about two British tourists being barred from their flights for tweeting that they were going to "destroy America" (slang for "having a blast"):<p><a href="http://www.bbc.com/news/technology-16810312" rel="nofollow">http://www.bbc.com/news/technology-16810312</a><p>I wonder how they connect the tweets to the people. Do they actively search Twitter for keywords, and when they get a hit, dig into it until they've found a name, which they then check against their passenger lists? There are probably shortcuts they can use, but it still seems strange to me.
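Purely to illustrate the kind of workflow speculated about above, here is a minimal sketch of matching keyword-flagged tweets against a passenger manifest. Everything in it is an assumption invented for the example (tweets, names, keywords, helper functions); nothing here reflects any real airline or Twitter system.

```python
import difflib

# Invented example data: (author display name, tweet text) pairs already
# pulled from a keyword search, plus an invented passenger manifest.
TWEETS = [
    ("Alex Example", "Can't wait to destroy America next week!"),
    ("Sam Sample", "Lovely weather for the flight tomorrow."),
]
MANIFEST = ["EXAMPLE, ALEX", "SAMPLE, SAM", "PLACEHOLDER, PAT"]
WATCH_KEYWORDS = {"destroy", "bomb", "hijack"}


def normalise(name: str) -> str:
    """Lower-case a name and sort its parts so 'EXAMPLE, ALEX' ~ 'Alex Example'."""
    return " ".join(sorted(name.lower().replace(",", " ").split()))


def flag_passengers(tweets, manifest, keywords, cutoff=0.8):
    """Return (manifest entry, tweet) pairs where a keyword-hit tweet's
    author name fuzzily matches a name on the manifest."""
    flagged = []
    norm_manifest = {normalise(p): p for p in manifest}
    for author, text in tweets:
        if not any(k in text.lower() for k in keywords):
            continue  # tweet doesn't contain a watched keyword
        match = difflib.get_close_matches(
            normalise(author), norm_manifest, n=1, cutoff=cutoff
        )
        if match:
            flagged.append((norm_manifest[match[0]], text))
    return flagged


if __name__ == "__main__":
    for passenger, tweet in flag_passengers(TWEETS, MANIFEST, WATCH_KEYWORDS):
        print(f"{passenger}: {tweet!r}")
```

The fuzzy name match (sorting name parts, then `difflib.get_close_matches`) is only there because manifests and Twitter display names rarely share a format; in practice any such system would presumably have far more signals to work with, which is exactly the shortcut question raised above.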
So, for aspiring infosec people: can someone explain how he could crack the encryption of EICAS? Commenters on various articles claim that the 737 never had EICAS, or maybe they mean that the "Oxygen Mask On" light is of course <i>not</i> connected to the internal avionics network.<p>Are there people who know this stuff better and have pointers? I would love to know more.
Really dumb.<p>Really dumb of this security consultant to have bragged about tampering with airplane control systems in the middle of a flight.<p>Really dumb of EFF to make a <i>cause célèbre</i> of him.<p>EFF's analysis of this situation seems to revolve around the consultant's intent. He's a security researcher, ergo not a real threat, and undeserving of scrutiny.<p>I'd have thought that EFF would be better acquainted with pentesters by now. Anyone who spends a lot of time with pentesters knows that when it comes to disrupting or disabling critical systems, intent doesn't have much to do with the outcome of a pentest. We break shit <i>all the time</i> without trying. We break shit even when we're trying not to. Smart clients who have spent the last decade working with pentesters often have e-l-a-b-o-r-a-t-e rules of engagement designed to avoid prod disruption. We still break shit in prod, even when we follow the letter of the rules.<p>So this goofy tweet the consultant sends: is it what you'd expect right before a terrorist crashes a plane? Of course not. But is it exactly what you'd expect right before some idiot trips a bug that does something to force an emergency landing? It absolutely is.<p>Is it outside the realm of possibility that some control system somehow bridged to airplane wireless would have a problem that would allow a passenger to deploy the oxygen masks? It is not. Would that design flaw be idiotic? Yes it would. Does the idiocy of that design flaw mean it's unlikely to be there? No it does not. <i>Virtually every system you interact with in the world has idiotic design flaws</i>. Wait, that's not a question. "Does virtually every system..." YES. YES THEY DO.<p>So imagine that, just like in pretty much every pentest ever, this consultant is merely poking around trying to see what functionality is exposed to him through this design flaw. No intention to make anything happen at all. Now imagine he purely by accident does manage to, I don't know, deploy oxygen masks. No harm done (stipulate nobody on the flight has a severe heart condition). Plane integrity undamaged. Plane fully capable of continuing along its itinerary. Nonetheless, what's the likely outcome here? Unplanned emergency landing.<p>There probably is no such vulnerability. But then you have to ask yourself: who in United's flight operations chain of command is qualified to assess whether there is? Really, who in the entire flight safety chain of command, from flight captain through FAA to DOJ, is? There aren't that many people in the world who know how EICAS messages work. All they have to work with is the hypothetical. "Unexpected behavior found in in-flight wireless. Tinkering in progress!" That's a threat!<p>I think the thing that frustrates me most about this story is the fact that it's probably not possible to launch anything more than nuisance attacks from the vantage point of a passenger. And yet because of our (admirable and effective) attitude with regard to flight safety, those nuisance attacks are all economically devastating. In other words, this kind of "research" is unhelpful.<p>Where EFF made me flip out this time: <i>Nevertheless, United’s refusal to allow Roberts to fly is both disappointing and confusing. As a member of the security research community, his job is to identify vulnerabilities in networks so that they can be fixed.</i> Wat. United's decision here is extremely easy to understand: they do not want to offer service to someone who was willing to disrupt a flight to make a point.
Meanwhile: the "security research community" does not deputize its members, make them swear an oath, and give them a little tin badge. No part of this guy's "job" gave him the right to tamper with the computer systems on an aircraft. If EFF thinks that's what it means to be a vulnerability researcher, they are broken. They cannot advocate effectively for legitimate research while promoting the idea of special rights for people who call themselves security researchers.
I second the motion that this is dumb. But the weakness of airplane security is not unknown. Numerous presentations have been given at Black Hat and DEF CON over the last few years, and they generally received good responses. But does anyone know whether those presenters ever contacted the airlines or aviation authorities before they went on stage?
'Corporate types' lack a sense of humour at the best of times, but that isn't what is going on here.<p>It's the 1-in-100, or 1-in-1,000,000, chance that the tweet wasn't a joke but a real threat. They can't risk a situation where they knew about it, didn't take it seriously, and hundreds died.
United Airlines is THE worst.<p><a href="http://www.nytimes.com/2013/01/29/business/passenger-vs-airline-policy-stand-offs-in-the-air.html" rel="nofollow">http://www.nytimes.com/2013/01/29/business/passenger-vs-airl...</a><p>In my case they <i>almost</i> apologized for having had Federal Air Marshals detain me.
The war against security researchers ("hackers") has begun, and I think the reason is that in an "information war" the hackers are a threat.<p><a href="http://blog.erratasec.com/2015/01/obams-war-on-hackers.html" rel="nofollow">http://blog.erratasec.com/2015/01/obams-war-on-hackers.html</a><p>Note: they keep saying "HACKERS" and not "criminals"!