Also: the closing grafs in the article, about Cisco's acquisition of Sourcefire, are particularly dumb.

Sourcefire is the commercial backer of Snort, the open source network intrusion detection system (and also the owner of ClamAV). The author of this article and his sources express surprise that Cisco would pay big money for an open-source product that anyone can use.

Cisco paid just about 10x trailing revenue for Sourcefire, a public company that had managed to dominate enterprise network security and which competed directly with products that had been cash cows for Cisco for over a decade. Cisco has for as long as I've been in the industry --- in fact, for as long as there's been that industry --- been the single most important acquirer of network security companies. They acquired security companies with the same fervor in 1998 as they do today.

Cisco's acquisition of Sourcefire might qualify as the single least interesting story in information security in the last 5 years.

Want to make a couple hundred million dollars? You too can do what Sourcefire did: start an open source project that appeals to enterprise teams who spend monopoly money to buy products (that is, start any enterprise-relevant open source project). Get thousands of people to use it. Then start a company and hire an inside sales team. Have them call company after company and ask, "Do you use our open-source project?" Sell extra stuff to the people who say "yes".
FWIW, I think that the idea of software bugs (vulnerabilities) as a product is a scary concept and a bad precedent for overall security.

Once you have legitimate corporations whose goal is to find software vulnerabilities, combine them with delivery systems, and sell them to either specific entities (e.g. the US government) or the highest bidder, I think the incentives for people involved in software development and testing get odd, and not in a good way.

For example, will we see these companies hiring ex-developers and testers from software product companies, since they might have inside knowledge of where products are weak?

Another example: are there now incentives for people who work in development or testing, and who perhaps aren't happy in their jobs, to sell knowledge of bugs or flaws to these companies? Given the prices paid, which could be several multiples of a person's annual salary, and the anonymity afforded to people who report the flaws, it could be a low-risk way to make a lot of money.

And then you have open source software, which is heavily used in a lot of commercial products that might get attacked. With this kind of thing there's a big incentive not to report bugs to the project but to sell them to a company that has no incentive to see them fixed...
"A survey of 181 attendees at the 2012 Black Hat USA conference in Las Vegas found that 36 percent of “information security professionals” said they’d engaged in retaliatory hack-backs."<p>What? Black Hat attendance is in the high thousands. A plurality of those attending are IT professionals --- people that wouldn't have the technical capability to take over a botnet even if they wanted to. Even if you broadened the definition of "hacking back", as some people do, to recon activities like port scans. No part of this anecdote makes sense.<p>For my part (I'm a security researcher by background, though that's not what I'm doing now, and I've presented at Black Hat numerous times): not only have I never met a professional who claimed to have "hacked back" anything, but I've never even met one who didn't think that was a crazy idea.<p>There is a difference between major organized efforts to bring down botnets and "hackback" the way the term gets associated with Endgame.
I interviewed with Endgame recently. Their arrogance was striking.

More topically, there's a basic problem in security: vulnerabilities have value. They have more value to people who want to use them than to people who want to close them. Unless this shifts, the current situation is only going to get worse.

Making it illegal isn't going to work. There is already a functional black market. Removing the white market will just drive more groups to the black market.

There's no easy answer here. Yesteryear's EFNet junkies have been turned into today's mercenaries and weapon designers. Cyberspace is valuable, and controlling it more so. It's a dangerous time to have interesting information.
The company profiled in the article ("ex-NSA") isn't exactly the first player in its space - e.g. VUPEN is a pretty established company (http://www.vupen.com/english/services/lea-index.php), and there have been earlier articles on this market (for instance http://www.forbes.com/sites/andygreenberg/2012/03/23/shopping-for-zero-days-an-price-list-for-hackers-secret-software-exploits/ is pretty readable).

This may be a very good book, worth reading, but it's not really news.
We need to allow corporations to fight back? Why stop at cyberspace? I think we should let multinational corporations field their own private armies as well. What could possibly go wrong? It's not like they'd ever abuse that power!
I just want to make sure I have this right.

The government hires these guys and then keeps the vulnerabilities in our software and our businesses' software secret?

They then use this to launch attacks and record our communications and actions?

I, of course, would have an opinion on this; I just want to make sure I've got this correct.

Edit: punctuation.