We don't accept this argument when it's turned on independent researchers. Researching vulnerabilities doesn't create vulnerabilities --- bad software engineering does.
As long as software exists, zero-days will exist. A zero-day is simply a bug in its most nascent state: one person has found it, and nobody else knows about it yet. Whether the finder is a "security researcher," a "blackhat," or a nation-state has no bearing on whether the bug exists. The bug exists even if nobody finds it. Who found it, and what they do with it, is purely political; anyone can still exploit it.

Sure, maybe the "friendlier" bug finders will responsibly disclose what they find. But there will *never* be a way to guarantee that every bug found is responsibly disclosed. Even if we convinced the FBI and NSA to "responsibly disclose" every bug they find (which will never happen), what about every other country? The hundreds of security firms? The thousands of independent hackers and "researchers"?

Zero-days will ALWAYS exist. Software will ALWAYS be exploitable. Worrying about how people behave when they find those exploits is similar to arguing about gun control: maybe we can convince *some* actors to disclose responsibly, but the bad actors will always keep exploits for themselves and use them "irresponsibly." And there will always be bad actors.

So instead of fretting about what happens when someone finds a bug, why don't we prepare for the eventuality that all bugs will be found and exploited, often without anyone's knowledge? Why don't we build security systems to be *tolerant* of exploits, instead of merely resistant to them? There is no security panacea, just as there is no reliability panacea.

We build distributed systems with the assumption that nodes will fail, and we call that "fault tolerance." We don't say a system is broken because a node fails; we say it's broken if it cannot handle a node failing.

Why can't we do the same for our security systems? Exploits are as inevitable as any other type of system failure. We need to design for *exploit tolerance* with the same enthusiasm we design for *fault tolerance*.
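To make the analogy concrete, here is a minimal sketch of what "assume nodes will fail" looks like in code (the names Node and fault_tolerant_read are toy examples of mine, not from any real system): a read succeeds as long as a quorum of replicas answers, so an individual failure is routine rather than fatal.

    import random

    class Node:
        """Toy replica that sometimes fails, as real nodes do."""
        def __init__(self, name, value):
            self.name = name
            self.value = value

        def read(self):
            if random.random() < 0.3:  # simulate an unreliable node
                raise ConnectionError(self.name + " is down")
            return self.value

    def fault_tolerant_read(nodes, quorum=2):
        """Succeed once a quorum of replicas answers; one failure is routine, not fatal."""
        answers = []
        for node in nodes:
            try:
                answers.append(node.read())
            except ConnectionError:
                continue  # an expected failure mode: keep asking the other replicas
            if len(answers) >= quorum:
                return answers[0]
        raise RuntimeError("not enough healthy replicas to reach quorum")

    if __name__ == "__main__":
        replicas = [Node("node-%d" % i, "payload") for i in range(5)]
        print(fault_tolerant_read(replicas))

An exploit-tolerant design would treat compromise the same way: assume a given parser or node will eventually be exploited, and limit the blast radius (separate credentials, sandboxed processes) so the rest of the system keeps working.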
Let's not forget that Sabu, while an informant for the FBI, supplied Jeremy Hammond with the 0day that he used to hack Stratfor et al.

No 0day, no Stratfor hack.
No FBI, no Stratfor hack.

Sometimes I wonder if penetrating other agencies and corporations was part of their game plan. The FBI was entirely behind the formation of Antisec.

Aside: another interesting observation. The FBI and Apple seem to have an oddly antagonistic relationship. One of the Antisec hacks was against an FBI laptop and led to the release of millions of Apple users' data. The FBI was recording and debriefing Sabu every day; how did they allow that to happen?
Let's all accept a depressing fact: effective cyber-security places all of us in a state of perpetual war. You cannot learn from your enemy without invasive action, and you cannot test your capabilities without constantly attacking your adversaries, whether they know it or not. We cannot simply fork another nation's GitHub repo and try out zero-days in a safe, isolated environment.

We shouldn't be so quick to rail against government zero-day stockpiling. Other branches of government are likely using these flaws for their own ends, monitoring foreign states and other entities. If we give up that power, we risk crippling our offensive capabilities more than we stand to gain from a stronger defense.

I cannot vouch for one side or the other. I am not a senior intelligence official and I do not have all the facts.