If we start naming and shaming the coder for each flaw instead of working on fixing the <i>process</i> that allowed it to sneak through, we'll see a chilling effect on open source software. There's a reason we have tests and code reviews and security audits...
Maybe the silver lining here is that it puts the final nail in the coffin for "many eyes make all bugs shallow" - which was always total BS from the day it was uttered. There's so much code out there, much of it highly specialized and even project-specific, that there are very few eyes looking at any particular piece of code, and not all eyes are connected to the greatest of brains.<p>Most static code analyzers could have caught this particular bug, as it requires only a fairly simple kind of reasoning about allocated vs. used length. In fact I believe it probably <i>was</i> found by static analysis before being reported by a human, which is a shame because it misses an opportunity to highlight the value of such tools. Maybe next time people will spend a little less time designing a logo and a little more time doing things that actually help (though that's a wish and what I <i>expect</i> is the exact opposite).<p>The real lesson here is that we should apply as many different tools and processes as we can to improving code quality for critical infrastructure. Code reviews are nice. Static analysis is nice. Detailed tests are nice. However, <i>none</i> of these alone is sufficient even when pursued with fanatical devotion.
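For concreteness, here is a minimal sketch of the bug class in question (hypothetical code, not OpenSSL's actual heartbeat handler): the declared payload length comes straight from the peer, and the reply is built from it without checking how many bytes actually arrived.

    #include <stdlib.h>
    #include <string.h>

    /* Simplified illustration. rec is the received record; rec_len is
       the number of bytes that actually arrived on the wire. */
    unsigned char *build_reply(const unsigned char *rec, size_t rec_len,
                               size_t *reply_len)
    {
        /* The first two bytes hold the payload length *claimed* by
           the sender. */
        size_t payload_len = ((size_t)rec[0] << 8) | rec[1];

        /* BUG: payload_len is trusted blindly. A peer can claim 64 KB
           while sending only a few bytes, and the memcpy below then
           reads far past the end of rec, leaking adjacent heap memory.

           FIX: check the claim against reality first:
               if (rec_len < 2 || payload_len > rec_len - 2)
                   return NULL;
           That check is exactly the allocated-vs-used-length reasoning
           an analyzer performs. */
        unsigned char *reply = malloc(payload_len);
        if (reply == NULL)
            return NULL;
        memcpy(reply, rec + 2, payload_len);

        *reply_len = payload_len;
        return reply;
    }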
I believe it. Crafting a weapon that can be used by both you and your enemies would be a pyrrhic victory.<p>The lesson out of this ordeal is probably to be as skeptical as possible of everything you take for granted. How do you know that you're secure? What if your assumptions are wrong? Try to invent ways to break your own assumptions. The best way to protect yourself is to try to defeat yourself.<p>Unfortunately, "ain't nobody got time for that," as they say. But if you find time, it's quite rewarding. And disconcerting. You'll wonder why we're still wrestling with these fundamental problems in 2014, and then you'll start questioning the foundations we've been relying on all along.
Strangely, nobody's tracking down the nginx developer who inserted the exact same bug into nginx just a few years back (a NUL in a header would cause the header copies, done using strncpy, to abort early and expose uninitialized memory). He must have been an NSA plant too, right?
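The pattern described, sketched from the description above rather than from nginx's actual source (strictly, standard strncpy zero-fills the tail of the destination, so the leak needs a bounded copy that stops at the NUL without padding; the sketch uses one):

    #include <stddef.h>

    void send_to_client(const char *buf, size_t n);  /* hypothetical */

    /* Bounded copy that stops at the first NUL and, unlike standard
       strncpy, leaves the rest of dst untouched. */
    static void copy_header(char *dst, const char *src, size_t n)
    {
        size_t i;
        for (i = 0; i < n && src[i] != '\0'; i++)
            dst[i] = src[i];
        /* dst[i..n-1] still hold whatever the allocator handed back */
    }

    void relay_header(char *out, const char *header, size_t len)
    {
        /* out is freshly allocated and uninitialized. An embedded NUL
           early in header makes the copy stop there... */
        copy_header(out, header, len);

        /* BUG: ...but the full declared length is sent anyway, so the
           untouched tail of out (stale heap data) goes out on the
           wire. Sending only the bytes actually copied, or rejecting
           headers containing NULs, closes the leak. */
        send_to_client(out, len);
    }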
Any C programmers here who have never failed to check their bounds? Anyone? Anyone? Because everyone has done it.<p>Now would be a nice time for one of the Lint vendors to donate copies of their product to some of the OpenSSL team members, and for the team to dedicate some resources to fixing the more important issues it finds.
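Even the free tools catch the textbook form of the mistake. A contrived example (hypothetical code) of the kind of thing lint-style analyzers have flagged for decades:

    #include <string.h>

    void save_name(const char *user_input)
    {
        char buf[16];

        /* BUG: no bounds check; any input of 16 or more bytes runs
           off the end of buf. Analyzers flag unbounded strcpy on
           sight. */
        strcpy(buf, user_input);

        /* FIX: bound the copy and guarantee NUL termination:
               snprintf(buf, sizeof(buf), "%s", user_input); */
    }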
I can't imagine what this guy must be feeling right now. I find it embarrassing enough when I am outed in my small team for producing a bug that makes it into production. To be known across the entire internet as the person who caused the largest security bug in recent times must be quite a blow.<p>I really hope it doesn't affect his career.
It may be impossible to distinguish genuine bugs from bug-backdoors, which is why it's important to start developing crypto in safer frameworks and languages. C considered harmful.
Compare with the attempted 2003 Linux kernel backdoor [1], which gave root when sys_wait4() was passed an uncommon combination of flags. The bug looked inconspicuous and honest enough.<p>[1] <a href="http://lkml.iu.edu//hypermail/linux/kernel/0311.0/0635.html" rel="nofollow">http://lkml.iu.edu//hypermail/linux/kernel/0311.0/0635.html</a>
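For reference, the change quoted in the linked thread was just:

    if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
            retval = -EINVAL;

It reads like an error check on an unusual flag combination, but the single = (assignment, not comparison) silently sets the caller's uid to 0, i.e. root, and because the assignment evaluates to 0 the error branch never runs.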
Hopefully one positive thing that will come out of this whole Heartbleed affair is that companies making extensive use of open source software for security-critical purposes will consider contributing resources to ensure that security reviews are carried out on it.<p>The cost of regularly reviewing codebases like OpenSSL would likely not be that high compared to the potential impact of a breach caused by a flaw in the software.
I just can't understand why such critical components as OpenSSL don't use static analysis tools like Coverity to catch things like this. Testing, coverage certification, static analysis: this would have been caught if those tools were being used.
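Even without commercial tooling, a regression test pins this down once someone thinks to write it. A minimal sketch, where parse_heartbeat() and its return convention are invented for illustration:

    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical parser under test: returns 0 on success, -1 when
       the record is malformed. */
    int parse_heartbeat(const unsigned char *rec, size_t rec_len);

    void test_rejects_overlong_length_claim(void)
    {
        /* The record claims a 0x4000-byte payload but carries only one
           byte after the 2-byte length field. A correct implementation
           must refuse it rather than read past the buffer. */
        unsigned char rec[] = { 0x40, 0x00, 'A' };
        assert(parse_heartbeat(rec, sizeof(rec)) == -1);
    }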
I work with both private and public sectors in the DC area.<p>One of the things that sucks about the Federal sector is that it's dominated by Microsoft and Oracle shills (the kind of IT pros who can't learn new skills unless they're spoon-fed, pre-digested, in the form of industry certification training) who do nothing but scream about the dangers of open source. Now of course we all know that the only difference between enterprise and open source security holes is that the former go undiscovered for longer, and, when discovered by the code owners, aren't disclosed even when those owners know that black-hats are aware of them...<p>But make no mistake. This fucking idiot [Edit: OK, this isn't fair, this could happen to anyone, so he's not an idiot, but why not use a static analysis tool?] who did this to OpenSSL and the idiots who let it happen are going to set open source in the Federal government back YEARS. Not because it's an actual threat, but because it will be used by the enterprise assholes as a weapon to keep selling their shitware to the risk-averse morons who make up the giant pile of middle-manager idiots that composes the Federal government.<p>I've already told my boss I'm not doing any more public sector work after my current project ends, and this is the nail in the coffin for me touching it ever again.