I like the "bank teller" analogy used in the article.<p><i>> it would be like hiring a bank teller who was trained to hand over all the money in the vault if asked to do so politely, and then entrusting that teller with the key. The only thing that would keep a bank safe in such a circumstance is that most people wouldn’t consider asking such a question.</i><p>This does not only work for deserialization issues.<p>It is a great analogy for a huge class of IT security issues!<p>Maybe we should use that one when communicating with the media. This this works much better than the usual burglary analogy. I like how it points out that this is about stupid and/or malicious behaviour (code), where the attacker (hacker) just needs curiosity, and may find this out even by accident. The attacker did not have to break something, and did not damage anything, to get into something. In particular, this makes clear that this is caused by irresponsibile behaviour of the organization and/or other entities to whom they delegate trust.<p>Even for more complicated scenarios, I like the bank teller analogy more than the classic burglary analogy. In that case, the attacker observes multuple bank tellers, and notices e.g. that if you ask the first teller for form A and put in certain words, another bank teller will accept it and give you a stamped form B, which you can show to a third teller in another branch office who will look a bit confused, but finally accept it and hand over all money to you.<p>We need to get over blaming the messengers[1], buying zerodays and declaring cyberwar. What we really need to do is to finally make our[2] computer systems secure and trustworthy, at least up to a certain minimum-level of sanity: no exec, no injection (i.e. typing/tagging), no overflows (i.e. static analysis), input validation, testing, fuzzing, you name it.<p>And this cannot work by just adding more and more complex security measures outside, but more importantly simplifying and cleaning up inside. Although rewriting software from scratch is very risky, radical refactoring is not! And every good software engineering course tells you how to do it correctly.<p>[1] security researchers, but also "amateur" hackers, or just someone running into it by accident because the security issue became so large it finally <i>had</i> to be noticed by someone.<p>[2] in the sense of: everyones!