The list is mostly the kind of tactical mitigations that get bypassed a lot by private individuals and academia. Clever people write one, clever people break one, rinse, repeat. The untested theory is that they would be really hard or impossible to bypass in combination. The kind of people who could test that have mostly been smashing things like Chrome or mobile platforms, where there are enough users to justify the effort in terms of fame, money, etc. The real mitigation here is obscurity: using a platform hardly anyone uses that is also harder than average to target. The security measures layered on top add some benefit, but obscurity is the main reason attacks aren't attempted much.

A better list would start with the design techniques and assurance activities that produced systems with few to no vulnerabilities during pentests by well-funded, knowledgeable attackers, on top of what survives in the field under heavy attention. In the '80s and '90s, those techniques included: precise specifications of behavior or security policy; ways of proving or testing that the code meets them (sketched at the end of this comment); hierarchical layering with simple coding to facilitate analysis; small kernels with most code deprivileged; memory-safe languages where possible; verification that the object code matches the source, with no compiler errors or subversion; partitioning of GUIs, filesystems, and networking to limit apps' effects on each other; covert channel analysis of the entire system; secure repos containing these artifacts, with secure transfer to users; and the option for users to re-run the analyses or rebuild the kernel themselves for independent replication.

Each of these techniques found or prevented many vulnerabilities in the systems they were applied to. They even became mandatory requirements under the first security certification, the TCSEC; Trusted Xenix in 1990 used some of them for that reason. Unlike often-bypassed mitigations, each of these methods still works today, and some work even better thanks to tooling improvements. The BSDs largely ignore these methods in order to maintain legacy compatibility with an insecure architecture, unsafe code, and configuration scripts that can be just as risky. That's unsurprising: early attempts at applying strong methods to UNIX, like UCLA Secure UNIX, showed that covert channels and similar problems were built into the UNIX design. You couldn't fully secure a UNIX without breaking legacy compatibility in lots of ways, on top of a significant performance hit from memory safety and context switching. That led high-security projects to just virtualize UNIX/Linux on top of a secure isolation kernel. Projects attempting to follow some of these lessons in low-privilege architecture or language choice include GenodeOS, the Muen separation kernel, seL4, JX OS, and ExpressOS for mobile. EROS was an interesting older one that added persistence on top of a capability-based kernel.

I figure someone should mention the methods that stopped the NSA's hackers in various evaluations, since they're strangely not on the list.
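
To make the "precise specification plus proving/testing it against the code" item concrete, here's a rough, self-contained sketch. It's my own toy example, not taken from any of the systems above, and names like allowed_by_policy and ReferenceMonitor are invented for illustration: the security policy is written as an executable predicate, and the implementation is checked against it over many generated cases. The old high-assurance projects (and seL4 today) went further with actual proofs, but the idea of a machine-checkable spec is the same.

    # Toy sketch: policy-as-spec checked against an implementation.
    # Not from any TCSEC-era system; all names here are made up.
    import random
    from dataclasses import dataclass

    LEVELS = ["unclassified", "confidential", "secret", "top_secret"]
    RANK = {name: i for i, name in enumerate(LEVELS)}

    @dataclass(frozen=True)
    class Request:
        subject_level: str
        object_level: str
        op: str  # "read" or "write"

    def allowed_by_policy(req: Request) -> bool:
        """The spec: simple Bell-LaPadula-style rules.
        No read up (simple security property), no write down (*-property)."""
        if req.op == "read":
            return RANK[req.subject_level] >= RANK[req.object_level]
        if req.op == "write":
            return RANK[req.subject_level] <= RANK[req.object_level]
        return False

    class ReferenceMonitor:
        """The implementation under test. In a real system this would be the
        kernel's access-check path; here it's a stand-in with the same intent."""
        def check(self, req: Request) -> bool:
            if req.op == "read":
                return RANK[req.subject_level] >= RANK[req.object_level]
            return req.op == "write" and RANK[req.subject_level] <= RANK[req.object_level]

    def random_request() -> Request:
        return Request(
            subject_level=random.choice(LEVELS),
            object_level=random.choice(LEVELS),
            op=random.choice(["read", "write"]),
        )

    if __name__ == "__main__":
        monitor = ReferenceMonitor()
        for _ in range(10_000):
            req = random_request()
            # Every decision the implementation makes must match the spec exactly.
            assert monitor.check(req) == allowed_by_policy(req), req
        print("implementation agrees with the policy spec on 10,000 random requests")

Random testing like this only gives evidence, not proof; the point is that once the policy is precise enough to execute, you can mechanically compare the code against it, and the proof-based projects closed the remaining gap with formal verification.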