I was involved with OpenBSD (and FreeBSD) security in the '90s, and I mostly agree with the conclusion of this article while not really admiring the logic it uses to get there.

In 2010, it seems to me, there are basically three schools of OS security, each of which hates the other two:

* There's the OpenBSD model, detailed in this article, which suggests that the way to secure an operating system is to audit, simplify, and harden the code, and that new security features are mostly unnecessary.

* There's the SELinux-style MAC model, which suggests that if every component of the operating system can be sandboxed and its interactions carefully prescribed, we can get to a place where individual code bugs won't matter, so long as we've got a tiny, ultra-carefully audited reference monitor we can rely on.

* There's Brad Spengler's GRSecurity model, different from the OpenBSD model in embracing new, user-visible features, and different from SELinux in that it doesn't rely entirely on a MAC-based security kernel.

There's something to be said for all three of these approaches, but if you're going to go all-in on one of them, Spengler seems to have landed closest to the mark. Both OpenBSD and GRSecurity are "exploit-aware" security models: they're both built assuming that there's no way to secure an operating system without keeping abreast of what people are actually doing to break systems. But OpenBSD has picked a fight with computer science that it can't win: its model depends on shipping bug-free distributions --- which is why its security claims get more and more specific over the years.
As I said before when something like this came up: "The problem with security by policy is that the policy is always wrong."

Of course, as tptacek alludes, if you honestly believe you can make a correct policy, you don't like that answer.

If you think making a policy is easy, here's a simple question: should /bin/ls be allowed network access?
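The ls question is less trivial than it looks. A bare directory listing needs no sockets, but `ls -l` maps UIDs to usernames through getpwuid(), and on a host that resolves accounts through LDAP or NIS that lookup travels over the network. A sketch of the "obvious" policy answer, written as hypothetical SELinux type-enforcement rules (the type names `ls_t`, `ls_exec_t`, and `user_t` here are made up for illustration, not taken from any shipping reference policy):

```
# Hypothetical TE sketch: confine ls to a domain with no network access.
# ls_t / ls_exec_t / user_t are illustrative types only.
type ls_t;
type ls_exec_t;

# Transition into ls_t whenever a user_t shell executes /bin/ls.
domain_auto_trans(user_t, ls_exec_t, ls_t)

# Let ls traverse and stat directory trees.
allow ls_t file_type:dir  { read search getattr open };
allow ls_t file_type:file { getattr };

# The "obvious" rule -- and the one that silently breaks `ls -l`
# on any box where username lookups go through nss_ldap:
neverallow ls_t self:tcp_socket *;
neverallow ls_t self:udp_socket *;
```

Whether those last two lines are correct depends on the NSS configuration of each individual deployment, which is exactly the problem: the right policy for /bin/ls isn't a property of /bin/ls at all, so any policy you ship is wrong somewhere.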