On one hand, developer tools really are a worst-case scenario for sandboxing: they are the most closely tied to a rich Unix heritage whose security model was designed only to protect users from each other, not to protect users from their own compromised applications. (I suspect this would work much better if Unix had been designed with capabilities in mind from the beginning, though I'm not sure exactly what such a system would look like.) Plus, sending people compromised source code to build isn't exactly a common attack vector - although, to be fair, given how often people download random source tarballs and run the shell scripts inside, and given the known existence of "watering hole" attacks targeting developers, someone somewhere has probably tried it. And of course, Apple does not require itself to sandbox Xcode.

On the other hand, sandboxing in general is definitely a good thing - you can't constantly bring up the security threat posed by the NSA et al. and then complain about the most effective anti-exploitation measure we know of today. Apple's sandbox implementation on OS X is reasonably flexible, and the post itself seems to concede that, given more time, Panic could have come up with a reasonable experience under it - perhaps with some features more awkward than before, but essentially intact. So is it really that evil?
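
For what it's worth, the "awkward but workable" path usually runs through security-scoped bookmarks: the user picks a folder once in an NSOpenPanel, the app persists a bookmark, and later launches can re-open that path without leaving the sandbox (this also needs the user-selected-file and app-scoped-bookmark entitlements). A rough sketch of that dance - my own illustration, not anything Panic described:

    import AppKit

    // Ask the user to grant access to a folder, then persist a
    // security-scoped bookmark so the sandboxed app can reach it later.
    func rememberFolderAccess() throws -> Data? {
        let panel = NSOpenPanel()
        panel.canChooseDirectories = true
        panel.canChooseFiles = false
        guard panel.runModal() == .OK, let url = panel.url else { return nil }
        return try url.bookmarkData(options: .withSecurityScope,
                                    includingResourceValuesForKeys: nil,
                                    relativeTo: nil)
    }

    // On a later launch, resolve the bookmark and briefly re-acquire access.
    func withRememberedFolder(_ bookmark: Data, _ body: (URL) throws -> Void) throws {
        var isStale = false
        let url = try URL(resolvingBookmarkData: bookmark,
                          options: .withSecurityScope,
                          relativeTo: nil,
                          bookmarkDataIsStale: &isStale)
        guard url.startAccessingSecurityScopedResource() else { return }
        defer { url.stopAccessingSecurityScopedResource() }
        try body(url)
    }

Clunkier than just opening an arbitrary path, sure, but it keeps the feature alive - which is roughly the trade-off the post describes.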