I think it can be formally proved that non-trivial software cannot be written to be 100% bug-free in all cases and in all contexts, and I think that's a by-product of how software runs.<p>We have so many layers that software has to run on top of: you have the raw hardware, a layer above that is the BIOS, a layer above that is the OS, a layer above that are SDKs like .NET/GTK/Ncurses/Shell, above that you have the layers built into the software itself as abstractions, and on top of that ...<p>If <i>anything</i> on <i>any</i> one of those layers changes, you have the potential to introduce bugs (there's a concrete sketch at the end of this comment). Not all interface contracts are honored by all developers/engineers, and there have absolutely been cases where a change made at one level affected <i>everything</i> running on top of it.<p>So even if you can reliably say that your program is 100% bug-free for a specific OS running on a specific set of hardware at Time T, that guarantee can break with the next release of any of those products.<p>The software Patio11 mentions (NASA systems) runs in a very specific context. It runs on hardware with a high level of formal verification, and it runs on an RTOS or directly on the hardware, so that nothing can interfere with the software; otherwise even the CPU scheduler could <i>introduce</i> bugs into your system.<p>Anyway, that's my view of things. I'd love for someone to tell me if I'm wrong, since it's all based on observations I've made during my industry experience.
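<p>Here's the sketch I mentioned, a minimal C example of the layer problem (my own illustration, nothing from Patio11's post): memcpy on overlapping buffers is undefined behavior, so the program's output depends entirely on how the libc underneath happens to implement the copy. This isn't hypothetical; glibc 2.13 switched some x86-64 memcpy variants to copy backwards, and programs that had silently relied on the old forward-copy behavior (famously Adobe Flash) broke with no change to their own code.<p><pre><code>#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[] = "abcdef";

    /* Try to shift the string right by one byte. Source and
       destination overlap, which the C standard leaves undefined:
       the result depends on whether the libc below copies
       forwards or backwards. */
    memcpy(buf + 1, buf, 5);

    /* A backward copy prints "aabcde"; a naive forward copy
       smears the first byte and prints "aaaaaa". Same source,
       same hardware, different libc release: different output.
       memmove() is the correct call here. */
    printf("%s\n", buf);
    return 0;
}
</code></pre>
The bug was always in the program, but it only <i>surfaced</i> when a layer below changed, which is exactly why "100% bug-free at Time T" doesn't survive contact with the next release.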