[NOTE: There's a bit of negativity here. But I'm not trying to be anyone's enemy. I hope this will be taken (esp. by the author, if he's around) as the respectful critique it's intended to be.]

I'm wondering about this. It's a heck of a lot of text to get across a couple of simple ideas.

It feels like the author has been stuck in a little corner of the programming world for some time, and he thinks that everyone has been doing things the way he has. So if he needs to learn something new, then everyone else must need it too.

People have been writing interactive, event/message-driven programs for the mass market ever since the Macintosh came out ~30 years ago. Sometimes interfaces between internal components have been written in that same style; sometimes they haven't. Doing a whole system that way is a nice idea, but hardly a revolutionary one.

And scalability? One can't really design a scalable architecture until it is clear how a system will need to scale. Will it be a huge number of app instances talking to a central data store? Will it be a huge amount of data managed separately by each app instance? Will it be huge numbers of P2P connections between instances? Etc. And information like that is sometimes unavailable until a system has been in existence for a while.

And let's not forget Knuth and premature optimization.

OTOH, the resilience ideas are definitely worthwhile. Reliability standards for computer systems are rising like crazy. Architectural ideas for improving reliability, robustness, etc. need to be thought about more. So good for the author on this one.

Now, put it all together and clear out the fluff, and I get this: a good way to design computer systems is to use modular components that talk to each other through non-blocking interfaces, in such a way that component failure does not bring down the system. That's a fine thought, although not a terribly revolutionary one. And it hardly needs an 18,000-word essay with 4 figures to get it across.
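To make the point about brevity concrete: here's roughly that whole design philosophy in ~30 lines of Go. This is my own sketch, not anything from the essay — the worker/inbox names and the naive restart-on-panic policy are just illustrative — but it shows all three ideas at once: a modular component, a non-blocking interface to it, and failure isolation.

    package main

    import (
        "fmt"
        "time"
    )

    // startWorker runs one component in its own goroutine and supervises
    // it: a panic is recovered and the component restarted, so one
    // failure stays local instead of taking down the whole process.
    func startWorker(id int, inbox <-chan string) {
        go func() {
            defer func() {
                if r := recover(); r != nil {
                    fmt.Printf("worker %d failed (%v); restarting\n", id, r)
                    startWorker(id, inbox) // naive restart policy
                }
            }()
            for msg := range inbox {
                if msg == "boom" {
                    panic("simulated component failure")
                }
                fmt.Printf("worker %d handled %q\n", id, msg)
            }
        }()
    }

    func main() {
        inbox := make(chan string, 8) // buffered channel: the component's interface
        startWorker(1, inbox)

        for _, msg := range []string{"a", "boom", "b"} {
            select {
            case inbox <- msg: // non-blocking send
            default:
                fmt.Println("inbox full, dropping", msg)
            }
        }
        time.Sleep(100 * time.Millisecond) // give the worker time to drain
    }

Run it and the "boom" message kills the worker, the supervisor restarts it, and "b" still gets handled. Obviously a real system needs backoff, bounded restarts, and so on — but the core idea fits in a screenful, which is rather the point.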