While I appreciate articles about software optimization, and I agree it is often underrated, in my recent projects I have experienced something slightly different:
The systems I worked on were composed of several separate processes that are spawned in chains and communicate mostly over files. While this might seem an arcane architecture by today's standards, it has the advantage of high flexibility: you can replace or optimize every single process at the OS level, introduce new logic very efficiently, and even make the system scale across many machines.

However, in this case the bottleneck is IO, not CPU, so the optimization would be to minimize file access, avoiding rereading information that was already present at earlier steps of the process chain. I would love to learn efficient methods for measuring, in such architectures, which spots to optimize.
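At the OS level, something like `strace -c -e trace=%file` on each stage can summarize its file syscalls. Within a single Python stage, one crude way to spot redundant reads is to instrument `open` and count how often each file is opened for reading. Below is a minimal sketch of that idea; `stage_a` and `stage_b` are hypothetical stand-ins for real pipeline stages, not anything from an actual system:

```python
import builtins
import os
import tempfile
from collections import Counter

open_counts = Counter()           # path -> number of read-opens observed
_real_open = builtins.open

def counting_open(path, mode="r", *args, **kwargs):
    # Record every read-open so repeated reads of the same file show up.
    if "r" in mode:
        open_counts[str(path)] += 1
    return _real_open(path, mode, *args, **kwargs)

builtins.open = counting_open

# --- simulated pipeline stages (hypothetical stand-ins for real processes) ---
tmp = tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".dat")
tmp.write("shared intermediate data\n")
tmp.close()

def stage_a(path):
    with open(path) as f:
        return f.read()

def stage_b(path):
    with open(path) as f:         # rereads the same file: a caching candidate
        return f.read()

stage_a(tmp.name)
stage_b(tmp.name)

builtins.open = _real_open        # restore the original open
os.unlink(tmp.name)

# Files opened more than once are candidates for caching or passing in memory.
rereads = {p: n for p, n in open_counts.items() if n > 1}
print(rereads)
```

This only sees one process; for a chain of separate processes, per-stage syscall summaries (or filesystem-level tracing) would be needed to get the whole picture.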