This brings up a thought that I've had for a very long time. Almost every type of optimization a programmer could employ is repeatable. It involves matching patterns ("Identification" in the context of this article), analysis ("Comprehension"), and rewriting ("Iteration").

All of these steps can be efficiently automated. And it turns out that compiler writers collectively know about the vast majority of these techniques, but refuse to implement most of them for what I would consider the ultimate copout: compile times. I don't know about you, but I would take a 100x increase in compilation time for a release build over a 2x increase in development time due to manual optimization. I'm not sure who wouldn't, especially if it also allows you to eliminate technical debt, eliminate leaky abstractions, and improve code comprehensibility.

Perhaps I'm being overly idealistic, but I can't help but hope for a day when I can work with a high-level language and have the compiler take care of optimizations that range from removing redundant fields from struct definitions all the way down to bitshift optimizations like rewriting i * 28 as (i << 4) + (i << 3) + (i << 2). And if I have to wait all day for a release build, so be it.
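
To make the last example concrete, here's a minimal sketch (in C, my choice, not from the article) of the shift-and-add rewrite I mean. The function name is just for illustration; the point is that 28 = 16 + 8 + 4, so a multiply can be strength-reduced to shifts and adds, and the parentheses matter because + binds tighter than << in C:

    #include <assert.h>
    #include <stdint.h>

    /* Sketch of the strength reduction: i * 28 == (i << 4) + (i << 3) + (i << 2),
     * since 28 = 16 + 8 + 4. A compiler's peephole pass can apply this mechanically. */
    static uint32_t mul28_shift(uint32_t i) {
        return (i << 4) + (i << 3) + (i << 2);
    }

    int main(void) {
        /* Spot-check that the rewrite is equivalent to the plain multiply. */
        for (uint32_t i = 0; i < 1000; i++) {
            assert(i * 28 == mul28_shift(i));
        }
        return 0;
    }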