Premature optimization used to mean unnecessarily writing assembly or borderline obfuscated C in the name of performance, which made programs difficult to comprehend; hence the "root of all evil" label. Today that has been perverted to mean "hey, buddy, if you think about performance at all, you're optimizing prematurely!"<p>People need to learn their history before throwing such maxims around.
I'm glad this came up; I've been thinking about it for a while.<p>When I first started programming professionally and read this quote, I thought it didn't really apply to me. I do a lot of low-traffic webapps and never really felt like I made much of a trade-off with premature optimization. It certainly sounded strong to say that premature optimization was the root of <i>all evil</i>. Hell, I messed up so many other things that I would have picked any one of them and thought it to be more of a problem. But I respected the person saying it, so I kept it in the back of my mind.<p>Then I had an epiphany. Premature optimization isn't just about performance, and it doesn't even have to be about programming. In your business, are you focusing on automating something that will only save a minuscule amount of time? Are you building a feature that will rarely be used? Sounds like premature optimization to me. And the problem is not that the end product won't be better; the problem is the opportunity cost. You're trading off a whole host of other things that could be worked on instead.<p>Avoiding premature optimization, for me, comes down to reassessing where I am at every stage of building a project and asking myself: am I focused on the right thing? Because if not, I'm wasting my time.
I think what Knuth/Hoare/whoever was trying to say here is: solve the problems you <i>know</i> you have. The key point is that the "know" should be demonstrable knowledge (as opposed to asserting "I know" with authority), in which case it is justifiable to extend the principle to "solve the problems you <i>know</i> you have or will have".
It boils down to common sense.
My approach is to try to write the code reasonably efficiently, but initially readability, maintainability, and simplicity take precedence over performance. I wouldn't write hackier or more complex code just for the sake of performance before I KNOW there's a problem.
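<p>To make "KNOW there's a problem" concrete, here's a minimal sketch in C++ of measuring before optimizing; process() is a hypothetical stand-in for whatever the simple, readable implementation is. Time the straightforward version first, and only reach for something hackier if the numbers justify it.<p>

    #include <chrono>
    #include <iostream>
    #include <vector>

    // Hypothetical stand-in for the simple, readable implementation.
    long long process(const std::vector<int>& data) {
        long long sum = 0;
        for (int x : data) sum += x;
        return sum;
    }

    int main() {
        std::vector<int> data(1000000, 1);
        auto start = std::chrono::steady_clock::now();
        long long result = process(data);
        auto end = std::chrono::steady_clock::now();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
        std::cout << "result: " << result << ", elapsed: " << us.count() << " us\n";
    }

<p>If the simple version is fast enough for the actual workload, you're done; if not, the measurement tells you where extra complexity is actually earned.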
Nothing actionable in the article or in these comments. Some folks use a hash table as their default container - is that premature optimization? Wouldn't a vector do? Which one is 'evil' and which one is right?
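<p>Actionable would look something like the micro-benchmark sketch below (an illustration under assumptions - small table, string keys, hot cache; numbers will vary with N, key type, and compiler, so it's not a verdict). For a handful of entries, a linear scan over a contiguous vector is often competitive with, or faster than, a hash table thanks to cache locality; for large N the hash table wins. Neither default is categorically 'evil' - you measure for your sizes.<p>

    #include <chrono>
    #include <iostream>
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    // Linear scan over a small vector of key/value pairs.
    int lookup_vec(const std::vector<std::pair<std::string, int>>& v,
                   const std::string& key) {
        for (const auto& kv : v)
            if (kv.first == key) return kv.second;
        return -1;
    }

    int main() {
        const int N = 16;          // small table: the interesting case
        const int iters = 1000000;
        std::vector<std::pair<std::string, int>> vec;
        std::unordered_map<std::string, int> map;
        for (int i = 0; i < N; ++i) {
            std::string key = "key" + std::to_string(i);
            vec.emplace_back(key, i);
            map.emplace(key, i);
        }

        const std::string probe = "key7";  // hoisted so neither side pays for
                                           // constructing a temporary string
        long long sink = 0;  // accumulated so the loops aren't optimized away
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < iters; ++i) sink += lookup_vec(vec, probe);
        auto t1 = std::chrono::steady_clock::now();
        for (int i = 0; i < iters; ++i) sink += map.at(probe);
        auto t2 = std::chrono::steady_clock::now();

        using us = std::chrono::microseconds;
        std::cout << "vector scan:   "
                  << std::chrono::duration_cast<us>(t1 - t0).count() << " us\n"
                  << "unordered_map: "
                  << std::chrono::duration_cast<us>(t2 - t1).count() << " us\n"
                  << "(sink: " << sink << ")\n";
    }

<p>Compile with optimizations (e.g. g++ -O2) before trusting the comparison.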