I like the basic premise, which in effect is just "write good code" and, if all else is equal, write efficient code.<p>Premature optimisation is a problem only when you spend too long on it, or it reduces readability, robustness, testability, etc. If it's equally easy to write but better, why not? There's no excuse for routinely writing bad code, and "premature optimisation is the root of all evil" is all too often used as an excuse for sloppiness.<p>However, I <i>possibly</i> disagree with the string formatting example. If it's a function that gets called a lot and does a special-case string conversion like this, fine, go ahead and optimise it. But that's a <i>real</i> optimisation, not what the author has termed a micro-optimisation, which I take to be something you should just do routinely (like ++i instead of i++ in C++).<p>If you have lots of string conversions throughout your code then the chances are most of them are going to be sprintfs or whatever is the most flexible tool in the language you are using. In these cases, you should just stick with what is idiomatic within the context of that project. It makes <i>reading</i> the code later a lot faster when everything is similar. It also tends to make it easier to change when your simple special cases need to become more complicated.<p>For example, I have in previous projects standardised on regular expressions for almost all string comparisons, even in situations where a simple substring compare would be much more efficient. However, since 90% of the codebase is using regular expressions to do complex comparisons, it just makes life easier if they are used everywhere unless there's a really, really good reason to do things differently. It reduces cognitive load when reading the code if it follows a similar style throughout.<p>It also makes maintenance easier when you use the most flexible tool at your disposal everywhere instead of special-casing.
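To sketch the trade-off (in Python, with hypothetical helper names; the original discussion is language-agnostic), here is a prefix check written both ways, and what each looks like after a requirements change:

```python
import re

def starts_with_a_regex(s):
    # The flexible, codebase-idiomatic version: a regex, like the
    # (hypothetical) other 90% of string checks in the project.
    # re.match anchors at the start of the string.
    return re.match(r"a", s) is not None

def starts_with_a_special(s):
    # The "optimised" special case: a plain substring compare.
    return s[:1] == "a"

# Requirement change: the check must become case-insensitive.
# The regex version only needs a flag added...
def starts_with_a_regex_ci(s):
    return re.match(r"a", s, re.IGNORECASE) is not None

# ...while the special case needs its logic rewritten.
def starts_with_a_special_ci(s):
    return s[:1].lower() == "a"
```

The special-cased version is faster, but every future change to the matching rules means touching its logic rather than its pattern.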
Let's say you expect the first character of your string to be an "a" and you do it with substr(foo,1)=="a". Later, you need to make it case-insensitive because of a bug. With regex you just add an "i" flag, but with the special case, you need a tolowercase call. No biggie, but the next day you need to support Unicode accents. Uh oh...<p>If you have a large codebase where string processing is all done in the same way, when you get a bug like not recognising "á" as "a" then at least you will find that all parts of your system behave consistently. Fixing the problem should require roughly the same fix everywhere, and the test cases can all be the same. Going back to the author's example, there's no guarantee that sprintf("%d",x) and itoa(x) will produce the same output on all platforms, so it's possible this change, although it should be functionally identical, might in reality introduce new edge cases that you need to test for.<p>If you've got special cases everywhere then you're going to get different sorts of bugs in different parts of your system, which can lead to issues being much harder to trace, harder to test and harder to fix.<p>TL;DR: Optimise for readability first. Then optimise for performance. Allocate the time you have wisely. Homogeneity is a reasonable substitute for DRY; special cases for common patterns are usually bad and can introduce bugs.
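The "functionally identical but not quite" problem is easy to demonstrate. A minimal sketch in Python (standing in for the sprintf/itoa swap, since itoa isn't even standard C): two conversions that agree on every input you thought to test, and disagree on one you didn't.

```python
def fmt_old(x):
    # The flexible, idiomatic tool: printf-style formatting.
    return "%d" % x

def fmt_new(x):
    # The "faster" drop-in replacement: direct conversion.
    return str(x)

# The two agree on ordinary integers...
for x in (0, 7, -42, 2**63):
    assert fmt_old(x) == fmt_new(x)

# ...but not on every input the old call quietly accepted: bool is an
# int subclass, so "%d" renders True as "1" while str() does not.
assert fmt_old(True) == "1"
assert fmt_new(True) == "True"
```

The replacement passes the obvious tests and still changes behaviour on an edge case, which is exactly why such swaps need their own test coverage.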