Many optimizations are dismissed as irrelevant and "premature". But, for enterprises with finite cash to throw at hardware, fewer optimizations are premature than is often assumed.<p>Use less memory, use less CPU, and you can get more instances running on a single machine.<p>RiiR, switching to a unikernel, etc.: same thing.<p>But only real optimization counts---real reductions in resource usage. Not things you believe _should_ make things faster or lighter; only things that _do_.<p>But if the optimization costs you more in developer time than will be reclaimed in operations costs over the _entire lifetime of the code_... then it is premature.<p>Estimating that, though, is the hard part.<p>Just stating the obvious, I suppose.
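The break-even comparison above can be made concrete with a back-of-envelope sketch in Python. All the numbers and the function name are hypothetical, purely for illustration:

```python
def optimization_breaks_even(dev_hours, hourly_rate,
                             monthly_savings, lifetime_months):
    """True if lifetime ops savings exceed the one-time dev cost.

    A deliberately crude model: ignores discounting, maintenance
    overhead of the optimized code, and uncertainty in every input.
    """
    dev_cost = dev_hours * hourly_rate
    lifetime_savings = monthly_savings * lifetime_months
    return lifetime_savings > dev_cost

# e.g. 80 dev hours at $100/hr vs. $500/month saved over 3 years:
# $8,000 spent vs. $18,000 reclaimed, so it pays off.
print(optimization_breaks_even(80, 100, 500, 36))
```

The hard part, as the comment says, is that every one of those inputs is an estimate, especially the code's lifetime.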
Optimized code takes longer to write and is often more complex, less readable, and less maintainable. Since software engineer salaries are higher than hardware costs, optimizing all code would take longer and cost more than it saves.<p>When you're writing code, sometimes you need to refactor, requirements change, etc.<p>You may end up spending effort to save 3 nanoseconds in code that rarely runs.<p>Optimization is important and can save money on hardware resources, but focusing on the wrong things to optimize can be costly and make the application more difficult to maintain.<p>Once you identify the bottlenecks of the application, you can focus on optimizing those.<p>Obviously that doesn't mean you should throw away resources, so it's always a good idea to use static analysis in combination with code reviews to catch unnecessary resource usage: repeatedly doing a costly operation in a loop when it always yields the same result (do it once before the loop, or memoize/cache the result), caching network/database requests that are expected to return the same data, using a connection pool for expensive resources like database connections...<p>But more advanced optimization should be reserved for bottlenecks.
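The loop-hoisting and memoization points above can be sketched in Python. `expensive_lookup` is a hypothetical stand-in for any costly, deterministic operation; the call counter is only there to show how many times the expensive work actually runs:

```python
import functools

CALLS = {"count": 0}

def expensive_lookup(key):
    # Hypothetical stand-in for a slow, deterministic operation
    # (e.g. a database query or heavy computation).
    CALLS["count"] += 1
    return key * key

# Naive: repeats the same costly call on every loop iteration.
def total_naive(items, key):
    return sum(item * expensive_lookup(key) for item in items)

# Hoisted: the call is loop-invariant, so do it once before the loop.
def total_hoisted(items, key):
    factor = expensive_lookup(key)
    return sum(item * factor for item in items)

# Memoized: cache the result so repeated calls with the same
# argument are served from memory instead of recomputed.
@functools.lru_cache(maxsize=None)
def cached_lookup(key):
    return expensive_lookup(key)
```

For a three-element list, `total_naive` hits the expensive operation three times, `total_hoisted` once; `cached_lookup` pays the cost only on the first call per distinct argument. This is exactly the kind of issue a reviewer or static analyzer can flag without any profiling.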
The problem with skipping major optimizations is that the tech debt grows with time.<p>The ROI calculation is tough, but some things simply make sense once you look at the data, e.g. finding underutilized resources with the native tooling of your cloud infrastructure environment.
That only gets you to a local minimum. The point of avoiding premature optimization is to keep earlier optimizations from making the code so complex that you can never reach the global minimum for the problem your code is solving.