There was some academic research on software optimization mentioned on HN at some point; I don't remember exactly when. I can't seem to track down the source and need some help. From what I can remember, the gist is this:

- How do you know your optimization is actually useful?

- It could be fragile, dependent on compilers, etc.

- The modelling in this research was done by modifying a concept of time: something like assuming some subsystem ran faster, using a novel technique I can't recall.

- Instrumentation within this framework produced data pointing to the subsystems where optimization would be fruitful while avoiding fragility.

Any of the details above may be misremembered. Sorry. Hopefully someone can help. In case it jogs a memory, I've put a rough sketch below of how I understood the "modified time" idea.
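This is only my own guess at the mechanism, not something from the paper (the subsystem names, helper names, and numbers are all made up for illustration): time each subsystem, then ask what total runtime would look like if one subsystem were, say, 20% faster, and rank subsystems by that hypothetical payoff.

    # My own reconstruction of the "what if this part ran faster?" idea;
    # not the actual technique from the research I'm trying to find.
    import time
    from contextlib import contextmanager

    timings = {}  # subsystem name -> accumulated seconds

    @contextmanager
    def timed(name):
        start = time.perf_counter()
        try:
            yield
        finally:
            timings[name] = timings.get(name, 0.0) + time.perf_counter() - start

    def parse(data):
        with timed("parse"):
            time.sleep(0.05)   # stand-in for real work

    def compute(data):
        with timed("compute"):
            time.sleep(0.20)

    def write_out(data):
        with timed("write_out"):
            time.sleep(0.02)

    def virtual_speedup_report(speedup=0.2):
        """Estimate total runtime if only one subsystem ran `speedup` faster.
        The subsystem with the biggest drop is the most fruitful target."""
        total = sum(timings.values())
        for name, t in sorted(timings.items(), key=lambda kv: -kv[1]):
            hypothetical_total = total - t * speedup
            print(f"{name:10s} {t:6.3f}s -> total if {speedup:.0%} faster: "
                  f"{hypothetical_total:6.3f}s")

    if __name__ == "__main__":
        data = None
        parse(data)
        compute(data)
        write_out(data)
        virtual_speedup_report()

As I remember it, the real work did something much cleverer to "modify time" than this naive subtraction, but this is roughly the shape of the question it answered.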
This one? https://wordsandbuttons.online/the_real_cpp_killers.html

from https://news.ycombinator.com/item?id=39770467