Is this really an optimization that we want or need a compiler to do? I kind of get it for really high-level languages like Haskell, but for low-level languages like C/C++ it feels like a silly toy example. The abstract machine is so low-level that many big optimizations can't be done; the compiler can mostly only do small, local O(1) optimizations. And you know what, I'm perfectly fine with that. C/C++ trades all of that away to give the programmer significant control over how the code actually runs. If I want to use the summation formula, I can just write it myself.

Honestly, the thing I hate about the standards bodies is that they try to have it both ways, so their compiler-developer buds can implement their favorite (IMHO dubious) optimizations. But as a programmer, what I really want is well-defined, predictable constructs. Worrying about UB while also trying to design an algorithm, while also designing good data structures, while also designing for the cache hierarchy, while also optimizing, while also thinking about cache coherency, while also thinking about paging and TLB behavior, while also… It's all already too complicated before having to worry about what UB the compiler may try to exploit next week.

I have a very strong suspicion that almost any non-trivial C/C++ program has UB somewhere. If so, then something is seriously wrong with this whole concept, when useful and practical programs written in C are not technically defined, valid C programs.