> It is possible in theory that code that's less carefully optimized exhibits different behavior, or that the benchmarks chosen here are simply not as amenable to compiler optimization as they could be

This seems like a rather important point that's glossed over. Typical code is rarely as optimized and meticulously written as benchmark code, so it would be nice to see how much compilers have improved there.
I'm not all that surprised by the small improvement on regular C++ code: the last decade hasn't seen radical changes in how this is done; compiler innovation has happened elsewhere, with only the SIMD story showing up in this article. I was surprised by the lousy build times, though.

The choice of WSL2 as the platform introduces a few confounders, especially filesystem performance, which might distort the differences in build times in particular. If someone wants a better understanding of what's going on, a breakdown of where the time is spent, or rerunning the benchmarks on other platforms, would be a good idea.
I have a simple C++ raytracer I wrote by going through Ray Tracing in One Weekend. I haven't made any attempt to optimize it; I only made it parallel by splitting the image into tiles.

Clang 10 was able to automatically vectorize the code, so the Clang build runs more than twice as fast as the one from GCC 8.3. To be fair to GCC, I'm using my distro's GCC, whereas I built a newer Clang for C++ coroutine support.
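For a sense of what the auto-vectorizer is working with, here is a minimal sketch (not the commenter's actual code) of the kind of per-sample accumulation loop such a raytracer contains. With `-O3 -ffast-math` Clang will typically turn the floating-point reduction into SIMD instructions (relaxed FP associativity is needed for that); whether GCC does the same depends on version and flags.

```cpp
// Hypothetical hot loop of the sort found in a "Ray Tracing in One Weekend"
// style renderer: summing per-sample colors for one pixel.
#include <cstddef>

struct Vec3 { float x, y, z; };

// Accumulate `n` sample colors. The straight-line per-component adds are the
// kind of pattern auto-vectorizers handle well once FP reordering is allowed.
Vec3 accumulate(const Vec3* samples, std::size_t n) {
    float x = 0.0f, y = 0.0f, z = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        x += samples[i].x;
        y += samples[i].y;
        z += samples[i].z;
    }
    return {x, y, z};
}
```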
Are bigger optimizations to be had in the design of higher-level languages that are easier for compilers to optimize?

As an extreme example, I imagine dynamic languages are hard to optimize because the compiler can make so few assumptions about the code.

(I have little knowledge of compilers, so correct me if I'm wrong.)
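As a simplified illustration of that point (my own sketch, not anything from the article): when the types are known at compile time, the compiler can emit a single add instruction, but if the types are only known at runtime it has to keep tag checks and dispatch around, which is roughly the obstacle a compiler for a dynamic language faces everywhere.

```cpp
// Static vs. "dynamic" addition in C++, as a stand-in for the difference
// between a statically typed language and a dynamically typed one.
#include <variant>

int add_static(int a, int b) {
    return a + b;  // types known at compile time: compiles to one add
}

using Dynamic = std::variant<int, double>;  // runtime-tagged value

Dynamic add_dynamic(const Dynamic& a, const Dynamic& b) {
    // Types only known at runtime: the compiler must branch on the tags,
    // much like a dynamic-language interpreter or JIT without type feedback.
    return std::visit(
        [](auto x, auto y) -> Dynamic { return x + y; },
        a, b);
}
```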
I would expect Proebsting's law to hit a wall sooner than Moore's law, simply because software performance is better understood than physics.

Perhaps someone could compare FORTRAN compilers across the decades to get a longer-term view.