In general I've long been very skeptical of removing optimizations that rely on undefined behavior. People say "I'd happily sacrifice 1% for better theoretical semantics", but theoretical semantics don't pay the bills of compiler writers. Instead, compiler developers are employed by the largest companies, where a 1% win is massive amounts of dollars saved. Any complaint about undefined behavior in C <i>must</i> acknowledge the underlying economics to have relevance to the real world.<p>As the paper notes, there are plenty of alternative C compilers available to choose from. The reason why GCC and LLVM ended up attaining overwhelming market share is simply that they produce the fastest possible code, because, at the end of the day, <i>that is what users want</i>.<p>If you want to blame someone, blame the designers of the C language for doing things like making int the natural idiom to iterate over arrays even when size_t would be better. The fact that C programmers continue to write "for (int i = 0; i < n; i++)" to iterate over an array is why signed overflow is undefined, and it is absolutely a critical optimization in practice.
A bad workman blames his tools, so they say.<p>There is a large population of C "real programmers" who, when they write a C program that unsurprisingly doesn't work, conclude it must be somebody else's fault. After all, as a real programmer they certainly meant for their program to work, so the fact that it doesn't can't very well be their fault.<p>Such programmers tend to favour very terse styles, because if you don't write much then it can't be said to be your fault when, invariably, it doesn't do what you intended. It must instead be somebody else's fault for misunderstanding: the compiler is wrong, the thing you wanted was the obvious and indeed only correct interpretation, and the compiler is <i>willfully</i> misinterpreting your program as written.<p>Such programmers of course don't want an error diagnostic. Their program doesn't have an error; it's correct. The <i>compilers</i> are wrong. Likewise, newer and better languages are unsuitable because the terse, meaningless programs won't compile in them. New languages often demand specificity: the real programmer is obliged to spell out what they meant, which introduces the possibility that they're capable of mistakes, because what they meant was plainly wrong.
The fact that a compiler is allowed/encouraged to silently remove whole sections of code because of some obscure factoid is an amazing source of footguns.<p>At least the warnings are getting a bit better for some of these.
This is a typical whining-about-UB article, but removing UB won't get you what you want; in particular, your program still won't behave consistently across architectures. Overflow on shift left may be undefined, but how would you define it? If you want a "high-level assembler", well, the underlying shift instructions behave differently on ARM, x86 scalar, and x86 SIMD.<p>The reason they claim program optimizations aren't important is that you can do them by hand for a specific architecture pretty easily, but you'll still want them when porting to a new one, e.g. if it wants loop counters to go in the opposite direction.
How much of this is driven by modern C++ style? I always assumed optimizers needed to become much more aggressive because template-heavy code results in convoluted IR with tons of unreachable code. And UB-based reasoning is the most effective tool to prove unreachability.
The actual title of the paper is "How ISO C became unusable for operating systems development". Is there a particular reason why the first and last words have been removed here?
direct PDF link: <a href="https://www.yodaiken.com/wp-content/uploads/2021/10/yodaiken_plos_c2-.pdf" rel="nofollow">https://www.yodaiken.com/wp-content/uploads/2021/10/yodaiken...</a>
To me the stupid thing is the abuse of undefined behavior to change the semantics of the code. The fact that a behavior is not defined in the standard doesn't mean that on a particular hardware platform it doesn't have a particular meaning (and most C programs don't need to be portable, since C is mainly used for embedded these days and thus you are targeting a particular microcontroller/SoC).<p>Leave these optimizations to the C++ folks. C doesn't need all of that; just leave it as the "high-level assembler" that it was in the old days, where if I write a statement I can picture the assembler output in my mind.<p>To me, optimizers should not change the code's semantics. Unfortunately, with gcc it's impossible to rely on optimizations, so the only safe option is to turn them off entirely (-O0).