This again. A perfect example of solving the wrong problems in a clever way. To his credit, Wheeler at least credits the brilliant engineer (Karger) who invented the attack, points out that it took 10 years before that knowledge reached anyone via Thompson (a recurring problem in high-security), and wrote the reference essays on the two solutions to the actual problem (high-assurance FLOSS and secure SCMs). That's what you're better off reading.

Here's a quick enumeration of the problems, in case people wonder why I gripe about this and the reproducible-builds fad:

1. What the compiler does needs to be fully specified and correct to ensure security.
2. The implementation, in whatever language, should conform to that spec or simply be correct in itself.
3. No backdoors are in the compiler, the compilation process, etc. This must be easy to show.
4. The optimizations used don't break security or correctness.
5. The compiler can parse malicious input without code injection resulting.
6. The compilation of the compiler itself follows all of the above.
7. The resulting binary that everyone has is the same one matching the source, with the same correct *or malicious* function, but with no malicious additions that aren't in the source code already. This equivalence is what everyone in the mainstream is focusing on. I already made an exception for Wheeler himself, given he did this *and* the root-cause work.
8. The resulting binary will then be used, on systems developed without mitigating the problems above, to compile other apps that don't mitigate them either.

So, that's a big pile of problems. The Thompson attack, countering the Thompson attack, and reproducible builds collectively address the tiniest of these versus all the problems people actually encounter with compilers and compiler distribution. There are teams working on the latter that have produced nice solutions to a bunch of them: VLISP, FLINT, the assembly-to-LISP-to-HLL project, and CakeML-to-ASM come to mind. There are commercial products, like CompCert, available as well. Very little from the mainstream, FOSS or proprietary.

The "easy" approach to solving most of the real problem is a certifying compiler in a safe language, bootstrapped on a simple, local one, with its source distributed via secure SCM. In this case you don't have a reproducible build in the vast majority of cases, since you've verified the source itself and have a verifying compiler down to ASM. You even benefit from there being no canonical binary: your compiler can optimize the source for your machine or add extra security to it (a la SoftBound+CETS). Alternatively, you can get a binary that everyone can check via signatures on the secure SCM. You can even do reproducible builds on top of my scheme for the added assurance of reproducing bugs or checking the correctness of specific compilations (a rough sketch of that checking step follows below). The core assurance... 80/20 rule... comes from a compiler that's correct-by-construction as much as possible, easy for humans to review for backdoors, and hosted on a secure repo and distribution system.

Meanwhile, the big problems get ignored while these little, tactical solutions to smaller problems keep getting lots of attention. The same thing happened in the time frame between Karger and Thompson with Karger et al's other recommendations for building secure systems. We saw where that went in terms of the baseline of INFOSEC we had for decades. ;)

Note: I can provide links on request to definitive works on subversion, SCM, compiler correctness, whatever. I think the summary in this comment should be clear. Hopefully.
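Since people usually ask what the checking step actually looks like, here's a minimal sketch of the signed-SCM-plus-reproducibility layering from above. It's illustrative, not a real toolchain: the file names are made up, and it assumes a detached GPG signature published through the secure SCM plus two independently produced builds of the same compiler.

```python
#!/usr/bin/env python3
"""Minimal sketch of the 'signed source + reproducibility check' layering
described above. Assumes a detached GPG signature published through the
secure SCM and two independently produced builds of the same compiler.
All file names here are hypothetical."""
import hashlib
import subprocess
import sys

def sha256(path: str) -> str:
    """Hash a file in chunks so big binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_source_signature(tarball: str, signature: str) -> None:
    """Step 1: root trust in the *source* via the SCM's signing key.
    gpg exits nonzero on a bad or missing signature, raising here."""
    subprocess.run(["gpg", "--verify", signature, tarball], check=True)

def check_reproducible(build_a: str, build_b: str) -> None:
    """Step 2 (optional extra): two independent builds of the verified
    source should be bit-identical if the build is reproducible."""
    a, b = sha256(build_a), sha256(build_b)
    if a != b:
        sys.exit(f"MISMATCH: {a} != {b} -- builds differ")
    print(f"OK: both builds hash to {a}")

if __name__ == "__main__":
    verify_source_signature("compiler-1.0.tar.xz", "compiler-1.0.tar.xz.sig")
    check_reproducible("build-local/cc", "build-independent/cc")
```

The layering is the point: the signature check is what roots trust in the verified source and the SCM's key management, while the hash comparison is just the cheap, optional test you bolt on top.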
Note 2: Anyone who doubts I'm right can try an empirical approach: look at the bugs, vulnerabilities, and compromises published for both GCC and the things compiled with it. Count the number of times they said, "We were owned by the damned Thompson attack. If only we had countered it with diverse double compilation or reproducible builds." Compare that to the failures in the other areas on my list (a rough tally script is sketched below). How unimportant this stuff is versus the higher-priority criteria should be self-evident at that point. And empirically proven.
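If anyone wants to actually run that tally, here's a rough sketch against NVD's public CVE feed. The endpoint and JSON field names follow NVD's REST API 2.0 as best I recall, and the keyword screen is deliberately crude; treat it as a starting point, not gospel, since the point is the ratio, not precision.

```python
#!/usr/bin/env python3
"""Rough sketch of the tally suggested above: pull published GCC CVEs
and count how many descriptions even hint at a trusting-trust style
compromise versus ordinary compiler failures. Endpoint and field names
assumed per NVD's REST API 2.0; results capped for brevity."""
import json
import urllib.request

URL = ("https://services.nvd.nist.gov/rest/json/cves/2.0"
       "?keywordSearch=gcc&resultsPerPage=200")

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

thompson_style, everything_else = 0, 0
for item in data.get("vulnerabilities", []):
    descriptions = item.get("cve", {}).get("descriptions", [])
    text = " ".join(d["value"].lower()
                    for d in descriptions if d.get("lang") == "en")
    # Crude keyword screen for trusting-trust style wording.
    if any(k in text for k in ("thompson", "trusting trust", "self-propagat")):
        thompson_style += 1
    else:
        everything_else += 1

print(f"Thompson-style mentions: {thompson_style}")
print(f"Everything else:         {everything_else}")
```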