Very interesting topic, but rather low on detail: I really wanted to see the 60 lines of Asm that allegedly show a faulty CPU instruction, and I'm also surprised that the fault wasn't intermittent. In my experience, CPU problems usually are intermittent and heavily dependent on prior state, and manually stepping through with a debugger has never shown me the "1+1=3" type of situation they claim.<p>That said, I wonder whether LINPACK'ing would've found it. LINPACK is known to be a very powerful stress test, and opinions on it are divided in the overclocking community: some, including me, claim that a system can never be considered stable if it fails LINPACK, since such a failure is essentially intermittent "1+1=3" behaviour, while others are fine with "occasional" discrepancies in its output as long as the system otherwise appears stable. (A sketch of what such a check boils down to is below.)
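A minimal sketch (my own, not from the article) of the consistency check that LINPACK-style stress testing boils down to: run the same deterministic FP kernel repeatedly and flag any run whose result differs, since on healthy hardware every iteration must match bit-for-bit.

    // SdcProbe.java -- illustrative only; the kernel and iteration counts are made up.
    public class SdcProbe {
        static double kernel() {
            double acc = 0.0;
            for (int i = 1; i <= 1_000_000; i++) {
                acc += Math.sqrt(i) * 1.0000001;   // deterministic FP work
            }
            return acc;
        }

        public static void main(String[] args) {
            final long expected = Double.doubleToLongBits(kernel());
            for (int run = 0; run < 1000; run++) {
                if (Double.doubleToLongBits(kernel()) != expected) {
                    System.out.println("silent \"1+1=3\" on run " + run);
                }
            }
        }
    }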
Related:<p><i>Meta quickly detects silent data corruptions at scale</i> - <a href="https://news.ycombinator.com/item?id=30905636">https://news.ycombinator.com/item?id=30905636</a> - April 2022 (95 comments)<p><i>Silent Data Corruptions at Scale</i> - <a href="https://news.ycombinator.com/item?id=27484866">https://news.ycombinator.com/item?id=27484866</a> - June 2021 (12 comments)
Google also had a "Cores That Don't Count" paper on so-called "mercurial cores" <a href="https://news.ycombinator.com/item?id=27378624">https://news.ycombinator.com/item?id=27378624</a>
as well as a presentation <a href="https://www.youtube.com/watch?v=QMF3rqhjYuM" rel="nofollow">https://www.youtube.com/watch?v=QMF3rqhjYuM</a>
I wrote an article about these affecting LLM training at <a href="https://www.adept.ai/blog/sherlock-sdc" rel="nofollow">https://www.adept.ai/blog/sherlock-sdc</a>
Interesting. The corruption was in a math.pow() calculation representing a compressed file size prior to a file decompression step.<p>Compressing data, with the increased information density and the greater number of CPU instructions involved, does seem an obvious way to increase exposure to corruption/bitflips.<p>What I did wonder is why the file size was encoded as an exponent at all. One would imagine that representing it as a floating-point exponent takes plenty of cycles, saves hardly any bits, and has nasty precision inaccuracies at larger sizes, as the sketch below shows.
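A rough sketch of that last point (base and values are made up, not from the paper): round-tripping an exact byte count through a log()/pow() representation can already be off by a few bytes at large sizes, even on a perfectly healthy CPU.

    // PowSize.java -- hypothetical illustration of the precision concern.
    public class PowSize {
        public static void main(String[] args) {
            long size = 1_000_000_000_000_001L;          // exact 16-digit size in bytes
            double exp = Math.log(size) / Math.log(1.1); // store the size as 1.1^exp
            long recovered = Math.round(Math.pow(1.1, exp));
            System.out.println(size + " vs " + recovered);
            // A double carries only ~15-16 significant decimal digits, so the
            // recovered size can differ from the original by a few bytes at
            // this scale.
        }
    }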
Interesting paper, but it has some technical errors. First, they keep mentioning SRAM+ECC where they mean DRAM+ECC. Second, you cannot use gcj to inspect the assembly generated for a Java method, since it will be completely different from the code HotSpot generates. Third, you don't need all those acrobatics to get a disassembly of the method: just add an infinite loop to the code, attach gdb to the JVM process, and inspect the code there (or dump the core). Roughly:
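A minimal sketch of that gdb trick, assuming the suspect computation can be wrapped in a method (names and values here are illustrative, not from the paper):

    // Spin.java -- park the JIT-compiled method in a hot loop, then attach gdb.
    public class Spin {
        static double suspect(double x) { return Math.pow(1.1, x); }

        public static void main(String[] args) {
            double sink = 0;
            while (true) {             // infinite loop keeps the compiled
                sink += suspect(53.0); // method hot and on-CPU for gdb
            }
        }
    }
    // In another terminal, against the JVM's pid:
    //   gdb -p <pid>
    //   (gdb) info threads     # find the spinning Java thread
    //   (gdb) thread <N>
    //   (gdb) x/60i $pc        # disassemble the JIT-compiled code at the PC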