No, he didn't; he just proved the well-known and widely implemented Schoenhage–Strassen conjecture from 1971 to be correct.
Which was O(n log n log log n); the new result achieves the conjectured O(n log n).<p>Schoenhage–Strassen is used for multiplying large numbers with more than ~30,000 digits. Below that, Karatsuba is used, and for numbers beyond a trillion digits Fuerer's method could be used. What he did was improve Fuerer's method from 2007, which nobody is using yet.<p><a href="https://en.wikipedia.org/wiki/F%C3%BCrer%27s_algorithm" rel="nofollow">https://en.wikipedia.org/wiki/F%C3%BCrer%27s_algorithm</a>
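For anyone curious what the below-30,000-digit regime looks like: here's a minimal Python sketch of Karatsuba, splitting on bit length rather than the word-size-tuned thresholds real bignum libraries use (the function name and base case are just for illustration):

```python
def karatsuba(x: int, y: int) -> int:
    """Minimal Karatsuba sketch for non-negative ints (illustrative only)."""
    if x < 10 or y < 10:  # tiny operands: multiply directly
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    x_hi, x_lo = x >> half, x & mask   # x = x_hi * 2^half + x_lo
    y_hi, y_lo = y >> half, y & mask   # y = y_hi * 2^half + y_lo
    a = karatsuba(x_hi, y_hi)                         # high product
    b = karatsuba(x_lo, y_lo)                         # low product
    c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b   # cross terms, one multiply
    return (a << (2 * half)) + (c << half) + b
```

The trick is three recursive multiplications instead of four, which is what drops the cost from O(n^2) to roughly O(n^1.585); Schoenhage–Strassen wins past the crossover by switching to FFT-based convolution instead.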
From my understanding, this won't have any implications for the way ALUs are designed, correct? I was taught that Russian Peasant multiplication[1] was the most straightforward method for implementing multiplication efficiently in hardware. I would have to imagine that performing a Fast Fourier Transform (the FFT from the paper) would add significant overhead. Can anyone more familiar with ALU design confirm or deny this?<p>[1] = <a href="http://mathforum.org/dr.math/faq/faq.peasant.html" rel="nofollow">http://mathforum.org/dr.math/faq/faq.peasant.html</a>
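For anyone unfamiliar with the reference: Russian Peasant multiplication is just shift-and-add, which maps directly onto the shifters and adders an ALU already has. A minimal software sketch (the function name is mine, and this is obviously Python standing in for hardware):

```python
def peasant_multiply(a: int, b: int) -> int:
    """Russian Peasant (shift-and-add) multiplication for non-negative ints:
    halve a, double b, and accumulate b whenever a is odd."""
    result = 0
    while a > 0:
        if a & 1:        # low bit of a set: add the current partial product
            result += b
        a >>= 1          # halve a (shift right)
        b <<= 1          # double b (shift left)
    return result
```

Each iteration is one shift plus a conditional add, which is why it's so cheap in hardware, whereas an FFT-based method needs modular or floating-point arithmetic and large intermediate storage, and only pays off at operand sizes far beyond a machine word.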
The paper:<p><a href="https://hal.archives-ouvertes.fr/hal-02070778/document" rel="nofollow">https://hal.archives-ouvertes.fr/hal-02070778/document</a>