TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


Mathematicians Discover the Perfect Way to Multiply

194 points by WMCRUN · about 6 years ago

16 comments

makapuf · about 6 years ago
Preceding discussion on Hacker News: https://news.ycombinator.com/item?id=19474280
svat · about 6 years ago
> *In 1960, the 23-year-old Russian mathematician Anatoly Karatsuba took a seminar led by Andrey Kolmogorov [...] Kolmogorov asserted that there was no general procedure for doing multiplication that required fewer than n^2 steps. Karatsuba thought there was—and after a week of searching, he found it.*

Karatsuba's method is beautiful and actually quite simple to explain.

Suppose you want to multiply two 200-digit numbers. Write them as aX+b and cX+d, where X=10^100. We want to compute the product (aX+b)(cX+d), which is (ac)X^2 + (ad+bc)X + (bd).

By the naive method, we'd end up effectively performing all four multiplications ac, ad, bc, bd. Karatsuba observed that we can do it with just three multiplications: ac, bd, and (a+b)(c+d), and then get our (ad+bc) as (a+b)(c+d)-ac-bd.

Doing this recursively (so that T(n) = 3T(n/2) + O(n)) gives Karatsuba's method, cutting the naive method's n^2 time down to about n^1.58.

[1]: https://en.wikipedia.org/wiki/Karatsuba_algorithm
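The three-multiplication recursion described above can be sketched in a few lines of Python (a minimal illustration of my own, not optimized; real implementations fall back to native multiplication well above single digits):

```python
def karatsuba(x, y):
    """Multiply non-negative integers x and y using Karatsuba's recursion."""
    if x < 10 or y < 10:               # base case: single digit, multiply directly
        return x * y
    n = max(len(str(x)), len(str(y))) // 2
    X = 10 ** n                        # split point, as in the comment's aX+b form
    a, b = divmod(x, X)
    c, d = divmod(y, X)
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # only three recursive multiplications: (ad+bc) is recovered as (a+b)(c+d)-ac-bd
    ad_plus_bc = karatsuba(a + b, c + d) - ac - bd
    return ac * X * X + ad_plus_bc * X + bd
```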
xiphias2 · about 6 years ago
"Hardware changes with the times, but best-in-class algorithms are eternal"

Nowadays, for these kinds of problems, the really interesting solution is the best parallel one. It may be elegant to still prove things for Turing machines, but a GPU has 2000 running threads (and if the tensor cores could be used, it's even more instructions), so an algorithm implementation that doesn't use them is about 100x inferior in practice.
davesque · about 6 years ago
Is "Perfect" really an appropriate qualifier here? Did the source article present a proof that O(n log n) is the lowest growth order for this kind of task? Or am I just out of the loop and it's already known that this is the case?

*Update:* Apparently, the article actually says that the approach is *not* perfect.
frosted-flakes · about 6 years ago
This article was originally published in Quanta Magazine: https://www.quantamagazine.org/mathematicians-discover-the-perfect-way-to-multiply-20190411/
ecesena · about 6 years ago
Paper: https://hal.archives-ouvertes.fr/hal-02070778/document
kristianp · about 6 years ago
"The speed gap between multiplication and addition has narrowed considerably over the past 20 years to the point where multiplication can be even faster than addition in some chip architectures."

Does anyone have an example of hardware where multiplication is actually faster than addition? That's not an intuitive result unless they're talking about floating-point numbers, and even then...
asimpletune · about 6 years ago
I remember implementing Strassen, only to be surprised that the naive implementation was actually faster, because it better exploited the machine architecture... that is, until I tried using absolutely huge numbers.

This recent innovation is very exciting.
_bxg1 · about 6 years ago
"It was kind of a general consensus that multiplication is such an important basic operation that, just from an aesthetic point of view, such an important operation requires a nice complexity bound"

It is just nuts that this kind of reasoning works out reliably.
yahrly · about 6 years ago
OK, I'll be that guy: where is the reference implementation, and at what n can I expect an improvement over existing libgmp multiplication on a Core i7?
Lowkeyloki · about 6 years ago
I was going to make a joke that they could have found out years ago by talking to biologists, but then it turned out that this article is really fascinating!
lordnacho · about 6 years ago
Strassen, perhaps unsurprisingly, also has something to say about matrix multiplication:

https://en.wikipedia.org/wiki/Strassen_algorithm

"Strassen's algorithm works for any ring, such as plus/multiply..."
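For reference, one level of Strassen's scheme multiplies two 2x2 matrices with 7 products instead of the naive 8; a minimal Python sketch (my own illustration, entry names are arbitrary, and since only +, -, * are used it works over any ring):

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices A and B using Strassen's 7 products."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # recombine the 7 products into the 4 entries of the result
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Applied recursively to block matrices, this gives the O(n^2.81) bound instead of O(n^3).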
lamename · about 6 years ago
Could someone explain how one can use the FFT in an algorithm for multiplication?

I guess I would understand if I understood what the FFT actually does 'under the hood', but I assumed the FFT itself required multiplication.
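One way to see it (my sketch, not from the thread): treat each number's digit string as a polynomial, multiply the polynomials by taking FFTs and multiplying pointwise, then inverse-transform and propagate carries. The FFT does contain multiplications, but only O(n log n) of them by fixed roots of unity, which replaces the O(n^2) multiplications of a direct digit-by-digit convolution:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)  # twiddle factor (root of unity)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def multiply(x, y):
    """Multiply non-negative integers via FFT-based digit convolution."""
    a = [int(d) for d in str(x)[::-1]]   # least-significant digit first
    b = [int(d) for d in str(y)[::-1]]
    n = 1
    while n < len(a) + len(b):           # pad to a power of two
        n *= 2
    fa = fft(a + [0] * (n - len(a)))
    fb = fft(b + [0] * (n - len(b)))
    fc = [u * v for u, v in zip(fa, fb)]            # pointwise products
    c = [round((v / n).real) for v in fft(fc, invert=True)]  # inverse transform
    result, carry = 0, 0
    for i, digit in enumerate(c):        # propagate carries through the digits
        carry += digit
        result += (carry % 10) * 10 ** i
        carry //= 10
    return result + carry * 10 ** len(c)
```

This floating-point version loses precision for very large inputs; serious implementations (and the paper under discussion) work over more careful number-theoretic or fixed-point transforms.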
gigatexal · about 6 years ago
It might be the fastest, but it's not intuitive for my way of thinking. Perhaps that's because I am so entrenched in the n^2 method.
评论 #19686374 未加载
NelsonMinar · about 6 years ago
This is a lovely, readable summary of a relatively abstract mathematical result.
wppick · about 6 years ago
Are exponentiation operations much faster than multiplication since an exponentiation b^n is just n additions? Given that is true, I wonder if there is a fast approximation method of solving a multiplication by solving an exponentiation that would have a close result?