TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.


Show HN: Matrix Multiplication with Half the Multiplications

310 points · by emacs28 · about 1 year ago

7 comments

halflings · about 1 year ago
This looks pretty cool! What's the catch? Why isn't this already implemented in accelerators? Is it really just a forgotten algorithm, or does it have implications for the cost of building the accelerator, or something else?
ixaxaar · about 1 year ago
Man, I remember something similar I tried working on in 2018, but gave up after all my PhD applications got rejected.

https://github.com/ixaxaar/pytorch-dni

The concept here goes a bit further and tries to replicate backprop with an external network, arguing that that's probably what the brain actually does.
michelpp · about 1 year ago
This is very cool and a really interesting read! For those in the comments confused about how this is better: the paper is talking about synthesizing matrix multiplication pipelines in hardware, like an FPGA or ASIC. On a CPU or GPU you won't notice, because adds and multiplications generally take the same amount of time, but a multiplication unit takes up many more transistors, so if you can reduce the circuit complexity you can increase the speed and parallel throughput while reducing power and routing complexity. This approach could be particularly useful for efficient sparse matrix multiplication accelerators.

Another cool way to eliminate multiplication in matrix multiplication is to use different semirings [1]. The Tropical Semiring [2], for example, substitutes addition for multiplication and min (or max) for addition. It's still matrix multiplication, just with substituted binary operations. Research in the relatively new field of Tropical Algebra [3] is quite active and rich right now; it is being used for all kinds of optimization problems and in research on optimizing neural networks [4]. This approach also lends itself to hardware synthesis, since most FPGA configurable logic blocks can add/min/max in one clock cycle, whereas efficient multiplication requires fixed, dedicated on-chip hardware multipliers.

Another way to efficiently remove multiplications, with a different but related semiring, is to use a Log Semiring [5]. If you have to multiply chains of probabilities (as in Markov chains), the numbers quickly become very small and floating point loses the accuracy to represent them. By taking the log of the numbers first, multiplication becomes addition, and addition becomes x + log1p(exp(y - x)).

[1] https://en.wikipedia.org/wiki/Semiring
[2] https://en.wikipedia.org/wiki/Tropical_semiring
[3] https://en.wikipedia.org/wiki/Tropical_geometry
[4] https://proceedings.mlr.press/v80/zhang18i/zhang18i.pdf
[5] https://en.wikipedia.org/wiki/Log_semiring
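To make the semiring idea concrete, here is a small sketch (my own illustration, not code from the paper): a min-plus "matrix product" over the tropical semiring, plus the numerically stable log-semiring addition described above.

```python
import math

def tropical_matmul(A, B):
    """Min-plus (tropical) matrix product: C[i][j] = min_k(A[i][k] + B[k][j]).
    No multiplications at all, only additions and min, which maps well to
    FPGA logic. Iterating this on a graph's weight matrix yields shortest paths."""
    n, m, p = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def log_add(x, y):
    """Addition in the log semiring: log(e^x + e^y), computed stably as
    x + log1p(exp(y - x)) with x >= y, so the exponent never overflows."""
    if x < y:
        x, y = y, x
    return x + math.log1p(math.exp(y - x))
```

In the tropical product, the semiring's "multiply" is ordinary addition and its "add" is min, so the familiar triple-loop structure of matrix multiplication is preserved while the expensive operation disappears.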
Drakim · about 1 year ago
I'm surprised this actually works; usually, detecting whether to use multiplication or addition is slower than simply multiplying, especially with massive amounts of work being done in parallel.
skykooler · about 1 year ago
I find it fascinating that this uses a process invented in 1968 that hasn't been used for this purpose until now!
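The 1968 process referenced here is presumably Winograd's inner-product algorithm, which trades half the multiplications for extra additions by precomputing pairwise products along each row of A and each column of B. A minimal sketch in plain Python (my own reconstruction of Winograd's formula, not code from the linked project):

```python
def winograd_matmul(A, B):
    """Multiply an m×n matrix A by an n×p matrix B using roughly half the
    multiplications (Winograd, 1968). Requires an even inner dimension n."""
    m, n, p = len(A), len(B), len(B[0])
    assert n % 2 == 0, "inner dimension must be even"
    # Precompute pairwise products once per row of A and once per column of B.
    xi  = [sum(A[i][2*k] * A[i][2*k+1] for k in range(n // 2)) for i in range(m)]
    eta = [sum(B[2*k][j] * B[2*k+1][j] for k in range(n // 2)) for j in range(p)]
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            # Only n/2 multiplications (instead of n) per inner product:
            # the cross terms expand to A·B plus the precomputed xi and eta.
            s = sum((A[i][2*k] + B[2*k+1][j]) * (A[i][2*k+1] + B[2*k][j])
                    for k in range(n // 2))
            C[i][j] = s - xi[i] - eta[j]
    return C
```

For an m×n by n×p product this uses roughly mnp/2 multiplications in the main loop plus (m+p)·n/2 in the precomputation, versus mnp for the schoolbook method; the savings come at the price of extra additions, which is exactly the trade-off that matters in hardware.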
Lucasoato · about 1 year ago
If you're interested in the mathematical theory behind sub-cubic algorithms for matrix multiplication, you can start here: https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm#Sub-cubic_algorithms

I conjecture that for every j > 0 in R, a number n exists such that any two n x n matrices can be multiplied together in O(n^(2+j)) steps.

(Now proven for 2+j = w = 2.3728596, i.e. j > 0.3728596.)
barfbagginus · about 1 year ago
This README does a really poor job of explaining what the improvement is or how they drop half the multiplications. What is the big-O runtime? Is this shifting the known best bounds?

And the diagrams are chaotic and don't really explain anything about why this approach is fast or good. The result is that I'm reluctant to even click through to the PDF.

If you want to improve the project's credibility, please consider being honest and open about what is actually going on, with clear explanations and illustrations, rather than material that may as well be designed to hype people too busy to tell you that you are cranks. It's hard to tell whether this is incredibly groundbreaking or just a nothingburger. Sadly, I almost feel that must be an intentional decision motivated by the poor merits of the work and a desire to exploit AI hype. The alternative, which I prefer to believe, is that the author simply needs to revise and better contextualize.