If the algorithm's so short, why not post the pseudocode?<p>Or did they? Here's my impression based on the article and a few minutes of refreshers:<p>Problem: Given a (weighted?) graph, solve its Laplacian system, which represents (in the sense of being polynomial-time reducible to?) how much current would flow between any two nodes if a voltage were applied across them.<p>New algorithm:<p>1) Find a (minimal?) spanning tree for the graph.<p>2) Calculate how much current would flow through that tree if it were so weighted and a unit voltage were applied across it, with the edges being resistors (and leaves presumably connected to ground?).<p>3) Restore one of the edges of the original graph, creating a loop.<p>4) Update the currents you previously calculated by the amount needed to balance the circuit.<p>5) Repeat from Step 3) until you have the original graph.<p>6) The currents are your solution?<p>(Sorry, lots of edits to correct and clarify.)
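The loop described above can be sketched in toy form (my own reconstruction on a 4-node example, in the spirit of cycle-update solvers; the path-shaped spanning tree, unit resistances, and all names here are illustrative assumptions, not the paper's actual method):

```python
# Toy cycle-update sketch: square graph 0-1-2-3-0 plus diagonal (0, 2),
# all unit resistances. Spanning tree = the path 0-1-2-3.
off_tree = [(0, 3), (0, 2)]

def tree_path(u, v):
    """Signed tree edges along the path u -> v; sign +1 means traversed low->high."""
    if u <= v:
        return [((a, a + 1), +1) for a in range(u, v)]
    return [((a - 1, a), -1) for a in range(u, v, -1)]

def closed_cycle(u, v):
    """Cycle formed by restoring off-tree edge (u, v): the edge plus the tree path back."""
    return [((u, v), +1)] + tree_path(v, u)

# Step 2: route 1 A of current from node 0 to node 3 through the tree alone.
flow = {e: 0.0 for e in [(0, 1), (1, 2), (2, 3)] + off_tree}
for e, sign in tree_path(0, 3):
    flow[e] += sign

# Steps 3-5: restore each off-tree edge in turn and cancel the voltage
# imbalance (KVL residual) around the cycle it closes -- a Kaczmarz-style
# projection onto that cycle's constraint.
cycles = [closed_cycle(u, v) for u, v in off_tree]
for cyc in (cycles[i % 2] for i in range(200)):
    drop = sum(sign * flow[e] for e, sign in cyc)  # voltage drop, unit resistors
    delta = drop / len(cyc)                        # cycle resistance = edge count
    for e, sign in cyc:
        flow[e] -= sign * delta

# Step 6: flow now approximates the true electrical currents.
print({e: round(f, 4) for e, f in sorted(flow.items())})
# → {(0, 1): 0.125, (0, 2): 0.25, (0, 3): 0.625, (1, 2): 0.125, (2, 3): 0.375}
```

The printed currents match the exact electrical flow for this circuit (e.g. 5/8 A on the direct edge 0-3), so the cycle projections do converge to the solution here; the real algorithm's contribution is the sampling scheme and tree choice that make this fast on large graphs.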
This algorithm is useless without quantifying constants or numerically demonstrating that the constants are practical for some test problems. Although they cite CMG in passing, they do not compare performance. They also (negligently) do not cite LAMG. MG algorithms are also highly parallelizable, which is not clearly the case for this algorithm.<p>To get an idea of how essential it is to quantify constants, recall that in 1964, Fedorenko rigorously proved that the number of iterations of multigrid required to reduce the residual of Poisson on a rectangular grid with n points by a factor of epsilon is O(n * |log epsilon|). He quantified the associated constant as<p><pre><code> W(n, 0.01) <= 210000*n + W(10^6, 0.01)
</code></pre>
Groundbreaking optimal asymptotics, but utterly useless (the proposed algorithm had the wrong idea about good coarse spaces), and nobody cared until Achi Brandt came along in 1973 and revolutionized implicit solvers by demonstrating that if you get the details right,<p><pre><code> W(n, discretization error) <= 40 * n
</code></pre>
(using only addition and shift operations).
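To put the two bounds side by side, here is the simple arithmetic on the quoted constants at, say, n = 10^6 grid points (the additive W(10^6, 0.01) term in Fedorenko's bound is unknown and left out, so this understates his cost):

```python
# Compare the two quoted work bounds at n = 10**6 grid points.
# Fedorenko's bound also carries an additive W(10**6, 0.01) term,
# unknown here, so this is a lower bound on his estimate.
n = 10**6
fedorenko_ops = 210_000 * n   # W(n, 0.01) <= 210000*n + W(10**6, 0.01)
brandt_ops = 40 * n           # W(n, discretization error) <= 40*n
print(fedorenko_ops // brandt_ops)  # → 5250
```

A constant-factor gap of more than three orders of magnitude, which is the whole point: the asymptotics were identical, but only one of the two was usable.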
> an algorithm that solved graph Laplacians in “nearly linear time,” meaning that the algorithm’s running time didn’t increase exponentially with the size of the problem.<p>MIT, I expected more from you.
Wasn't this algorithm described in a post to HN a few months ago in the context of fraud detection, but subsequently taken down at the request of the poster's employer?
Wasn't there an article pretty recently about a well-performing algorithm for the traveling salesman problem that sounded pretty much exactly like this algorithm?
I would love to see a visualization of their algorithm iterating its loop rebalancing thing on a simple 2D graph. Maybe a weekend project...<p>[removed: I was curious about the SDD requirement, but that's always the case for a graph Laplacian]