I have created a tutorial slideshow on the theory of GraphBLAS with lots of examples and step-by-step illustrations: <a href="http://mit.bme.hu/~szarnyas/grb/graphblas-introduction.pdf" rel="nofollow">http://mit.bme.hu/~szarnyas/grb/graphblas-introduction.pdf</a><p>A shorter version of this tutorial was recorded at FOSDEM earlier this year: <a href="https://fosdem.org/2020/schedule/event/graphblas/" rel="nofollow">https://fosdem.org/2020/schedule/event/graphblas/</a><p>More papers/tutorials/libraries are listed at <a href="https://github.com/GraphBLAS/GraphBLAS-Pointers" rel="nofollow">https://github.com/GraphBLAS/GraphBLAS-Pointers</a>.
This interview with Tim Davis has a bit of background information.<p><a href="https://www.acm.org/articles/people-of-acm/2019/tim-davis" rel="nofollow">https://www.acm.org/articles/people-of-acm/2019/tim-davis</a><p>> I picked Gaussian elimination. "That will be quick," I thought. Not! I got started on matrices in 1985 and I'm not done with Gaussian elimination yet.<p>> GraphBLAS is a community effort, including industry, academics, and government labs, that is working to design a library that can implement graph algorithms based on sparse linear algebra over semirings. There are lots of great graph libraries out there that don't exploit the linear algebraic abstraction, but most of what they do can be viewed as matrix operations on adjacency matrices. GraphBLAS makes the connection to linear algebra explicit.<p>> GraphBLAS gives the graph algorithm developer the power and abstraction of linear algebra, with all its advantages (associative and distributive laws, AB=(B'A')', and so on). One level of breadth-first search, for example, is a single sparse matrix-times-sparse vector multiply, with no loss of asymptotic efficiency. Google's entire PageRank computation can be done with a handful of iterations around a single call to GraphBLAS, using what I call the "PageRank semiring."
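To make the BFS remark concrete, here is a minimal sketch of one BFS level using the GraphBLAS C API (the predefined semiring and descriptor names are from the v1.3 C API spec; the matrix A, the frontier/visited vectors, and the helper function are my own illustration, not code from the interview):<p><pre><code>#include <GraphBLAS.h>

/* One BFS level: expand the current frontier through adjacency matrix A,
   keeping only vertices that have not been visited yet. A, frontier and
   visited are assumed to be created and populated by the caller. */
void bfs_level (GrB_Matrix A, GrB_Vector frontier, GrB_Vector visited)
{
    /* frontier<!visited> = frontier * A over the boolean (OR, AND) semiring.
       GrB_DESC_RC clears frontier first and complements the mask, so only
       unvisited vertices receive new entries. */
    GrB_vxm (frontier, visited, GrB_NULL,
             GrB_LOR_LAND_SEMIRING_BOOL, frontier, A, GrB_DESC_RC) ;

    /* visited |= frontier: mark the newly reached vertices. */
    GrB_eWiseAdd (visited, GrB_NULL, GrB_NULL, GrB_LOR,
                  visited, frontier, GrB_NULL) ;
}
</code></pre>Iterating this until the frontier is empty gives the full traversal; each level is one masked sparse matrix-vector multiply, which is the point the quote is making.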
RedisGraph uses GraphBLAS:<p><a href="https://github.com/RedisGraph/RedisGraph/" rel="nofollow">https://github.com/RedisGraph/RedisGraph/</a><p>Their paper claims that computing on sparse adjacency matrices is more performant than alternative approaches:<p><a href="https://arxiv.org/pdf/1905.01294.pdf" rel="nofollow">https://arxiv.org/pdf/1905.01294.pdf</a><p>They compile queries written in Cypher (Neo4j's language) to operations in linear algebra. An example query is:<p><pre><code> MATCH (n:actor{name:"Nicolas Cage"})-[:act]->(m:movie)<-[:act]-(a:actor) RETURN a.name, m.title</code></pre>
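For a rough illustration of how such a pattern can map onto linear algebra (this is not RedisGraph's actual code generation): treat :act as a boolean actor-by-movie matrix, select the source actor with a one-hot vector, and the two hops become two multiplications over a boolean semiring. All names below are hypothetical:<p><pre><code>#include <GraphBLAS.h>

/* Act is the boolean actor-by-movie matrix for the :act relation;
   cage is a one-hot boolean vector selecting the source actor. */
void two_hop (GrB_Matrix Act, GrB_Vector cage,
              GrB_Vector movies, GrB_Vector coactors)
{
    /* movies = cage * Act : all movies the selected actor acted in. */
    GrB_vxm (movies, GrB_NULL, GrB_NULL,
             GrB_LOR_LAND_SEMIRING_BOOL, cage, Act, GrB_NULL) ;

    /* coactors<!cage> = Act * movies : everyone who acted in one of those
       movies, with the complemented mask excluding the source actor. */
    GrB_mxv (coactors, cage, GrB_NULL,
             GrB_LOR_LAND_SEMIRING_BOOL, Act, movies, GrB_DESC_C) ;
}
</code></pre>The RETURN clause then amounts to reading names and titles off the entries of coactors and movies.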
Shameless self-promotion: as my first GraphBLAS project, I started a Rust wrapper<p><a href="https://github.com/fabianmurariu/rustgraphblas" rel="nofollow">https://github.com/fabianmurariu/rustgraphblas</a><p>and a JNI wrapper<p><a href="https://github.com/fabianmurariu/graphblas-java-native" rel="nofollow">https://github.com/fabianmurariu/graphblas-java-native</a><p>Neither is 100% complete; the JNI one is not published yet and only works on Linux.
There is also a Python wrapper:<p><a href="https://github.com/michelp/pygraphblas" rel="nofollow">https://github.com/michelp/pygraphblas</a>
For those like me who prefer watching talks, Tim Davis presented for Redis a year or two ago.<p><a href="https://youtu.be/xnez6tloNSQ" rel="nofollow">https://youtu.be/xnez6tloNSQ</a>