Fun fact: this is only valid for domains that have a notion of "selfness", i.e. where there is such a thing as an "identity matrix" for the quantities.<p>Consider the following square matrix:<p><pre><code>        TSLA  AAPL  GOOG  MSFT
Alice |  100     5     0     1
Bob   |    0    30   100     5
Carol |    2     2     2     2
Dan   |    0     0     0  1000
</code></pre>
An input vector of stock prices gives an output vector of net worths. However, that is about the only way you can use this matrix. You cannot transform the table arbitrarily and still have it make sense, such as applying a rotation matrix -- it is nonsensical to speak of a rotation from Tesla-coordinates to Google-coordinates. The input and output vectors lack tensor transformation symmetries, so they are not tensors.<p>This is also why Principal Component Analysis and other data science notions in the same vein are pseudoscience (unless you evaluate the logarithm of the quantities, but nobody seems to recognize the significance of unit dimensions and of multiplicative vs additive quantities)
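To make that one legitimate use concrete, here is a minimal sketch of the prices-to-net-worths product (Python with NumPy; the share prices are made-up numbers for illustration):<p><pre><code>import numpy as np

# Rows are holders; columns are shares held per ticker (TSLA, AAPL, GOOG, MSFT).
holdings = np.array([
    [100,  5,   0,    1],  # Alice
    [  0, 30, 100,    5],  # Bob
    [  2,  2,   2,    2],  # Carol
    [  0,  0,   0, 1000],  # Dan
])

# Hypothetical share prices in dollars, in the same ticker order.
prices = np.array([250.0, 190.0, 140.0, 370.0])

# The matrix-vector product maps a price vector to a net-worth vector.
net_worths = holdings @ prices
print(net_worths)  # one net worth per holder
</code></pre>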
This approach reminds me of RedisGraph[1] (which is now unfortunately EoL).<p>"RedisGraph is the first queryable Property Graph database to use sparse matrices to represent the adjacency matrix in graphs and linear algebra to query the graph."<p>1. <a href="https://github.com/RedisGraph/RedisGraph">https://github.com/RedisGraph/RedisGraph</a>
Beautifully done. I subscribe to the author’s Substack; there’s lots of other really nice stuff there.<p>A little off topic, but this is just one more example of beautifully done content on Substack. I have seriously considered configuring my freedom.to settings to only allow access to HN, FB, Twitter, Mastodon, etc., 1 or 2 mornings a week, and that time would mostly be for ensuring that I was always subscribed to a few good Substack channels.<p>With the explosion of ‘tech stuff I should read’, I think I need a more extreme culling of what I spend my time on.
Another interesting mapping is that a vector is (or can be thought of as) a discrete function (f(x) = ....) over an interval, the dot product of two vectors is a discrete analogue of the integral of the product of two functions, and a matrix is a discrete scalar field over a 2-D grid.<p>I wonder what the continuous form of a graph is... Some sort of manifold, perhaps?
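A minimal sketch of the dot-product-as-integral idea (the functions and grid here are arbitrary choices for illustration):<p><pre><code>import numpy as np

# Sample two functions on a uniform grid over [0, 1].
n = 1000
x = np.linspace(0.0, 1.0, n)
f = np.sin(2 * np.pi * x)  # f(x)
g = x ** 2                 # g(x)

# The dot product, scaled by the grid spacing, approximates the
# integral of f(x) * g(x) over the interval.
dx = x[1] - x[0]
approx = np.dot(f, g) * dx
exact = -1.0 / (2 * np.pi)  # closed form of the integral, for comparison
print(approx, exact)
</code></pre>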
GraphBLAS completely revolves around this duality. Sparse linear algebra is really powerful.
<a href="https://graphblas.org/" rel="nofollow noreferrer">https://graphblas.org/</a>
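For a taste of what the duality buys you, here is a minimal sketch of one BFS step as a sparse matrix-vector product (using SciPy for illustration, not the GraphBLAS API itself; the graph is a made-up example):<p><pre><code>import numpy as np
from scipy.sparse import csr_matrix

# Adjacency matrix of a small directed graph: edge i -> j means A[i, j] = 1.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
rows, cols = zip(*edges)
A = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(5, 5))

# Frontier vector: start the traversal at vertex 0.
frontier = np.zeros(5)
frontier[0] = 1.0

# One BFS step is a matrix-vector product: the nonzero entries of the
# result are exactly the vertices reachable in one more hop.
next_frontier = A.T @ frontier
print(np.nonzero(next_frontier)[0])  # -> [1 2]
</code></pre>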
If folks are looking for terms to Google, try "spectral graph theory" and "algebraic graph theory"<p><a href="https://en.wikipedia.org/wiki/Spectral_graph_theory" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Spectral_graph_theory</a><p><a href="https://en.wikipedia.org/wiki/Algebraic_graph_theory" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Algebraic_graph_theory</a><p>Pretty much every field in math has a related field where you try to turn problems from the former into linear algebra problems.<p>The "spectral theorem" (<a href="https://en.wikipedia.org/wiki/Spectral_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Spectral_theorem</a>) is an important theorem in linear algebra that gives conditions for when a matrix can be diagonalized, which is closely related to what its eigenvalues/eigenvectors look like.<p>The simplest version of the spectral theorem says that a symmetric matrix with real-number entries has real-number eigenvalues. The eigenvalues of a matrix are called the "spectrum of the matrix", hence "spectral theorem" and "spectral graph theory".<p>The adjacency matrix of any undirected graph is real symmetric, so its eigenvalues are all real numbers and it's natural to ask whether they say anything about the underlying graph.<p>Lucky for us, there are lots of surprising connections!<p>For example, say G is a finite undirected graph. The chromatic number of G, denoted χ(G), is the smallest number of colors needed to color its vertexes so that no two adjacent vertexes have the same color.<p>If λ₁ is the largest eigenvalue of G's adjacency matrix, then there's a theorem (Wilf's theorem) that says<p><pre><code> χ(G) ≤ 1 + ⌊λ₁⌋
</code></pre>
That is, you can always color a graph with 1 + ⌊λ₁⌋ colors, where ⌊x⌋ is the floor of x.<p>And there are some (finite, undirected) graphs that require exactly 1 + ⌊λ₁⌋ colors, so we're not doing any better unless we can say something more specific about the graph.<p>Wilf's Theorem:<p><a href="https://www2.math.upenn.edu/~wilf/website/Eigenvalues%20of%20a%20graph.pdf" rel="nofollow noreferrer">https://www2.math.upenn.edu/~wilf/website/Eigenvalues%20of%2...</a><p><a href="https://www2.math.upenn.edu/~wilf/website/Inequality%20for%20Chromatic%20Number.pdf" rel="nofollow noreferrer">https://www2.math.upenn.edu/~wilf/website/Inequality%20for%2...</a>
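A quick numerical sanity check of the bound (a sketch; these two graphs are chosen because their chromatic numbers are known, and the bound happens to be tight for both):<p><pre><code>import numpy as np

def wilf_bound(adjacency):
    # 1 + floor(largest eigenvalue): an upper bound on the chromatic number.
    eigenvalues = np.linalg.eigvalsh(adjacency)  # real, since A is symmetric
    # The small tolerance guards against floating-point error at integer eigenvalues.
    return 1 + int(np.floor(eigenvalues[-1] + 1e-9))

# Odd cycle C5: chromatic number 3, largest eigenvalue 2, so the bound is 3.
c5 = np.zeros((5, 5))
for i in range(5):
    c5[i, (i + 1) % 5] = c5[(i + 1) % 5, i] = 1
print(wilf_bound(c5))  # -> 3

# Complete graph K4: chromatic number 4, largest eigenvalue 3, so the bound is 4.
k4 = np.ones((4, 4)) - np.eye(4)
print(wilf_bound(k4))  # -> 4
</code></pre>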
Expressed as an adjacency matrix, every graph is a square matrix. And every square matrix has a characteristic polynomial. So that means: every graph is also a polynomial!<p>see <a href="https://mathworld.wolfram.com/CharacteristicPolynomial.html" rel="nofollow noreferrer">https://mathworld.wolfram.com/CharacteristicPolynomial.html</a>
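As a minimal sketch (np.poly returns the coefficients of a square matrix's characteristic polynomial; the triangle graph K3 is an arbitrary example):<p><pre><code>import numpy as np

# Adjacency matrix of the triangle graph K3.
A = np.array([
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

# Coefficients of det(xI - A), highest degree first: x^3 - 3x - 2.
coefficients = np.poly(A)
print(np.round(coefficients))  # -> [ 1.  0. -3. -2.]
</code></pre>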
I think I’m misunderstanding. The node relabeling seems backwards.<p>He says to start with the highest order, which makes me think the neighborhood with order 3 would get the smallest node labels, and the neighborhoods with order 0 would get the highest.<p>It looks like the opposite was done.