Matrices and Graphs

205 points by jgrodziski, almost 2 years ago

14 comments

EsportToys, almost 2 years ago
Fun fact: this is only valid for domains that have a notion of "selfness", i.e. that there is such a thing as an "identity matrix" for the quantities.

Consider the following square matrix:

            TSLA  AAPL  GOOG  MSFT
    Alice |  100     5     0     1
    Bob   |    0    30   100     5
    Carol |    2     2     2     2
    Dan   |    0     0     0  1000

An input vector of stock prices gives an output vector of net worths. However, that is about the only way you can use this matrix. You cannot transform the table arbitrarily and still have it make sense, such as applying a rotation matrix -- it is nonsensical to speak of a rotation from Tesla-coordinates to Google-coordinates. The input and output vectors lack tensor transformation symmetries, so they are not tensors.

This is also why Principal Component Analysis and other data science notions in the same vein are pseudoscience (unless you evaluate the logarithm of the quantities, but nobody seems to recognize the significance of unit dimensions and multiplicative vs. additive quantities).
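To make the point concrete, here is a minimal sketch (Python with NumPy, using made-up share prices) of the one operation this matrix does support: multiplying it by a price vector to get net worths.

```python
import numpy as np

# Rows are people, columns are tickers (TSLA, AAPL, GOOG, MSFT).
holdings = np.array([
    [100,  5,   0,    1],   # Alice
    [  0, 30, 100,    5],   # Bob
    [  2,  2,   2,    2],   # Carol
    [  0,  0,   0, 1000],   # Dan
])

# Hypothetical share prices, one per ticker.
prices = np.array([250.0, 190.0, 130.0, 340.0])

# Holdings times prices gives net worths -- about the only meaningful use of this matrix.
net_worths = holdings @ prices
print(dict(zip(["Alice", "Bob", "Carol", "Dan"], net_worths)))
```

Anything like a rotation applied to the holdings matrix would mix ticker columns into quantities with no interpretable units, which is exactly the commenter's point.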
kmad, almost 2 years ago
This approach reminds me of RedisGraph[1] (which is now unfortunately EoL).

"RedisGraph is the first queryable Property Graph database to use sparse matrices to represent the adjacency matrix in graphs and linear algebra to query the graph."

1. https://github.com/RedisGraph/RedisGraph
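Not RedisGraph itself, but the idea in the quote (a sparse adjacency matrix queried with linear algebra) can be sketched with scipy.sparse; the 4-node directed graph below is a hypothetical example.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Adjacency matrix of a small directed graph: A[i, j] = 1 means an edge i -> j.
A = csr_matrix(np.array([
    [0, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]))

# Linear algebra answers graph queries: (A @ A)[i, j] counts the distinct
# 2-hop paths from node i to node j.
two_hop = A @ A
print(two_hop.toarray())
```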
mark_l_watson, almost 2 years ago
Beautifully done. I subscribe to the author's Substack; there is a lot of other really nice stuff there.

A little off topic, but this is just one more example of beautifully done content on Substack. I have seriously considered setting my freedom.to settings to only allow access to HN, FB, Twitter, Mastodon, etc. one or two mornings a week, and using most of that time to make sure I stay subscribed to a few good Substack channels.

With the explosion of 'tech stuff I should read', I think I need a more extreme culling of what I spend my time on.
Scene_Cast2, almost 2 years ago
Another interesting mapping is that a vector is (or can be thought of as) a discrete function f(x) over an interval, a dot product of two vectors is a discrete integral of their product, and a matrix is a discrete scalar field.

I wonder what the continuous form of a graph is... Some sort of manifold, perhaps?
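A small numerical illustration of the "dot product as discrete integral" view, assuming we sample f(x) = sin(x) and g(x) = cos(x) on [0, π]:

```python
import numpy as np

# Sample both functions at n evenly spaced points on [0, pi].
n = 1000
x = np.linspace(0, np.pi, n)
dx = x[1] - x[0]
f = np.sin(x)
g = np.cos(x)

# The dot product, scaled by the grid spacing, approximates the integral of f*g.
discrete = np.dot(f, g) * dx
print(discrete)  # close to 0, the exact value of the integral of sin*cos over [0, pi]
```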
yvdriess, almost 2 years ago
GraphBLAS completely revolves around this duality. Sparse linear algebra is really powerful. https://graphblas.org/
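This is not GraphBLAS's own API, just the duality it builds on, sketched with scipy.sparse: one step of breadth-first search is a sparse matrix-vector product. The 4-node path graph is a made-up example.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Undirected path graph 0 - 1 - 2 - 3 as a sparse adjacency matrix.
A = csr_matrix(np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
]))

frontier = np.array([1, 0, 0, 0])  # start BFS at node 0
visited = frontier.copy()
while frontier.any():
    reached = (A @ frontier > 0).astype(int)  # nodes adjacent to the current frontier
    frontier = reached & (1 - visited)        # keep only newly discovered nodes
    visited |= frontier
print(visited)  # every node reachable from node 0
```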
jfarmer, almost 2 years ago
If folks are looking for terms to Google, try "spectral graph theory" and "algebraic graph theory".

https://en.wikipedia.org/wiki/Spectral_graph_theory

https://en.wikipedia.org/wiki/Algebraic_graph_theory

Pretty much every field in math has a related field where you try to turn problems from the former into linear algebra problems.

The "spectral theorem" (https://en.wikipedia.org/wiki/Spectral_theorem) is an important theorem in linear algebra that gives conditions for when a matrix can be diagonalized, which is closely related to what its eigenvalues/eigenvectors look like.

The simplest version of the spectral theorem says that a symmetric matrix with real-number entries has real-number eigenvalues. The eigenvalues of a matrix are called the "spectrum" of the matrix, hence "spectral theorem" and "spectral graph theory".

The adjacency matrix of any undirected graph is real symmetric, so its eigenvalues are all real numbers and it's natural to ask whether they say anything about the underlying graph.

Lucky for us, there are lots of surprising connections!

For example, say G is a finite undirected graph. The chromatic number of G, denoted χ(G), is the fewest number of colors needed to color its vertexes so that no two adjacent vertexes have the same color.

If λ₁ is the largest eigenvalue of G's adjacency matrix then there's a theorem (Wilf's theorem) that says

    χ(G) ≤ 1 + ⌊λ₁⌋

That is, you can always color a graph with 1 + ⌊λ₁⌋ colors, where ⌊x⌋ is the floor of x.

And there are some (finite, undirected) graphs that require exactly 1 + ⌊λ₁⌋ colors, so we're not doing any better unless we can say something more specific about the graph.

Wilf's Theorem:

https://www2.math.upenn.edu/~wilf/website/Eigenvalues%20of%20a%20graph.pdf

https://www2.math.upenn.edu/~wilf/website/Inequality%20for%20Chromatic%20Number.pdf
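A quick check of the Wilf bound on the 5-cycle C₅, where the bound is tight (the largest adjacency eigenvalue is 2 and an odd cycle needs 3 colors):

```python
import numpy as np

# Adjacency matrix of the 5-cycle C5 (undirected, so the matrix is symmetric).
A = np.array([
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
])

# Real symmetric matrix, so eigvalsh returns real eigenvalues (the spectrum).
eigenvalues = np.linalg.eigvalsh(A)
lam1 = eigenvalues.max()
# A small epsilon guards against floating-point round-off just below an integer.
wilf_bound = 1 + int(np.floor(lam1 + 1e-9))
print(lam1, wilf_bound)  # largest eigenvalue ~2, bound 3 = chi(C5)
```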
amai, almost 2 years ago
If expressed as an adjacency matrix, every graph is a square matrix. And every square matrix has a characteristic polynomial. So that means: every graph is also a polynomial!

See https://mathworld.wolfram.com/CharacteristicPolynomial.html
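For a concrete instance, NumPy's np.poly returns the coefficients of a matrix's characteristic polynomial; for the triangle K3 (adjacency eigenvalues 2, -1, -1) that polynomial is x^3 - 3x - 2:

```python
import numpy as np

# Adjacency matrix of the triangle K3.
A = np.array([
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

# np.poly computes the coefficients of det(xI - A), highest degree first.
coeffs = np.poly(A)
print(np.round(coeffs))  # [ 1.  0. -3. -2.]  ->  x^3 - 3x - 2
```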
dustingetz, almost 2 years ago
What book should one read to develop these concepts and intuitions at an engineering level of math (not proofs)?
enchiridion, almost 2 years ago
I think I’m misunderstanding. The node relabeling seems backwards.<p>He says start with the highest order, which makes me think the neighborhood with order 3 would get the smaller node labels, and the neighborhoods with order 0 would get the highest.<p>It looks like the opposite was done.
duckqlz, almost 2 years ago
Does anyone know how the graph illustrations were created? I have driven myself mad with TikZ and dot.
luttik, almost 2 years ago
This is also the basis of process mining, and it's why process graphs are so easy to analyse.
zmgsabst, almost 2 years ago
This is especially fascinating when you consider graphs/diagrams are a way to encode math.
owlbite, almost 2 years ago
Sparse Linear Algebra is Graphs all the way down.
bionhoward, almost 2 years ago
Beautiful illustrations, thank you for sharing.