In case you're interested in learning about graph deep learning, and are familiar with standard DL, I strongly recommend these two very good, recent books (freely available):<p>[1] Graph Representation Learning - William L. Hamilton
<a href="https://www.cs.mcgill.ca/~wlh/grl_book/" rel="nofollow">https://www.cs.mcgill.ca/~wlh/grl_book/</a><p>[2] Deep Learning on Graphs - Yao Ma, Jilian Tang
<a href="https://cse.msu.edu/~mayao4/dlg_book/" rel="nofollow">https://cse.msu.edu/~mayao4/dlg_book/</a>
A good, recent survey paper (Jan 2021) is by Chami, Abu-El-Haija, Perozzi, Ré, and Murphy:<p>[2005.03675] Machine Learning on Graphs: A Model and Comprehensive Taxonomy <a href="https://arxiv.org/abs/2005.03675" rel="nofollow">https://arxiv.org/abs/2005.03675</a>
Super dumb question: when we talk about graph deep learning, are we talking about deep learning with graphs as input? Or is it the use of graphs as the underlying architecture for a deep learning system?<p>I work in the field of information security, and in my head I can see a scenario of using what I would call time-series graphs, in which something like an attack or basic system behavior could be modeled as an evolution of relationships over time (implying some causality). Or maybe more precisely, they are a set of discrete graphs in which some edges represent temporal proximity rather than a concrete relationship. My problem is that I'm so illiterate when it comes to machine learning that I'm having trouble translating this into a basic context I could start tinkering with to learn.<p>Part of the challenge comes from the fact that nearly every relationship or metric is discrete/categorical in nature; there are very few continuous/scalar values in the information that would be processed. As a result, most of the introductory tutorials and training built around image classification, speech recognition, or clustering are difficult to apply. I want to watch a billion events per hour to find instances where there are atypical relationships between system events in one log and network connection events in another. These graphs are extremely simple to compute, but categorizing and analyzing them is proving difficult for my lay brain.
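To make the "discrete graphs with categorical edges" idea concrete, here's a minimal sketch in plain Python (all the event names, types, and the window size are invented for illustration): bucket log events into time-window graph snapshots, then one-hot encode the categorical edge types, which is the standard way to turn purely categorical data into feature vectors a GNN (or any model) can consume.

```python
from collections import defaultdict

# Hypothetical event records: (timestamp, source entity, dest entity, event type).
# All names and event types here are made up for illustration.
events = [
    (0, "host_a", "proc_1", "process_start"),
    (1, "proc_1", "10.0.0.5", "net_connect"),
    (1, "proc_1", "file_x", "file_write"),
    (7, "host_a", "proc_2", "process_start"),
]

WINDOW = 5  # events within the same window become one graph snapshot

def snapshots(events, window):
    """Bucket events into discrete graph snapshots keyed by time window."""
    graphs = defaultdict(list)
    for t, src, dst, etype in events:
        graphs[t // window].append((src, dst, etype))
    return dict(graphs)

def one_hot(etype, vocab):
    """Categorical edge type -> one-hot feature vector."""
    vec = [0.0] * len(vocab)
    vec[vocab.index(etype)] = 1.0
    return vec

vocab = sorted({e[3] for e in events})
snaps = snapshots(events, WINDOW)
# Each snapshot is now an edge list with vector-valued edge features.
featured = {k: [(s, d, one_hot(t, vocab)) for s, d, t in edges]
            for k, edges in snaps.items()}
```

The sequence of snapshots is exactly the input shape that temporal/dynamic GNN methods expect; "edges by temporal proximity" would just be extra edges added within (or between) snapshots.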
What's the current killer app of GNNs? By that I mean: in which tasks is a GNN-based approach obviously better?<p>Conceptually, just by framing the graph as an m×m adjacency matrix, I assume the most obvious comparison would be against some sort of dimensionality reduction for node embedding. Should I see graph (node) embedding as an alternative to e.g. PCA, LDA, t-SNE, UMAP, and their variants?
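For a rough sense of that comparison: a classic pre-GNN baseline is spectral embedding, where each node's coordinates come from the leading eigenvectors of the adjacency (or Laplacian) matrix; this is conceptually close to PCA, which also reduces to an eigendecomposition. A toy sketch (the graph and embedding dimension here are arbitrary illustrations, not a recommended method):

```python
import numpy as np

# Toy undirected graph: two triangles (nodes 0-2 and 3-5) joined by edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Spectral embedding: the k leading eigenvectors of the adjacency matrix
# give each node a k-dimensional coordinate.
k = 2
vals, vecs = np.linalg.eigh(A)   # eigh: symmetric input, eigenvalues ascending
emb = vecs[:, -k:]               # keep the k leading eigenvectors

# Nodes in the same triangle land closer together in embedding space.
dist_same = np.linalg.norm(emb[0] - emb[1])   # both in the first triangle
dist_diff = np.linalg.norm(emb[0] - emb[5])   # opposite triangles
```

The difference with a GNN is that the embedding above uses only graph structure, while a GNN learns embeddings from structure *plus* node/edge features, trained end-to-end for the downstream task.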