Great write-up! Using learned representations for content in personal knowledge bases seems like a huge missing piece of tools like Roam. This appears to go all the way to the other end of the spectrum, not supporting any explicit graph links, IIUC.

I feel like ultimately you want both. Explicit links are a useful navigation affordance with nice properties that spatial embeddings won't give you (e.g. you can explicitly establish a link between things that are not similar according to the embedding space).

More important than that, though, explicit links let you train the embedding model to understand the dataset the way the user does. All of these embedding models are trained on graphs (word or sentence co-occurrence graphs, parent/child comment graphs on social media, etc.). The graph structure in something like Roam can provide training data for updating and adapting the embedding space to the specific knowledge context in which it's used.

Conversely, if you have an embedding representation of your knowledge base, you can use it to suggest explicit links. The embedding space is the dense dual to the sparse graph of explicit links in something like Roam: a fully connected weighted graph rather than a sparse unweighted one.

Maybe this system is meant to focus only on the spatial embedding representation. That makes a lot of sense. A fully-fledged version of this vision, though, IMO should include a bridge between these two dual representations.
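To make the "embeddings suggest explicit links" direction concrete, here's a minimal sketch. The note names, vectors, and the `suggest_links` helper are all made up for illustration; the only idea it demonstrates is thresholding cosine similarity over the dense graph to propose edges for the sparse one.

```python
# Hypothetical sketch: proposing explicit links from an embedding space.
# Note names and vectors are invented for illustration.
import numpy as np

def suggest_links(embeddings: dict, threshold: float = 0.8):
    """Return candidate (note_a, note_b, similarity) pairs above a cosine threshold."""
    names = list(embeddings)
    # Normalize rows so the matrix product gives cosine similarities.
    mat = np.stack([embeddings[n] / np.linalg.norm(embeddings[n]) for n in names])
    sims = mat @ mat.T
    suggestions = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):  # upper triangle: each pair once
            if sims[i, j] >= threshold:
                suggestions.append((names[i], names[j], float(sims[i, j])))
    # Strongest suggestions first.
    return sorted(suggestions, key=lambda t: -t[2])

# Toy example: two related notes and one unrelated note.
notes = {
    "roam-notes": np.array([1.0, 0.1, 0.0]),
    "zettelkasten": np.array([0.9, 0.2, 0.1]),
    "grocery-list": np.array([0.0, 0.1, 1.0]),
}
print(suggest_links(notes))
```

In a real system you'd want approximate nearest-neighbor search instead of the full pairwise matrix, but the dense-graph-to-sparse-graph bridge is the same either way.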