I find it interesting that this appears to be written by a group of physicists rather than NLP or ML researchers, and I think you can kind of see that in the way they approach the problem. A bunch of the post-LDA work among ML and NLP people tended towards (a) using Hierarchical Dirichlet Process models as a platform from which to explore Bayesian nonparametrics more generally, (b) better inference algorithms for topic models, and (c) somewhat richer models (e.g. author-topic models, syntax-aware topic models, etc.).

And it's not like the people in this field haven't been aware of network-oriented methods. But rather than using community detection as a mechanism for topic discovery, people focused on networks among topics (to see how topics are related), networks among authors (so that social-network information informed topic discovery), or networks among documents (where link/reference information was explicitly part of the model).

These authors seem to get solid results in part by having totally different values and aesthetics. Unlike the Bayesian nonparametrics people, they clearly don't mind picking arbitrary, inflexible parameters (e.g. the 5% threshold); they don't need their model to have a clean generative form; they aren't particularly concerned with contributing a new algorithmic insight (they hand the hard work off to InfoMap and discuss none of its details); and they make no attempt to advance the expressiveness of the topic model itself (they use the most basic bag-of-words representation available). But they do seem to get good results on the basic task with a very pragmatic, pipeline approach.
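
To make the "pragmatic pipeline" reading concrete, here's a minimal sketch of that idea, not the authors' actual code: build a bipartite word-document graph from raw bag-of-words counts, then hand the whole problem to an off-the-shelf community detector. The paper uses InfoMap; networkx's greedy modularity routine stands in here only because it ships with the library, and the toy corpus is made up.

    from collections import Counter

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Toy corpus standing in for real documents.
    docs = [
        "the cat sat on the mat",
        "the dog chased the cat",
        "stocks fell as markets closed",
        "investors sold stocks in falling markets",
    ]

    # Bipartite word-document graph: edge weight = raw term frequency,
    # i.e. the most basic bag-of-words representation available.
    G = nx.Graph()
    for i, doc in enumerate(docs):
        for word, count in Counter(doc.split()).items():
            G.add_edge(f"doc:{i}", f"word:{word}", weight=count)

    # Communities mixing word and document nodes play the role of topics.
    # (The paper runs InfoMap at this step; greedy modularity is a stand-in.)
    for topic in greedy_modularity_communities(G, weight="weight"):
        print(sorted(topic))

All the model-like work happens inside the community detector, which is exactly the division of labor the comment above is describing: no priors, no generative story, just a graph and someone else's partitioning algorithm.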