Huh. Based on a quick skim, these guys claim that:

* There is low-dimensional structure in the space of representations learned by language models.

* This low-dimensional structure is reflected in the brain responses predicted by model embeddings -- specifically, they say they can recover known language-processing hierarchies from the principal dimension of the representations, and that the embeddings can be used to predict which representations map well onto each brain area.

I'm going to read the paper carefully.
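If I had to guess at the shape of the analysis before reading, it'd be something like the sketch below: PCA over model embeddings to show the low-dimensional structure, plus a ridge-regression encoding model from embeddings to brain responses. This is purely my mental model, not their pipeline -- the data is synthetic and every size is made up, so the printed numbers mean nothing.

    # Rough sketch of the kind of analysis I'm picturing -- NOT their code.
    # Synthetic data, made-up dimensions; numbers printed here are meaningless.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_stimuli, emb_dim, n_voxels = 500, 768, 100

    # Stand-in for contextual embeddings of each stimulus from one model layer.
    embeddings = rng.standard_normal((n_stimuli, emb_dim))
    # Stand-in for brain responses (voxels/electrodes) to the same stimuli.
    brain = rng.standard_normal((n_stimuli, n_voxels))

    X_tr, X_te, y_tr, y_te = train_test_split(embeddings, brain, random_state=0)

    # Claim 1: low-dimensional structure -- a handful of PCs should explain
    # most of the variance in the embeddings (they won't here, since it's noise).
    pca = PCA(n_components=10).fit(X_tr)
    print("variance explained by top 10 PCs:", pca.explained_variance_ratio_.sum())

    # Claim 2: an encoding model (ridge regression from embeddings to brain
    # responses) should predict nearly as well when restricted to that
    # low-dimensional subspace, and the leading PC should order brain areas
    # along the known processing hierarchy.
    full = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, y_tr)
    low = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(pca.transform(X_tr), y_tr)

    print("held-out R^2, full embeddings:", full.score(X_te, y_te))
    print("held-out R^2, 10-dim subspace:", low.score(pca.transform(X_te), y_te))

If the paper's claims hold, the second R^2 should be close to the first on real data, and sorting areas by their loading on the first PC should reproduce the familiar hierarchy. Will see whether that's actually what they did once I read it properly.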