It's certainly an interesting paper, but there's a bit of publication weirdness at play here.

In October '17, Cueva & Wei posted an (at the time anonymous) paper that contains the core result almost exactly: training a recurrent neural network to perform dead reckoning/path integration yields intermediate units whose spatial firing fields strongly resemble those of grid cells. Critically, this only happens when regularization is applied; Cueva/Wei used noisy inputs, while DeepMind applied 50% dropout to an intermediate linear layer. There are some superficial differences (generic RNN units vs. LSTM), but at their core these studies are virtually identical. Check it out:

https://openreview.net/forum?id=B17JTOe0-

What I don't get: why doesn't DeepMind acknowledge this result? Sure, the Nature paper was submitted in July '17, but these things go through many revisions. Granted, DeepMind did go further, wiring the grid-like representations into a vision-based navigation agent. Nonetheless, Fig. 1 is the core result, everything from Fig. 2 onwards is nice-to-have but not essential, and I feel Cueva/Wei got there first.

Ah, well. At least the minor controversy is good publicity for the Cueva/Wei paper.
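For the curious, here's roughly what that setup looks like. A minimal PyTorch sketch, assuming illustrative layer sizes and names (the papers' actual hyperparameters differ; DeepMind used an LSTM with a dropout-regularized linear bottleneck, Cueva/Wei a generic RNN with noisy inputs):

```python
import torch
import torch.nn as nn

class PathIntegrator(nn.Module):
    """Sketch of the grid-cell-emergence setup: an LSTM integrates
    velocity inputs; a dropout-regularized linear bottleneck feeds a
    readout trained to predict place-cell-like position targets.
    Layer sizes here are illustrative assumptions, not the papers'."""

    def __init__(self, n_velocity=2, n_hidden=128, n_bottleneck=256, n_place=256):
        super().__init__()
        self.lstm = nn.LSTM(n_velocity, n_hidden, batch_first=True)
        self.bottleneck = nn.Linear(n_hidden, n_bottleneck)
        # The 50% dropout is DeepMind's regularizer; Cueva/Wei
        # instead injected noise into the velocity inputs.
        self.dropout = nn.Dropout(p=0.5)
        self.readout = nn.Linear(n_bottleneck, n_place)

    def forward(self, velocities):
        # velocities: (batch, time, 2) egocentric velocity per timestep
        h, _ = self.lstm(velocities)
        g = self.dropout(self.bottleneck(h))  # units inspected for grid fields
        return self.readout(g)                # predicted place-cell activations

model = PathIntegrator()
vel = torch.randn(8, 100, 2)   # 8 random 100-step velocity trajectories
out = model(vel)               # (8, 100, 256) place-cell predictions
```

Train something like this on simulated trajectories against place-cell-like targets, then inspect the bottleneck units' spatial autocorrelograms for hexagonal symmetry; that's where the grid-like firing shows up.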
Nature news article on the paper:
https://www.nature.com/articles/d41586-018-04992-7