I see a lot of criticism here saying things like "DNNs have nothing to do with brains, they weren't designed to work like brains, and any resemblance is surely just an artifact of training them to do brain-like things."

The fact is, neuroscientists have been working with neural network models, of both greater and lesser complexity than DNNs, for decades. These models have lately been put to great profit outside of neuroscience, but that doesn't make them any less an abstraction of some aspects of cortical computation.

We don't yet understand how brains could perform or approximate backprop, but it's the only training algorithm that has been remotely successful at training networks deep enough to do human-like visual recognition. Many people take that as a big clue about what we should be looking for in the brain to explain its performance and ability to learn, rather than as a reason to disqualify DNNs entirely.

There's plenty of modeling work going on with more traditional biophysical models, such as those that include spiking, interneuron compartments, attractor dynamics, etc. This is just an attempt to also come at the problem from the other direction: starting from something we know works well (for vision) and trying to figure out how to ground it in biophysical reality.
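For concreteness, backprop is nothing mysterious at the algorithmic level: it's the chain rule applied layer by layer, then a gradient step. Here's a minimal NumPy sketch on a toy two-layer network (the network size, learning rate, and XOR task are arbitrary choices for illustration, not anything specific to the models discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic non-linearly-separable problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a 2-4-1 sigmoid network (sizes are arbitrary).
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
initial_loss = None
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # network output

    loss = np.mean((out - y) ** 2)
    if initial_loss is None:
        initial_loss = loss

    # Backward pass: chain rule, layer by layer.
    d_out = (out - y) * out * (1 - out)   # gradient at output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient propagated to hidden layer

    # Gradient-descent update.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(f"MSE: {initial_loss:.3f} -> {loss:.3f}")
```

The "backward pass" lines are the part with no agreed-upon biological mechanism: they reuse the forward weights (`W2.T`) to route error signals backward, which is exactly the kind of thing people are trying to find a neural implementation or approximation of.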