They use evolutionary search to discover spiking neural networks whose response dynamics can solve a control task. This is a fascinating approach, but one that I've only ever seen used as a means to do theoretical neuroscience: a way to obtain interesting spiking networks whose dynamics we can study, in the hope of developing mathematical tools that will help us understand biological networks.

But here, judging from the claims in the post and on the lab website, it sounds as if the goal is application: creating better, more efficient controllers. That comes across as a little detached from the applied machine learning literature. At the very least, I missed a comparison to reinforcement learning (which has a long history of solving this exact task with simpler controller designs and most likely shorter search times), and also to non-bio-inspired recurrent networks.

One more point: even if I follow along with the claim that 'deep learning' approaches don't have memory (implying recurrent networks aren't included under that label), I want to point out that this particular task setup, with positions/angles as well as their rates of change provided, can be solved by a memoryless controller. A partially observable benchmark, e.g. one feeding positions and angles only, would have done more to highlight the strengths of the recurrent network approach. Much more difficult, high-dimensional tasks, e.g. in robotic control, are tackled in the (deep) reinforcement learning literature, among other fields.
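
To make the memoryless-controller point concrete, here's a minimal sketch, assuming a CartPole-style benchmark via the gymnasium package (the post may well use a different simulator, and none of this is from the original work): a linear policy that just thresholds a weighted sum of the current observation, with the weights found by plain random search. Because the observation already contains the velocities, no internal state is needed, and this kind of search typically finds a balancing policy within a few dozen guesses.

    import numpy as np
    import gymnasium as gym  # assumed environment; the post's task may differ

    def run_episode(env, w, max_steps=500):
        """Roll out a memoryless linear policy: action depends only on the current observation."""
        obs, _ = env.reset()
        total = 0.0
        for _ in range(max_steps):
            action = int(np.dot(w, obs) > 0.0)  # threshold on a weighted sum, no memory
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            if terminated or truncated:
                break
        return total

    env = gym.make("CartPole-v1")
    best_w, best_return = None, -np.inf
    for _ in range(200):  # naive random search over the four linear weights
        w = np.random.uniform(-1.0, 1.0, size=env.observation_space.shape[0])
        ret = run_episode(env, w)
        if ret > best_return:
            best_w, best_return = w, ret
    print(best_return, best_w)

A partially observable variant (dropping the velocity components from obs) is exactly where this stops working and where a recurrent or spiking network would actually have something to show.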