They use evolutionary search to discover spiking neural networks whose response dynamics can solve a control task. This is a fascinating approach, but one that I've only ever seen as a means to do theoretical neuroscience: A way to obtain interesting spiking networks whose dynamics we can study, in the hope of developing mathematical tools that will help us understand biological networks.<p>But here, judging from the claims in the post and the lab website, it sounds as if the goal is application: Creating better, more efficient controllers. This comes across as a little detached from the applied machine learning literature. At the least, I missed a comparison to reinforcement learning (which has a long history of solving this exact task with simpler controller designs and most likely shorter search times) and to non-bio-inspired recurrent networks.<p>One more point: Even if I follow along with the claim that 'deep learning' approaches don't have memory (implying recurrent networks aren't included under that label), I want to point out that this particular task setup, with positions/angles as well as their rates of change provided, can be solved by a memoryless controller. A partially observable benchmark task, e.g. feeding in positions and angles only, would have done more to highlight the strengths of the recurrent network approach. Much more difficult, high-dimensional tasks, e.g. in robotic control, are tackled in the (deep) reinforcement learning literature, among others.
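To make the memoryless-controller point concrete, here's a rough sketch. It uses the standard cart-pole parameters from the classic Barto et al. benchmark (cart 1.0 kg, pole 0.1 kg, half-length 0.5 m), a continuous force rather than Gym's bang-bang actions, and hand-tuned feedback gains that are purely illustrative (nothing from the post). Because velocities are part of the observation, a fixed linear map from state to force, with no internal state at all, keeps the pole up:

```python
import math

# Standard cart-pole parameters (Barto et al. benchmark values)
GRAVITY, M_CART, M_POLE, HALF_LEN = 9.8, 1.0, 0.1, 0.5
TOTAL_M = M_CART + M_POLE
DT = 0.02  # Euler integration step

def step(state, force):
    """One Euler step of the cart-pole dynamics."""
    x, x_dot, theta, theta_dot = state
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    temp = (force + M_POLE * HALF_LEN * theta_dot**2 * sin_t) / TOTAL_M
    theta_acc = (GRAVITY * sin_t - cos_t * temp) / (
        HALF_LEN * (4.0 / 3.0 - M_POLE * cos_t**2 / TOTAL_M))
    x_acc = temp - M_POLE * HALF_LEN * theta_acc * cos_t / TOTAL_M
    return (x + DT * x_dot, x_dot + DT * x_acc,
            theta + DT * theta_dot, theta_dot + DT * theta_acc)

# Memoryless controller: the force is a fixed linear function of the
# observed state (position, velocity, angle, angular velocity).
# Gains are hand-tuned for illustration, not optimized.
GAINS = (1.0, 2.0, 25.0, 7.0)

def policy(state):
    return sum(k * s for k, s in zip(GAINS, state))

state = (0.0, 0.0, 0.05, 0.0)  # start with a 0.05 rad tilt
for _ in range(500):           # 10 simulated seconds
    state = step(state, policy(state))

print(f"final angle: {state[2]:+.5f} rad, final position: {state[0]:+.5f} m")
```

The angle and position both settle back toward zero. Drop the two velocity inputs from the observation and this stops working; that's where a recurrent (or spiking) network would actually have something to do.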
Two-pole balancing (2009):<p><a href="https://www.youtube.com/watch?v=fqk2Ve0C8Qs" rel="nofollow">https://www.youtube.com/watch?v=fqk2Ve0C8Qs</a><p>Double inverted pendulum balancing (2015), a much harder task:<p><a href="https://www.youtube.com/watch?v=8t3i2WPpIDY" rel="nofollow">https://www.youtube.com/watch?v=8t3i2WPpIDY</a><p>Double inverted pendulum balancing with a physical cart (2011), a much much harder task:<p><a href="https://www.youtube.com/watch?v=B6vr1x6KDaY" rel="nofollow">https://www.youtube.com/watch?v=B6vr1x6KDaY</a><p>Triple!!! inverted pendulum balancing with a physical cart (2011), a much much much harder task:<p><a href="https://www.youtube.com/watch?v=cyN-CRNrb3E" rel="nofollow">https://www.youtube.com/watch?v=cyN-CRNrb3E</a>
The single inverted pendulum balancing problem has been solved using neural nets, fuzzy logic, and nonlinear control theory. It's a standard problem in controls classes.<p>Here's a system learning how to do this.[1] Takes about 200 trials.<p>Here's the <i>triple</i> inverted pendulum balancing problem, solved using feedforward control.[2]<p>[1] <a href="https://www.youtube.com/watch?v=Lt-KLtkDlh8" rel="nofollow">https://www.youtube.com/watch?v=Lt-KLtkDlh8</a>
[2] <a href="https://www.youtube.com/watch?v=cyN-CRNrb3E" rel="nofollow">https://www.youtube.com/watch?v=cyN-CRNrb3E</a>
Are there any non-evolutionary training methods for neuromorphic networks?<p>Fixed-architecture ANNs can already solve the cartpole problem very quickly with Q-learning or policy gradient methods: <a href="https://gym.openai.com/envs/CartPole-v0" rel="nofollow">https://gym.openai.com/envs/CartPole-v0</a><p>It seems like some kind of neuromorphic network is going to be necessary for the long-term AI 'dream', but there really needs to be something better than evolutionary algorithms for training; those just don't scale.
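For reference, here's roughly what the policy-gradient side looks like: a self-contained REINFORCE sketch with a toy cart-pole simulator inlined as a stand-in for the Gym environment (same parameters and termination thresholds as CartPole-v0). The hyperparameters are illustrative and untuned, so treat the printed learning curve as a demo of the mechanics, not a benchmark:

```python
import math, random

random.seed(0)

# Toy cart-pole with CartPole-v0's parameters, inlined so the
# sketch runs without a gym dependency.
G, MC, MP, L, F_MAG, DT = 9.8, 1.0, 0.1, 0.5, 10.0, 0.02
FAIL_ANGLE = 12 * math.pi / 180   # failure thresholds, as in CartPole-v0
FAIL_X, MAX_STEPS = 2.4, 200

def step(s, force):
    x, xd, th, thd = s
    c, si = math.cos(th), math.sin(th)
    tmp = (force + MP * L * thd * thd * si) / (MC + MP)
    tha = (G * si - c * tmp) / (L * (4.0 / 3.0 - MP * c * c / (MC + MP)))
    xa = tmp - MP * L * tha * c / (MC + MP)
    return (x + DT * xd, xd + DT * xa, th + DT * thd, thd + DT * tha)

def failed(s):
    return abs(s[0]) > FAIL_X or abs(s[2]) > FAIL_ANGLE

# Logistic two-action policy (push left / push right), linear in the state.
w = [0.0, 0.0, 0.0, 0.0]

def act(s):
    z = sum(wi * si for wi, si in zip(w, s))
    z = max(-30.0, min(30.0, z))           # guard against exp overflow
    p_right = 1.0 / (1.0 + math.exp(-z))
    action = 1 if random.random() < p_right else 0
    return action, p_right

ALPHA = 0.0005
returns = []
for episode in range(300):
    s = (0.0, 0.0, random.uniform(-0.05, 0.05), 0.0)
    traj = []
    for t in range(MAX_STEPS):
        a, p = act(s)
        traj.append((s, a, p))
        s = step(s, F_MAG if a == 1 else -F_MAG)
        if failed(s):
            break
    returns.append(len(traj))
    # REINFORCE: weight each step's log-prob gradient by its return-to-go.
    for t, (s_t, a_t, p_t) in enumerate(traj):
        g_t = len(traj) - t                # undiscounted return-to-go
        grad_logp = a_t - p_t              # d log pi(a|s) / dz for a logistic policy
        for i in range(4):
            w[i] += ALPHA * g_t * grad_logp * s_t[i]

print("mean return, first 50 episodes:", sum(returns[:50]) / 50.0)
print("mean return, last 50 episodes:", sum(returns[-50:]) / 50.0)
```

The whole training loop is a few dozen lines and runs in seconds on a laptop, which is the comparison the evolved-spiking-network work would need to beat.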
"A Deep Learning system does not have a temporal component", but isn't that what recurrent deep learning is for?<p>And how do they calibrate low/middle/high?
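Right, recurrent nets do have a temporal component: the hidden state. A toy scalar example (hand-picked weights, purely illustrative) shows that a single recurrent unit is sensitive to the order of its inputs, which nothing memoryless can be:

```python
import math

def rnn_final_state(inputs, w_in=1.0, w_rec=0.5):
    """Run one tanh recurrent unit over a sequence of scalars."""
    h = 0.0
    for x in inputs:
        h = math.tanh(w_in * x + w_rec * h)  # hidden state carries the past
    return h

a = rnn_final_state([1.0, 0.0])
b = rnn_final_state([0.0, 1.0])
print(a, b)  # same inputs, different order -> different final state
```

A feedforward net fed the same two scalars (without their time indices) would necessarily produce the same output for both orderings; the recurrence is exactly what breaks that symmetry.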
I understand that some of the inputs don't affect the behavior and so don't connect to anything, but why do some of the synapses not connect to anything? In particular, what is the meaning of the two nodes above and below the outputs? They sit on long chains that appear meaningless. Why did the network grow there; is it just evolutionary noise?