
Neuromorphic computing implementation of pole balancing

56 points by kleebeesh over 8 years ago

7 comments

obstbraende over 8 years ago
They use evolutionary search to discover spiking neural networks whose response dynamics can solve a control task. This is a fascinating approach, but one that I've only ever seen as a means to do theoretical neuroscience: a way to obtain interesting spiking networks whose dynamics we can study in the hope of developing mathematical tools that will help understand biological networks.

But here, from the claims in the post and the lab website, it sounds as if the goal is in application: creating better, more efficient controllers. This comes across as a little detached from the applied machine learning literature. At the least, I missed a comparison to reinforcement learning (which has a history of learning to solve this exact task with simpler controller designs and most likely shorter search times) and also to non-bio-inspired recurrent networks.

One more point: even if I follow along with the claim that 'deep learning' approaches don't have memory (implying recurrent networks aren't included in that label), I want to point out that this particular task setup, with positions/angles as well as their rates of change provided, can be solved by a memoryless controller. It would have done more to highlight the strengths of the recurrent network approach if a partially observable benchmark task had been used, e.g. feeding positions and angles only. Much more difficult high-dimensional tasks, e.g. in robotic control, are tackled in the (deep) reinforcement learning literature, among others.
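To make the memoryless-controller point concrete, here is a minimal sketch of a static linear policy on the standard Gym CartPole environment (not the article's setup); the gains are hand-picked for illustration, not tuned, and the classic 4-tuple step API of older gym versions is assumed:

    # Memoryless controller: the action depends only on the current observation
    # [cart position, cart velocity, pole angle, pole angular velocity].
    # Gains are illustrative; classic gym API (4-tuple step return) assumed.
    import numpy as np
    import gym

    GAINS = np.array([0.1, 0.5, 3.0, 1.0])

    def act(obs):
        # Push right (1) if the weighted state says the pole is falling right,
        # otherwise push left (0). No internal state is kept between steps.
        return int(GAINS @ obs > 0.0)

    env = gym.make("CartPole-v0")
    obs = env.reset()
    done, total = False, 0.0
    while not done:
        obs, reward, done, _ = env.step(act(obs))
        total += reward
    print("episode return:", total)

With positions and their rates observed, a static map like this is structurally sufficient; the partially observable variant (positions and angles only) is where recurrence would actually be needed.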
imaginenore over 8 years ago
Two-pole balancing (2009):

https://www.youtube.com/watch?v=fqk2Ve0C8Qs

Double inverted pendulum balancing (2015), a much harder task:

https://www.youtube.com/watch?v=8t3i2WPpIDY

Double inverted pendulum balancing with a physical cart (2011), a much much harder task:

https://www.youtube.com/watch?v=B6vr1x6KDaY

Triple!!! inverted pendulum balancing with a physical cart (2011), a much much much harder task:

https://www.youtube.com/watch?v=cyN-CRNrb3E
Animats over 8 years ago
The single inverted pendulum balancing problem has been solved using neural nets, fuzzy logic, and nonlinear control theory. It's a standard problem in controls classes.

Here's a system learning how to do this.[1] Takes about 200 trials.

Here's the *triple* inverted pendulum balancing problem, solved using feedforward control.[2]

[1] https://www.youtube.com/watch?v=Lt-KLtkDlh8

[2] https://www.youtube.com/watch?v=cyN-CRNrb3E
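For reference, the controls-class treatment alluded to here is usually linear-quadratic regulation of the linearized cart-pole. A minimal sketch, with illustrative masses and lengths that are not taken from the article:

    # LQR on the cart-pole linearized about the upright equilibrium.
    # State: [cart position, cart velocity, pole angle, pole angular velocity].
    # Parameter values are illustrative only.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    M, m, l, g = 1.0, 0.1, 0.5, 9.81   # cart mass, pole mass, pole length, gravity

    A = np.array([[0, 1, 0, 0],
                  [0, 0, -m * g / M, 0],
                  [0, 0, 0, 1],
                  [0, 0, (M + m) * g / (M * l), 0]])
    B = np.array([[0.0], [1 / M], [0.0], [-1 / (M * l)]])

    Q = np.diag([1.0, 1.0, 10.0, 1.0])  # weight the pole angle most heavily
    R = np.array([[0.1]])               # penalty on control effort

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)     # optimal gain matrix

    def control(state):
        # Full-state feedback force on the cart: u = -K x.
        return float(-K @ state)

This is the non-learning baseline: one Riccati solve gives a static gain, no trials needed, provided you know the plant model.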
alexbeloi over 8 years ago
Are there any non-evolutionary training methods for neuromorphic networks?

Currently, fixed-architecture ANNs can already solve the cartpole problem very quickly with Q-learning or policy gradient methods: https://gym.openai.com/envs/CartPole-v0

It seems like some kind of neuromorphic network is going to be necessary for the long-term AI 'dream', but there really needs to be something better than evolutionary algorithms for training; those just don't scale.
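As a rough sketch of what "solve cartpole quickly with a policy gradient method" looks like, here is a minimal REINFORCE loop with a linear softmax policy; hyperparameters are illustrative and the classic gym 4-tuple step API is assumed:

    # Minimal REINFORCE on CartPole-v0 with a linear softmax policy.
    import numpy as np
    import gym

    env = gym.make("CartPole-v0")
    n_obs = env.observation_space.shape[0]
    n_act = env.action_space.n
    W = np.zeros((n_obs, n_act))      # policy weights
    alpha, gamma = 0.01, 0.99         # learning rate, discount

    def policy(obs):
        # Softmax over a linear function of the observation.
        logits = obs @ W
        e = np.exp(logits - logits.max())
        return e / e.sum()

    for episode in range(500):
        obs, done = env.reset(), False
        grads, rewards = [], []
        while not done:
            probs = policy(obs)
            a = np.random.choice(n_act, p=probs)
            # Gradient of log pi(a | obs) for the softmax-linear policy.
            dlog = -np.outer(obs, probs)
            dlog[:, a] += obs
            grads.append(dlog)
            obs, r, done, _ = env.step(a)
            rewards.append(r)
        # Discounted returns, normalized as a crude baseline.
        G, running = np.zeros(len(rewards)), 0.0
        for t in reversed(range(len(rewards))):
            running = rewards[t] + gamma * running
            G[t] = running
        G = (G - G.mean()) / (G.std() + 1e-8)
        for dlog, ret in zip(grads, G):
            W += alpha * ret * dlog   # REINFORCE update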
Comment #12480908 not loaded

Comment #12477739 not loaded
mooneater over 8 years ago
"A Deep Learning system does not have a temporal component", but isn't that what recursive deep learning is for?

And how do they calibrate low/middle/high?
Comment #12476309 not loaded

Comment #12475626 not loaded
philipov over 8 years ago
I understand that some of the inputs don't affect the behavior and so don't connect to anything, but why do some of the synapses not connect to anything? In particular, what is the meaning of the two nodes above and below the outputs? They are on long chains that appear meaningless. Why did the network grow there; is it just evolutionary noise?
Comment #12476113 not loaded

Comment #12476105 not loaded
musesum over 8 years ago
How was the wiring between neurons and synapses created?
Comment #12476132 not loaded