I first learned about the C. elegans neuron-mapping project from this Society of Mind video:<p><a href="https://www.youtube.com/watch?v=6Px0livk6m8" rel="nofollow">https://www.youtube.com/watch?v=6Px0livk6m8</a><p>My immediate interest was in seeing the differences and similarities between the real and the simulated worm. I haven't spent much time searching for the resulting papers, but it's been 7 years since then and I'm not aware of any ground-breaking publications on the subject.<p>Unless I'm missing something massive, describing this paper as training the worm to "balance a pole at the tip of its tail" is highly misleading.<p>In this paper the researchers use an external algorithm to tweak the parameters of part of the worm's neural model until that part can perform a certain task. The neural circuit effectively serves as a controller for a mechanism that has nothing to do with the original worm. The task, the setup, the subset of the model, and the training algorithm are all chosen by the researchers.
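To make the criticism concrete, here is a minimal sketch of that kind of setup: an external search algorithm tunes the weights of a small, fixed-topology neural circuit until it keeps a pole upright. Everything here is an assumption for illustration (the toy pendulum dynamics, the 4-unit circuit, the random-search optimizer), not the paper's actual model or code.

    # Hypothetical sketch: external optimizer tuning a small neural circuit
    # to act as a pole-balancing controller. Not taken from the paper.
    import numpy as np

    def simulate_pole(params, steps=500, dt=0.02):
        """Run a crude inverted-pendulum simulation with the circuit as controller.

        Returns how many steps the pole stayed within +/- 0.5 rad of upright.
        """
        n_hidden = 4
        # Unpack the flat parameter vector into circuit weights.
        w_in = params[:2 * n_hidden].reshape(n_hidden, 2)   # inputs: angle, angular velocity
        w_out = params[2 * n_hidden:3 * n_hidden]            # hidden -> control force
        bias = params[3 * n_hidden:]

        theta, theta_dot = 0.05, 0.0                          # start slightly off balance
        for step in range(steps):
            hidden = np.tanh(w_in @ np.array([theta, theta_dot]) + bias)
            force = np.tanh(w_out @ hidden) * 10.0            # bounded control force

            # Very simplified pendulum dynamics: gravity vs. applied force.
            theta_ddot = 9.8 * np.sin(theta) - force * np.cos(theta)
            theta_dot += theta_ddot * dt
            theta += theta_dot * dt
            if abs(theta) > 0.5:
                return step
        return steps

    def random_search(n_params=16, iterations=2000, seed=0):
        """The 'external algorithm': keep the best parameter vector found so far."""
        rng = np.random.default_rng(seed)
        best_params, best_score = None, -1
        for _ in range(iterations):
            candidate = rng.normal(0, 1, n_params)
            score = simulate_pole(candidate)
            if score > best_score:
                best_params, best_score = candidate, score
        return best_params, best_score

    if __name__ == "__main__":
        params, score = random_search()
        print(f"best controller balanced the pole for {score} steps")

Note that nothing in this loop is worm-like: the circuit is just a parameterized controller, and the optimizer, task, and dynamics all come from outside the organism, which is exactly the objection above.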
I vaguely recall that experts consider artificial neural networks to be a very gross approximation of biological ones. They often state that one reason we don't have AI today is that we don't really understand how the brain, and the neurons it is made of, actually work.<p>So I wonder: how does OpenWorm deal with that lack of knowledge? Is there any chance that progress in modeling C. elegans could be used to improve machine learning?
It’ll become interesting when we can teach the worm before “uploading” it, and the resulting NN already knows how to balance that pole without any further training. As is, the article sounds underwhelming.
"Uploaded" no, copied into a perfect or near-perfect simulation.<p>A perfect copy of an organism is still not the organism, but a copy. Just like your twin brother is not you. You won't live forever in a computer.