The amount of computational power in biological systems is simply staggering.<p>In extremely simple organisms like roundworms, there are on the order of hundreds of neurons; for most insects you're in the 10k-1M range.<p>A honeybee contains about a million neurons - computational devices that we have a hard time fully and accurately mapping - and something like a billion connections between them.<p>Each of those neurons contains the entire genome for that honeybee, around 250 million base pairs. Those code for the thousands of different proteins that make up a honeybee - proteins are sequences of amino acids which arrange themselves into shapes with different molecular interaction properties. Figuring out that shape given the amino acid sequence is so computationally difficult that it spawned the Folding@Home project, one of the largest collections of computing power in the world.<p>The process of translating from DNA through RNA to a protein is itself substantially harder than it sounds - spend time with a bioinformatics textbook at some point to see some of the features of DNA, such as non-coding regions (introns) in the middle of the sequences that describe proteins, or sections of RNA which themselves fold into functional forms.<p>None of this even gets down to the molecular level, where the geometry of the folded proteins lets them accelerate reactions by factors of millions or trillions, allowing processes which would normally operate on geological timescales to be usable by something with the lifespan of a bacterium.<p>The most complex systems we've ever devised pale in comparison to even basic biological systems. You have to look at macro-scale systems like the internet or global shipping networks before you start to see things that approximate the complexity of what you can grow in your garden.<p>Nature builds things; we're playing with toys.
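For anyone who wants to poke at the "translating from DNA through RNA to a protein" step, here is a minimal Python sketch of the textbook version, assuming a made-up sequence and only a handful of the 64 codons - everything it leaves out (template strands, introns, splicing, RNA that folds into machinery of its own) is exactly the complexity being described above.

    # Toy version of the "central dogma" pipeline: DNA -> mRNA -> protein.
    # Deliberately naive: real transcription reads the template strand, and real
    # genes add introns, splicing, and regulatory structure on top of this.

    # A small slice of the standard 64-codon genetic code (illustrative, not complete).
    CODON_TABLE = {
        "AUG": "Met",                 # also the start codon
        "UUU": "Phe", "UUC": "Phe",
        "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
        "UGG": "Trp",
        "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
    }

    def transcribe(dna: str) -> str:
        """Toy transcription: treat the input as the coding strand and swap T for U."""
        return dna.upper().replace("T", "U")

    def translate(mrna: str) -> list[str]:
        """Read codons in frame until a stop codon or an unknown codon."""
        peptide = []
        for i in range(0, len(mrna) - 2, 3):
            amino_acid = CODON_TABLE.get(mrna[i:i + 3])
            if amino_acid is None or amino_acid == "STOP":
                break
            peptide.append(amino_acid)
        return peptide

    print(translate(transcribe("ATGTTTGGATGGTAA")))   # ['Met', 'Phe', 'Gly', 'Trp']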
To put it gently, highly reminiscent of:
<a href="https://www.biorxiv.org/content/10.1101/613141v2" rel="nofollow">https://www.biorxiv.org/content/10.1101/613141v2</a>
>>> work suggests that popular neuron models may severely underestimate the computational power enabled by the biological fact of nonlinear dendrites and multiple synapses per pair of neurons<p>Actually sounds quite significant ;)
Call me crazy, but isn’t this “single biological neuron” actually 2 locally connected layers with a field width of 2 and unshared weights, plus a third fully connected layer at the end? With a ReLU nonlinearity?<p>I’m not surprised this does well on MNIST, and I’m not sure it breaks with present research directions in deep learning. This network could be built pretty easily in torch or tensorflow.
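For what it's worth, here is a rough PyTorch sketch of that reading of the architecture; the layer widths and the flattened-MNIST input are my own guesses, not taken from the paper.

    import torch
    import torch.nn as nn

    class LocallyConnected1d(nn.Module):
        """Conv1d-like layer with kernel 2 and stride 2, but unshared weights per position."""
        def __init__(self, in_channels, out_channels, in_length, kernel=2, stride=2):
            super().__init__()
            self.kernel, self.stride = kernel, stride
            out_length = (in_length - kernel) // stride + 1
            # One independent filter per output position (no weight sharing).
            self.weight = nn.Parameter(
                torch.randn(out_length, out_channels, in_channels * kernel) * 0.01)
            self.bias = nn.Parameter(torch.zeros(out_length, out_channels))

        def forward(self, x):                                  # x: (batch, channels, length)
            patches = x.unfold(2, self.kernel, self.stride)    # (B, C, L_out, k)
            patches = patches.permute(0, 2, 1, 3).flatten(2)   # (B, L_out, C*k)
            out = torch.einsum("blf,lof->blo", patches, self.weight) + self.bias
            return out.permute(0, 2, 1)                        # (B, out_channels, L_out)

    # Guessed shapes: flattened 28x28 MNIST digits, two locally connected layers
    # with ReLU, then a fully connected readout.
    model = nn.Sequential(
        LocallyConnected1d(1, 4, in_length=784), nn.ReLU(),
        LocallyConnected1d(4, 4, in_length=392), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(4 * 196, 10),
    )

    print(model(torch.randn(32, 1, 784)).shape)   # torch.Size([32, 10])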
I can't really comment on the novelty of this work, but I don't think the connectivity structure makes much sense.<p>I mean, it does in the sense that local pixels are strongly correlated and a binary tree will capture this. In fact, if you add weight-sharing to the K-tree model you can recover a 1D convolution with a stride and kernel of 2.<p>But is this really the right operation for images? Why a fixed kernel of 2? I think capsules or some other vector-based operation would make more sense. Perhaps with a learned or dynamic connectivity pattern.
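To make the weight-sharing point concrete: the shared-weight version of one tree level is just an ordinary strided convolution. A tiny sketch (input shape assumed, not from the paper):

    import torch
    import torch.nn as nn

    # With weight sharing, each level of the k-tree (k=2) collapses to a plain
    # 1D convolution with kernel 2 and stride 2: one output node per pair of inputs.
    level = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=2, stride=2)

    x = torch.randn(8, 1, 784)        # a batch of flattened 28x28 images
    print(level(x).shape)             # torch.Size([8, 1, 392])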
She made a video presentation at the Brains@Bay Meetup:<p><a href="https://youtu.be/40OEn4Gkebc?t=2769" rel="nofollow">https://youtu.be/40OEn4Gkebc?t=2769</a>