It appears that we might have jumped on the transistor metaphor too quickly and too intensely. I don't believe everything in the brain is binary, even if the resulting executive action is. Patterns, for example, might not be.
Correct me if I'm dead wrong here, but isn't "software" machine learning taking advantage of all the neurons being "interconnected", similar to a brain? How does that work with physical (discrete?) components, as in this case?
I find it remarkable that their simulations exhibit sparse encoding. Is this a known property of artificial neural networks based on spike-timing-dependent plasticity?

I can imagine how it might emerge in this particular implementation from the electric current following the path of least resistance through the circuit, thereby preventing adjacent neurons from reaching criticality. This mechanism never occurred to me before reading this article, though. Is anyone aware of any prior art on this topic?
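To make that hand-wavy mechanism a bit more concrete, here's a toy sketch of my own (not the article's implementation, and not real STDP): a few leaky integrate-and-fire neurons share roughly the same drive, the one with the lowest "resistance" charges fastest and fires first, and its spike suppresses its neighbours. That competition alone is enough to leave only one or two neurons active, i.e. a sparse code.

    # Toy model: winner-take-all competition among leaky integrate-and-fire neurons.
    # All names and parameters here are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    n_neurons = 8
    threshold = 1.0
    leak = 0.95            # membrane potential decays each step
    inhibition = 0.5       # knocked off neighbours whenever a neuron spikes
    resistance = rng.uniform(0.8, 1.2, n_neurons)  # per-neuron variability

    potential = np.zeros(n_neurons)
    spike_counts = np.zeros(n_neurons, dtype=int)

    for t in range(200):
        drive = 0.08 / resistance  # lower resistance -> stronger effective drive
        potential = leak * potential + drive + rng.normal(0, 0.01, n_neurons)

        fired = potential >= threshold
        if fired.any():
            spike_counts[fired] += 1
            potential[fired] = 0.0                         # reset the winners
            potential[~fired] -= inhibition * fired.sum()  # suppress everyone else
            potential = np.clip(potential, 0.0, None)

    print("spikes per neuron:", spike_counts)

Running it, the spike counts are dominated by the lowest-resistance neuron even though every neuron sees nearly identical input, which is the "path of least resistance starves the neighbours" intuition in miniature. Whether the hardware in the article actually works like this is exactly what I'm asking about.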