TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.

The Computational Power of Biological Dendritic Trees

90 points · by lamename · over 4 years ago

8 comments

roughly · over 4 years ago
The amount of computational power in biological systems is simply staggering.

In extremely simple organisms like roundworms, there are on the order of hundreds of neurons; for most insects you're in the 10k-1M range.

A honeybee contains one million neurons, which are computational devices that we have a hard time fully and accurately mapping, and something like a billion connections between them.

Each of those neurons contains the entire genome for that honeybee, around 250 million base pairs. Those code for all of the ~thousands of proteins that make up a honeybee - proteins are made up of sequences of amino acids which arrange themselves into shapes with different molecular interaction properties. Figuring out that shape given the amino acid sequence is so computationally difficult that it spawned the Folding@Home project, which is one of the largest collections of computing power in the world.

The process of translating from DNA through RNA to a protein is itself substantially harder than it sounds - spend time with a bioinformatics textbook at some point to see some of the features of DNA, such as non-coding regions in the middle of sequences that describe proteins, or sections of RNA which themselves fold into functional forms.

None of this is even getting down to the molecular level, where the geometry of the folded proteins allows them to accelerate reactions by millions or trillions of times, allowing processes which would normally operate at geological scales to be usable for something with the lifespan of a bacterium.

The most complex systems we've ever devised pale in comparison to even basic biological systems. You need to start to look at macro-scale systems like the internet or global shipping networks before you start to see things that approximate the level of complexity of what you can grow in your garden.

Nature builds things, we're playing with toys.
gtsnexp · over 4 years ago

To put it gently, highly reminiscent of: https://www.biorxiv.org/content/10.1101/613141v2
ArtWomb · over 4 years ago

>>> work suggests that popular neuron models may severely underestimate the computational power enabled by the biological fact of nonlinear dendrites and multiple synapses per pair of neurons

Actually sounds quite significant ;)
dave_sullivan · over 4 years ago

Call me crazy, but isn't this "single biological neuron" actually 2 locally connected layers with a field width of 2 and unshared weights, with a third fully connected layer at the end? With a relu nonlinearity?

I'm not surprised this does well on MNIST, and I'm not sure it breaks with present research directions in deep learning. This network could be built pretty easily in torch or tensorflow.
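[Editor's note] The architecture this commenter describes can be sketched in a few lines of PyTorch. This is only an illustration of the commenter's reading (locally connected layers of width 2 with unshared weights, ReLU, fully connected readout), not the paper's actual model; the layer sizes assume flattened 28×28 MNIST images.

```python
import torch
import torch.nn as nn

class LocallyConnected1d(nn.Module):
    """Like Conv1d with kernel 2 and stride 2, but with a separate
    (unshared) weight pair and bias for every output position."""
    def __init__(self, in_len, kernel=2):
        super().__init__()
        self.kernel = kernel
        out_len = in_len // kernel
        self.weight = nn.Parameter(torch.randn(out_len, kernel) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_len))

    def forward(self, x):                                # x: (batch, in_len)
        # non-overlapping windows of size `kernel` -> (batch, out_len, kernel)
        patches = x.unfold(1, self.kernel, self.kernel)
        return torch.relu((patches * self.weight).sum(-1) + self.bias)

# Two locally connected layers with a fully connected readout, as described.
model = nn.Sequential(
    LocallyConnected1d(784),    # 784 -> 392
    LocallyConnected1d(392),    # 392 -> 196
    nn.Linear(196, 10),         # MNIST class logits
)

x = torch.randn(8, 784)         # a batch of flattened MNIST images
print(model(x).shape)           # torch.Size([8, 10])
```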
angusturner · over 4 years ago

I can't really comment on the novelty of this work, but I don't think the connectivity structure makes much sense.

I mean, it does in the sense that local pixels are strongly correlated and a binary tree will capture this. In fact, if you add weight-sharing to the K-tree model you can recover 1D convolution with a stride and kernel of 2.

But is this really the right operation for images? Why a fixed kernel of 2? I think capsules or some other vector-based operation would make more sense. Perhaps with a learned or dynamic connectivity pattern.
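[Editor's note] The equivalence the commenter points out is easy to check: with weight sharing, one level of a binary (k=2) tree is exactly a 1D convolution with kernel 2 and stride 2. A minimal sketch in PyTorch, with the 784-pixel input length assumed for illustration:

```python
import torch
import torch.nn as nn

# One level of a weight-shared binary tree: every pair of adjacent inputs
# is combined by the same 2-tap filter, i.e. Conv1d(kernel=2, stride=2).
level = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=2, stride=2)

x = torch.randn(1, 1, 784)   # (batch, channels, flattened image)
h = torch.relu(level(x))     # each tree level halves the length: 784 -> 392
print(h.shape)               # torch.Size([1, 1, 392])
```

Stacking such levels until one unit remains reproduces the full tree; dropping the weight sharing recovers the unshared K-tree structure the thread is discussing.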
kroggen · over 4 years ago

She made a video presentation at the Brains@Bay Meetup: https://youtu.be/40OEn4Gkebc?t=2769
bionhoward · over 4 years ago

Nice paper; this could motivate more sophisticated ANN models beyond the multiply-add-activation paradigm.
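[Editor's note] For contrast, the "multiply-add-activation paradigm" the commenter refers to is the standard artificial neuron: a weighted sum followed by a pointwise nonlinearity. A minimal NumPy sketch (names and values are illustrative only):

```python
import numpy as np

# The standard artificial neuron: output = activation(w · x + b).
# Everything richer in a biological dendritic tree (nonlinear subunits,
# multiple synapses per input pair) is collapsed into this one operation.
def neuron(x, w, b):
    return np.maximum(0.0, w @ x + b)   # ReLU activation

x = np.array([0.5, -1.0, 2.0])          # inputs
w = np.array([1.0, 0.2, -0.3])          # synaptic weights
print(neuron(x, w, b=0.1))              # 0.5 - 0.2 - 0.6 + 0.1 = -0.2 -> ReLU -> 0.0
```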
29athrowaway · over 4 years ago
Some people tend to forget that most neural models are an oversimplified approximation of biological nervous systems.