I am a computational cognitive neuroscientist and have worked at many levels. I find each kind of data and model useful to some extent, but I have to admit that the least useful, to my mind, are those at the detailed neural network level, like the ones discussed in this paper. Somewhat more useful are higher-level dynamic architecture models, and, at the highest level, cognitive models, which constrain the behavioral target we are trying to explain. I personally (as one can tell from my other posts here) find dynamical models of brain development to be the most compelling as overall models, but they are not particularly explanatory at the detailed level. Brain science is trying to do the hardest thing you can imagine: explain the most complex machine in the known universe. We persist, but no one entering this field should have very high expectations of near-term grand successes.
What we need is a Newtonian model of the brain: a model that is incomplete and "wrong", but useful and generative. While Newtonian physics may be "wrong", it is much easier to learn than quantum physics or relativity.

Neuroscience usually focuses on precise details, but doesn't aim to tell big-picture stories. There are a few exceptions, however, like Karl Friston's free energy principle.
The article touched upon the C. elegans connectome. There are a few interesting projects attempting to simulate the creature:

https://en.wikipedia.org/wiki/OpenWorm

https://en.wikipedia.org/wiki/WormBase
From the article, a quote that is enlightening:
“…if I asked, ‘Do you understand New York City?’ you would probably respond, ‘What do you mean?’ There’s all this complexity. If you can’t understand New York City, it’s not because you can’t get access to the data. It’s just there’s so much going on at the same time. That’s what a human brain is. It’s millions of things happening simultaneously among different types of cells, neuromodulators, genetic components, things from the outside. There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’”
There is a good chance that complete brain mapping will be similar to whole genome sequencing. The result will be interesting but only answer a limited subset of questions.
I'm confused. This article doesn't say anything. It makes no points and has no insight. "There's a lot of data in neuroscience?" Is that the message? An unusual number of Nautilus articles frontpage HN like this one, where there doesn't seem to be any value in the article itself. What is going on?
To put a finger on what the author feels is missing in neuroscience: to understand something, you need to be able to describe two things about it, (1) what its state is at any point in time, and (2) how that state evolves over time. Connectomics gives you the beginnings of (1), but it doesn't go the whole way. There's a fundamental misunderstanding that you can collect exabytes of data and glean understanding from it just because of how much you had to sweat to store and collect it. That's not how it works; the data needs to have structure too. I wish more biologists and neuroscientists understood this.
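To make the distinction between (1) and (2) concrete, here is a toy Python sketch (purely illustrative, with made-up parameters, not drawn from the article or any real connectome): the same fixed wiring matrix supports completely different behavior once a single dynamical parameter changes, so the structural map alone underdetermines what the network does.

    # Toy sketch, not anyone's actual model: the same fixed "connectome"
    # (weight matrix W) behaves very differently under different dynamical
    # parameters, i.e. point (1) without point (2) is not enough.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50                                      # number of units (arbitrary)
    W = rng.normal(0, 1 / np.sqrt(n), (n, n))   # fixed "structural" wiring

    def simulate(gain, tau=10.0, dt=1.0, steps=500):
        """Rate dynamics: tau * dx/dt = -x + tanh(gain * W @ x)."""
        x = rng.normal(0, 0.1, n)               # random initial state
        for _ in range(steps):
            x = x + (dt / tau) * (-x + np.tanh(gain * W @ x))
        return x

    print(simulate(gain=0.5).std())   # low gain: activity decays toward zero
    print(simulate(gain=2.0).std())   # same wiring, high gain: sustained, irregular activity

The particular rate equation and gain values are assumptions for illustration; the point is only that the evolution rule carries information the wiring diagram by itself does not.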
Tangentially related: Could a Neuroscientist Understand a Microprocessor? [1] (short answer: not with current analytic tools)

[1] https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268
I highly recommend this lecture by Jeff Lichtman, where he describes the machine they've built to slice the brain and the software they have written to visualize and make sense of this vast amount of data:

https://www.youtube.com/watch?v=2QVy0n_rdBI
Generally too purple for its own good, but a very interesting read! The Borges reference (and the C. elegans conundrum) makes me, as a lay reader, really appreciate how little we actually know about the endgame for all this data.

But there is such elegance in "rudimentary" DNNs giving us the ability to assemble this stuff at all.
The author should go talk to some astrophysicists. They have a similar problem -- humans are unlikely to ever understand how the entirety of the cosmos works, but it's still interesting to learn about the small bits.
"We don’t understand how their interactions contribute to behavior, perception, or memory. Technology has made it easy for us to gather behemoth datasets, but I’m not sure understanding the brain has kept pace with the size of the datasets."<p>Exemplifies:<p>- Data is not information.
- Information is not knowledge.
- Knowledge is not understanding.<p>We've not even left the gate of the first tier. Both exciting and intimidating, but mostly humbling. Or should be.
I believe we are at the stage where we think we can figure out how the city works by mapping it, but there are sewers, pipelines, everything underneath that we haven't really dug into. There is a lot more inside a neuron that could be mapped. Let's just say the map isn't detailed enough.
If you've seen some of the high-resolution videos of neural activity captured from even simple fish, the slightest motor movements activate hundreds of thousands of cells in a chaotic pattern. Neural circuitry is not neatly laid out like a silicon chip; it's a forest of interconnectivity that resists analysis even with extremely detailed visualization and data capture.
“…connectomics and whether he thinks we’ll ever have a holistic understanding of the brain. His answer—‘No.’”

For a scientist to declare something they don't fully understand to be impossible really pisses me off.
Physicists don't understand gravity... neuroscientists don't understand the brain... maybe the universe is a giant brain and the stars are neurons, the big bang was conception, and we are bacterial growth. Better than all current theories.
I am planning to solve this =) I just cracked the inner workings of ANNs (future Show HN) and am going to read a book on computational neuroscience tomorrow.