Testing analytical methods of a field against engineered artefacts is a good idea but there is a fatal flaw here; devices that do a fetch-decode-execute-retire loop against a register file and a memory bus have perversely little in common with what neurobiology is concerned with. A more appropriate artefact would be a CPU <i>and its memory</i> (where NOP'ing out code or flipping flags corresponds to "lesioning"), or even better, an FPGA design (where different functions work in parallel in different locations on the silicon, much like brains).<p>That the tools of neuroscience choke on a 6502 is as much of an indictment of the former as my inability to fly helicopters is an indictment of my fixed-wing airmanship; not coping well with notoriously perverse edge cases outside your domain of expertise isn't inherently a sign of failure (it's not a licence to stop improving, of course). Brains and 6502s are quite literally entirely different kinds of computing, much like designing for FPGA is weird and different from writing x86 assembly or C.<p>A far more interesting question is "could a neuroscientist understand an FPGA?".
Great work.<p>"In other words, we asked, for each removed transistor, if the processor would then still boot the game. Indeed, we found a subset of transistors that makes one of the behaviors (games) impossible. We can thus conclude they are uniquely necessary for the game—perhaps there is a Donkey Kong transistor or a Space Invaders transistor."<p>A fantastic comment to show that describing a system is not the same as understanding the system!
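To make that critique concrete, here is a minimal Python sketch of the single-transistor lesion sweep the quote describes. Everything in it is a made-up stand-in (the `boots` function and the tiny "netlist" are hypothetical, not the paper's simulator); the point is only how easily such a sweep produces transistors that look like "Donkey Kong transistors" without implementing anything game-specific.

```python
# Toy sketch of a single-transistor lesion sweep (all data is invented).
# boots() stands in for a full transistor-level simulator.

def boots(netlist, game):
    # Placeholder: pretend each game boots only if its (made-up)
    # required transistors are all still present in the netlist.
    required = {"DonkeyKong": {0, 1, 2, 7}, "SpaceInvaders": {0, 1, 3, 9}}
    return required[game] <= netlist

def lesion_sweep(all_transistors, games):
    """For each transistor, remove it and record which games no longer boot."""
    effects = {}
    for t in all_transistors:
        lesioned = all_transistors - {t}
        effects[t] = {g for g in games if not boots(lesioned, g)}
    return effects

if __name__ == "__main__":
    transistors = set(range(10))          # stand-in for the full 6502 netlist
    games = ["DonkeyKong", "SpaceInvaders"]
    effects = lesion_sweep(transistors, games)
    # Transistors whose removal kills exactly one game look like a
    # "Donkey Kong transistor", even in this toy model where they are
    # nothing of the sort.
    for t, broken in sorted(effects.items()):
        if len(broken) == 1:
            print(f"transistor {t} -> only breaks {broken.pop()}")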
There's an interesting idea here. The paper argues that current brain analysis methods don't work well in an alternative environment with plenty of data, so maybe the methods are the problem rather than our lack of data in neuroscience.<p>However, I think this misses part of the point. We use these methods <i>because</i> we have very little data available. There are tons of interesting new ways to analyze brain data that I think computational neuroscientists are dying to explore, but don't have enough data to do so. If we had a lot more data, we might not be using these approaches.
Older article in similar vein: "Can a biologist fix a radio?" <a href="http://math.arizona.edu/~jwatkins/canabiologistfixaradio.pdf" rel="nofollow">http://math.arizona.edu/~jwatkins/canabiologistfixaradio.pdf</a>
Really happy to see this article. This viewpoint is not new, but it is still far from being mainstream.<p>A big issue touched upon in this article is that the space of possible dynamical systems represented in the brain is large, and trying to collect data is not a practical way of trimming this search space. It's more useful to look at types of dynamical systems that have certain stability properties that are desirable for computation.<p>But the issue then becomes that these dynamical systems become mathematically intractable past a few simplified neurons. So it's really hard to make progress either by looking at data, or by studying simplified dynamical systems mathematically.<p>There is a third option. Evolve smart dynamical systems by large scale brute force computation. Start with guesses about neuron-like subsystems with desirable information processing properties (at the single neuron level, such properties are mathematically tractable). Play with the configurations, the rules of evolution, the reward functions, the environment, everything. This may sound a lot like witchcraft but look at how far witchcraft has taken machine learning in recent years (deep learning is just principled witchcraft). This is IMO the only way we will learn how biological intelligence works.
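As a rough illustration of what "evolve smart dynamical systems by brute force" could look like at toy scale, here is a hedged Python sketch: the "organism" is just two parameters of a leaky integrator, and the task, mutation scale, and population size are arbitrary assumptions for illustration, nothing like the richness of real neuron models.

```python
import random

# Minimal mutate-and-select evolutionary search over a tiny
# neuron-like dynamical system (a leaky integrator). All settings
# here are illustrative assumptions.

def simulate(params, inputs):
    leak, gain = params
    state, outputs = 0.0, []
    for x in inputs:
        state = (1 - leak) * state + gain * x   # leaky integration
        outputs.append(state)
    return outputs

def fitness(params, inputs, target):
    # Higher is better: negative squared tracking error.
    outputs = simulate(params, inputs)
    return -sum((o - t) ** 2 for o, t in zip(outputs, target))

def evolve(inputs, target, pop_size=50, generations=200):
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, inputs, target), reverse=True)
        survivors = pop[: pop_size // 5]
        # Refill the population with mutated copies of the survivors.
        pop = [(max(0.0, min(1.0, l + random.gauss(0, 0.05))),
                g + random.gauss(0, 0.05))
               for l, g in random.choices(survivors, k=pop_size)]
        pop[:len(survivors)] = survivors        # elitism
    return max(pop, key=lambda p: fitness(p, inputs, target))

if __name__ == "__main__":
    inputs = [1.0] * 20
    target = [1.0 - 0.8 ** (i + 1) for i in range(20)]  # smooth rise toward 1
    print("best (leak, gain):", evolve(inputs, target))
```

Of course, the interesting (and hard) part is doing this with neuron models and reward structures complex enough to say something about biology, not with a two-parameter toy.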
This paper offers some good points but exhibits a number of flaws that limit what it can actually say about the utility of current neuroscience methods. For a generally thoughtful conversation, see <a href="http://www.brainyblog.net/2016/08/30/could-a-neuroscientist-understand-a-microprocessor-2/" rel="nofollow">http://www.brainyblog.net/2016/08/30/could-a-neuroscientist-...</a> from 2016. One comment from that conversation that I'd like to highlight is pasted below:<p><i>"But no attempt is made to analyze the similarities and differences in those behaviors. All three game behaviors rely on similar functions. Depending on the level of similarity between the behaviors, you might think of it as trying to find a lesion that only knocks out your ability to read words that start with “k” versus words that start with “s.” That’s an experiment that’s unlikely to succeed. But if the behaviors are more like “speaking” vs “understanding spoken words” vs “understanding written words” then it’s a more reasonable experiment.<p>The authors argue that neuroscientists make the same mistake all the time; that we are operating at the wrong level of granularity for our behavioral measures and don’t know it. That argument denies the degree to which we characterize behaviors in neuroscience, and how stringent we are about controls.<p>The authors point to the fact that transistors that eliminate only one behavior are not meaningfully clustered on the chip. But what they ignore are the transistors that eliminate all three behaviors. Those structures are key to the functioning of the device in general. To me, those 1560 transistors that eliminated all three behaviors are more worthy of study than the lesions that affect only one behavior, because they allow us to determine what is essential to the behavior of the system. You can think of those transistors as leading to the death of the organism, just as damage to certain parts of the brain cause death in animals."</i>
People do reverse engineer chips by photographing them. <a href="https://youtu.be/aHx-XUA6f9g" rel="nofollow">https://youtu.be/aHx-XUA6f9g</a> (Reading Silicon: How to Reverse Engineer Integrated Circuits). But as far as I know, the same cannot be done with brains even if we can photograph them. I guess the 3D structure of the brain, compounded by the high degree of interconnection between neurons, does not make it easy.
I had the odd, but unique, experience of taking "Computer Engineering" and "Formal Logic" (a neurology/history-of-thought course) during the same semester. One observation from that experience is that there is a great deal of cognitive overlap in our representation and communication of those fields of study. Typically, I would see that overlap as being indicative of broad similarity.<p>Reading this and the comments makes me question the similarity of the fields somewhat. Perhaps it is just our tools for comprehension that are shared between the two rather than any deeply tactical, functional commonality.<p>To that end, I think that experts in these fields could communicate very effectively with each other once some vocabulary had been sorted out. How effective one expert would be in the other's field is less clear to me.
What I don't like about this sort of thing is: the only guaranteed way to succeed in the (apparent, revealed) objectives of the paper is to not try very hard.<p>The obvious problem here is the clear mismatch between the behaviors and their research objectives and methods.<p>If they wanted to understand transistors, they'd do what cellular neuroscientists do, and isolate and manipulate individual transistors' inputs and measure the outputs.<p>If they wanted to understand clusters of transistors whose activities are tightly coupled (as you'd expect them to be in a logic gate), then they'd isolate those clusters, manipulate the inputs, and measure the outputs.<p>If you wanted to understand higher levels of organization using a lesion approach, you need to decide how much to lesion. In the brain, function is localized in clusters of related activity, and there is usually a lot of redundancy. Single-neuron lesions are not usually enough to have noticeable effects. But even then, a lesion approach is more interesting when you couple it with real experiments. Consider this paper by Sheth et al. <a href="https://www.nature.com/articles/nature11239" rel="nofollow">https://www.nature.com/articles/nature11239</a>, which had subjects perform a cognitive control task before a surgical lesion to the dorsal anterior cingulate, coupled with single-unit recordings, and then had them perform the same task after the lesion. The experiment yielded pre-lesion behavioral and neural evidence of a signal related to predicted demand for control, and post-lesion, the behavioral signal was abolished.<p>Of course, the Sheth paper would not have been possible without the iterative improvements in understanding made by prior work, including Botvinick's neural models of conflict monitoring and control. That is, it's iterative; and this CPU paper was never intended to be iterative.
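For what it's worth, the "isolate, stimulate, record" workflow described above is trivial to sketch against engineered parts. The two devices below are toy stand-ins (a transistor modeled as a bare switch, and a NAND gate standing in for a tightly coupled cluster), not anything from the paper's actual netlist.

```python
from itertools import product

# Characterize an isolated unit by sweeping its inputs and recording
# its outputs, at two levels of organization. Both devices are toy
# stand-ins chosen for illustration.

def transistor(gate, drain):
    """Toy switch model: conducts drain to the output only when gate is high."""
    return drain if gate else 0

def nand_cluster(a, b):
    """A tightly coupled cluster of such switches, behaving as a NAND gate."""
    return 0 if (a and b) else 1

def characterize(device, n_inputs):
    """Sweep every binary input combination and record the response."""
    return {bits: device(*bits) for bits in product((0, 1), repeat=n_inputs)}

if __name__ == "__main__":
    print("single transistor:", characterize(transistor, 2))
    print("NAND cluster:     ", characterize(nand_cluster, 2))
```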
> An optimized C++ simulator was constructed to enable simulation at the rate of 1000 processor clock cycles per wallclock second.<p>Following links in through "code and data":<p><a href="http://ericmjonas.github.io/neuroproc/pages/data.html" rel="nofollow">http://ericmjonas.github.io/neuroproc/pages/data.html</a><p>I found:<p><a href="https://github.com/ericmjonas/neuroprocdata" rel="nofollow">https://github.com/ericmjonas/neuroprocdata</a><p>But I couldn't find any link to the c++ code. Surely the emulator is also needed in order to be able to reproduce the research?<p>A bit of a shame they used closed source games - I'm not sure how one would go about obtaining copies (legally). But it would be interesting to try replication via other places/demos - as they only model booting anyway.
"This example nicely highlights the importance
of isolating individual behaviors to understand the
contribution of parts to the overall function. If we
had been able to isolate a single function, maybe
by having the processor produce the same math
operation every single step, then the lesioning experiments
could have produced more meaningful
results. "<p>I submit that this direction is an important one to pursue.
It would be interesting to repeat this for a GPU.<p>A CPU has so much hardware common to most instructions that any failure will take it down completely. That's less true of a GPU, where a failure of one of the massively parallel units is likely to manifest as some alteration of the output image.
I didn't read it but the abstract's first sentences sound as if it's rather about "Do the issues neuroscientists face when examining the human brain persist when they examine a microprocessor instead?"
Of course a neuroscientist could understand a microprocessor by other methods. The point of the article is that the usual methods of neuroscience would give limited results. Though I think that, in general, scientists use whatever methods they can think of to figure out what's going on, and the methods of neuroscience are probably the best people have come up with for figuring out brains. There are also interesting results from AI researchers mucking about with artificial neural networks.
From some conversations with neuroscientists, it seems that one issue limiting investment in new tools to measure the brain is that it's easier to get a publication by analyzing an existing data set in a new way, or even by generating and analyzing a bigger or different data set with fMRI, EEG, or clinical data, than it is to develop a novel tool to measure the brain (like optogenetics). But there are a lot of advances being made in new tools to get better data on how the brain works.
It's definitely easier to understand a microprocessor than chemical-oriented protein systems that have mostly evolved into an operable state by chance.<p>A CPU is founded on a limited set of basic components that possess reasonable qualities, behave consistently, and only scale to large quantities with identical repetition.<p>Just leave out the deeper materials science and solid state quantum physics behind the "why" of how transistors operate.
This feels like a silly strawman being made out of the methods used in neuroscience. In a microprocessor, a single bit flipped the wrong way can potentially stop the whole show; your Donkey Kong game from 1981 doesn't run at all. By contrast, lesioning a single neuron will not come anywhere close to incapacitating a brain, if it has any noticeable effect at all.
> This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data.<p>Using the same analytical techniques against a CPU whose design is unknown at the time of analysis. Nice meta-analysis, clickbait title.<p>Edit: reformatted, thanks.
is this something like this XKCD? :)<p><a href="https://xkcd.com/1588/" rel="nofollow">https://xkcd.com/1588/</a>
It is garbage :(.<p>Seriously, a CPU has nothing to do with a brain at all. It doesn't make sense to use techniques from one on the other.<p>I have no idea how anyone comes up with such an idea and even publishes it.<p>A brain itself is everything: RAM and CPU.<p>A CPU is just a CPU; there is no state in physical form.<p>A CPU is a Turing machine; a brain isn't.
That question is weird. "Sure, why not?" would be my reply. I thought one should avoid yes/no questions for a paper?<p>I'm not a neuroscientist (I cannot afford a medical education), nor am I a microprocessor engineer (yet). But I understand how systems work, so I might have a chance of understanding how neural networks work (both as models and in their real counterparts), and I already have some understanding of how microprocessors are designed in principle. So, yes, a neuroscientist who decides to attend some lectures on digital logic circuits and microprocessor design might have a chance to understand one! I'm really confused about this question.