"‘SpiNNaker’ machine is capable of completing more than 200 million million actions per second, with each of its chips having 100 million moving parts."<p>Could someone elaborate? I'm probably missing something, as I haven't heard of moving parts in a solid-state device.
From the article: "The newly formed million-processor-core ‘Spiking Neural Network Architecture’ or ‘SpiNNaker’ machine is capable of completing more than 200 million million actions per second, with each of its chips having 100 million moving parts."<p>I kinda doubt that.
Article author:<p>>> The world’s largest neuromorphic supercomputer designed and built to work in
<i>the same way a human brain does</i><p>Project lead:<p>>> We’ve essentially created a machine that works <i>more like a brain</i> than a
traditional computer<p>Press releases, ladies and gentlemen.
I went to a code jam with some of these guys (as part of the Human Brain Project). The architecture is pretty interesting, but it's lots and lots of little ARM (v6?) processors on a grid interconnect, probably not too far from a Xeon Phi, even if it aims for neural-like computation more than a Phi does.
Does that add up? Doesn't the human brain have several more zeroes on there? Maybe 'worm brain' or 'lizard brain' would be a better description?
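A rough order-of-magnitude check of that question, using commonly cited (and very approximate) neuroscience figures that aren't from the article:

```python
# Assumed figures: ~1e14 synapses in a human brain, mean firing rates on
# the order of 1 Hz. Both are order-of-magnitude estimates only.
synapses = 1e14
mean_rate_hz = 1.0
synaptic_events = synapses * mean_rate_hz   # ~1e14 synaptic events/s

machine_ops = 2e14  # "200 million million actions per second"
print(machine_ops / synaptic_events)        # prints 2.0
```

So the raw rates land on the same order of magnitude, but one machine instruction is far less work than simulating one synaptic event, which is why "worm brain" or "lizard brain" may indeed be the fairer comparison.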
This article was pretty low on details, so I went out and collected some links:<p>This looks more or less like another stab at Thinking Machines' Connection Machine[0], but with modern hardware and a better framework for how neural nets can and should work.<p>> The SpiNNaker engine is a massively-parallel multi-core computing system. It will contain up to 1,036,800 ARM9 cores and 7Tbytes of RAM distributed throughout the system in 57K nodes, each node being a System-in-Package (SiP) containing 18 cores plus a 128Mbyte off-die SDRAM (Synchronous Dynamic Random Access Memory). Each core has associated with it 64Kbytes of data tightly-coupled memory (DTCM) and 32Kbytes of instruction tightly-coupled memory (ITCM). The cores have a variety of ways of communicating with each other and with the memory, the dominant of which is by packets. These are 5- or 9-byte (40- or 72-bit) quanta of information that are transmitted around the system under the aegis of a bespoke concurrent hardware routing system. [1]<p>So, lots of relatively tiny, interconnected nodes.<p>They built their own SoC to handle this, with a built-in router in the middle. The router handles routing on the chip and multicasts to its neighbors.<p>> The heart of the communications infrastructure is a bespoke multicast router that is able to replicate packets where necessary to implement the multicast function associated with sending the same packet to several different destinations. [2]<p>It also looks like they're developing dev boards [3]<p>So basically, this looks like a giant, really awesome, custom ARM cluster that they want to do neural network stuff with.<p>If anyone from the team is here, I'd love to hear more about how this will be used. Specifically, how will you prevent SpiNNaker from going down the same path as the Connection Machine (which stopped doing AI work because, say, geneticists wanted to use it for protein sequencing)?
Why do you see this as the future over something like NVIDIA's new HGX-2 or clusters of TPUs?<p>[0] <a href="https://en.wikipedia.org/wiki/Connection_Machine" rel="nofollow">https://en.wikipedia.org/wiki/Connection_Machine</a><p>[1] <a href="http://apt.cs.manchester.ac.uk/projects/SpiNNaker/architecture/" rel="nofollow">http://apt.cs.manchester.ac.uk/projects/SpiNNaker/architectu...</a><p>[2] <a href="http://apt.cs.manchester.ac.uk/projects/SpiNNaker/SpiNNchip/" rel="nofollow">http://apt.cs.manchester.ac.uk/projects/SpiNNaker/SpiNNchip/</a><p>[3] <a href="http://apt.cs.manchester.ac.uk/projects/SpiNNaker/hardware/index3.php" rel="nofollow">http://apt.cs.manchester.ac.uk/projects/SpiNNaker/hardware/i...</a>
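The multicast routing described in [2] can be sketched very loosely in software. The entry format, key values, and link names below are invented for illustration; the real router uses an on-chip associative key/mask table in hardware.

```python
# Toy model of key/mask multicast routing: a packet's routing key is
# matched against (key, mask) entries; on a hit, the packet is replicated
# to every output in that entry's destination set.
def route(packet_key, table, default_link):
    for key, mask, outputs in table:
        if packet_key & mask == key:
            return outputs            # replicate to all of these
    return {default_link}             # no match: fall through to default

# Hypothetical table: keys starting 0x0001 fan out to two links and a core.
table = [(0x00010000, 0xFFFF0000, {"north", "east", "core3"})]

print(route(0x000100AB, table, "south"))  # hits the entry above
print(route(0x00020000, table, "south"))  # no hit, default link
```

The point of doing the replication in the router, rather than in software on the cores, is that one spike packet can reach thousands of destination neurons without the sending core ever looping over them.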
> capable of completing more than 200 million million actions per second<p>> To reach this point it has taken £15million in funding, 20 years in conception and over 10 years in construction, with the initial build starting way back in 2006.<p>Wow, those numbers... and 10 years to build... I’d be very excited to turn it on!
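The headline rate roughly checks out against the architecture figures quoted elsewhere in the thread, assuming the cores run at their nominal 200 MHz and ideally retire about one instruction per cycle (an idealized assumption, not a measured number):

```python
cores = 1_036_800      # "up to 1,036,800 ARM9 cores"
clock_hz = 200e6       # assumed nominal 200 MHz core clock

peak_ops = cores * clock_hz
print(f"{peak_ops:.3e}")  # ~2.074e+14, i.e. ~200 million million per second
```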
> Biological neurons are basic brain cells present in the nervous system that communicate primarily by emitting ‘spikes’ of pure electro-chemical energy.<p>I don't think that those are terms of art.
This article, while obviously PR, is confusing to me. It seems to place too much emphasis on the nature of the hardware, as if that alone were enough to "emulate" the brain. If they don't have folks of the calibre of DeepMind to drive this thing, can it really go very far?<p>What it could be useful for is neural structure modeling at a more primitive layer, even if the end outcome isn't usable for practical consumption.
Furber's lab built an interesting extension on SDM with N-of-M rank codes <a href="http://apt.cs.manchester.ac.uk/ftp/pub/apt/papers/sbf_TNN07_old.pdf" rel="nofollow">http://apt.cs.manchester.ac.uk/ftp/pub/apt/papers/sbf_TNN07_...</a>
Some are saying this is nowhere near the amount of processing power the human brain has. But it doesn't seem we need that much of the brain: there are plenty of stories of highly functional and talented individuals with almost no brain, and this machine would probably be in their ballpark.
Steve Furber did some of the early design work on the ARM processor:<p><a href="https://en.m.wikipedia.org/wiki/Steve_Furber" rel="nofollow">https://en.m.wikipedia.org/wiki/Steve_Furber</a>
"...and even before its data banks had been connected, it deduced the existence of rice pudding and income tax before anyone managed to turn it off." - H2G2
1M ... 1,000,000<p>Powers of 10 ... 10 fingers on the ape-man. Such a weird non-computing number to be thrilled by.<p>I'm always suspicious when numbers fit into powers of ten like that. Like, somewhere in that build process, the person who holds the purse strings doesn't know binary.
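For what it's worth, the headline "million" is rounded rather than exact. Taking the figures quoted elsewhere in the thread (18 cores per node, with 57,600 as the assumed exact count behind "57K nodes"):

```python
nodes = 57_600         # assumed exact value behind the quoted "57K"
cores_per_node = 18
total = nodes * cores_per_node
print(total)           # prints 1036800: close to, not exactly, a power of ten
```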