Yann LeCun (neural net pioneer and Facebook AI head) has a somewhat skeptical post about this chip: <a href="https://www.facebook.com/yann.lecun/posts/10152184295832143" rel="nofollow">https://www.facebook.com/yann.lecun/posts/10152184295832143</a>. His essential points:<p>1. Building special-purpose hardware for neural nets is a good idea and potentially very useful.<p>2. The architecture implemented by this IBM chip, spike-and-fire, is <i>not</i> the architecture used by the state-of-the-art convolutional networks, engineered by Alex Krizhevsky and others, that have recently been smashing computer vision benchmarks. Those networks allow neuron outputs to take continuous values, not just binary on-or-off.<p>3. It would be possible, though more expensive, to implement a state-of-the-art convnet in hardware similar to what IBM has done here.<p>Of course, just because no one has shown state-of-the-art results with spike-and-fire neurons doesn't mean that it's impossible! Real biological neurons are spike-and-fire, though this doesn't mean the behavior of a computational spike-and-fire 'neuron' is a reasonable approximation to that of a biological neuron. And even if spike-and-fire networks do turn out to be worse, maybe there are applications in which the power/cost/accuracy tradeoffs favor a hardware spike-and-fire network over a continuous convnet. But it would be nice for IBM to provide benchmarks of their system on standard vision tasks, e.g., ImageNet, to clarify what those tradeoffs are.
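To make point 2 concrete, here is a minimal sketch (my own illustration, not LeCun's or IBM's code; all weights and parameters are made up) of the difference between a continuous-valued unit like those in Krizhevsky-style convnets and a toy leaky integrate-and-fire unit whose output is a binary spike train:

```python
import numpy as np

def relu_unit(inputs, weights, bias):
    """Continuous-valued unit of the kind used in modern convnets:
    the output is a real number, so small weight changes produce
    small output changes and gradients can flow through it."""
    return max(0.0, float(np.dot(weights, inputs) + bias))

def integrate_and_fire_unit(input_spikes, weights, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire unit: a membrane potential
    accumulates weighted input spikes over time, and the output at
    each time step is binary (spike or no spike)."""
    potential = 0.0
    output = []
    for spikes_t in input_spikes:              # one row per time step
        potential = leak * potential + float(np.dot(weights, spikes_t))
        if potential >= threshold:
            output.append(1)
            potential = 0.0                    # reset after firing
        else:
            output.append(0)
    return output

w = np.array([0.6, 0.4, 0.8])
print(relu_unit(np.array([0.2, 0.5, 0.1]), w, bias=-0.1))                     # a real number
print(integrate_and_fire_unit(np.random.binomial(1, 0.3, size=(10, 3)), w))  # a 0/1 spike train
```

The continuous unit's output varies smoothly with its weights, which is what makes gradient-based training straightforward; the spiking unit communicates only ones and zeros over time, which is part of what makes it cheap to implement in hardware but harder to train with backpropagation.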
For a technical article about the architecture, see:<p><a href="http://www.research.ibm.com/software/IBMResearch/multimedia/IJCNN2013.algorithms-applications.pdf" rel="nofollow">http://www.research.ibm.com/software/IBMResearch/multimedia/...</a>
I'm very excited about this, as it's at least two decades overdue. When Pentiums were getting popular in the mid-'90s, I remember thinking that their deep pipelines, branch prediction, and large on-chip caches meant that fabs were running into the limits of Moore's law and it was time to move to multicore.<p>At the time, functional programming was not exactly mainstream and many of the concurrency concepts we take for granted today from web programming were just research. So of course nobody listened to ranters like me and the world plowed its resources into GPUs and other limited use cases.<p>My take is that artificial general intelligence (AGI) has always been a hardware problem (which really means a cost problem), because the enormous wastefulness of today's chips can't be overcome with more-of-the-same thinking. Somewhere we forgot that, no, it doesn't take a billion transistors to make an ALU, and no matter how many billion more you add, it's just not going to go any faster. Why are we doing this to ourselves when we have SO much chip area available now and could scale performance linearly with cost? A picture is worth a thousand words:<p><a href="http://www.extremetech.com/wp-content/uploads/2014/08/IBM_SyNAPSE_20140807_005.jpg" rel="nofollow">http://www.extremetech.com/wp-content/uploads/2014/08/IBM_Sy...</a><p>I can understand how skeptics might think this will be difficult to program, etc., but what these new designs are really offering is reprogrammable hardware. Sure, we only have ideas now about what network topologies could saturate a chip like this, but just watch: very soon we'll see some whizbang stuff that throws the network out altogether and uses content-addressable storage or some other hash-based scheme, so we can get back to thinking about data, relationships, and transformations.<p>What's really exciting to me is that this chip will eventually become a coprocessor, and networks of these will be connected very cheaply, each specializing in what are often thought of as difficult tasks. Computers are about to become orders of magnitude smarter because we can begin throwing big dumb programs at them, like genetic algorithms, and study the way that solutions evolve. Whole swaths of computer science have been ignored simply due to their inefficiencies, but soon that just won't matter anymore.
While the efficiency gains are nice and definitely welcome, it would be interesting to see what the performance gains are over a GPU. The article makes the chip sound somehow superior to existing implementations, but really this is just running the same neural network algorithms we know and love on top of a more optimized hardware architecture.<p>Which means I have no idea how this signals the beginning of a new era of more intelligent computers, as the chip provides nothing to advance the state of the art on that front. Unless I am missing something?
I wonder what the possibilities are for adding a neuromorphic chip to a normal stack for specialized tasks such as image/video recognition (cpu, gpu, npu). GPUs are very similar in their need for specialized code versus cpus.<p>Just an uneducated wild thought.
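Purely as a hedged sketch of what that might look like from the software side (the backend names and functions below are hypothetical, not a real driver or API), a neuromorphic part could slot into the same kind of dispatch layer we already use to route work to GPUs:

```python
from typing import Callable, Dict

# Hypothetical backends for illustration only; none of these correspond to a real API.
def classify_on_cpu(image) -> str:
    return "label-from-cpu"   # portable fallback path

def classify_on_gpu(image) -> str:
    return "label-from-gpu"   # high-throughput batched path

def classify_on_npu(image) -> str:
    return "label-from-npu"   # low-power neuromorphic path

BACKENDS: Dict[str, Callable] = {
    "npu": classify_on_npu,
    "gpu": classify_on_gpu,
    "cpu": classify_on_cpu,
}

def classify(image, available=("cpu",)) -> str:
    """Route the task to the most specialized accelerator the machine reports."""
    for name in ("npu", "gpu", "cpu"):   # preference order: most to least specialized
        if name in available:
            return BACKENDS[name](image)
    raise RuntimeError("no usable backend")

# e.g. classify(img, available=("cpu", "gpu", "npu")) would take the npu path.
```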
The interesting thing about this project is that they're using transistors to physically simulate synapses and neurons, which is quite an inefficient method. Transistors are expensive, and your brain has about 100 billion neurons and trillions of synapses.<p>Recent work by Leon Chua has shown that synapses and neurons can be directly replicated using memristors [1]. Memristors are passive devices that may be much simpler to build at the scale of neurons than transistor circuits.<p>1. <a href="http://iopscience.iop.org/0022-3727/46/9/093001/" rel="nofollow">http://iopscience.iop.org/0022-3727/46/9/093001/</a>
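For a feel of why that's plausible, here is a minimal numerical sketch of the linear-drift memristor model from Strukov et al. (a simpler model than the one in the Chua paper above; the parameter values are illustrative, not a real device). The key property is that the device's resistance depends on the history of charge that has passed through it, much as a synapse's strength depends on its activity history:

```python
import numpy as np

# Illustrative parameters for the linear-drift memristor model (not a real device).
R_ON, R_OFF = 100.0, 16e3    # resistance bounds, ohms
D = 10e-9                    # device thickness, meters
MU_V = 1e-14                 # dopant mobility, m^2 / (s * V)
DT = 1e-5                    # simulation time step, seconds

def simulate_memristor(voltages, x0=0.1):
    """Integrate the internal state x (doped-region fraction in [0, 1]).
    Resistance is M(x) = R_ON * x + R_OFF * (1 - x), and dx/dt is
    proportional to the current, so the device 'remembers' past charge."""
    x = x0
    currents = []
    for v in voltages:
        m = R_ON * x + R_OFF * (1.0 - x)               # instantaneous resistance
        i = v / m
        x = min(max(x + MU_V * R_ON / D**2 * i * DT, 0.0), 1.0)
        currents.append(i)
    return np.array(currents)

# Driving it with a sine wave traces the pinched current-voltage hysteresis
# loop that is the signature of a memristor.
t = np.arange(0.0, 0.02, DT)
v = np.sin(2 * np.pi * 50 * t)
i = simulate_memristor(v)
```

In a crossbar array, one such device per junction could store a synaptic weight as a conductance, which is why memristive synapses could end up far denser than transistor-based ones.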
Lots of problems with the way this is presented in the article. Though the chip is patterned after a naive model of the human brain, the headline assertion is far too bold. Additionally, while the von Neumann architecture can be characterized as bottlenecked and inefficient, it has also allowed for extremely cheap computing. A processor with all of its memory on the chip would not be inexpensive. Note that the article never mentions the cost of the chip or its memory capacity.<p>The comparison of this chip's performance with that of a nearby conventional laptop is questionable. A couple of paragraphs later the article says that the chip is programmed using a simulator that runs on a traditional PC. So I'm guessing the 100x slowdown is because the traditional PC is simulating the neural-net hardware, rather than running optimized software of its own.<p>Yes, this is important research, but engineer-speak piped through hype journalists will always paint an entirely unrealistic and overoptimistic picture of what's really going on.
What percentage of readers know that, with today's knowledge of software, you could fill a football stadium with these chips and for many tasks it still wouldn't come close to a human brain? I love news like this; it just feels like brain analogies are easy to overhype.
Although IBM's hardware implementation does not support the current hotness in neural models, I still think this is a big deal, both for applications of the current chip and for future improvements: even lower energy requirements and smaller, denser chips.<p>I was on a DARPA neural network tools advisory panel for a year in the 1980s, developed two commercial neural network products, and used them in several interesting applications. I more or less left the field in the 1990s, but I did take Hinton's Coursera class two years ago and it is fun to keep up.
I wonder if they are heading into direct competition with Qualcomm and Samsung; all of these companies have quite active neuromorphic chip research groups.
NY Times article, by John Markoff: <a href="http://www.nytimes.com/2014/08/08/science/new-computer-chip-is-designed-to-work-like-the-brain.html" rel="nofollow">http://www.nytimes.com/2014/08/08/science/new-computer-chip-...</a>
If you are a scientist, here is the Epistemio page for rating and reviewing the scientific publication discussed here: <a href="http://www.epistemio.com/p/AJ09k7Yx" rel="nofollow">http://www.epistemio.com/p/AJ09k7Yx</a>
'IBM Chip Processes Data Similar to the Way Your Brain Does'<p>Interesting, I did not know that we already know how the brain 'processes data'.
vonsydov's link is not dead, and I don't know why his comment was downvoted or why I can't reply to it. There's nothing wrong with his link, though this one may be slightly better:<p><a href="http://dx.doi.org/10.1126/science.1254642" rel="nofollow">http://dx.doi.org/10.1126/science.1254642</a><p>More broadly, I don't understand why HN seems to prefer press pieces (which so often contain more inaccuracies than useful information) to the papers they're based on.<p>In this case, even if you can't access the full text, the single-paragraph abstract contains all of the new information in the 12-paragraph Tech Review story.