I'm very excited about this, as it's at least two decades overdue. When Pentiums were getting popular in the mid-90s, I remember thinking that their deep pipelines (and the branch prediction needed to keep them fed) and large on-chip caches meant that fabs were running into the limits of Moore's law and it was time to move to multicore.

At the time, functional programming was not exactly mainstream, and many of the concurrency concepts we take for granted today from web programming were still research. So of course nobody listened to ranters like me, and the world plowed its resources into GPUs and other limited use cases.

My take is that artificial general intelligence (AGI) has always been a hardware problem (which really means a cost problem), because the enormous wastefulness of today's chips can't be overcome with more-of-the-same thinking. Somewhere along the way we forgot that, no, it doesn't take a billion transistors to make an ALU, and no matter how many billion more you add, it's just not going to go any faster. Why are we doing this to ourselves when we have SO much chip area available now and could scale performance linearly with cost? A picture is worth a thousand words:

http://www.extremetech.com/wp-content/uploads/2014/08/IBM_SyNAPSE_20140807_005.jpg

I can understand how skeptics might think this will be difficult to program, and so on, but what these new designs are really offering is reprogrammable hardware. Sure, right now we only have ideas about what network topologies could saturate a chip like this, but just watch: very soon we'll see some whiz-bang stuff that throws the network out altogether and uses content-addressable storage or some other hash-based scheme (a toy sketch of that idea is at the end of this comment), so we can get back to thinking about data, relationships, and transformations.

What's really exciting to me is that this chip will eventually become a coprocessor, and networks of these will be connected very cheaply, each specializing in what are often thought of as difficult tasks. Computers are about to become orders of magnitude smarter, because we can begin throwing big, dumb programs like genetic algorithms at them and study the way that solutions evolve. Whole swaths of computer science have been ignored simply due to their inefficiencies, but soon that just won't matter anymore.
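To make the genetic-algorithms point concrete, here's what one of those "big dumb programs" looks like in miniature: a toy GA maximizing the number of 1 bits in a string (the classic OneMax problem). Everything in it (population size, mutation rate, the fitness function) is an arbitrary illustration on my part, nothing specific to these chips:

    # Toy genetic algorithm (OneMax): evolve a bitstring toward all 1s.
    # All parameters here are arbitrary illustrative choices.
    import random

    GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 64, 100, 200, 0.01

    def fitness(genome):
        # The "big dumb" objective: just count the 1 bits.
        return sum(genome)

    def mutate(genome):
        # Flip each bit independently with probability MUT_RATE.
        return [b ^ (random.random() < MUT_RATE) for b in genome]

    def crossover(a, b):
        # Single-point crossover: splice two parents at a random cut.
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == GENOME_LEN:
            break  # perfect solution found
        parents = pop[:POP_SIZE // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children       # elitism: keep the best half
    print("generation:", gen, "best fitness:", fitness(pop[0]))

The interesting part, to my mind, isn't the algorithm (it's decades old); it's that watching the population evolve stops being prohibitively expensive once the hardware is cheap and parallel.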
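And since I hand-waved at content-addressable storage above, here's the whole idea in a few lines: the hash of the data is its address, so there's no routing topology to maintain. Again, purely a sketch, with a plain Python dict standing in for whatever the hardware would actually do:

    # Toy content-addressable store: values are keyed by a hash of
    # their own content, so "addresses" fall out of the data itself.
    import hashlib

    store = {}

    def put(value: bytes) -> str:
        key = hashlib.sha256(value).hexdigest()  # the content IS the address
        store[key] = value
        return key

    def get(key: str) -> bytes:
        return store[key]

    addr = put(b"some pattern the network would otherwise have to route")
    assert get(addr) == b"some pattern the network would otherwise have to route"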