Color me skeptical. The project will fail, of course, because it is too ambitious. There are too many required new developments for it all to come together: a new chip, a new OS, new forms of scheduling, a lot more bookkeeping, not to mention new programming paradigms and compiler technology. I would not be surprised if bookkeeping and bandwidth ate up 90% of the processing cycles. (Obviously, I'm pulling that number out of my nether end.)

It feels to me that this is the next iteration of refinements to technology that is 40-50 years old. Caches are what, from 1970? Interconnect issues date from the same time. I think machine cycles are in abundance; the scarce resource is interconnect for data flow. So one of the first things you want to do is organize your data flow so that processing is local. Think of simulations like vision processing, weather prediction, or rendering, where a processor can work locally and pass on a reduced amount of information to its neighbors. The interesting problems arise when the results have to be delivered non-locally. If you store them in main memory for the recipient to pick up, you run into bandwidth problems.

So what I see as needed is gazillions of low-level worker bees with modest bandwidth requirements that have semi-permanent connections to the consumers of their output. Think the human brain, Google search, image rendering.

Apologies for the rambling, lack of citations, etc., but I am interested in HNers' views on these issues.
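To make the "local work, small exchange" point a bit more concrete, here's a toy Python sketch (entirely mine, nothing to do with the actual project): a 1D diffusion stencil split across a handful of workers, where each step costs O(chunk_size) of local arithmetic but only two values of neighbor communication per worker.

    # Toy sketch: 1D heat diffusion chopped into chunks ("worker bees").
    # Each step a worker does O(chunk_size) local arithmetic but only has to
    # ship its two edge cells to its neighbors -- communication stays tiny
    # and strictly local, which is the property I'm arguing for.

    def step_chunk(chunk, left_ghost, right_ghost, alpha=0.1):
        """Advance one chunk one time step using neighbor averages."""
        padded = [left_ghost] + chunk + [right_ghost]
        return [
            padded[i] + alpha * (padded[i - 1] - 2 * padded[i] + padded[i + 1])
            for i in range(1, len(padded) - 1)
        ]

    def simulate(n_workers=4, chunk_size=8, steps=50):
        # Initial field: a single hot spot in the middle.
        field = [0.0] * (n_workers * chunk_size)
        field[len(field) // 2] = 100.0
        chunks = [field[i * chunk_size:(i + 1) * chunk_size]
                  for i in range(n_workers)]

        for _ in range(steps):
            # "Halo exchange": each worker sends one value to each neighbor.
            new_chunks = []
            for w, chunk in enumerate(chunks):
                left_ghost = chunks[w - 1][-1] if w > 0 else chunk[0]
                right_ghost = chunks[w + 1][0] if w < n_workers - 1 else chunk[-1]
                new_chunks.append(step_chunk(chunk, left_ghost, right_ghost))
            chunks = new_chunks

        return [x for chunk in chunks for x in chunk]

    if __name__ == "__main__":
        print([round(x, 2) for x in simulate()])

The ratio that matters is compute per step versus data moved per step: here it's chunk_size multiplies to 2 exchanged values per worker, and that ratio only gets better as chunks grow. The non-local case, where a result has to reach some distant consumer, is exactly where that nice ratio breaks down.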