> a new architecture that could in one fell swoop kill off the general purpose processor as a concept and the X86 instruction set as the foundation of modern computing.

Do you want me to think you're a credulous idiot? Because this is how you achieve that.

Okay, laying aside the bizarrely stereotypical tech journalism: from what I understand there are a number of problems with this that need addressing.

The first problem: if you create a custom compute unit layout for a specific dataflow graph, it's very difficult to identify which layout is most efficient, and then when you want to optimize for higher performance it's almost impossible, because you don't know what you're targeting. Your optimization may push your design to a completely different layout, and all the cost functions are impossible to know. You end up with too many free variables to optimize over. By contrast, we're very good at taking a fixed design like a CPU and then taking a program and jamming it into that paradigm.

The second problem is that either you need one architecture that will dynamically reconfigure to different graphs, or you need lots of architectures. They seem to be going for the 'spin 100 designs' path. So firstly, how is a customer meant to know which of those designs to actually buy, and what happens if their workload evolves from one design to another? Secondly, how is this cost effective? There's a good reason why Intel only spins a handful of designs per CPU generation.

The third problem is that if you have a custom compute unit layout and your program doesn't fit it well, it's not like a CPU: you can't re-order operations to maximally use the units. The bits that aren't useful are just dead silicon, and from history it seems like the killer is that dead silicon tends to be a LOT of silicon for any given program.

To be honest, this is a very well understood problem, there are good reasons why it hasn't worked so far, and this article doesn't really give us any information on why it would work this time.
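The dead-silicon point can be put in rough numbers. A sketch, with every figure invented for illustration: given a fixed layout's count of each functional unit type, and the peak number of each type a program's dataflow graph can keep busy, the idle fraction falls out directly, and unlike on a CPU no amount of instruction re-ordering raises it past the graph's own peak demand.

```python
# Hypothetical illustration of the "dead silicon" problem: a fixed
# compute-unit layout versus the units one program's dataflow graph
# can actually keep busy. All numbers are made up for illustration.
from collections import Counter

# An assumed fixed layout: count of each functional unit type on the die.
layout = Counter({"mul": 16, "add": 16, "div": 4, "sqrt": 4})

# An assumed program: peak parallel demand per unit type in its
# dataflow graph (its widest point, per type).
program_needs = Counter({"mul": 4, "add": 10, "div": 0, "sqrt": 1})

# Units the program can keep busy, capped by what the layout provides.
used = sum(min(layout[u], program_needs[u]) for u in layout)
total = sum(layout.values())

dead_fraction = 1 - used / total
print(f"dead silicon: {dead_fraction:.0%} of {total} units")
```

With these made-up numbers over half the units sit idle; the point is only that the ceiling is set by the graph, not the scheduler.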