Intel’s Exascale Dataflow Engine Drops X86 and Von Neumann

23 points by ssvss over 6 years ago

1 comment

shriver over 6 years ago
> a new architecture that could in one fell swoop kill off the general purpose processor as a concept and the X86 instruction set as the foundation of modern computing.

Do you want me to think you're a credulous idiot? Because this is how you achieve that.

Okay, laying aside the bizarrely stereotypical tech journalism, from what I understand there are a number of problems with this that need addressing:

If you create a custom compute-unit layout for a specific dataflow graph, it's very difficult to identify which layout is most efficient, and when you then want to optimize for higher performance it's almost impossible, because you don't know what you're targeting. Your optimization may push your design to a completely different layout, and all the cost functions are impossible to know. You end up with too many free variables to optimize over. We're very good at taking a fixed design like a CPU, taking a program, and jamming it into that paradigm.

The second problem is that either you need one architecture that will dynamically reconfigure to different graphs, or you need lots of architectures. They seem to be going for the "spin 100 designs" path. So firstly, how is a customer meant to know which of those designs to actually buy, and what happens if their design evolves from one design to another? Secondly, how is this cost-effective? There's a good reason why Intel only spins a handful of designs per CPU generation.

The third problem is that if you have a custom compute-unit layout and your program doesn't fit it well, it's not like a CPU. You can't reorder operations to maximally use the units; the bits that aren't useful are just dead silicon, and from history it seems like the killer is that dead silicon tends to be a LOT of silicon for any given program.

To be honest, this is a very well understood problem, there are good reasons why it hasn't worked so far, and this article doesn't really give us any information on why it would work this time.
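The dead-silicon point can be made concrete with a toy model: count how many of a fixed layout's functional units a given program's operation mix could ever keep busy. The sketch below is purely illustrative and not from the article or the comment; the unit names and counts are hypothetical.

```python
# Toy model: fraction of a fixed compute-unit layout that a program's
# dataflow graph can actually use. All numbers here are made up.
from collections import Counter

# A fixed layout: how many of each functional unit the chip provides.
layout = Counter({"mul": 64, "add": 64, "div": 8, "sqrt": 4})

# A program's dataflow graph, reduced to the mix of operations it needs
# in steady state (e.g. an add-heavy kernel with no div/sqrt at all).
program = Counter({"add": 120, "mul": 20})

def utilization(layout: Counter, program: Counter) -> float:
    """Upper bound on the fraction of units doing useful work.

    Units whose operation never appears in the program are dead silicon;
    units in short supply cap throughput instead.
    """
    total_units = sum(layout.values())
    useful = sum(min(layout[op], program[op]) for op in layout)
    return useful / total_units

print(f"utilization: {utilization(layout, program):.0%}")
# With these numbers, all div/sqrt units and 44 of the 64 multipliers
# sit idle: utilization is 60%, and the other 40% is dead silicon.
```

A CPU dodges this by time-multiplexing a small, general set of units across whatever instruction stream arrives; a layout specialized to one graph has no such escape hatch when the graph changes.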
Comment #17884316 not loaded.