
A Tiny Chip That Could Disrupt Exascale Computing (2015)

56 points, by GUNHED_158, almost 9 years ago

8 comments

trsohmers, almost 9 years ago

Founder of REX here, and surprised to see this posted here. Happy to answer any questions, and you can check my comment history for some of my prior posts on REX.

We've had some really great progress that we hope to share in the near future, so stay tuned.

EDIT: Since this article is over a year old: we have made a lot of progress and have recently taped out our first chip. We haven't officially posted a job opening, but we will very shortly be looking for software engineers who would love to work on our architecture. Feel free to shoot me an email if you're interested!
dewster, almost 9 years ago

From the article: “Caches and virtual memory as they are currently implemented are some of the worst design decisions that have ever been made,” Sohmers boldly told a room of HPC-focused attendees at the Open Compute Summit this week.

As a lay processor designer, I couldn't agree more. I don't like VLIW, but this architecture makes a lot of sense. I think it took until now for compiler technology to catch up with what is possible in hardware.

Almost all the good ideas in computing were mined out long ago; the trick, I think, is to get the computing world to give up on the ones that are holding things back (cold dead hands if necessary).
Nomentatus, almost 9 years ago

This is a 2015 story that I remember reading back then. A Google News search shows only a couple of articles this year about Rex Computing and only one tiny bit of news: that they're at tapeout. That's probably par for the course for a startup creating product (or prototype) number one. http://semiengineering.com/power-centric-chip-architectures/

Also a speaking engagement: http://insidehpc.com/2016/01/call-for-papers-supercomputing-frontiers-in-singapore/

And a comment elsewhere mentions another approach: the "Mill CPU" of Mill Computing.

As I recollect (perhaps quite wrongly), Itanium (VLIW) failed because compiler writers couldn't really be bothered, or couldn't mount the learning curve. So I'm most curious about what progress is being made on the compiler side.
amelius, almost 9 years ago

> there is no virtual memory translation happening, which in theory, will significantly cut latency (and hence boost performance and efficiency). This means that there is one cycle to address the SRAM, so “this saves half the power right off the bat just by getting rid of address translation from virtual memory.”

In protected mode (i.e., what the kernel is using), won't an Intel processor also disable virtual memory lookup? Couldn't we just recompile scientific software to run in a protected-mode environment to get those same benefits?

Also, I think it is more useful and fair to compare against a GPU than against a general-purpose CPU.

(As an aside, I don't see where the reduced latency gives such a big advantage. There will be latency anyway, so your software has to deal with waiting efficiently (doing useful work in the meantime) in any case. Shaving off some latency only helps if your software design was bad to begin with.)
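The aside's argument can be illustrated with a toy cycle-count model: if software overlaps useful work with outstanding memory requests, total time is dominated by compute, and trimming per-access latency barely matters; only a fully blocking design sees the big win. All numbers below are illustrative, not measurements of any real chip.

```python
# Toy model comparing a design that stalls on every memory access
# against one that pipelines requests behind useful work.
# All cycle counts are made up for illustration.

def blocking_time(n_accesses: int, compute: int, latency: int) -> int:
    """Each access exposes the full memory latency to the pipeline."""
    return n_accesses * (compute + latency)

def overlapped_time(n_accesses: int, compute: int, latency: int) -> int:
    """Requests are pipelined: only the first access's latency is exposed."""
    return latency + n_accesses * compute

if __name__ == "__main__":
    n, compute = 1000, 10
    for latency in (100, 50):  # e.g. halving latency by dropping translation
        b = blocking_time(n, compute, latency)
        o = overlapped_time(n, compute, latency)
        print(f"latency={latency}: blocking={b} cycles, overlapped={o} cycles")
```

With these numbers, halving the latency cuts the blocking design's runtime by about 45%, but the overlapped design's by well under 1%, which is the commenter's point about latency hiding.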
SeanDav, almost 9 years ago

It would just be great to get a decent chip that does not have built-in, unblockable back doors, like those on Intel, AMD, and probably ARM.
ridgeguy, almost 9 years ago

I'm curious about the thermal issues.

From the article, the power density is (4 W) / (0.1 mm^2), or 40 W/mm^2. Intel's Haswell chip has a TDP of ~65 W and an area of 14.7 mm^2, for a power density of 4.4 W/mm^2.

Is this power density a cooling challenge?
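The comment's arithmetic checks out; the input figures (4 W over 0.1 mm^2 for the REX core, ~65 W TDP over 14.7 mm^2 for Haswell) are taken from the comment itself and are not independently verified here:

```python
# Recompute the power densities quoted in the comment above.
# The wattage and area figures come from the comment, not from datasheets.

def power_density(watts: float, area_mm2: float) -> float:
    """Power density in W/mm^2."""
    return watts / area_mm2

rex = power_density(4.0, 0.1)        # quoted REX core: 4 W over 0.1 mm^2
haswell = power_density(65.0, 14.7)  # quoted Haswell: 65 W over 14.7 mm^2

print(f"REX:     {rex:.1f} W/mm^2")      # 40.0 W/mm^2
print(f"Haswell: {haswell:.1f} W/mm^2")  # 4.4 W/mm^2
print(f"ratio:   {rex / haswell:.1f}x")  # ~9x denser
```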
gpderetta, almost 9 years ago

This chip was discussed on RealWorldTech a while ago: http://www.realworldtech.com/forum/?threadid=151566

Let's say it wasn't well received.
KKKKkkkk1, almost 9 years ago

There is nothing to disrupt. Exascale computing is a hoax perpetrated on the US government by unscrupulous hardware vendors. Kudos to Rex for grabbing a piece of that action.