End of Moore's Law: It's not just about physics

60 points by davidiach about 11 years ago

16 comments

reitzensteinm about 11 years ago
For a nice dose of doom and gloom, I quite like the dark silicon paper[1], which explores the limited use we'll get out of Moore's Law, *even if* it manages to continue (and as far as I understand it, transistors/$ has already flatlined, so it's at least temporarily over).

Since we're no longer getting good power scaling out of shrinking, if Moore's Law keeps up, we're essentially just getting price discounts.

A nice thought exercise for me is what computing would look like if you could fab wafers for free, today. Sort of the ad absurdum take on Moore's Law continuing.

We'd have more memory, and a ridiculous amount of flash storage, and high-end graphics cards would become cheaper (but not faster). Desktop CPUs might look more like server CPUs, but single-core performance wouldn't change one bit.

We'd probably see heterogeneous computing a whole lot more. Xeon Phi-like processors on package next to Haswell cores, at the very least.

Probably computers would start to have FPGAs in them, as well as large amounts of niche circuitry to compute hashes and encode video.

We might see computation embedded inside memory (I think Hynix did this recently but I couldn't find a link). Maybe memory that accelerates garbage collection in order to accelerate modern workloads.

So there are some interesting places to go, especially from a Hacker News perspective, but even if transistors became *free*, we still wouldn't see the rate of progress we did in the 80s or 90s (in terms of speedup on the tasks we're doing).

Of course, this is all near- to medium-term stuff. I personally believe we'll see that rate of progress again when we move off of silicon, to a different computing paradigm, or (most likely) both - I can't believe we'll inch our way to science fiction computing, or that science fiction computing won't be possible at all.

[1] Dark Silicon and the End of Multicore Scaling (ftp://ftp.cs.utexas.edu/pub/dburger/papers/ISCA11.pdf)
lsc about 11 years ago
I have, perhaps, a different perspective. I have been spending a lot of my income on computer hardware and power for the last decade or so, and was involved in spending other people's money on the same for another half-decade before that.

My take? When Intel thinks they are ahead, compute doesn't get cheaper.

In the DDR2 days, if you were on Intel, you had the choice between the stunningly inefficient and expensive Rambus RAM, or a stunningly shitty memory bus with (not very many) low-power DDR2 modules.

At the time, the AMD HyperTransport system was absolutely beautiful. Even on cheap boards, you could get more than 2x the low-power DDR2 modules per CPU that Intel could. (At the time, lower-density modules were dramatically cheaper, per gigabyte, than higher-density modules.) It worked way better when you had multiple CPUs, too.

Then DDR3 came, and Intel came up with their QPI systems, which were awesome. AMD came back with a competently built DDR3 platform, too; the G34 systems were a huge upgrade from the MCP55-chipset Socket F platform.

But the benchmarks came out in Intel's favor, even when AMD had twice the cores. I mean, you could argue that the AMD systems had advantages in some limited situations, but they had lost the dramatic advantage.

As far as I can tell, Intel has been largely resting on their laurels, price-wise. The E5-2620 is better than, but really not radically better than, the E5520. Now, some of the higher-end E5s are pretty nice, but they are priced accordingly.

Until Intel gets some real competition again, we have to pay for our performance gains.

So yeah, really, until AMD gets their legs back under them - and I hope the A1100[1] will do it - I don't expect dramatic performance-per-dollar gains from Intel.

[1] http://www.amd.com/en-us/press-releases/Pages/amd-to-accelerate-2014jan28.aspx
tedsanders about 11 years ago
It's always been economics. EUV already works. E-beam lithography already works. Carbon nanotube transistors already work. III-V transistors already work. It's just that none of these technologies work as cheaply as double-patterned silicon.
PeterisP about 11 years ago
We still have an order of magnitude or two of computing power that we can squeeze out even if semiconductor density/price stops improving.

We don't implement many optimization possibilities in each hw/sw layer because the "layer below" keeps changing all the time and we need to keep compatibility. Once we could say "this is it, this layer is as good as it will ever get", then (and not a day before) you can start to re-architect everything above it to maximize performance by throwing away flexibility that won't be needed anymore.

E.g., instead of transistors being spent on translating x86 to the underlying microcode, and cycles being spent on translating JVM/CLR bytecode (or JavaScript) to x86, we'd be able to define a single standard and adapt both processors and compilers to that. You can't break compatibility at every technological change - but if you have a reason to believe that it will finally be stable (which hasn't ever happened yet), then it does make sense to make a single final switch that disregards all compatibility and legacy issues. Even if the benefits are small, they accumulate for each such layer, and you only have to do it once.
tambourine_man about 11 years ago
I've been hearing about the end of Moore's Law ever since I was a kid. I remember smart people citing wavelengths and economics, convincing me that there was no way we'd go beyond 300nm. Oh yes, and that would mean the end of x86 as well (since Intel wouldn't have the process advantage to compensate for the less efficient CISC).

Well, maybe they are right this time, who knows, but I'm way more skeptical.
skywhopper about 11 years ago
One thing I can predict with complete confidence is that we have no clue what the path forward for improving computer processing power will be from 2022. It's fun to speculate, but that's a long time for new technologies, approaches, and architectures to take hold. Intel isn't the only company with an interest in improving the state of the art here.
DanielBMarkham about 11 years ago
I think we might have been misreading Moore's Law for years - or rather, Moore himself might have misstated it.

The radically changing reality that it describes is the number of transistors a person can economically employ at any one time to perform work on their behalf.

So yes, physics and economics might provide us with a limit (or an increasing slope of difficulty) for the construction of single chips, but the *practical effects* of the Law continue unabated, at least as far as I can see. The average person continues to be able to employ more and more electronics to perform work for them. This is increasing geometrically.

I'm not trying to dismiss either this DARPA guy or Moore, just to point out that the specific details of Moore's Law may not be as important as we make them out to be.

In my mind, the big obstacle we have now to continued growth is small-system, imperative thinking. Systems of the future will be massively parallel. I have no idea how long it will take the IT industry to truly transition, but that's the next big hurdle, not counting atoms inside a switch.
hershel about 11 years ago
For the talk by the DARPA guy: https://www.youtube.com/watch?v=JpgV6rCn5-g#t=15
anigbrowl about 11 years ago
Worst formatting I've ever seen. Anyway:

*Colwell said that for the Defense Department, he uses the year 2020 and 7 nanometers as the "last process technology node." But he adds, "In reality, I expect the industry to do whatever heavy lifting is needed to push to 5nm, even if 5nm doesn't offer much advantage over 7, and that moves the earliest end to 2022. I think the end comes right around those nodes."*

Let's assume that in 10 years' time everyone agrees we have hit the wall on how small we can go. What then? Is there any reason to believe that the popular architectures of today are necessarily optimal? I'm curious to hear people's ideas about what we do instead of shrinking dies.

My personal guess is that we move towards massively parallel systems with large numbers of low-power cores, drop bare-metal programming completely, and work on developing smarter and smarter compilers to take advantage of parallelism. My personal hope is that we find some sort of cold optical switching technology that lets us build ridiculously fast computers that look like glowing crystal cubes. Of course, I have no idea how that would work, or I'd be out pitching it XD
kstenerud about 11 years ago
Yeah, I'm sure someone had similar doom and gloom to say about vacuum tubes.

The thing is, you just don't know where the next technological revolution will come from. Yes, we've about hit the limit for silicon, and we may very well stagnate for another decade because of it, but there's always going to be someone scrappy enough to try what the big slow incumbents won't.
beloch about 11 years ago
There are gains yet to be realized besides reducing the size of the process. For example, reversible (a.k.a. isentropic or adiabatic) computing offers a way to reduce heat generation, which might combine with 3D construction in interesting ways. New ways of designing chips might allow progress to continue, but they're hard and risky. They're not terribly attractive as long as shrinking the process offers predictable advances and remains economically feasible.

Still, it's worth taking a moment to appreciate how crazy just getting under 10 nm is. The wavelength of light that is visible to the human eye starts at around 380 nm. Looking at a 10 nm chip with violet light would be like trying to navigate your house by sonar using a sub-woofer as your emitter!
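A quick back-of-the-envelope sketch of that analogy in Python (the 30 Hz sub-woofer tone and 343 m/s speed of sound are assumed numbers for illustration; the 380 nm and 10 nm figures come from the comment above):

    # How much larger is the probing wavelength than the feature?
    violet_nm = 380.0            # shortest wavelength visible to the eye, in nm
    feature_nm = 10.0            # chip feature size, in nm
    ratio = violet_nm / feature_nm
    print(f"probe wavelength / feature size = {ratio:.0f}x")            # ~38x

    # Scale the same ratio up to sound waves in a house.
    speed_of_sound = 343.0       # m/s in air (assumed)
    subwoofer_hz = 30.0          # a deep sub-woofer tone (assumed)
    sonar_wavelength_m = speed_of_sound / subwoofer_hz                  # ~11.4 m
    resolvable_m = sonar_wavelength_m / ratio                           # ~0.3 m
    print(f"house-scale equivalent: resolving a ~{resolvable_m:.1f} m feature "
          f"with an ~{sonar_wavelength_m:.0f} m sound wave")

In other words, the probing wave is roughly 38 times larger than the thing being imaged - like trying to pick out a door handle with a wave the length of the whole house.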
williadc about 11 years ago
One thing that seems to be glossed over in his argument is the impending move to 450mm wafer technology. That should allow Intel and others to continue shrinking at reasonable cost.
lucb1e about 11 years ago
> it's time to start planning for the end of Moore's Law

Do we? Because all current software runs on current hardware. Even if in the next ten years we only get another 1% increase in speed, the current software will still run at the current speed (something which we are all perfectly fine with right now). I think it's indeed a doom and gloom article.
hyp0 about 11 years ago
Smaller devices will provide the economic drive, just as they did when 14" HDDs had enough performance.
pmorici about 11 years ago
Either this article or the guy's speech, or perhaps both, are pretty lame. No supporting facts are presented as to why he thinks the 7nm generation will be the last frontier. He just basically states the obvious: if there is no profit motive or the technology isn't there, then it won't happen. No kidding.

The whole article reads like an appeal to authority.

Are there any factual reasons to believe this time is actually different from the past 35 years?
transfire about 11 years ago
For decades I have heard that 11nm would be the end of Moore's Law as we know it. This is proving to be true. CPUs have been frozen at 1 to 4GHz for nearly a decade, with the latest advances going to power savings. Currently at 22nm, the next step is 14, and then 11. Perhaps they can eke out one more jump to 7nm or 5nm, but I expect that will barely be worth the effort and thus will drag out for at least a decade itself.

But Moore fans need not worry. There is a clear next step, and I am wondering why no one is talking about it: optical interconnects. Connecting chips, and circuits within chips, with optical channels should allow plenty of room for speeding up processors and reducing power requirements well into the mid-21st century.