Why CPUs aren't getting any faster (2010)

78 points by michael_nielsen about 11 years ago

16 comments

delroth about 11 years ago
CPUs are getting faster. Sandy Bridge is a 15-20% IPC improvement on Nehalem for some heavily integer and memory access based workloads. On the same workloads, Haswell is another 15-20% IPC improvement on Sandy Bridge.

I work on the Dolphin Emulator (https://dolphin-emu.org/), which is a very CPU-intensive program (it emulates a 730MHz PowerPC core, plus a GPU, plus a DSP, all of that in realtime). We try to track CPU improvements to provide our users with proper recommendations on what hardware to go for. Here are the results of a CPU benchmark based on our software: https://docs.google.com/spreadsheet/ccc?key=0AunYlOAfGABxdFQ0UzJyTFAxbzZhYWtGcGwySlRFa1E#gid=0
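A minimal sketch of the kind of CPU-bound, single-threaded integer workload such a benchmark measures; the loop body and iteration count are illustrative only, not Dolphin's actual benchmark:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        const uint64_t iters = 500000000ULL;
        uint64_t x = 1;
        clock_t start = clock();
        /* A dependent chain of integer ops keeps this CPU-bound rather than memory-bound. */
        for (uint64_t i = 0; i < iters; i++)
            x = x * 6364136223846793005ULL + 1442695040888963407ULL;
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        /* Printing x stops the compiler from optimizing the loop away. */
        printf("x=%llu  %.2fs  %.1f M iters/s\n",
               (unsigned long long)x, secs, iters / secs / 1e6);
        return 0;
    }

Higher IPC at the same clock shows up directly as more iterations per second in a loop like this.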

trustfundbaby about 11 years ago
The article doesn't go into it in depth, but I think the answer is that chip makers hit a wall somewhere over the 3GHz range, where it became difficult to ramp up CPU frequency without spending ridiculous sums on cooling the processor so it could operate properly (you'll notice that even now, the fastest Intel chips come in at the 3.1-3.2GHz range ... there's a reason for that).

I was big into building computers during the CPU race between AMD/Intel back in the late 90s and early 2000s, and the Intel Pentium 4 processor line is notable for pushing the envelope from the high 2GHz range all the way up to 3.4GHz and 3.6GHz (I still have a 3.4GHz chip sitting in my home office ... those were the days!)

Wikipedia does a great job of chronicling what happened with the Pentium 4 line here, http://en.wikipedia.org/wiki/Pentium_4, with hints at what I've just alluded to above:

"Overclocking early stepping Northwood cores yielded a startling phenomenon. While core voltage approaching 1.7 V and above would often allow substantial additional gains in overclocking headroom, the processor would slowly (over several months or even weeks) become more unstable over time with a degradation in maximum stable clock speed before dying and becoming totally unusable."

It was after their failures with the brute-force attempt at higher CPU clocks that Intel finally went a different way, initially with the Pentium M line (code named Dothan and Banias), http://en.wikipedia.org/wiki/Pentium_M, and eventually the Core Duo/Core series that they've since built on.
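The back-of-the-envelope reasoning behind that wall, as a rough sketch: dynamic CPU power scales roughly as

    P_{\text{dyn}} \approx C \, V^2 f

and higher clocks usually need higher core voltage, so power (and heat) grows much faster than frequency. With made-up but plausible numbers, going from 3.0 GHz at 1.3 V to 3.8 GHz at 1.5 V costs about (3.8/3.0)(1.5/1.3)^2 ≈ 1.7x the power for a 27% clock gain.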

higherpurpose about 11 years ago
Interesting that the article mentions SNB, because since then the gains in performance have been much smaller. SNB was the last "significant" gain in performance for Intel CPUs, I'd say (+35 percent over the previous generation). All of the new generations since then have gotten something like a 10 percent increase in IPC at best, and Broadwell will probably get a max gain of +5 percent.

To "hide" this, Intel has refocused its marketing on power consumption, where gains seem easier to achieve (for now), as well as on other pure marketing tricks such as calling what used to be the "Turbo Boost speed" the "normal speed". For example, I recently noticed a Bay Trail laptop being marketed at "2 GHz", even though Bay Trail's base speed is much lower than that.

Spittie about 11 years ago
I personally have my own idea about that: sure, we have hit many walls, but I don't think that's the main reason for the slowdown in CPU development. I think it's mostly because R&D moved from making CPUs faster to making CPUs consume less power, to follow the laptop/mobile market (as everyone loves/hates to say, every year is the year of the death of the PC).

Also, we're at the point where most very demanding software either isn't bottlenecked by the CPU or can just have more cores thrown at the problem. And software is starting to leverage GPU acceleration, which gives a huge boost when usable, and GPUs are getting a lot faster every new generation.

logicallee about 11 years ago
This is why CPUs aren't getting any faster:

https://www.google.com/search?q=c+%2F+5+ghz
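That query just asks Google to divide the speed of light by a 5 GHz clock, i.e. how far a signal can possibly travel in one clock cycle:

    \frac{c}{5\ \text{GHz}} = \frac{3 \times 10^8\ \text{m/s}}{5 \times 10^9\ \text{s}^{-1}} = 0.06\ \text{m} = 6\ \text{cm}

So at 5 GHz a signal can cross at most about 6 cm per tick, and considerably less in real silicon, which is one intuition for why simply raising the clock stops paying off.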

zokier about 11 years ago
The power wall theory is a bit odd, though. Why are modern Intel desktop CPUs limited to such low power budgets? Ivy Bridges were just 77W (TDP), and now Haswells are apparently 65-84W. Desktop platforms should be able to handle far more power, at least in the 100-150 watt range. Meanwhile, desktop GPUs are regularly hitting 200-300 watt TDPs, with far more limited cooling systems.

Why isn't Intel able (or willing) to push the power envelope higher in desktops?

bluedino about 11 years ago
Additions to the instruction set can help out where raw GHz don't get things done.

Another big improvement has been moving certain functions to hardware - Intel's Quick Sync is a great example of this.
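As a sketch of the first point, here is what an instruction-set addition buys you in plain C with SSE2 intrinsics: the same array sum, but four 32-bit adds per instruction instead of one. The function and data are illustrative only:

    #include <emmintrin.h>  /* SSE2 intrinsics */
    #include <stdint.h>
    #include <stdio.h>

    /* Sum an int32 array four lanes at a time; assumes n is a multiple of 4. */
    static int32_t sum_sse2(const int32_t *a, size_t n) {
        __m128i acc = _mm_setzero_si128();
        for (size_t i = 0; i < n; i += 4) {
            __m128i v = _mm_loadu_si128((const __m128i *)(a + i));
            acc = _mm_add_epi32(acc, v);  /* four additions in one instruction */
        }
        int32_t lanes[4];
        _mm_storeu_si128((__m128i *)lanes, acc);
        return lanes[0] + lanes[1] + lanes[2] + lanes[3];
    }

    int main(void) {
        int32_t data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        printf("%d\n", sum_sse2(data, 8));  /* prints 36 */
        return 0;
    }

Same clock speed, but the vector unit retires several elements per cycle, which is exactly the "help where raw GHz don't" effect.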

neona about 11 years ago
I hope we see an increase in real software parallelism, since that's the only real way out of this for the foreseeable future. Tacking on more cores is still an option we have; we're just having trouble using them right now in many contexts.

In the longer term, we'll hopefully see advancements that let us fundamentally change how logic processors are constructed, such as possibly photonic logic chips. Only a major shift will let us break through the current single-thread performance wall.
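For the first paragraph, a minimal sketch of what "using more cores" looks like in C with POSIX threads; the per-item work and the thread count are placeholders:

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define N 1000000L

    static double results[NTHREADS];

    /* Each thread handles one contiguous slice of the iteration space. */
    static void *worker(void *arg) {
        long id = (long)arg;
        long begin = id * (N / NTHREADS), end = begin + N / NTHREADS;
        double acc = 0.0;
        for (long i = begin; i < end; i++)
            acc += 1.0 / (i + 1);  /* stand-in for real per-item work */
        results[id] = acc;
        return NULL;
    }

    int main(void) {
        pthread_t threads[NTHREADS];
        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&threads[t], NULL, worker, (void *)t);
        double total = 0.0;
        for (long t = 0; t < NTHREADS; t++) {
            pthread_join(threads[t], NULL);
            total += results[t];
        }
        printf("total = %f\n", total);
        return 0;
    }

The hard part the comment alludes to is not this mechanical splitting but workloads whose iterations are not independent of one another.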

th3iedkid about 11 years ago
Weren't there walls back in the 90s? I would rather bet on a new tech leap than go by federated designs at this stage.

zwegner about 11 years ago
The article doesn't really seem to answer the question the title says it does.

Of course there are the well-known reasons: nonlinearity of power vs. frequency scaling, diminishing returns in hardware design, etc. But there are others that we don't hear so much about.

Hardware design is still in a pretty nascent stage, technology-wise. The languages used (say SystemC or Verilog) offer very little high-level abstraction, and the simulation tools suck. Sections of the CPU are still typically designed in isolation in an ad-hoc way, using barely any measurements, and rarely on anything more than a few small kernels. Excel is about the most statistically advanced tool used in this. Of course, CPUs are hugely intertwined and complicated beasts, and the optimal values of parameters such as register file sizes, number of reservation stations, cache latency, decode width, whatever, are all interconnected. As long as design teams only focus on their own little portion of the chip, without any overarching goal of global optimization, we're leaving a ton of performance on the table.

And for that matter, so is software/compiler design. The software people have just been treating hardware as a fixed target they have no control over, trusting that it will keep improving. That makes us lazy, and our software becomes more and more slow, by design (The Great Moore's Law Compensator if you will, also known as https://en.wikipedia.org/wiki/Wirth%27s_law).

The same problems we see in hardware design (huge numbers of deeply intertwined parameters) also apply to software/compiler design. We're still writing performance code in C++, for chrissakes. And even beyond that, the parameters in software and hardware are deeply intertwined with each other. To optimize hardware parameters, you need to make lots of measurements of representative software workloads. But where do those come from, and how are they compiled? Compiler writers have the liberty to change the way code is compiled to optimize performance on a specific chip (even if this isn't done much in practice). To get an actually representative measurement of hardware, these compiler changes need to be taken into account. Ideally, you'd be able to tune parameters at all layers of the stack and design software and hardware together as one entity. That is, make a hardware change, then make lots of compiler changes to optimize for that particular hardware instantiation. This needs to be automated, easy to extend, and super-duper fast, to try all of the zillions of possibilities we're not touching at the moment. There are even "crazy" possibilities like moving functionality across the hardware/software barrier. Of course it's a difficult problem, but we've made almost zero progress on it.

Backwards compatibility is another reason. New instructions get added regularly, but only for cases where big gains are achieved in important workloads. For the most part, CPU designers want improvements that work without a recompile, because that's what most businesses/consumers want. One can envision a software ecosystem for which this wouldn't be such a problem, but instead we have people still running IE6/WinXP/etc. Software can move at a glacial pace, and hardware needs to accommodate it. But this of course also enables the awfully slow pace of software progress.

ufmace about 11 years ago
I'm curious if anyone here has any perspective on how close we are to absolute physical limits in CPU design. Last I heard, we're getting pretty close to dealing with quantum issues due to how small the transistor and connection sizes are getting, the frequency of light we need to do the etching, etc. I wonder if anybody knows how close we are to hitting hard limits in various categories. Surely we'll hit some eventually, and I wonder what happens then.

snarfy about 11 years ago
Grace Hopper explains it best:

http://www.youtube.com/watch?v=JEpsKnWZrJ8

akuma73 about 11 years ago
The end of Dennard scaling is the root cause. Power density no longer stays constant as transistors shrink, and that will ultimately be the end of Moore's law.

http://research.microsoft.com/en-us/events/fs2013/doug-burger_beastfrombelow.pdf
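For reference, the classic Dennard scaling argument written out as a rough sketch (scaling factor k > 1, all quantities per transistor):

    \text{ideal scaling: } C \to C/k,\; V \to V/k,\; f \to kf
    \;\Rightarrow\; P = C V^2 f \to P/k^2,\;\; \text{area} \to \text{area}/k^2,\;\; P/\text{area} \to \text{const}

    \text{voltage stuck: } V \text{ fixed}
    \;\Rightarrow\; P \to P,\;\; P/\text{area} \to k^2 \cdot P/\text{area}

Once supply voltage stopped shrinking along with the transistors, power density started climbing instead of staying flat, which is the power wall the linked slides describe.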

exelius about 11 years ago
IMO CPUs aren't getting any faster because we don't really need them to be much faster.

Now, before the flames begin, let me caveat that as "we don't really need them to be much faster at single-threaded workloads." The article mentions this in a roundabout way in the context of specialized processors, specifically GPUs: GPUs are basically hyper-concentrated thread runners. They're not very fast at running any single thread, but they have efficient shared memory and can run thousands of individual threads at once.

For larger workloads, we've gotten a lot more efficient through cloud computing. An individual CPU in the cloud is really not any faster than it was 5 years ago, but the advances made in energy efficiency (aka heat) and miniaturization mean you can fit a lot more of them in a smaller space.

While the technical hurdles to going faster are very real, I think we've built a technical infrastructure that's just not as reliant on the performance of any single piece of the system as it used to be. So there is less demand for faster CPUs when, for many of the traditional "hard" computational workloads, more CPUs work almost as well and are a lot easier to scale than faster CPUs.

philosophus about 11 years ago
I realize this article is from 2010, but it could have mentioned AMD, which does have a 5 GHz chip available now. It requires liquid cooling however.

jokoon about 11 years ago
or "why it's more and more relevant to code with performance in mind, and consider minimalist designs"