Why CPUs Aren't Getting Any Faster

56 points by pietrofmaggi, over 14 years ago

12 comments

1053r, over 14 years ago
The reason CPUs aren't getting any faster is only tangentially mentioned in the article. Yes, it is heat dissipation. But why then did they get faster for so many decades?

As the process size drops, you can crank up the clock-speed while leaving the total heat dissipation constant. But the heat density is related to the voltage, resistance, and the amount of time the transistors spend partially on or off (we wish they were perfect switches, but they aren't really). So as you switch faster, unless you can lower the resistance or voltage (which requires changing your materials), you probably spend more and more time in a partially on or off state. This means your heat density rises to the point where you are just south of burning things out.

You make some engineering decision about the reliability you want in your chips, and calculate or test how high a heat density you can tolerate. But unless you change your materials so they use lower voltage, or invent new ways to move heat away faster, or use materials that are more conductive, you aren't upping the heat density or clock rate. But you can still make them smaller and use less power total for the same amount of computation.

This is why reversible computing (gives off less heat), diamond substrates (much higher thermal conductivity), microfluidic channels (moves heat away faster), and parallelism (larger chips = more computation) are being explored. And only the last one is practical THIS year.
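As a rough sketch of the relation behind this comment (the textbook CMOS switching-power approximation, not figures from the article or the commenter): dynamic power grows with switched capacitance, supply voltage squared, and clock frequency, and heat density is that power spread over the die area.

```latex
% Illustrative textbook approximation, not from the article:
% alpha = activity factor, C = switched capacitance, V = supply voltage, f = clock frequency.
\[
  P_{\text{dyn}} \approx \alpha\, C\, V^{2} f ,
  \qquad
  \text{heat density} \approx \frac{P_{\text{dyn}} + P_{\text{leak}}}{A_{\text{die}}} .
\]
```

While each process shrink still let V drop (Dennard scaling), f could rise at roughly constant heat density; once V stopped scaling, raising f pushes heat density straight toward the reliability limit described above.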
WillyF, over 14 years ago
Most of the comments seem to focus on the supply side, but I think that demand for faster processors is waning—at least at the consumer level. I used to care what speed my processor was. Now I have to check my system info to remind me what I have in my MacBook Pro. Cloud computing has changed a lot for me as a user.

Building faster processors is extremely expensive, so demand has to be a key concern for manufacturers. I still think there's plenty of demand for faster processors, and I'm sure we'll continue to see lots of innovation, but the issue doesn't seem to be as pressing as it was 10 years ago.
pbw, over 14 years ago
This wasn't a great article, but the topic fascinates me. I'm surprised the shift from faster clocks to multi-core went so smoothly. No one seems to really mind. I really like my quad core; 4 processors is a lot nicer than one.

But I wonder about hundreds or thousands of cores, whether we'll see that and whether people will start to worry that single-threaded software uses ever smaller amounts of their shiny new hardware. Will there ever be some magic layer that can run single-threaded software on many cores?

I wrote about the end of faster clocks and the start of multi-core recently: http://www.kmeme.com/2010/09/clock-speed-wall.html
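Amdahl's law is the usual way to put numbers on that worry (a standard formula, not something from the linked post): with a fraction p of the work parallelizable across n cores,

```latex
\[
  S(n) = \frac{1}{(1 - p) + \dfrac{p}{n}} ,
  \qquad
  \lim_{n \to \infty} S(n) = \frac{1}{1 - p} .
\]
```

Even with p = 0.95, a thousand cores cannot deliver more than a 20x speedup, which is why single-threaded code shrinking relative to the hardware is a real concern.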
ramchip, over 14 years ago
"As CPUs have become more capable, their energy consumption and heat production has grown rapidly. It's a problem so tenacious that chip manufacturers have been forced to create 'systems on a chip'--conurbations of smaller, specialized processors."

I don't think that's a very good explanation of SoCs.
alain94040, over 14 years ago
I believe the main reason is CPU micro-architecture. (Note: I have had way too much exposure in my life to the design of the CPU in your phone and in your laptop to have a non-biased opinion.)

What does that mean? Essentially that the race for deep pipelines has ended, with 20-40 stages being the optimal depth. After that, miss penalties just hurt too much. Therefore, when you can't make the pipeline deeper, you can't make the frequency much faster; you are stuck with following process progress (which is already pretty good). So it's more tempting to go after multi-cores: same pipeline depth, more silicon, more efficient overall.
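A back-of-the-envelope model of why deeper pipelines stop paying off (a standard textbook approximation, not the commenter's own numbers): with branch frequency b, misprediction rate m, and a flush penalty of roughly d pipeline stages,

```latex
\[
  \text{CPI}_{\text{eff}} \approx \text{CPI}_{\text{base}} + b \cdot m \cdot d .
\]
```

Each added stage buys a slightly higher clock but also a larger d, so past a few dozen stages the misprediction term eats the gain; that trade-off is the sweet spot in pipeline depth referred to above.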
edparcell, over 14 years ago
I think that one approach that may yield domain-specific improvements would be to add certain numerical routines to the x86 instruction set.

When I was working in finance as a quant, I was shocked by the amount of time code spent executing the exponential function - it is used heavily in discount curves and similar constructs, which are the building blocks of much of financial mathematics. An efficient silicon implementation would have yielded a great improvement in speed.
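For concreteness, a minimal sketch (in C, with made-up numbers, not from any real pricing library) of the kind of hot loop being described: building even a flat discount curve costs one exp() call per grid point, and pricing code evaluates curves like this constantly.

```c
/* Illustrative only: a flat continuously-compounded discount curve,
 * D(t) = exp(-r * t), sampled monthly over 30 years.
 * The rate and grid are assumptions for the example. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double rate = 0.03;          /* assumed flat rate */
    double discount[361];              /* monthly grid points, 0..30 years */

    for (int m = 0; m <= 360; m++) {
        double t = m / 12.0;           /* time in years */
        discount[m] = exp(-rate * t);  /* one libm exp() call per point */
    }

    printf("D(10y) = %.6f\n", discount[120]);
    return 0;
}
```

Whether a dedicated exp instruction would beat today's SIMD polynomial approximations is debatable, but the sketch shows why exp dominates profiles of this kind of code.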
jdavid, over 14 years ago
I have a few reasons chip speeds have stalled:

- Intel has been at the top for too long.
- x86 is too complex.
- nVidia and AMD are being blocked from making x86 chips.
- PCs pretty much require x86 to exist.

Moving to graphene might allow for an increase in chip temperature, but do you really want a processor running at a few hundred to a thousand degrees? Are you willing to pump 200-2k watts into a chip?

Thank god we didn't make this mistake with mobile, where most platforms use an abstraction-level language like C#, Java, or JavaScript.

I am personally hoping that data centers really are evaluating ARM chips. The instruction set is smaller, they are lower power, and there are more producers of ARM cores, so prices are much lower. A good Intel chip will cost $200-$500, while a chip with an ARM core is probably in the $25-$100 range. How much does a Tegra 2, A4, or Snapdragon cost?

I imagine the future of data centers will be arrays of system-on-chip ARM cores paired with high doses of flash memory. Running your web app off of 1,000 ARM cores might cost you a few thousand a month.
smackfu, over 14 years ago
Back when I was in college for CE, one of my professors was very concerned that testing CPUs would eventually be the bottleneck - that verifying a chip was actually working correctly would become too great a burden once the number of transistors reached a high enough level.

Of course, I never heard about this again. Ring a bell with anyone?
ctkrohn, over 14 years ago
This is probably a stupid question, but if heat dissipation is a big problem, why can't we just build better cooling systems: bigger heatsinks, refrigeration, etc.? I'm not an electrical engineer, so I'm sure there's something I'm missing.
olegkikin, over 14 years ago
CPUs are getting faster, even if they have the same clock speed: http://www.cpubenchmark.net/high_end_cpus.html
Brashman, over 14 years ago
The article seems to imply that optimizing for power means Intel isn't innovating in CPUs. Optimizations in power allow the chip to be clocked faster (or perform more in parallel), leading to overall performance improvements. These optimizations are improvements in CPUs.
bherms, over 14 years ago
Also, we're flirting with the limits of Moore's "law" here. I did a report back in high school on it and speculated you'd never really see processors over 4GHz. I guess I was right.

As you start to shrink transistors and the spacing between them, the chips get hotter, burn more power, and throw more errors. You also get electron "leakage" where the electrons inadvertently jump between gates, so the processors become less efficient and you have to run extra fault-tolerance to check for the errors.

Multi-core and bringing all the other components up to speed is the way to go for now until a newer technology comes, like quantum computing or light-based processing.