ARM Pioneer Sophie Wilson Also Thinks Moore’s Law is Coming to an End

115 points by jcbeard, about 8 years ago

17 comments

Animats, about 8 years ago
Some limits were hit a decade ago. The Pentium 4 (2004) clocked at 3.8GHz max. Most Intel processors today are slower than that. Intel's fastest offering is a little over 4GHz.

The article says that 28nm will dominate for another decade, even though 14nm fabs exist. Having to use extreme ultraviolet (really soft X-rays) for lithography runs costs way up. EUV "light sources" are insanely complex, involving heating falling droplets of metal to plasma levels with lasers. It's amazing that works as a production technology. The equipment looks like something from a high energy physics lab.

It's interesting that we hit the limit of photons before the limits of atoms or electrons.

Another problem with all this downsizing is electromigration. Every once in a while, an atom gets pulled out of position by the electric field across a gap. Higher temperatures make it worse. Narrower wires make it more of a problem. This is now a major reason ICs wear out in use.

Getting rid of the heat is another problem. High performance CPUs are already cooling-limited. This is also why 3D IC schemes aren't too useful for active components like CPUs. Getting heat out of the middle of the stack is hard. Memory can be stacked, if it's not used too hard.

There's no problem making lots of CPUs on a chip, if the application can use them. Things look better server-side; you can use vast numbers of CPUs in a server farm, but it's hard to see what 20 or 100 CPUs would do for a laptop.

Drastically different architectures may help on specialized problems. GPUs have turned out to be more generally useful than expected. There will probably be "deep learning" ICs; that's a problem where the basic operation is simple and there's massive parallelism.

For ordinary CPU power per CPU, we're close to done.
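The electromigration wear-out described above is commonly modeled by Black's equation. A minimal sketch of the trend, assuming textbook parameter values (the 0.9 eV activation energy and exponent n = 2 are illustrative, not from the comment):

```python
import math

# Black's equation: MTTF = A * J**(-n) * exp(Ea / (k * T)), where J is
# current density and T is absolute temperature. Narrower wires at the
# same current mean higher J; hotter chips mean higher T; both hurt.

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def relative_mttf(j_scale: float, temp_k: float,
                  n: float = 2.0, ea_ev: float = 0.9) -> float:
    """Mean time to failure relative to a baseline at J = 1, T = 300K."""
    baseline = math.exp(ea_ev / (K_BOLTZMANN_EV * 300.0))
    return j_scale ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_k)) / baseline

# Doubling current density and running at 350K instead of 300K cuts
# lifetime to a fraction of a percent of the baseline:
print(relative_mttf(j_scale=2.0, temp_k=350.0))  # ~0.002
```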
static_noise, about 8 years ago
Moore's law has been the driving force of chip development? Prophet Moore predicted the future and now engineers start breaking the law?

Isn't it the other way around, that Moore made an observation about an effect that arose naturally? The formula was then called Moore's law, and its extrapolation had great predictive power for a long time.

Similar effects occur all through industries when you start scaling things up. Quality will go up and cost per unit will go down, often following a simple mathematical formula which describes the learning curve.

In many technologies there is something called maturity, where the straight line in the diagram starts to bend and approaches a technical limit. Markets overcome this a few times by changing the technological approach to solving a problem to an approach that has a better limit. This makes the general trend continue for decades... until the point where the next technology is so expensive that no one can afford it anymore.

Thus far silicon has won every round, and chip manufacturing plants cost many billions of dollars.
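The "simple mathematical formula" for the learning curve is usually Wright's law: each doubling of cumulative production multiplies unit cost by a constant progress ratio. A minimal sketch, assuming an illustrative 20% cost drop per doubling:

```python
import math

def unit_cost(n_units: float, first_unit_cost: float,
              progress_ratio: float = 0.8) -> float:
    """Wright's law: cost of the n-th unit, assuming each doubling of
    cumulative production multiplies unit cost by `progress_ratio`."""
    b = math.log2(progress_ratio)  # negative exponent, ~-0.32 for 0.8
    return first_unit_cost * n_units ** b

# With an assumed 20% reduction per doubling, the 1000th unit costs
# roughly 11% of the first:
print(unit_cost(1000, first_unit_cost=100.0))  # ~10.8
```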
deepnotderp, about 8 years ago
I think a key point that's ignored is that data movement is the new problem. For example, according to Lawrence Livermore National Laboratory, the cost of moving a 64-bit word 1mm ON CHIP at the 10nm projection is approximately equal to doing a 64-bit FLOP. And the cost of a DRAM access is outrageous... It's what's holding back exascale and will hold back general purpose compute as well.

Architectures MUST change radically to adapt to this, or there can be no progress.
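A back-of-envelope sketch of that claim. Only the "1mm of on-chip movement costs about one 64-bit FLOP" ratio comes from the comment; the absolute picojoule figure is an assumed round number:

```python
# The only input taken from the comment is the movement/compute ratio;
# the 10 pJ figure is an illustrative assumption, not a measured spec.
FLOP_ENERGY_PJ = 10.0             # assumed energy of one 64-bit FLOP
MOVE_PER_MM_PJ = FLOP_ENERGY_PJ   # per the comment: 1mm moved ~= 1 FLOP

def kernel_energy_pj(flops: int, words_moved: int,
                     avg_distance_mm: float) -> float:
    """Total energy of a kernel: compute plus on-chip data movement."""
    return flops * FLOP_ENERGY_PJ + words_moved * avg_distance_mm * MOVE_PER_MM_PJ

# A kernel doing one FLOP per word fetched from 5mm away spends
# ~83% of its energy just moving data:
compute = 1 * FLOP_ENERGY_PJ
total = kernel_energy_pj(flops=1, words_moved=1, avg_distance_mm=5.0)
print(f"movement share: {1 - compute / total:.0%}")
```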
to3m, about 8 years ago
> In 1975, Wilson was part of the team that developed the 6502

You can get a better summary of her early career from her Computer History Museum oral history interview: http://www.computerhistory.org/collections/catalog/102746190 - worth your time.
0xCMP, about 8 years ago
I imagine this will begin to put some pressure back toward making things faster again, as speed-ups that were previously expected fail to appear (e.g. JS performance on mobile).

These days it's not a big deal to most developers, but I think over the next few years, if there aren't major advances in speed, we will want to get that extra battery life and speed out of our applications and devices. Independent developers will hopefully have a good financial reason to do that, unlike today.
paulsutter, about 8 years ago
AI processor speedups will advance faster than Moore's law in the next 2-3 years, mostly due to lower precision (12/8/4 bits instead of 64/32 bits), massive parallelism, and a different programming paradigm. Google's TPUs, for example, are close to hardwired matrix multiplication. Maybe speedups for traditional scalar-oriented code matter less now.

Intel Lake Crest: "will enable training of neural networks at 100 times the performance on today's GPUs, said Diane Bryant, executive vice president and general manager of Intel's data center group"

https://venturebeat.com/2016/11/17/intel-will-test-nervanas-lake-crest-silicon-in-first-half-of-2017-knights-crest-also-coming/

Google TPU: "The TPU...used 8-bit integer math...process 92 TOPS" (trillion operations per second)

https://www.nextplatform.com/2017/04/05/first-depth-look-googles-tpu-architecture/

Generally:

http://www.moorinsightsstrategy.com/what-to-expect-in-2017-from-amd-intel-nvidia-xilinx-and-others-for-machine-learning/
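A minimal NumPy sketch of the low-precision idea: symmetric int8 quantization with int32 accumulation. This illustrates the general technique, not the TPU's actual pipeline:

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric linear quantization of a float32 tensor to int8."""
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8), scale

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float32)
b = rng.standard_normal((64, 64)).astype(np.float32)

qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)

# Accumulate in int32 (as int8 hardware does), then rescale to float.
approx = qa.astype(np.int32) @ qb.astype(np.int32) * (sa * sb)
exact = a @ b

# For inference-style workloads, relative error on the order of 1%
# is often an acceptable trade for the huge throughput gain:
print(np.abs(approx - exact).max() / np.abs(exact).max())
```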
kurthr, about 8 years ago
The death of Moore's Law will have as much to do with CFOs deciding that the investment isn't worth the return as it will with technological innovation. When Intel decided to lay off 12k last year, it seemed like the writing was on the wall. ITRS seemed to think so, anyway:

https://www.hpcwire.com/2016/07/28/transistors-wont-shrink-beyond-2021-says-final-itrs-report/

Going from Tick-Tock to Tick-Tock-Tweak... and this year to Tick-Tock-Tweak-Tuck, the fourth year of 14nm (still as compact as other companies' 10nm), makes the slowdown palpable. Perhaps they will manage a 2.7x shrink at their "10nm node" with or without EUV, but it's not the straight scaling of yesteryear.
api, about 8 years ago
I disagree about the limitations of software parallelism. The article is correct that many existing algorithms like ray tracing, or apps like web rendering, have inherent limits to parallelization, but there exist a large number of "embarrassingly parallel" things that simply are not done on small PCs and phones right now because they're too costly. This includes things like neural networks, genetic algorithms, all kinds of optimization algorithms, etc.

This article is from 2007, so it predates the AI renaissance. Lots of AI, ML, and optimization stuff can happily eat as many cores as you want to throw at it.

Then there's the multitasking angle. On a desktop, at least, I often run dozens of applications, developer VMs, etc. I could definitely use 20 cores in a desktop/laptop right now. We have tests that easily max out a 24-core server that I'd love to run on my own box.
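A sketch of the embarrassingly parallel case: a parameter sweep whose evaluations share nothing, so throughput scales with core count. The objective function here is a made-up stand-in for any expensive independent evaluation:

```python
from concurrent.futures import ProcessPoolExecutor

def objective(x: float) -> float:
    """Stand-in for one expensive, independent evaluation, e.g. scoring
    one genome in a genetic algorithm or one hyperparameter setting."""
    return sum((x - i / 1000.0) ** 2 for i in range(100_000))

if __name__ == "__main__":
    candidates = [i / 10.0 for i in range(200)]
    # Each evaluation is independent, so adding cores keeps helping;
    # there is no fixed serial bottleneck to cap the speedup.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(objective, candidates))
    print(min(zip(scores, candidates)))
```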
visarga, about 8 years ago
On the other hand, many computer functions have reached the "good enough" level. A normal laptop can handle web browsing and document editing just fine. Resolution beyond Retina level and digital cameras over 10 megapixels are not necessary. Also, sound fidelity over 44kHz is not useful. Video over 4K is also on a curve of diminishing returns. We have little extra improvement to get in many domains. Where do you think more processing power would add a large benefit?
rhaps0dy, about 8 years ago
> Even for highly parallel workloads like ray tracing, the performance increase levels off at about 20 times. "No matter how many processors I apply, ray tracing ain't going to go any faster than 20 times faster,"

What? That's just not true. Matrix multiplication is one such embarrassingly parallel workload that can go much faster than 20 times. Ray tracing very probably can too.
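The 20x figure reads like an Amdahl's-law ceiling: with parallel fraction p, speedup on N processors is 1/((1-p) + p/N), which tends to 1/(1-p) as N grows. A 20x cap corresponds to p = 0.95, while a workload like matrix multiplication has p close to 1 and keeps scaling. A quick sketch:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup on n processors when a fraction p
    of the work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

# A 20x asymptotic ceiling implies a 5% serial fraction:
print(amdahl_speedup(p=0.95, n=10_000))   # ~19.96, capped near 20
# A nearly embarrassingly parallel kernel keeps scaling:
print(amdahl_speedup(p=0.999, n=10_000))  # ~909
```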
buzzybee, about 8 years ago
If one believes Ray Kurzweil (among others), this is just a phase shift, where the focus of change moves away from this technology toward a new one. But then the question is: which one? We do have some options floating around.
rini17, about 8 years ago
Memory did not get faster by nearly as much. You can cram bazillions of transistors onto a chip, and even do clever tricks to fix power consumption/dissipation... but no trick will feed them data fast enough.
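In roofline-model terms this is machine balance: peak FLOP/s divided by memory bandwidth. A sketch with assumed, round hardware numbers, not any specific chip's spec sheet:

```python
PEAK_GFLOPS = 1000.0  # assumed peak compute: 1 TFLOP/s
MEM_BW_GBS = 50.0     # assumed DRAM bandwidth: 50 GB/s

# Machine balance: FLOPs the chip must do per byte fetched to stay busy.
balance = PEAK_GFLOPS / MEM_BW_GBS  # 20 FLOPs per byte

def attainable_gflops(flops_per_byte: float) -> float:
    """Roofline: achieved throughput for a kernel with the given
    arithmetic intensity."""
    return min(PEAK_GFLOPS, flops_per_byte * MEM_BW_GBS)

# A streaming kernel doing ~0.1 FLOP per byte uses 0.5% of peak:
print(attainable_gflops(0.1) / PEAK_GFLOPS)  # 0.005
```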
Symmetry, about 8 years ago
Yup, we won't be able to keep shrinking MOSFETs forever. There's likely to be an interregnum of some sort before a new computing substrate is developed that gives us substantially faster gates. And possibly fewer but higher-frequency gates at first, which would be interesting.

In the meantime we might see a new golden age of computer architecture, where the only way to increase performance is to question assumptions about how we design computers.
deepnotderp, about 8 years ago
We also always tend to neglect the equally important counterpart to Moore's Law, Dennard scaling. Dennard scaling is on its deathbed, and has been plateauing since around 40/28nm. Since power consumption is now the problem for everyone, including supercomputers, this problem will compound the almost impossible to solve data movement wall.
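For reference, the classic Dennard scaling rules, sketched numerically with idealized textbook factors (not measured data): shrink dimensions and voltage by 1/κ and power density stays flat; hold voltage constant and it grows as κ²:

```python
def power_density(kappa: float, voltage_scales: bool) -> float:
    """Relative power density after one ideal scaling step by factor kappa.

    Per transistor: P ~ C * V^2 * f, with C ~ 1/kappa and f ~ kappa.
    Transistor density grows by kappa^2. Under classic Dennard scaling
    V also shrinks by 1/kappa; post-Dennard, V stays put.
    """
    c = 1.0 / kappa
    f = kappa
    v = 1.0 / kappa if voltage_scales else 1.0
    per_transistor = c * v ** 2 * f
    return per_transistor * kappa ** 2  # times the density increase

print(power_density(1.4, voltage_scales=True))   # 1.0: the free lunch
print(power_density(1.4, voltage_scales=False))  # ~1.96: the power wall
```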
justinbaker84, about 8 years ago
Very sad to see this ending.
framebit, about 8 years ago
Interesting and relevant paper on the end of Moore's Law: ftp://ftp.cs.utexas.edu/pub/dburger/papers/ISCA11.pdf
kutkloon7, about 8 years ago
Is this even news?

I have heard the dramatic "Oh no, Moore's law is coming to an end" a dozen times during computer engineering courses. Professors are usually slow to adopt new information, and it is already a couple of years since I took those courses. I think transistor count growth has been slowing down for about a decade already.