I've seen this interpretation before in HN comments and I think it is an unnecessary distortion of Moore's Law.

Note the historical context in which the observation was originally made (1965), when doubling was achieved through process-size reduction. The effect this had on speed was twofold: operating frequency would increase roughly in proportion (by about 40%); and, more uniquely to the time, since predecessors' transistor counts were so limited, there was significant room for improvements in instruction implementation and specialization as the available transistors increased.

Although it's implied, Moore also stated this explicitly in his 1965 paper:

> [...] In fact, shrinking dimensions on an integrated structure makes it possible to operate the structure at higher speed for the same power per unit area. [1]

This effect was later defined more explicitly as Dennard scaling, in 1974 [2].

Transistor count increases in recent years have very little to do with Dennard scaling or improving individual instruction performance, and everything to do with improving some form of parallel compute by figuring out how to fit, route, and schedule more transistors at the same process size, which does not have the same *effect* Moore was originally alluding to.

[1] https://drive.google.com/file/d/0By83v5TWkGjvQkpBcXJKT1I1TTA/view

[2] https://en.wikipedia.org/wiki/Dennard_scaling
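For a sense of the numbers, here is a rough back-of-envelope sketch of the ideal Dennard-scaling relations for a hypothetical 0.7x linear shrink per generation (the scaling factor and the simple C·V²·f power model are illustrative assumptions, not figures from Moore's paper):

```python
# Back-of-envelope sketch of ideal Dennard scaling per process generation.
# Assumes a classic 0.7x linear shrink; real nodes deviate from these ideals.

k = 0.7  # assumed linear scaling factor per generation

area_per_transistor = k ** 2   # ~0.49x area -> roughly 2x transistor density
frequency = 1 / k              # ~1.43x -> the ~40% speed bump per shrink
voltage = k                    # supply voltage scales with dimensions
power_per_transistor = k * voltage ** 2 * frequency  # C*V^2*f ~ k^2
power_density = power_per_transistor / area_per_transistor  # ~1.0, constant

print(f"density gain:        {1 / area_per_transistor:.2f}x")
print(f"frequency gain:      {frequency:.2f}x (~{(frequency - 1) * 100:.0f}%)")
print(f"power density ratio: {power_density:.2f} (unchanged)")
```

The point of the sketch is the last line: under ideal Dennard scaling, each shrink delivered both more transistors and a clock-speed bump at constant power per unit area, which is exactly the coupling that broke down later.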
Instead of an ad, here's a good description of the situation: https://semiengineering.com/why-scaling-must-continue/

TL;DR: the old Moore's Law is dead, but our new systems (AI, GPU, ...) still need a huge number of transistors, beyond what can be made on a single chip today, so scaling is still valuable.
Gordon Moore himself is quoted in 2015, in the Wikipedia article on Moore's Law, as saying that the transistor-count interpretation of Moore's Law will be dead within 10 years.

If something changed and it's not dead, I'd love to hear more about the new processes that are making more transistors possible. I'm working at a chip maker now, but I'm a software guy. My understanding of the problem is that we've reached the optical limits of resolving power for the lithography processes. Trace widths are a tiny fraction of the wavelength of light, and chip area is so large we don't have lenses that can keep the projection in focus at the edges. While there is theoretical potential to get smaller, since gates are still many atoms across, actually building smaller gates has real physical barriers, or so I'm told.

I'd love to hear more about the manufacturing processes in general, and more specifically whether something really has changed this year. Does TSMC have a new process allowing larger dies or smaller traces, or is this article mostly hype?
This article is complete fluff.

There's no _real_ discussion of Moore's law. No new revelations about chip design. You say workloads need to exploit parallelism these days to see increased performance gains? No shit. Putting memory closer to the logic cores is a good idea? Duh. Hell, the author makes the common mistake of conflating AI with ML, because it's clearly illegal for a businessman in any industry not to buzz about "AI".

> by Godfrey Cheng, Head of Global Marketing, TSMC

Yeah, this is fluff.
It depends what you call Moore's Law. For me, Moore's Law was essentially that the cost per transistor was divided by two every 18 months. Nowadays, that's no longer true. But of course, it doesn't mean that progress has stopped.
About the industry at large: lots of changes ahead.

Take a look at this: https://www.hotchips.org/program/

For the first time in a long while they gave the entire first day to non-semi companies: Amazon, Google, Microsoft.

Nobody could've imagined the industry turning this way a decade ago.
It is not the density that I worry about. Judging from TSMC's investor notes and technical presentations, they don't see any problem with 2nm or maybe even 1nm. 3nm is currently scheduled for 2022, and 2nm for 2024. So it isn't so much about the technical side as about achieving those nodes within budget.

The problem is that somewhere along the next 5 years we may see the cost per transistor stop decreasing, i.e. your 100mm² die would be double the price of the previous 100mm² die, assuming the node scaling doubled density.

At which point the cost of processors, whether CPU, GPU or others, becomes expensive and the market contracts, which will slow down the foundries pushing for leading nodes. We could see each hyperscaler designing its own processor to save cost, and we are back to the mainframe era, where these hyperscalers have their own DC, CPU, and software, serving you via the Internet.
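To make that arithmetic concrete, here is a small sketch with made-up wafer-cost and density numbers (purely illustrative assumptions, not TSMC pricing) showing how a density doubling paired with a doubling of cost per mm² leaves the cost per transistor flat:

```python
# Rough cost-per-transistor sketch with illustrative, made-up numbers
# (assumptions for the argument, not real foundry pricing).

def cost_per_mtransistor(wafer_cost_per_mm2, mtransistors_per_mm2):
    """Cost (in $) per million transistors on a given node."""
    return wafer_cost_per_mm2 / mtransistors_per_mm2

# Hypothetical "old" node vs. "new" node: density doubles, but so does
# the cost of each mm^2 of processed silicon.
old = cost_per_mtransistor(wafer_cost_per_mm2=0.10, mtransistors_per_mm2=50)
new = cost_per_mtransistor(wafer_cost_per_mm2=0.20, mtransistors_per_mm2=100)

print(f"old node: ${old:.4f} per Mtransistor")
print(f"new node: ${new:.4f} per Mtransistor")
# Both print $0.0020: the shrink buys no cost reduction, and a fixed
# 100 mm^2 die now costs twice as much while holding twice the transistors.
```

If wafer cost per mm² rises more slowly than density, cost per transistor still falls; the worry above is the point where the two curves cross.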
Can someone here talk about what desktop CPU's might look like for the consumer in 2029?<p>Like, I'm a gamer in 2029 and I'm looking for the equivalent of todays Intel Core i7 or AMD Ryzen. How much faster will it be? How different will it be from today? Etc.