
Ask HN: Reading on esoteric side of “tick-tock” silicon business strategy?

2 points | by entrepy123 | over 2 years ago
Question: Can HN please recommend specific things to read to understand any potentially lesser-known but interesting history of the "tick-tock" strategy famously used for silicon chip improvements and rollouts over the past decades?

# Background

The popularly known version is something along the lines of: like clockwork, CPU improvement cycles consist of "tick" (shrink the manufacturing process a little), then "tock" (improve the microarchitecture), then repeat.

This obviously resulted in a long-term trend of upgrades and capability increases.

# What I want to know

Is there documented history from the silicon industry of esoteric or lesser-known business and engineering stories about "tick-tock"? Specifically, could it have been possible, or at least discussed, to "move ahead" more rapidly, but instead it made (far) more business and political sense to inch things along?

# Notes

- I anticipate a popular refrain saying that you have to innovate little by little, by the very nature of the improvements.

- I also anticipate that even a "broad and deep" traditional understanding/education of computer engineering and its history might not uncover this particular topic.

- Searching the web, it is hard to find coverage of this exact topic, even just to understand it better broadly.

Does HN understand the question, and, if so, has this topic been covered somewhere? Any books, oral histories, articles (academic or general), websites, or even documentary videos on the subject would be appreciated.

Thanks in advance!

# Aside/clarification

This question is only about the hardware/chip/CPU aspects. There are related issues in software, systems, augmentation, etc. that I understand somewhat more about, regarding "missed/impossible leaps" and computer-human evolution/uptake, but this question is not really about those.

EDITS: 1) Minor wording update to specify focus area more clearly. 2) Improved formatting for readability.

2 comments

PaulHoule | over 2 years ago
The big story is that improving manufacturing nodes has had diminishing returns since 2005 or so.

The great revolution in personal computing performance came when the IBM PC got thoroughly entrenched. Unlike the other 1980s micros, the CPU clock was not locked to the video system, so it was possible to raise the CPU speed incrementally. Up until 2005, each die shrink meant: (i) more transistors on the die, (ii) faster transistors on the die, (iii) lower power consumption per unit of work, and (iv) lower cost per transistor. Note that the miniaturization (i) counteracted (iii) when it came to volume or area power density -- I remember visiting a data center around that time which still had some Sun SPARC and IBM POWER servers alongside newer x86 servers, and the remarkable thing was that the x86 servers were much warmer to be around because they packed in so many CPU cores working so hard.

When all those factors are in play, the next computer you buy will be dramatically better than any computer you've ever owned.

In 2005 the free ride stopped for (ii), so the industry adopted the secondary strategies of multicore, SIMD, and stream processing (GPUs and friends). "Tick-tock" was a thing in this era, up until the 14nm process came along. 14nm was seriously late, but 10nm was much later and marked the beginning of the end of (iv).

Since 10nm was so late, it was necessary to keep improving the microarchitecture at 14nm (is that "tock tock tock" or "tick tick tick"?).

It is getting increasingly shambolic as (iv) goes to its death. Advanced packaging with multiple dies can make up for the slowdown in (i), but it does little to help (iv). So the pattern today is that you can buy the best CPU or GPU you ever had: it is twice as powerful as the last one and roughly twice as expensive as the last one.

In this market, Intel is decoupling the microarchitecture from the process, so they might port a design from 10nm to 14nm or to 7nm; they might even farm the fabrication out to TSMC, which means dealing with a whole different set of design rules.

Note that Intel really led the world in CPU volume in the 2000s and 2010s, and that meant they could afford manufacturing superiority. Off-brand CPUs such as MIPS, SPARC, the DEC Alpha, and IBM's POWER could not keep up. It wasn't until smartphone adoption caused the volume of ARM chips to explode that vendors like TSMC pulled ahead in terms of volume, ability, and *willingness* to invest.

You have to remember that the 14nm era was Intel's most profitable; in the short term, the stalled development of future nodes was no disaster to their top or bottom line.
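The performance-per-dollar point above can be sketched numerically. This is a toy illustration using only the comment's rough "2x as powerful, 2x as expensive" figures, not real pricing data: while (iv) held, a node shrink improved performance at roughly constant cost, so performance per dollar compounded; once (iv) ends, it goes flat.

```python
# Toy model of the two scaling regimes described above.
# All numbers are illustrative, taken from the comment's rough 2x figures.

def perf_per_dollar_gain(perf_gain: float, cost_gain: float) -> float:
    """Relative change in performance-per-dollar across one generation."""
    return perf_gain / cost_gain

# Classic era (cost per transistor still falling): ~2x performance at
# roughly flat cost, so performance per dollar doubles each generation.
classic = perf_per_dollar_gain(2.0, 1.0)

# Current era (per the comment): ~2x performance at ~2x cost, so
# performance per dollar stays flat.
current = perf_per_dollar_gain(2.0, 2.0)

print(classic, current)  # 2.0 1.0
```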
_448 | over 2 years ago
If I remember correctly, tick-tock was an Intel strategy, probably followed by others later. Because Intel owned both design and manufacturing, they came up with this lockstep approach to align design and manufacturing cadences.