The reason CPUs aren't getting any faster is only tangentially mentioned in the article. Yes, it is heat dissipation. But why, then, did they get faster for so many decades?<p>As the process size shrinks, you can crank up the clock speed while keeping total heat dissipation constant. But heat density depends on the voltage, the resistance, and the fraction of time the transistors spend partially on or off (we wish they were perfect switches, but they aren't really). Transition time is roughly fixed, so as you switch faster, unless you can lower the resistance or voltage (which requires changing your materials), the transistors spend a larger fraction of each cycle in that partially-on state. That means your heat density rises to the point where you are just south of burning things out.<p>You make an engineering decision about the reliability you want in your chips, then calculate or test how high a heat density you can tolerate. Unless you change your materials to run at lower voltage, invent new ways to move heat away faster, or use materials that conduct heat better, you aren't raising the heat density or the clock rate. But you can still make the transistors smaller and use less total power for the same amount of computation.<p>This is why reversible computing (gives off less heat), diamond substrates (much higher thermal conductivity), microfluidic channels (moves heat away faster), and parallelism (larger chips = more computation) are being explored. And only the last one is practical THIS year.
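The scaling argument above can be sketched with the standard first-order CMOS dynamic-power model, P ≈ αCV²f (this is a textbook approximation, not something from the article; the activity factor, capacitance, voltage, and frequency values below are illustrative):

```python
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """First-order CMOS dynamic switching power, in watts: P = a*C*V^2*f."""
    return alpha * c_farads * v_volts**2 * f_hz

s = 0.7  # one process generation: ~0.7x linear shrink (illustrative)

# Baseline chip (made-up but plausible numbers).
p_old = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=1.2, f_hz=2e9)

# Classic constant-field scaling: capacitance and voltage shrink by s,
# frequency rises by 1/s. Power per transistor drops by s^2 -- but area
# also drops by s^2, so power *density* stays flat. Faster AND no hotter.
p_scaled = dynamic_power(alpha=0.1, c_farads=1e-9 * s,
                         v_volts=1.2 * s, f_hz=2e9 / s)

# What actually happened: voltage stopped scaling down. Same shrink and
# speedup, but V stays at 1.2 V. Power per transistor is unchanged while
# area fell by s^2, so power density rises by 1/s^2 (~2x) per generation.
p_stuck = dynamic_power(alpha=0.1, c_farads=1e-9 * s,
                        v_volts=1.2, f_hz=2e9 / s)

print(f"baseline: {p_old:.3f} W, scaled: {p_scaled:.3f} W, "
      f"V stuck: {p_stuck:.3f} W")
```

With voltage scaling, per-transistor power tracks area and heat density holds steady; once voltage is stuck, every shrink-plus-speedup step roughly doubles the heat density, which is exactly the wall the comment describes.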