Moore's law is more of an implementation detail than anything else. Nobody really cares about transistors. What we all care about is computational density, computational efficiency, and exponential progress, so the interesting metrics are something like flops/m^3 and flops/Watt (for instance, a 250 W accelerator sustaining 10 teraflops comes out to 40 gigaflops per Watt). The nice thing about Moore's law was that just by shrinking transistors we got chips that were both smaller and more energy efficient.

I think economic forces will continue to drive progress, and probably exponential progress like we've enjoyed in the past, for at least another decade. Probably several.

The way this will happen is through a paradigm shift. To the layman that sounds strange, but to anyone in the business it should be expected: we've already been through quite a few. The first computing devices were mechanical, then they were built from electrical relays, then came vacuum tubes, and finally we entered the era of the transistor. We can argue about what the next paradigm will be, but I have no doubt there will be one. And soon.
How will this affect the programmers reading HN today?

For programming where high performance really matters (throughput, not latency), adoption of parallel frameworks/languages like CUDA has been a no-brainer, and people have embraced the new parallel programming model (a minimal sketch of what that looks like follows the postscript below).

The major speedups in consumer chips today come from adding parallelism (e.g., Intel's AVX vector extensions), versus the clock-speed increases of a decade ago and earlier.

So what does HN think about the mainstream effects? Will there be a noticeable migration to less computationally wasteful or less sequential languages/frameworks as mainstream applications grow hungrier for compute resources?

p.s.
(I'm assuming that the average application's future equivalent will need more and more computational resources, either because of more load from customers using it, or because it gets more complex and ambitious as the space of applications itself grows more complex.)
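Since I mentioned CUDA above, here's a minimal sketch of what that parallel model looks like, for anyone who hasn't touched GPU code. It's the standard SAXPY example (y = a*x + y), using unified memory to keep it short; illustrative only, not tied to any particular toolkit version:

    #include <cstdio>
    #include <cuda_runtime.h>

    // SAXPY: y = a*x + y, with one GPU thread per element instead of a loop.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                      // the grid can overshoot n, so guard it
            y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));  // unified memory keeps the sketch short
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // 256 threads per block, enough blocks to cover all n elements
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);    // expect 4.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

The point isn't the arithmetic. It's that you express the computation as a million independent threads and let the hardware decide how many actually run at once; that's the same shape AVX-style SIMD pushes sequential code toward, just at a different granularity.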
This article doesn't hit on one of the directions we can go to continue Moore's law: namely, upwards. Intel demonstrated a 3D Pentium 4 back in 2004, the High Bandwidth Memory packages on AMD's latest chips are 3D, Nvidia is ramping up its own HBM technology for its next GPUs, and there are a ton of active research programs on how to keep yields high as dies are stacked. It may be the end of Moore's law on a flat surface, but there are still quite a few places to go.