A metric that really puts things in perspective: take a common consumer CPU clocked at 3.4 GHz. That means it executes 3,400,000,000 cycles per second. Divide the speed of light by this number and you get approximately 0.088 m.

In the time your standard desktop CPU takes to finish a single cycle, light travels only about 9 centimeters.
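A quick back-of-the-envelope check of that arithmetic, using only the figures quoted above plus the defined value of the speed of light:

```python
# How far does light travel in one clock cycle of a 3.4 GHz CPU?
SPEED_OF_LIGHT = 299_792_458   # metres per second (exact, by definition)
CLOCK_RATE = 3.4e9             # 3.4 GHz consumer CPU

distance_per_cycle = SPEED_OF_LIGHT / CLOCK_RATE
print(f"{distance_per_cycle:.3f} m per cycle")   # ~0.088 m, i.e. about 9 cm
```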
The answer to this question is yes. People like to say the real issue is economics, not physics, but the inability to make chips much smaller economically is itself a reflection of physical constraints on the processes we currently use to manufacture them, particularly lithography. And as of right now there is no clear successor to those processes that will let chips get smaller for cheaper.
Right now, the laws of economics are a bigger problem than the laws of physics. Field-effect transistors have been shown to work at 5 nm and even 3 nm. However, the new lithography technologies needed to reach those resolutions cheaply are nowhere near ready.
In June of this year HP announced its plan to build "The Machine". Regardless of how feasible the project is, I think they are right to point out that memory is the current bottleneck in computer engineering. We don't need faster processors. Focusing on the size of transistors, which are already insanely small when you think about it, may be a mistake.
The question I want to ask is: are generic CPUs now fast enough that people no longer need faster CPUs? I have an i7-4771 in my desktop (bought instead of the 4770K because I wanted TSX... thanks Intel ;), and I can't really imagine much use for an even faster CPU unless I'm gaming or doing heavy compute work.
Accelerator-based computing is a tell that this is already happening. Shrinking everything down no longer brings the big gains in performance per watt on its own, so chip manufacturers put as many ALUs as they can on the same die area, curbing in the process many of the built-in management features that our software programming models have been built on over the decades. Hardware is still getting faster at Moore's-law rates, but *only* given constantly adapting software, i.e. "The Free Lunch Is Over" [1].

[1] http://www.gotw.ca/publications/concurrency-ddj.htm
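To make the "adapting software" point concrete, here is a minimal illustrative sketch (NumPy stands in for any accelerator-friendly programming model; the array size and timings are arbitrary and will vary by machine): the same reduction written element-at-a-time, and again in a form the runtime can hand to wide vector units in one shot.

```python
import time
import numpy as np

N = 5_000_000
xs = np.random.rand(N)

# Scalar, one-element-at-a-time style: the old "free lunch" model that
# no longer gets faster on its own.
start = time.perf_counter()
total = 0.0
for x in xs:
    total += x * x
print(f"python loop: {time.perf_counter() - start:.3f} s, sum = {total:.2f}")

# The same reduction expressed so it can be fed to wide ALUs / vector units.
start = time.perf_counter()
total_vec = np.dot(xs, xs)
print(f"vectorized:  {time.perf_counter() - start:.3f} s, sum = {total_vec:.2f}")
```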
They say we can't get smaller than an atom, but electrons are smaller than an atom, and we don't even have to use just their charge: we can also exploit properties like spin and momentum to get more values out of them, i.e. spintronics. Then of course there are photons as well. The article itself mentions that we already use light to etch features smaller than the wavelength of that light. Sometimes you need a big read/write head or the like, but then you can just push magnetic domains past it along a wire, etc., so that isn't necessarily the size of a unit of computation in the device if we move beyond transistors.
Depends on exactly what you mean. You can keep adding cores until the cows come home. And I don't really buy the "we don't know how to use all those extra cores" argument; multi-threaded code isn't the rocket science it's portrayed to be in the press.

One thing that may become practical is die stacking, depending on what they can do about the extra heat.
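For what it's worth, a minimal sketch of the kind of "boring" parallelism I have in mind, using only Python's standard library (the prime-counting job and chunk sizes are just placeholders; a process pool is used rather than threads because CPython's GIL keeps threads from helping with CPU-bound work):

```python
# Spread an embarrassingly parallel job across the available cores.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in the half-open range [lo, hi) by trial division."""
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Split the work into chunks, one task per chunk.
    chunks = [(i, i + 100_000) for i in range(0, 800_000, 100_000)]
    with ProcessPoolExecutor() as pool:   # one worker per core by default
        total = sum(pool.map(count_primes, chunks))
    print(total)  # number of primes below 800,000
```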
And now people are rethinking processor architecture: http://millcomputing.com/docs/
No. Every single time someone supposes that a piece of technology is approaching its limits, the answer is no. Technology will continue to improve and advance as we make new discoveries. The simple fact is that we do not know the future, so pretending we know what things will be like in 20 or 50 years is pointless. What is this obsession with taking today's technological knowledge and assuming things won't drastically change? Nobody should pretend to know what the future holds.