Badly editorialized title.

Regarding the substance of the article: a curve through the 3 data points (1: 100x, 2: 25x, 3: 25x) can be fit in lots of different ways besides "growing 30x per generation" — a quick sketch of two such fits is below.
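Illustrative only: the per-generation ratios are the three quoted above, and the "declining ratio" model is my own assumption, not something from the article — the point is just that a constant factor and a shrinking factor both pass through three points but extrapolate very differently.

    import numpy as np

    ratios = np.array([100.0, 25.0, 25.0])  # observed per-generation growth factors

    # Fit 1: a constant growth factor (geometric mean of the observations)
    constant = ratios.prod() ** (1 / len(ratios))

    # Fit 2: a growth factor that keeps shrinking (linear fit of log-ratio vs. generation)
    gens = np.arange(len(ratios))
    slope, intercept = np.polyfit(gens, np.log(ratios), 1)

    for step, next_gen in enumerate(range(len(ratios), len(ratios) + 3), start=1):
        declining = np.exp(intercept + slope * next_gen)
        print(f"gen+{step}: constant fit ~{constant:.0f}x, declining fit ~{declining:.1f}x")

Run it and the constant fit keeps predicting ~40x per generation while the declining fit drops toward ~10x, ~5x, ~2.5x — same three data points, wildly different futures.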
These are all the same arguments that wrongly predicted DNA sequencing would overtake hard drive storage capacity.

People aren't going to do truly uneconomic things just to scale language models exponentially.
Interesting pull quote:

"GPT-4 needed about 50 gigawatt-hours of energy to train. Using our scaling factor of 30x, we expect GPT-5 to need 1,500, GPT-6 to need 45,000, and GPT-7 to need 1.3 million."
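For what it's worth, the arithmetic behind those numbers is just 50 GWh compounded by 30x per generation (the post-GPT-4 model names are the article's hypotheticals):

    # 50 GWh scaled by 30x per generation, per the quoted passage
    energy_gwh = 50.0  # GPT-4 training energy
    for model in ["GPT-5", "GPT-6", "GPT-7"]:
        energy_gwh *= 30
        print(f"{model}: {energy_gwh:,.0f} GWh")
    # GPT-5: 1,500 / GPT-6: 45,000 / GPT-7: 1,350,000 (~1.3 million) GWh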