> However, I do have the ability to read publicly-available data,<p>Maybe, but based on the egregious errors the author has made in previous articles, they probably don't have the ability to understand or reason about any of the data they read. Also note that despite what's implied by this statement, most of this article is not sourced; it's just the opinions of the author, who admits they have no qualifications.<p>I didn't read the entire gish gallop, but spot-checked a few paragraphs here and there. It's just the kind of innumerate tripe that you should expect from Zitron based on their past performance.<p>> Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.<p>You can't reduce the cost of anything by more than 100%. At that point it's free.<p>But let's consider the author's own numbers: $4B in revenue, $4B in serving costs, $3B in training costs, $1.5B in payroll. To break even at the current revenue, OpenAI needs to cut its serving and training costs by about 66% ($1.3B+$1B+$1.5B<$4B), not by "thousands of percent".<p>> As a result, OpenAI's revenue might climb, but it's likely going to climb by reducing the cost of its services rather than its own operating costs.<p>... Sorry, what?<p>Reducing operating costs does not increase revenue. And I don't know how the author thinks that reducing the cost of services would not reduce operating costs.<p>> OpenAI's only real options are to reduce costs or the price of its offerings. It has not succeeded in reducing costs so far, and reducing prices would only increase costs.<p>Reducing prices does not increase costs.<p>> I see no signs that the transformer-based architecture can do significantly more than it currently does.<p>So, here's a prime example of the author basing the "analysis" on them personally "seeing no signs" of something they have no expertise to evaluate. 
There's no source for this claim, and it's pretty crucial for their conclusions that transformers have hit a wall.<p>> While there may be ways to reduce the costs of transformer-based models, the level of cost-reduction would be unprecedented,<p>But for a given quality of model, haven't the inference costs already gone down by like 90% this year?<p>> particularly from companies like Google, which saw its emissions increase by 48% in the last five years thanks to AI.<p>It should be pretty obvious to somebody who can read publicly available data that all of the increase over 5 years can't be attributed to AI.
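<p>For what it's worth, the break-even arithmetic above checks out in a few lines (figures are the ones the comment quotes from the article, in billions of dollars; the uniform ~66% cut to serving and training, with payroll untouched, is an assumption for illustration):<p>

```python
# Figures quoted in the comment, in $B.
revenue = 4.0    # reported revenue
serving = 4.0    # cost of serving models
training = 3.0   # training costs
payroll = 1.5    # payroll

# Current total costs: $8.5B against $4B revenue.
total_costs = serving + training + payroll

# Cut serving and training by ~66% each; leave payroll as-is.
cut = 0.66
new_costs = serving * (1 - cut) + training * (1 - cut) + payroll

print(total_costs)          # 8.5
print(round(new_costs, 2))  # 3.88 -- just under the $4B revenue
```

<p>So a ~66% reduction in serving and training costs is enough to break even at current revenue; nothing like "thousands of percent" is needed.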