I think Ed maybe buries the lede a little bit with all the ranting. About halfway through the piece he drops this chart[1], which personally left me shocked.<p>To my knowledge, 10,000% revenue growth over seven years just isn't a reality once you're at that level of volume. Even asking 4o about this projection gets an acknowledgment of the reality:<p>"OpenAI’s projection to grow from approximately $1 billion in revenue in 2023 to $125 billion by 2029 is extraordinarily ambitious, implying a compound annual growth rate (CAGR) exceeding 90%. Such rapid scaling is unprecedented, even among the fastest-growing tech companies."<p>Am I missing something? I like OAI and I use ChatGPT every day, but I remain unconvinced of those figures.<p>[1] <a href="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcTvV_KScCMt6maPCLAD5qACLXB0A9UFlxBy7vmR4oO6y99lGkJJOfKhOCB9a3-GeZrY83n6xFepHBUxmf4fUXpDuYAypFPjGBTV-i1B8xYCkhZ1JaIaiAkpUazxVVqwjbiVJF9?key=5iQ8LZzyYfTKSGqXLxb-sOQh" rel="nofollow">https://lh7-rt.googleusercontent.com/docsz/AD_4nXcTvV_KScCMt...</a>
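For what it's worth, here's a quick back-of-envelope check of the implied growth rate. The $1B and $125B endpoints are taken from the quote above; treating 2023→2029 as six compounding years is my assumption:

```python
# Back-of-envelope check of the quoted projection.
# Assumption: ~$1B revenue in 2023 growing to ~$125B by 2029,
# treated as six compounding years (2023 -> 2029).
start_revenue_b = 1.0     # $B, 2023 (from the quote)
end_revenue_b = 125.0     # $B, 2029 projection (from the quote)
years = 6                 # assumption; count it as 7 and you get ~99%/yr

cagr = (end_revenue_b / start_revenue_b) ** (1 / years) - 1
total_growth_pct = (end_revenue_b / start_revenue_b - 1) * 100

print(f"Implied CAGR: {cagr:.1%}")                # ~123.6% per year
print(f"Total growth: {total_growth_pct:,.0f}%")  # ~12,400% over the period
```

Even if you count it as seven years instead, the implied rate is still roughly 99% per year, so 4o's "exceeding 90%" holds either way.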
> I apologize, this is going to be a little less reserved than usual.<p>It's a bit of an open question which comes first: the AI bubble bursting, or Ed Zitron exploding from pure indignation.
> sky rains blood<p>This reminds me of Neon Genesis Evangelion.<p>Of course, no one realized (at least publicly) that it is a metaphor for "everyone claps at the end" (also the ending of the original series).<p>The sound of rain sounds like an audience clapping. "Blood rain" means real claps, not some fake, condescending simulacrum of them.<p>"So this is how liberty dies... with thunderous applause" is also a reference to the same metaphor.<p>Both movies relate to the themes of sacrifice, worthiness, and humanity's survival.<p>Are you guys too much into giant robots to even notice these things?
I think the author might need a reality check. In almost every company, the use of AI is becoming more and more valuable. Even if progress in models halts entirely, the current state is already too valuable to walk away from. I have talked to multiple people at multiple companies, and LLMs are becoming indispensable for so many things. ChatGPT may or may not conquer the world, but I don't see enterprise usage decreasing (and I am not talking about customer-facing LLM usage, which might be very stupid).
As other commenters have rightly pointed out - agents being a good product is orthogonal to agents being profitable. If you can start securing big contracts with enterprises by convincing them that the AI revolution is coming, that's enough for profit.<p>It will only be years down the road when people start realizing that they're spending millions on AI agents that are significantly less capable than human employees. But by that point OpenAI will already be filthy rich.<p>To put it a different way - AI is currently a gold rush. While Nvidia sells shovels, OpenAI sells automated prospecting machines. Both are good businesses to be in right now.
Seriously, this opening!<p>> people are babbling about the "AI revolution" as the sky rains blood and crevices open in the Earth, dragging houses and cars and domesticated animals into their maws. Things are astronomically fucked outside,<p>It took me 20+ ranty paragraphs to realise that this guy is not, actually, an AI doomer. Dear tech writers: there are camps and sides here, and they're all very deeply convinced of being right. Please make clear which one you're on before going off the deep end in anger.
OpenAI is a bubble. The AI industry is a bubble. AI's value is real (and probably underestimated). It would take at least a decade for all the relevant industries to incorporate the progress made so far.<p>If Ed feels this strongly, he should short NVDA and laugh all the way to the bank when it pops.
Might be a bit naive, but by the time 2029 comes around, and AI companies have started 'monetising free users', won't a lot of people/companies have open-source models tuned and tailored to their needs running locally, no provider required?
OpenAI could very well be to this AI boom what Netscape was to the dotcom bubble. Even post dotcom crash, a lot of lasting value remained—and I believe the same will happen this time too.
"The costs of inference are coming down: Source? Because it sure seems like they're increasing for OpenAI, and they're effectively the entire userbase of the generative AI industry!"<p>Seems to me like Ed is making a very elementary mistake here. I don't think anyone has ever claimed the total amount of money spent on inference would decrease. Claims about inference cost are always about cost per quality of output (admittedly there is no consensus on how to measure the latter). If usage goes up faster than cost per unit goes down, then you spend more, but the point is that you're also getting more.
OpenAI has a massive brand advantage because the general public equates their products with AI.<p>Even if they don't 100% figure out agents, they are now big enough that they can acquire those that do.<p>If the future is mostly about the app layer, then they'll be very aggressive in consolidating, the same way Facebook did with social media; see Windsurf, for example.
> I don't know why I'm the one writing what I'm writing, and I frequently feel weird that I, a part-time blogger and podcaster, am writing the things that I'm writing.<p>I can't tell whether this man actually believes he is the only one critiquing AI. I mean, I can barely walk two feet without tripping over anti-AI blogs, posts, news articles, YouTube videos, or comments.
The pivot of the article is this, at the end: "There are no other hypergrowth markets left in tech."<p>That's the key.<p>Tech is, at the moment, the only growth engine for capitalism, the one sustaining the whole world economy (before it, that role was played by "just" IT [2015], credit [2008], oil [1973], and coal, all of which have shown their limits in sustaining continuous growth).<p>And AI is the only growth engine left within tech right now, supported by GPU/TPU/parallelization hardware.<p>Given what's at stake, if/when the AI bubble bursts and there's no alternative growth engine to jump to, the domino effect will not be pretty.<p>EDIT: clarified.
> Generative AI has never had the kind of meaningful business returns or utility that actually underpins something meaningful<p>Even if we assume this is true, it’s worth asking: did the promised efficiency of the advertising economy ever need to be “real” to completely transform society?
I don't agree with Ed's general style of rhetoric, but every single thing he says is important, and these are active topics of avoidance and hand-waving from language model advocates, who are obviously also upset by what he says.
I don't have the energy to dig back through this person's old posts, but I'm curious: two years ago, did they correctly project that we'd be where we are now? Because if not, why would I care about their insufferable ranting about where things are going from here?<p>It’s easy to be a critic and declare that every new fad is overhyped, and you’ll mostly be right. But when you’re wrong, damn do you look foolish.
Current AI could have produced this article in under 5 minutes.<p>Does knowing a human wrote this article over days of mulling increase or decrease its value?