I'm usually skeptical of doomer articles about new technology like this one, but reluctantly find myself agreeing with a lot of it. While AI is a great tool with many possibilities, I don't see the alignment between what many of these new AI startups are selling and what the tools actually deliver.<p>It makes my work more productive, yes. But it often slows me down too. Knowing when to push back on the response you get is often difficult to get right.<p>This quote in particular:<p>><i>Surveys confirm that for many workers, AI tools like ChatGPT reduce their productivity by increasing the volume of content and steps needed to complete a given task, and by frequently introducing errors that have to be checked and corrected.</i><p>This mirrors my work as a SWE. It <i>does</i> increase productivity and can reduce lead times for task completion. But it requires a lot of checking and pushback.<p>There's a large gap between increased productivity in the right field in the right hands vs copying and pasting a solution everywhere so companies don't need workers.<p>And that's really what most of these AI firms are selling: a solution to automate most workers out of existence.
I do worry about the viability of OpenAI in particular. So much of its talent went to other firms, which then built up amazing capabilities, like Anthropic with Claude. Then there's also the threat of open-source models like DeepSeek v3.1 and soon DeepSeek R2, while at the same time OpenAI is raising its prices to absurd levels. I guess they are trying to be the Apple of the AI world... maybe...<p>That said, I expect protectionist policies will be enacted by the US government to protect them, and also X.AI/Grok, from foreign competition, in particular Chinese.
It's really hard to accurately assess the possibilities granted by LLMs, because they just <i>feel</i> like they have so much potential.<p>But ultimately I think Satya Nadella is right. We can speculate about the potential of these technologies all we want, but they are now <i>here</i>. If they are of value, then they'll start to significantly move the needle on GDP, beyond just the companies focused on creating them.<p>If they don't, then it's hype.
This journalist, Ed Zitron, is very skeptical of AI and his arguments border on polemic. But I find his perspective interesting - essentially, that very few players in the AI space are able to figure out a profitable business model:<p><a href="https://www.wheresyoured.at/core-incompetency/" rel="nofollow">https://www.wheresyoured.at/core-incompetency/</a>
I can't say for <i>all</i> possible implementations, but IMO (from industry experience) the content and consumer-focused benefits of AI/LLMs have been very much over-hyped. No one really wants to watch an AI-generated video of their favorite YouTuber, or pay for an AI-written newsletter. There is a ceiling to the direct usefulness of AI in the media industry, and the successful content creators will continue to be personality-driven and not anonymously generic.<p>Whether that also applies to B2B is a different question.
The end result of this wave looks increasingly like it will get us an open-web blogspam apocalypse, better search / information retrieval, and better autocomplete for coders. All useful (well, useful to bloggers/spammers at least), though not trillions of dollars in value generated.<p>Until a new architecture / approach takes root, at least.
Confirming Ed Zitron's careful analysis of the situation with SoftBank, Microsoft, and OpenAI: <a href="https://www.wheresyoured.at/optimistic-cowardice/" rel="nofollow">https://www.wheresyoured.at/optimistic-cowardice/</a>.
"Outside of NVIDIA, nobody is making any profit off of generative AI, and once that narrative fully takes hold, I fear a cascade of events that gores a hole in the side of the stock market and leads to tens of thousands of people losing their jobs."<p>You'll note that the Prospect story mentions Zitron multiple times.
I think the biggest evidence of the bubble can be seen in job postings. When there is little diversity in the skills being asked for, that is a very dangerous position for a sector. Imagine a town where every job posting was related to the horseshoe industry, with Henry Ford approaching on the horizon. Not only would there be little work available after a disruption, there is little available right now to get you experience in some different framework.<p>To say nothing at all about what the White House is doing, which makes everything more precarious as companies get flighty due to economic instability.
I tend to agree with the article, but I do wonder if the operating costs of AI companies will decrease if they incorporate the more efficient methods of R1 and stop building so many fucking data centers.<p>I also expect AI to incorporate ads at some point, once they exit the dreamy phase that early tech products always go through. I know Sam says he doesn't want to, but they only have so much runway. Eventually they will rationalize their ads as fundamentally different - a consumer assistant, if you will.
I worry about the cultural shift in Tech to "what have you done for me lately" over patient innovation. Due to no more ZIRP, due to a shift to very top-down management, narcissistic CEO bros, and the new focus on pleasing investors above all else... There's little appetite for actual innovation, which would require, IMO, a different culture and much more trust between management and employees. So instead, there are top-down AI death marches to "innovate" because that's the current trend.<p>But who is DEFINING the trend? Who is actually trying to stand out and do something different?<p>There are glimmers of hope in tiny bootstrapped startups now. That seems to be the sweet spot: not needing to obsess about investor sentiment, and instead focusing on being lean, with a small team and the trust to actually try new things. Though this time with a focus on early profitability, so they can dictate terms to investors, not the other way around.
I still don’t buy that it is a bubble. Unlike other bubbles like crypto, I see real-life impact and utility.<p>I do think OpenAI is in deep trouble. They’re ahead, but not nearly enough to justify their lofty position.
"It's evident that while AI presents transformative potential, the surrounding financial speculation warrants caution. The challenge lies in distinguishing between genuine technological advancements and market hype. As the industry evolves, a balanced approach that values innovation while remaining vigilant about speculative investments will be crucial to navigate the AI landscape effectively."
{comment by ChatGPT after reading the article and all the comments here}
That's a great article and a lot of the comments seem to resonate with the article. But somehow this is disappearing from the front page faster than anything else, it's hard not to think that "this is bad for business, so it must go"...
If I look at Nvidia stock from mid-June of last year, or the IYW index (Apple, Microsoft, Facebook, Google) - NVDA is down 10%, IYW is down maybe 2-3%. It doesn't feel like I'm in the middle of a huge bubble like, say, the beginning of 2000.
>Marcus ... bet Anthropic CEO Dario Amodei $100,000 that AGI would not be achieved by the end of 2027.<p>Has anyone seen any mention of this? I couldn't find it googling.
In the last few years we have seen unprecedented progress in AI. Relatively recently, LLMs like ChatGPT were regarded as pure science fiction. Current text-to-image models? Unthinkable. And then people <i>still</i> try to argue that it is just a bubble. People have the concerning tendency not to learn from evidence they previously judged as being extremely unlikely. The evidence is now clearly indicating that humanity is on the cusp of developing superhuman general intelligence. The remaining time is probably measured in years rather than decades or centuries.
> But the reaction to Stargate was muted as Silicon Valley had turned its attention west. A new generative AI model called DeepSeek R1, released by the Chinese hedge fund High-Flyer, sent a threatening tremor through the balance sheets and investment portfolios<p>You gotta love how this paragraph reads like an unfolding battle scene from a Tolkien novel.
There's still loads of money to be made in being the company hosting models on your fleet of GPUs. Open-source models and training paradigms have definitely undercut the proprietary-model moat, but you need a good chunk of compute to run these models, and not everyone has, or wants, that compute themselves.
I think AI is a bubble in the same sense that the airline industry is.<p>The airline industry is notoriously hard to make profitable (I picked that up from The Intelligent Investor).<p>So just because something is useful doesn't necessarily mean that it's profitable, yet the VCs are funding it expecting such profitability, without any sign of true profitability so far.<p>I mean, yes, AI makes some money, but most of the profitability doesn't come from real use cases; rather, the majority comes from the "just slap an AI sticker on it to increase your company valuation" effect, and that's satisfying the VCs right now. But they want returns as well.<p>And if, by definition, their return is that a bigger fool / bigger VC is going to fund the AI company at a higher valuation with little to no profitability, then THAT IS A BUBBLE.<p>But being a bubble doesn't mean it has no use cases. AI is going to be useful; it's just not going to be THAT profitable, and the current profits show the characteristics of a bubble.
It requires only a little bit of imagination to figure out which industries are most likely to be disrupted by AI. If you calculate the size of those industries, the upside is huge. Let's pick transportation and healthcare: we already have Waymo, and probably Tesla in the future, offering competitive autonomous drivers, and our current LLMs already surpass doctors in diagnostic accuracy. So why, I ask, why would you possibly think those industries won't be disrupted? Do you really think people will stick with mediocre doctors if they can use AI? Or that they will pay a premium for a human driver? Come on.
We appreciate China for its strong push toward open-source AI. Without models like DeepSeek and Qwen, the U.S. was set to dominate AI with closed-source systems, charging tens of billions in rent every month while deciding who gets access based on politics.<p>"Hey, Eritrea, you're authoritarian—you can't use our democratic AI until you democratize."
"Hey, Saudi Arabia and Qatar, you're not authoritarian—you can have our AI."<p>Once again, thank you, Chairman Xi, for saving us from this nonsense.