I can see this being bad for OpenAI, Microsoft, and Anthropic, for sure, or at least for their current valuations, but I just don't understand why people think this is popping the AI bubble.

That view rests on several bad assumptions: that the state of the art is not progressing, that we have exhausted all ideas about how to make models better, that the only way to improve models is to throw more GPUs at them, that there won't be a significant market for actually running the LLMs, and most importantly, that we have somehow exhausted all of the applications for AI.

Even if Deepseek / Deepseek-equivalent models were the limit of what we can do with models, and OpenAI and Anthropic went completely bust, we would still have at least ten years of developing the most effective applications of them and combining them with other tools to improve productivity.

This feels like a case where some people are too high on AI, and others are overreacting.