I think you could see this coming as soon as they implemented chain-of-thought in their previous models.<p>Chain-of-thought is well understood to be <i>the</i> way to squeeze performance out of these models. Slow down, go step by step, use more tokens to get slightly better output. This is so useful that in the comparison graphs you see between different models, the best one (the one I want to sell) is often using chain of thought while the others are not. Not always, but companies have been caught using this technique to hype up their models.<p><i>We’ve already seen that CoT (usually!) improves a model’s performance. So does prompting the model with examples of correct question-answer pairs (called in-context examples). But in reports, some models are evaluated with CoT, while others aren’t. The number of in-context examples is often different, and the prompts are almost always different.</i><p><a href="https://asteriskmag.com/issues/07/can-you-trust-an-ai-press-release" rel="nofollow">https://asteriskmag.com/issues/07/can-you-trust-an-ai-press-...</a><p>As soon as OpenAI was desperate for a performance upgrade, they implemented this as an actual feature.
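To make the apples-to-oranges point concrete, here's a minimal sketch (the question and the in-context example are made up for illustration) of the same benchmark item evaluated two ways. A report can quote scores from either prompt and call both "the benchmark":

```python
# Hypothetical benchmark question; not from any real eval suite.
question = "A train travels 60 miles in 1.5 hours. What is its average speed?"

# Variant 1: direct, zero-shot prompt -- no reasoning, no examples.
direct_prompt = f"Q: {question}\nA:"

# Variant 2: CoT prompt with one in-context example that demonstrates
# step-by-step reasoning before the final answer.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
    "How many balls does he have now?\n"
    "A: Let's think step by step. He starts with 5 balls. "
    "2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.\n\n"
    f"Q: {question}\nA: Let's think step by step."
)

print(direct_prompt)
print("---")
print(cot_prompt)
```

Same question, but the second prompt both spends more tokens and steers the model into its higher-accuracy regime, so comparing a score from one against a score from the other tells you very little.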
> The new model seems to prove that longstanding rumors of diminishing returns in training unsupervised-learning LLMs were correct and that the so-called "scaling laws" cited by many for years have possibly met their natural end.<p>The commonly-cited scaling laws[0] <i>predict</i> diminishing returns from scale. It's fairly uncontroversial, for instance, that going from 1 GPU to 2 GPUs gives a larger improvement than going from 101 GPUs to 102 GPUs. Same with, say, computer graphics.<p>I feel scaling laws are getting conflated with the theory of an exponential "hard takeoff" of self-improvement (which isn't particularly well-founded in my opinion).<p>[0]: <a href="https://arxiv.org/pdf/2001.08361" rel="nofollow">https://arxiv.org/pdf/2001.08361</a>
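You can see the diminishing returns directly in the power-law form from the cited paper. A rough sketch, using the approximate parameter-scaling constants reported in Kaplan et al. (the exact numbers don't matter for the shape of the curve):

```python
# Parameter scaling law from Kaplan et al. (2020), eq. for L(N):
#   L(N) ~ (N_c / N) ** alpha_N
# Constants below are the paper's approximate fitted values; the point
# here is only the qualitative shape, not precise loss predictions.
def loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Predicted test loss for a model with n_params parameters."""
    return (n_c / n_params) ** alpha

# The same additive increment (+1B params) buys far less at larger scale:
gain_small = loss(1e9) - loss(2e9)      # going from 1B to 2B params
gain_large = loss(101e9) - loss(102e9)  # going from 101B to 102B params
print(f"1B -> 2B:     loss drops by {gain_small:.4f}")
print(f"101B -> 102B: loss drops by {gain_large:.4f}")
```

With these constants the first increment reduces predicted loss by roughly two orders of magnitude more than the second, which is exactly the "1 GPU to 2 GPUs vs. 101 to 102" intuition. That's a law of diminishing (but never zero) returns baked into the curve from the start, not a surprise ending.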
So where's that promised exponential improvement?<p>Oh right, it doesn't exist. The only things that increase exponentially are the costs, and no real, sustainable business model is in sight. AI companies have no moat and are, ironically, under constant threat of being made redundant themselves.<p>Businesses keep running around with genAI as a solution in search of a problem, and can barely find any to use it on.<p>We won't be getting what sama promised, that much is clear, I'd say. Thankfully.<p>I do think it's time to prepare for the post-AI-bubble age. Big changes are coming, after all: hundreds of billions have been wasted on what's essentially a toy.