Please read the article before posting comments, or at least read a summary. The article argues that GPT-4o-style models are reaching their peak and are being replaced by o1-style models. It does not make value judgements about the usefulness of existing AI or the business viability of AI companies.
I started skimming about a third of the way through this article. It looks to be just a fluff piece about how cool the old AI models were and how they pale in comparison with what's in the works, with about 2 to 5 lines of shallow 'criticism' thrown in as an alibi.

Ten minutes and a teeny bit of mental real estate I will never get back.
> Although you can prompt such large language models to construct a different answer, those programs do not (and cannot) on their own look backward and evaluate what they’ve written for errors.<p>Given that the next token is always predicted based on everything that both the user <i>and the model</i> have typed so far, this seems like a false statement.<p>Practically, I've more than once seen an LLM go "actually, it seems like there's a contradiction in what I just said, let me try again". And has the author even heard about chain of thought reasoning?<p>It doesn't seem so hard to believe to me that quite interesting results can come out of a simple loop of writing down various statements, evaluating their logical soundness in whatever way (which can be formal derivation rules, statistical approaches etc.), and to repeat that various times.
Not sure if I got the gist of the article right, but are they trying to say that chain-of-thought prompting will lead us to AGI / be a substantial breakthrough? Are CoT techniques different from what o1 is doing? Not sure if I'm missing the technical details or if the technical details just aren't there.
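For what it's worth, "CoT prompting" in the literature usually means appending an instruction like the one below to a prompt for an ordinary model, whereas o1 is reported to be trained to produce a reasoning trace on its own before answering. A toy illustration (the prompts are made up for the example):

```python
# Chain-of-thought prompting: the reasoning is elicited by the prompt
# and appears in the visible output of an ordinary model.
# o1-style models instead run a reasoning pass on their own, largely
# hidden from the user, with no special instruction needed.

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

plain_prompt = question
cot_prompt = question + "\nLet's think step by step, then give the final answer."
```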
The "GPT Era" ended with OpenAI resting on its junky models while Anthropic runs rings around it, but sure, place a puff piece in the Atlantic; at least it's disclosed sponsored content?
And it's presented as audio narration at the head of the written article: “Produced by ElevenLabs and News Over Audio (Noa) using AI narration. Listen to more stories on the Noa app.”
I like AIs with a personality; I like them to shoot from the hip. 4o does this better than o1.

o1, however, is often better for coding and for puzzle-solving, which are not the vast majority of LLM uses.

o1 is so much more expensive than 4o that it makes zero sense for it to be a general replacement. This will never change, because o1 will always use more tokens than 4o.
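A back-of-the-envelope illustration of why that cost gap persists; the per-token prices and token counts below are placeholders for the example, not real list prices:

```python
# Rough cost comparison: a reasoning model spends extra (often hidden)
# tokens "thinking" before it answers, so it costs more per query even
# before any price difference. All numbers here are illustrative only.

PRICE_PER_1K_OUTPUT = {"4o-like": 0.01, "o1-like": 0.06}  # USD, hypothetical

def query_cost(model: str, answer_tokens: int, reasoning_tokens: int = 0) -> float:
    billed = answer_tokens + reasoning_tokens  # reasoning tokens are billed too
    return billed / 1000 * PRICE_PER_1K_OUTPUT[model]

print(query_cost("4o-like", answer_tokens=300))                         # ~$0.003
print(query_cost("o1-like", answer_tokens=300, reasoning_tokens=2000))  # ~$0.138
```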
Insane cope. Emily Bender and Gary Marcus are *still* trying to push "stochastic parrot", the day after o1 caused one of the last remaining credible LLM-reasoning skeptics (Chollet) to admit defeat.
It ended because it's a glorified search engine now. All of the more powerful functionality was limited or removed.

My guess is that this was done so it could be sold to governments and anyone else willing to pay for it.