TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

The GPT era is already ending

55 points by bergie, 5 months ago

11 comments

Skunkleton, 5 months ago
Please read the article before posting comments, or at least read a summary. The article is saying that GPT-4o style models are reaching their peak, and are being replaced by o1 style models. The article does not make value judgements on the usefulness of existing AI or business viability of AI companies.
Dilettante_, 5 months ago
I started skimming about 1/3 of the way through this article. It looks to be just a fluff piece about how cool the old AI models were and how they pale in comparison with what's in the works, with about 2 to 5 lines of shallow 'criticism' thrown in as an alibi.

Ten minutes and a teeny bit of mental real estate I will never get back.
aegypti, 5 months ago
https://archive.ph/xUJMG
lxgr, 5 months ago
> Although you can prompt such large language models to construct a different answer, those programs do not (and cannot) on their own look backward and evaluate what they've written for errors.

Given that the next token is always predicted based on everything that both the user *and the model* have typed so far, this seems like a false statement.

Practically, I've more than once seen an LLM go "actually, it seems like there's a contradiction in what I just said, let me try again". And has the author even heard of chain-of-thought reasoning?

It doesn't seem so hard to believe to me that quite interesting results can come out of a simple loop of writing down various statements, evaluating their logical soundness in whatever way (which can be formal derivation rules, statistical approaches, etc.), and repeating that several times.
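The write / evaluate / revise loop described above can be sketched in a few lines of Python. Note that `evaluate` and `refine` here are hypothetical toy stand-ins (a naive contradiction check and a crude revision rule), not real components of any model; the point is only the shape of the loop:

```python
import re

def evaluate(statement: str) -> bool:
    """Toy soundness check: stand-in for a real evaluator
    (formal derivation rules, a statistical critic, etc.).
    Here we simply flag statements containing a negation."""
    return "not" not in statement

def refine(statement: str) -> str:
    """Toy revision step: strip the contradictory trailing clause."""
    return re.sub(r",?\s*not .*", "", statement)

def generate_check_loop(draft: str, max_rounds: int = 3) -> str:
    """Write a statement, evaluate it, and revise it,
    repeating until it passes or the round budget runs out."""
    for _ in range(max_rounds):
        if evaluate(draft):
            return draft
        draft = refine(draft)
    return draft

print(generate_check_loop("the sky is blue, not blue"))  # prints "the sky is blue"
```

A real system would replace `evaluate` with something far stronger (a theorem checker, a critic model, self-consistency voting), but the control flow stays this simple.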
rtrgrd, 5 months ago
Not sure if I got the gist of the article right, but are they trying to say that chain-of-thought prompting will lead us to AGI / be a substantial breakthrough? Are CoT techniques different from what o1 is doing? Not sure if I'm missing the technical details or if the technical details just aren't there.
juped, 5 months ago
The "GPT era" ended with OpenAI resting on its junky models while Anthropic runs rings around it, but sure, place a puff piece in The Atlantic; at least it's disclosed sponsored content?
Zardoz89, 5 months ago
And presented in audio narration at the head of the written article: “Produced by ElevenLabs and News Over Audio (Noa) using AI narration. Listen to more stories on the Noa app.”
OutOfHere, 5 months ago
I like AIs with a personality; I like them to shoot from the hip. 4o does this better than o1.

o1, however, is often better for coding and puzzle-solving, which are not the vast majority of LLM uses.

o1 is so much more expensive than 4o that it makes zero sense as a general replacement. This will never change, because o1 will always use more tokens than 4o.
talldayo, 5 months ago
With a whimper too, not the anticipated bang.
comeonbro, 5 months ago
Insane cope. Emily Bender and Gary Marcus are *still* trying to push "stochastic parrot", the day after o1 caused one of the last remaining credible LLM-reasoning skeptics (Chollet) to admit defeat.
评论 #42361213 未加载
jazz9k, 5 months ago
It ended because it's a glorified search engine now. All of the more powerful functionality was limited or removed.

My guess is that it's being sold to governments and anyone else willing to pay for it.