
Ars Technica content is now available in OpenAI services

15 points by Liriel 9 months ago

4 comments

seydor, 9 months ago
I don't see this strategy ending up anywhere good for OpenAI. They don't have revenue, and yet they'll be asked to subsidize the entire internet. They are priming their users to expect that they won't use advertising or manipulation. But when they eventually do, their subscription model will collapse.
rajnathani, 9 months ago
> It's worth noting that Condé Nast internal policy still forbids its publications from using text created by generative AI, which is consistent with its AI rules before the deal.

This is a key point. OpenAI is getting guaranteed non-AI-generated text data, which is slowly becoming a very valuable resource. Future LLMs may not require trillions of tokens of text to generalize if generalizability is picked up better by future model techniques (which IMO would involve learning general pattern recognition from non-textual data too, i.e. other modalities of today's multi-modal models and beyond, such as puzzles). In that case, even just millions to billions of tokens from high-quality sources like news publications, books, and research papers will be highly worth it.

The SearchGPT integration for latest news is also a plus. Think of the all-you-can-read news subscriptions such as Scroll, except that instead of paying for them directly, paying for SearchGPT covers not just a high-quality search engine but also, via these partnerships, the dues OpenAI owes to the news publishers.
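The scale gap the comment gestures at is easy to make concrete. Below is a minimal back-of-envelope sketch; the article count, words per article, corpus size, and the ~1.3 tokens-per-word ratio are illustrative assumptions, not figures from the comment or the deal.

```python
# Back-of-envelope sketch (illustrative assumptions only): how many tokens a single
# publication archive might contribute, compared with the multi-trillion-token
# corpora used in current pretraining runs. Uses the common rule of thumb of
# roughly 1.3 tokens per English word.

def archive_tokens(articles: int, words_per_article: int,
                   tokens_per_word: float = 1.3) -> int:
    """Estimate the total token count of a publication's archive."""
    return int(articles * words_per_article * tokens_per_word)

# Hypothetical archive: 100,000 articles averaging 800 words each.
tokens = archive_tokens(articles=100_000, words_per_article=800)
print(f"archive: ~{tokens / 1e6:.0f}M tokens")
print(f"share of a 5T-token pretraining corpus: {tokens / 5e12:.4%}")
```

Even a large archive lands in the hundreds of millions of tokens, orders of magnitude below today's pretraining corpora, so such deals only pay off if quality matters more than raw volume.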
verzali, 9 months ago
What is the reason LLM training requires *so* much data? It appears they require the entire output of human civilization in order to stay competitive, but why? Is it simply that their trainers can get that much data, or are they just very inefficient in how they learn?
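One commonly cited partial answer is the compute-optimal scaling result from Hoffmann et al. (the "Chinchilla" paper), which found roughly 20 training tokens per model parameter for dense models. A minimal sketch of that arithmetic follows; the ratio is an empirical rule of thumb rather than a law, and the parameter counts are illustrative.

```python
# Minimal sketch of the "Chinchilla" compute-optimal heuristic: roughly 20 training
# tokens per model parameter (an empirical finding, used here as a rough rule of
# thumb; the model sizes below are illustrative).

def compute_optimal_tokens(params: float, tokens_per_param: float = 20.0) -> float:
    """Rough compute-optimal training-token count for a dense model."""
    return params * tokens_per_param

for params in (7e9, 70e9, 400e9):
    print(f"{params / 1e9:>5.0f}B params -> ~{compute_optimal_tokens(params) / 1e12:.1f}T tokens")
```

By this heuristic a frontier-scale dense model already "wants" trillions of tokens just to be trained compute-optimally, before questions of sample efficiency even come into it.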
wkat4242, 9 months ago
I hope Hacker News won't do this.