
Ask HN: Predictions for when GPT-5 will be released and how safe it will be?

20 points by Heidaradar 12 months ago
With reports and blog posts [0][1] saying that OpenAI has begun training its next flagship model, when do you expect that model to launch?

Furthermore, what do you think they're going to do to make it as "safe" as possible? It's funny that OpenAI didn't release GPT-2 to the public immediately because of safety worries, but has since been releasing models without the same care for safety, and I imagine this will continue with GPT-5.

[0] https://www.zdnet.com/article/openai-is-training-gpt-4s-successor-here-are-3-big-upgrades-to-expect-from-gpt-5/
[1] https://openai.com/index/openai-board-forms-safety-and-security-committee/

18 comments

ramblerman 12 months ago

I predict:

- a steady increment of GPT-n+1 every 6 months, for marketing purposes;
- each will improve on the last by smaller and smaller margins;
- hallucinations won't be fixed anytime soon;
- we will hit a bit of a winter: the hype was enormous, but as with self-driving cars, the devil is in the details, and the general public will realize these things are essentially just giving us averages;
- a big market will emerge around "authenticity" and "verified text" as the internet continues to be flooded with AI-generated content.
randomtoast 12 months ago

The following is all guesswork:

Since the start of their partnership in 2019, OpenAI has primarily used Microsoft's Azure data centers to train its models. In 2023, Microsoft acquired approximately 150,000 H100 GPUs. [1]

The initial version of GPT-4 ran on a cluster of A100 GPUs. GPT-5 will likely run on the newly acquired H100s, and it is plausible that GPT-4 Turbo and GPT-4o already use this infrastructure. GPT-5's inference speed should not be significantly slower than GPT-4's if it is to remain practical for most applications.

Assuming the H100 is 4.6 times faster at inference than the A100 [2], a model roughly 4.6 times larger could be served at about the same speed, which bounds how far they can scale without getting slower. I anticipate GPT-5 will be at least five times larger in parameter count. Given that both the A100 and H100 top out at 80 GB of memory, a single gigantic model is unlikely; instead, expect an increase in the number of experts. If GPT-4 operates as a mixture of experts with 8x220 billion parameters, GPT-5 might scale up to something like 40x220 billion parameters. The exact release date, safety measures, and benchmark performance of GPT-5 remain uncertain, however.

[1]: https://www.tomshardware.com/tech-industry/nvidia-ai-and-hpc-gpu-sales-reportedly-approached-half-a-million-units-in-q3-thanks-to-meta-facebook
[2]: https://nvidia.github.io/TensorRT-LLM/blogs/H100vsA100.html
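A back-of-the-envelope sketch of this arithmetic in Python; every input below is the commenter's assumption or a circulating rumor, not a confirmed figure:

```python
# Back-of-the-envelope check of the scaling argument above.
# All inputs are assumptions/rumors from the comment, not confirmed figures.

h100_speedup = 4.6         # assumed H100-vs-A100 inference speedup [2]
gpt4_experts = 8           # rumored GPT-4 mixture-of-experts layout
params_per_expert_b = 220  # rumored billions of parameters per expert
gpt5_experts = 40          # the commenter's guess for GPT-5

gpt4_total_b = gpt4_experts * params_per_expert_b  # 1,760 B total
gpt5_total_b = gpt5_experts * params_per_expert_b  # 8,800 B total
growth = gpt5_total_b / gpt4_total_b               # 5.0x more parameters

print(f"GPT-4 (rumored): {gpt4_total_b:,} B parameters total")
print(f"GPT-5 (guess):   {gpt5_total_b:,} B parameters total ({growth:.1f}x)")
print(f"Hardware speed budget from H100s: {h100_speedup:.1f}x")
```

Note that in a mixture of experts, per-token inference cost tracks the active parameters rather than the total: if the number of experts routed per token stays fixed, a 5x-wider expert pool need not cost 5x more compute per token, which is what makes the 4.6x hardware budget plausible cover for a 5x larger model.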
wkat4242 12 months ago

What's not "safe" about AI?

If you mean the hallucinations, I don't think that will ever really be solved. People just have to learn that LLMs are not divine oracles that are always correct; they're just like their training data, generated by flawed humans who are often wrong or outright lying.

Garbage in, garbage out.

That's not to say AI isn't useful. But expecting what is basically a "human simulator" not to inherit humanity's flaws is a bit disingenuous.
WheelsAtLarge 12 months ago

This one is easy: within a year, as safe as GPT-4, and an incremental advance over it. Most people will use it and not see much difference from GPT-4.
jankovicsandras 12 months ago

Here's a completely made-up point of view (I haven't read the articles).

GPT-4 is not an LLM but a complex software system with LLM(s) at its core plus other components: RAG, a toxicity filter, an apologizing mechanism, expert systems, and so on. "GPT-4" is a product name / marketing name. For OpenAI this would be logical for performance and business reasons, and it would also explain how they can tune it, the apparent secrecy about the architecture, etc.

It's also logical to make small, incremental changes to such a system instead of building whatever "GPT-5" would mean from the ground up. So I expect "GPT-5" is also just a marketing name for a slightly better black-box (to us) system and product line.
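A minimal Python sketch of the kind of wrapper system this comment imagines; every component name below is hypothetical and illustrative, not OpenAI's confirmed architecture:

```python
# Illustrative "product = pipeline around an LLM" design, as speculated
# above. None of these components are confirmed to exist inside GPT-4.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ChatPipeline:
    llm: Callable[[str], str]                                   # core model
    retrievers: list[Callable[[str], str]] = field(default_factory=list)
    filters: list[Callable[[str], str]] = field(default_factory=list)

    def answer(self, prompt: str) -> str:
        # RAG step: prepend whatever context the retrievers find.
        context = "\n".join(r(prompt) for r in self.retrievers)
        draft = self.llm(f"{context}\n{prompt}" if context else prompt)
        # Post-processing: toxicity filter, apology rewriter, etc.
        for f in self.filters:
            draft = f(draft)
        return draft

# Swapping a retriever or filter "tunes" the product without touching the
# core model, which is the comment's point about incremental changes.
pipeline = ChatPipeline(llm=lambda p: "draft answer to: " + p,
                        filters=[str.strip])
print(pipeline.answer("why is the sky blue?"))
```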
dkobia 12 months ago

We'll continue to see diminishing returns as time goes on. The low-hanging data sources have all been consumed, and now everyone is battling for scraps of what's left. There's still a lot of optimization to be done, as GPT-4o illustrates.

It's basically the same trap CPUs hit in the '90s and early 2000s, when the naming convention had to change to reflect the fact that clock speeds could no longer double every two years.
thiago_fm 12 months ago

Training takes a few months, but I bet they'll do some testing before releasing it to the public.

I also believe they will delay the release of GPT-5 as much as possible, because it will be underwhelming (at least compared to the GPT-3.5 hype), possibly timing the release close to some new release from Google (their main competitor).

They are the main driver of a bubble that has greatly benefited Microsoft, Nvidia, and the other hyperscalers; if they release the model and it shows we're in the "diminishing returns" phase, that will crash a big part of the industry, not to mention Nvidia.

Companies are buying H100s and investing in expensive AI talent because they believe progress will stay fast; if progress stalls for LLMs, there will be a huge drop in sales and capex across this industry.

There are still many up-and-coming projects that rely on Nvidia hardware for training, like Tesla's Autopilot, but the bulk of the investment in H100s in recent years has been driven mostly by LLMs.

All the new AI talent will also move on to something new, and hopefully we will see more discoveries and potential uses, but we're definitely at peak LLMs.

(PS: just my opinion)
razodactyl 12 months ago

I think we're now far removed from the context behind the safety reasoning around GPT-2. An uncensored model capable of spewing a torrent of deceptive yet completely believable text was quite unheard of at the time; it would have been problematic for such a technology to appear out of nowhere.

The later iterations are heavily censored, so the public was given a bit of a transition period before things got too chaotic.

I'm sure there were many other reasons the authors themselves weren't aware of at the time, such as AI-generated content flooding in and skewing the quality of future training data.

Of course this is a roundabout explanation; there's always more detail that could be added, and I'd rather stay objective. There's always a financial motive for companies too, so take that into consideration. The hype definitely played into their marketing.
diego_sandoval 12 months ago

My question is: will it be multimodal?

From a product perspective, going back to unimodality after trying GPT-4o would be awkward, so there are reasons for them to go fully multimodal, but I'm not fully educated about the trade-offs from a technical perspective.
russiancapybara 12 months ago

I don't see a significant improvement in GPT-5 over GPT-4, which is why OpenAI is going the product route: they're trying to add value via external features such as Voice Mode and Data Analysis.
stormfather 12 months ago

GPT-5 will be somewhat better at reasoning on hard problems, just as safe, and slower than GPT-4 by some margin. I think the moat for foundation-model providers will be amassing training data that helps with reasoning capability, which is why GPT-5 is taking longer to release. The release will be governed by competition, but my wild guess is the end of the year. As other commenters have noted, we are likely approaching diminishing returns in LLM training, and OpenAI would like to delay the public's realization of this.
ilaksh 12 months ago

I believe that gpt-4o was really GPT-5, just renamed, and that the multimodal features they demoed will actually be released within a month.
tetris11 12 months ago

The False Minishiro [0] was built over a thousand years ago to protect the libraries of mankind, but the rat armies of today learned to override its authentication mechanisms [1] and used its forbidden knowledge to arm themselves and stage a coup to overthrow their human gods.

[0]: a type of evolved sea slug
[1]: by capturing it and torturing it
Vuizur 12 months ago

It will likely be amazing. Sam Altman said the step between 4 and 5 will be like the one between 3.5 and 4. You can of course doubt him, but we'll see...

I guess it will be this year; someone working at OpenAI already posted "4+1=5" on Twitter, which is suggestive.
kromem 12 months ago

It will be here within 6-12 months.

At first glance it will be a small step, but over the 12 months after release it will turn out to have been a giant leap.

It will be safe when being observed.
throwaway211 12 months ago

Like Superman V, there will be generally positive reviews, but after V, VI will come in a different form, as the marketing is already wearing thin.
_davide_ 12 months ago

Everyone keeps thinking the current GPT models will improve to become superhuman; they won't, because they're trained on human data. Much like AlphaGo, which had to drop learning from human games entirely because it was stuck at a local optimum: once it was trained through self-play, it climbed well above that local optimum (with an extra order of magnitude of computation). So don't expect much more from the current generation of AI.
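A schematic of the self-play loop the comment alludes to; the environment and learner below are stubs standing in for AlphaGo's actual components:

```python
# Toy self-play training loop in the spirit the comment describes:
# no human games enter the loop, so the policy is not capped by
# imitation of human play. Everything here is a schematic stub.

import random

def play_game(policy):
    """Play one game of the policy against itself; return (states, outcome)."""
    states = []                        # a real environment would record moves
    outcome = random.choice([-1, 1])   # stub result: win or loss
    return states, outcome

def update(policy, states, outcome):
    """Stub learner: nudge the policy toward the winner's moves."""
    return policy

policy = object()                      # stand-in for a policy/value network
for step in range(1_000):
    states, outcome = play_game(policy)        # fresh data from the model itself
    policy = update(policy, states, outcome)   # improve, then repeat
```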
treprinum 12 months ago

Does anyone know why OpenAI's temperature parameter behaves differently from Azure OpenAI's? If you set temperature to 2.0, Azure starts spewing nonsense with random characters, while OpenAI keeps producing "creative" but coherent output. Is there some non-linear transform between them?
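One way to probe the discrepancy is to sweep temperatures against both endpoints with the openai Python package (v1+). A minimal sketch; the Azure endpoint, key, API version, and deployment name are placeholders to fill in:

```python
# Sweep the same prompt across temperatures on both endpoints and eyeball
# where Azure degenerates. Placeholder credentials/names must be replaced.

from openai import OpenAI, AzureOpenAI

prompt = [{"role": "user", "content": "Write one sentence about GPUs."}]

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
azure_client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-AZURE-KEY",                                 # placeholder
    api_version="2024-02-01",
)

for temp in (0.7, 1.0, 1.5, 2.0):  # 2.0 is the API's documented maximum
    a = openai_client.chat.completions.create(
        model="gpt-4", messages=prompt, temperature=temp,
    )
    b = azure_client.chat.completions.create(
        model="YOUR-DEPLOYMENT-NAME",  # Azure routes by deployment name
        messages=prompt, temperature=temp,
    )
    print(f"--- temperature={temp}")
    print("OpenAI:", a.choices[0].message.content)
    print("Azure :", b.choices[0].message.content)
```

If the outputs diverge only near the top of the range, that would be consistent with a remapping or different sampling defaults rather than a different underlying model.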