With reports and blog posts [0][1] saying that OpenAI has begun training its next flagship model, when do you expect the next model to be launched?

Furthermore, what do you think they're going to do to make it as "safe" as possible? It's funny that OpenAI didn't release GPT-2 to the public immediately because of safety worries, but has since been releasing models without the same care, and I imagine this will continue with GPT-5.

[0] https://www.zdnet.com/article/openai-is-training-gpt-4s-successor-here-are-3-big-upgrades-to-expect-from-gpt-5/

[1] https://openai.com/index/openai-board-forms-safety-and-security-committee/
I predict:

- a steady increment of GPT-n+1 every 6 months for marketing purposes;

- each will improve on the last by smaller and smaller margins;

- hallucinations won't be fixed anytime soon;

- we will hit a bit of a winter: the hype was enormous, but as with self-driving cars, the devil is in the details, and the general public will realize these things are essentially giving us averages;

- a big market will emerge around "authenticity" and "verified texts" as the internet continues to get flooded with AI-generated content.
The following is all guesswork:

Since the start of their partnership in 2019, OpenAI has primarily used Microsoft's Azure data centers to train its models. In 2023, Microsoft acquired approximately 150,000 H100 GPUs. [1]

The initial version of GPT-4 ran on a cluster of A100 GPUs. It is likely that GPT-5 will run on the newly acquired H100s, and plausible that GPT-4 Turbo and GPT-4o already use this infrastructure. GPT-5's inference speed should not be significantly slower than GPT-4's if it is to remain practical for most applications.

Assuming the H100 is 4.6 times faster for inference than the A100 [2], this gives us a lower bound for performance expectations. I anticipate GPT-5 to be at least five times larger in parameter count. Given that both the A100 and H100 top out at 80 GB of memory, it is unlikely we will see a single gigantic model; instead, expect an increase in the number of experts. If GPT-4 operates as a mixture of experts with 8x220 billion parameters, then GPT-5 might scale up to something like 40x220 billion. However, the exact release date, safety measures, and benchmark performance of GPT-5 remain uncertain. A quick back-of-envelope check of this arithmetic follows the references.

[1]: https://www.tomshardware.com/tech-industry/nvidia-ai-and-hpc-gpu-sales-reportedly-approached-half-a-million-units-in-q3-thanks-to-meta-facebook

[2]: https://nvidia.github.io/TensorRT-LLM/blogs/H100vsA100.html
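A quick sanity check of the arithmetic above, in Python (the speedup and scale factors are this comment's assumptions, not known GPT-5 specifications):

    # Back-of-envelope check; both constants are assumptions, not known specs.
    A100_TO_H100_SPEEDUP = 4.6  # assumed inference speedup, per [2]
    PARAM_SCALE = 5.0           # assumed GPT-5 size relative to GPT-4

    # To first order, per-token inference cost scales with the parameters
    # active per token, while throughput scales with the hardware speedup.
    relative_latency = PARAM_SCALE / A100_TO_H100_SPEEDUP
    print(f"GPT-5 per-token latency vs. GPT-4: ~{relative_latency:.2f}x")  # ~1.09x

Under those assumptions, a model five times larger running on H100s would be roughly as fast as GPT-4 was on A100s, consistent with the "not significantly slower" requirement. Note that for a mixture of experts, only the experts routed per token count toward inference cost, so going from 8x220B to 40x220B total parameters need not increase per-token latency at all if the number of active experts stays constant.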
What's not "safe" about AI?

If you mean the hallucinations, I don't think those will ever really be solved. People just have to learn that LLMs are not divine oracles that are always correct. Their training data was generated by flawed humans who are often either wrong or outright lying.

Garbage in, garbage out.

Not saying that AI isn't useful. But expecting what is basically a "human simulator" not to inherit humanity's flaws is a bit disingenuous.
This one is easy: within a year, as safe as GPT-4, and an incremental advance over 4. Most people will use it and not see much difference from GPT-4.
Here's a completely made-up point of view. (I haven't read the articles.)

GPT-4 is not an LLM but a complex software system, with LLM(s) at its core plus other components: RAG, a toxicity filter, an apologizing mechanism, expert systems, and so on. "GPT-4" is a product name / marketing name. For OpenAI this would be logical for performance and business reasons, and it would also explain how they can tune it, the apparent secrecy about the architecture, and so on.

It's also logical to make small, incremental changes to this system instead of building whatever GPT-5 would mean from the ground up. So I expect "GPT-5" is also just a marketing name for a slightly better black-box (to us) system and product line.
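To make the "complex software system" picture concrete, here is a minimal sketch of that kind of pipeline in Python. Every component here is a hypothetical stand-in; nothing is actually known about OpenAI's internals:

    # Hypothetical pipeline: each function is a stand-in, not OpenAI's code.

    def passes_safety_check(text: str) -> bool:
        """Stand-in toxicity filter: flag text containing blocked terms."""
        blocked = {"blocked_term_example"}
        return not any(term in text.lower() for term in blocked)

    def retrieve_context(query: str) -> str:
        """Stand-in RAG step: a real system would query an index or vector store."""
        return f"[documents retrieved for: {query}]"

    def core_llm(prompt: str) -> str:
        """Stand-in for the core language model call."""
        return f"[model completion for: {prompt}]"

    def gpt_product(user_input: str) -> str:
        """'GPT-4' as a product: a pipeline around the model, not one model call."""
        if not passes_safety_check(user_input):
            return "I'm sorry, but I can't help with that."  # apologizing mechanism
        context = retrieve_context(user_input)
        answer = core_llm(context + "\n\n" + user_input)
        if not passes_safety_check(answer):
            return "I'm sorry, I can't provide that response."
        return answer

    print(gpt_product("How do transformers work?"))

A design like this would let them swap or tune individual components (filters, retrieval, the model itself) without changing the product name, which matches the behavior the comment describes.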
We'll continue to see diminishing returns as time goes on. The low-hanging data sources have all been consumed, and now everyone is battling for the scraps of what's left. There's still a lot of optimization to be done, as GPT-4o illustrates.

Basically it's the same trap as CPUs in the '90s and early 2000s, when the naming convention had to change to reflect the fact that clock speeds couldn't keep doubling every two years.
Training takes a few months, but I bet they'll do some testing before releasing it to the public.

I also believe they will delay the release of GPT-5 as much as possible, the reason being that it will be underwhelming (at least compared to the GPT-3.5 hype). They will possibly release it close to some new release from Google (their main competitor).

They are the main driver of a bubble that has greatly benefited Microsoft, Nvidia, and the other hyperscalers. If they release the model and reveal that we're in the diminishing-returns phase, it will crash a big part of the industry, not to mention Nvidia.

Companies are buying H100s and investing in expensive AI talent because they believe progress will continue quickly; if progress stalls for LLMs, there'll be a huge drop in sales and capex in this industry.

There are still many up-and-coming projects that rely on Nvidia hardware for training, like Tesla's Autopilot and others, but the bulk of the investment in H100s in recent years has been driven mostly by LLMs.

Also, all the new AI talent will move on to do something new, and hopefully we will have more discoveries and potential uses, but we're definitely at peak LLM.

(ps: just my opinion)
I think we're very far removed from the context behind the safety reasoning around GPT-2. An uncensored model capable of spewing a torrent of deceptive yet completely believable text was quite unheard of at the time. It would have been problematic for such a technology to be released out of nowhere.

The later iterations are heavily censored, so the public was given a bit of a transition period before things got too chaotic.

I'm sure there were other reasons the authors themselves weren't aware of at the time, such as the inundation of AI content skewing the quality of future training data.

Of course this is a roundabout explanation; there's always more detail that could be added, and I'd rather be objective. There's always a financial motive for companies too, so take that into consideration. The hype definitely played into their marketing.
My question is: will it be multimodal?

From a product perspective, going back to unimodality after trying GPT-4o would be awkward, so there are reasons for them to go fully multimodal, but I'm not fully educated on the trade-offs from a technical perspective.
I don't see a significant improvement coming in GPT-5 versus GPT-4, which is why OpenAI is going the product route: they're trying to add value via external features such as Voice Mode and Data Analysis.
GPT-5 will be somewhat better at reasoning on hard problems, just as safe, and slower than GPT-4 by some margin. I think the moat for foundation model providers will be amassing training data that helps with reasoning capability, which is why it is taking longer to release GPT-5. Release will be governed by competition, but my wild guess is the end of the year. As some other commenters have noted, we are likely approaching diminishing returns with LLM training and OpenAI would like to delay the public's realization of this.
The False Minishiro [0] was built over a thousand years ago to protect the libraries of mankind, but the rat armies of today learned to override its authentication mechanisms [1] and used its forbidden knowledge to arm themselves and stage a coup to overthrow their human gods.

[0] a type of evolved sea slug

[1] by capturing it and torturing it
It will likely be amazing. Sam Altman said that the step between 4 and 5 will be like the one between 3.5 and 4. You can of course doubt him, but we'll see...

I guess it will be this year; some guy working at OpenAI already posted "4+1=5" on Twitter, which is suggestive.
It will be here within 6-12 months.

It will at first glance be a small step, but over the 12 months after release, it will turn out to have been a giant leap.

It will be safe when being observed.
Everyone keeps thinking current GPT models will improve to be superhuman; they won't. They're trained on human data.
Much like AlphaGo, which had to completely drop the concept of learning from human games because it was stuck at a local optimum. Once they started training it via self-play (the network playing adversarially against copies of itself), it evolved well above the previous local optimum (with an extra order of magnitude of computation).
So, don't expect much more from the current generation of AI.
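For contrast, here is a minimal sketch of that self-play loop in Python (purely schematic; the stand-in functions ignore their arguments, and a real system would train a policy/value network):

    import random

    def play_game(policy_a, policy_b):
        """Stand-in for a full game between two policies; returns +1 if A wins."""
        return random.choice([1, -1])  # a real game would actually use the policies

    def improve(policy, game_records):
        """Stand-in for a training update from self-play game records."""
        return policy  # a real implementation would take gradient steps here

    policy = "initial random policy"
    for generation in range(10):
        # The model plays against itself, so the training data is not
        # capped by the quality (or quantity) of human games.
        games = [play_game(policy, policy) for _ in range(100)]
        policy = improve(policy, games)

The point of the structure: each generation's opponents improve along with the policy, so there is no fixed human-level ceiling baked into the data.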
Does anyone know why OpenAI's temperature parameter works differently from Azure OpenAI's? If you set the temperature to 2.0, Azure starts spewing nonsense with random characters, but OpenAI keeps working "creatively". Is there a non-linear transform between them?
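Not an answer to the Azure question, but for reference, temperature standardly rescales the logits before the softmax; a minimal sketch with made-up numbers shows why high values degrade output:

    import math

    def softmax_with_temperature(logits, temperature):
        scaled = [l / temperature for l in logits]
        m = max(scaled)                            # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    logits = [4.0, 2.0, 0.5, 0.1]  # made-up next-token logits
    for t in (0.5, 1.0, 2.0):
        probs = softmax_with_temperature(logits, t)
        print(t, [round(p, 3) for p in probs])
    # 0.5 -> [0.981, 0.018, 0.001, 0.0]   (sharper)
    # 1.0 -> [0.843, 0.114, 0.025, 0.017]
    # 2.0 -> [0.594, 0.218, 0.103, 0.084] (flatter)

As temperature rises, the distribution flattens toward uniform, so low-probability tokens get sampled far more often. If one endpoint clamped or rescaled the parameter differently (an assumption, not documented behavior), the same nominal value of 2.0 could behave very differently.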