I don't trust this. The article cites Semafor (https://www.semafor.com/article/03/24/2023/the-secret-history-of-elon-musk-sam-altman-and-openai), but Semafor states the 1T parameter count without any source.
Has anyone seen a massive delta between GPT-3.5 and GPT-4?

For my use cases (writing code), I can't detect any difference in performance. Certainly nothing like the 6x the rumored parameter count would suggest.
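If anyone wants to eyeball the delta themselves, here's a minimal sketch of a side-by-side comparison, assuming the pre-1.0 `openai` Python package and API access to both models; the prompt is just illustrative:

    # Minimal side-by-side comparison on a coding prompt.
    # Assumes the pre-1.0 `openai` package and OPENAI_API_KEY in the environment.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    PROMPT = "Write a Python function that merges two sorted lists in O(n)."

    for model in ("gpt-3.5-turbo", "gpt-4"):
        resp = openai.ChatCompletion.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
            temperature=0,  # reduce sampling variance for a fairer comparison
        )
        print(f"--- {model} ---")
        print(resp["choices"][0]["message"]["content"])

Worth running on a handful of prompts rather than one; a single completion proves very little either way.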
6x bigger than GPT-3.5, per multiple anonymous sources.

That's 100x smaller than the 100T meme that went around before release, a size that was speculated to be too expensive and too slow to run.
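Back-of-the-envelope on those multipliers, assuming GPT-3.5 is roughly GPT-3-sized at ~175B parameters (OpenAI hasn't confirmed that; the numbers are illustrative):

    # Sanity-check the rumored multipliers; 175e9 for GPT-3.5 is an assumption.
    gpt35_params = 175e9
    gpt4_rumored = 6 * gpt35_params   # ~1.05e12, i.e. the ~1T in the article
    meme_estimate = 100e12            # the 100T figure that circulated pre-release
    print(f"{gpt4_rumored:.2e}")                   # 1.05e+12
    print(f"{meme_estimate / gpt4_rumored:.0f}x")  # ~95x, roughly "100x smaller"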
They need to make GPT-4 capable of gathering new information and updating itself continuously. Each time you make the model bigger, it needs more training data, yet they're limiting themselves to whatever they had as of the 2021 cutoff. It should be reading the entire internet in real time and swallowing it up.