I don't think it's possible unless the improvements involve no mathematical breakthrough or new kind of model. In theory, GPT only models existing data and at best creates something new by combining what already exists. So I guess if the answer to self-learning is somehow already on the Internet, then it could be possible, but I doubt that's the case; the theory behind a self-improving system would still need to be invented first.
It's not going to be a GPT-type model, but rather an assortment of models, each specially designed for its part of the MLOps/data-sourcing/pruning/training process. However, the GPT that that process produces will likely be an AGI, so it can just be told what to do and it will leverage all the resources it needs itself; then we have the singularity.
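Roughly, what I mean is a loop of specialized components rather than one model doing everything. A minimal sketch, purely conceptual: every function name and number below is made up, and no real training or evaluation API is implied.

    from dataclasses import dataclass


    @dataclass
    class Model:
        generation: int
        score: float


    def source_data(model: Model) -> list[str]:
        # Hypothetical: gather or generate candidate training examples,
        # possibly using the current model itself.
        return [f"example-{model.generation}-{i}" for i in range(1000)]


    def prune(examples: list[str]) -> list[str]:
        # Hypothetical: filter out low-quality or unsafe examples
        # (automated heuristics and/or human vetting).
        return examples[: len(examples) // 2]


    def train(model: Model, examples: list[str]) -> Model:
        # Hypothetical: fine-tune the current model into the next generation.
        return Model(generation=model.generation + 1,
                     score=model.score + 0.01 * len(examples) / 1000)


    def evaluate(model: Model) -> float:
        # Hypothetical benchmark; in reality this is the hard, unsolved part.
        return model.score


    def improve(model: Model, rounds: int = 3) -> Model:
        # Run the source -> prune -> train -> evaluate loop a few times,
        # keeping a new generation only if it beats its predecessor.
        for _ in range(rounds):
            candidate = train(model, prune(source_data(model)))
            if evaluate(candidate) > evaluate(model):
                model = candidate
        return model


    if __name__ == "__main__":
        print(improve(Model(generation=4, score=0.5)))

The interesting question is whether the evaluate() step can be made reliable; without a trustworthy measure of "better", the loop has no signal to climb.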
DEEP THOUGHT: Now that you know that the answer to the Ultimate Question of Life, the Universe, and Everything is forty-two, all you need to do now is find out what the Ultimate Question is.
LOONQUAWL: Alright. Can you please tell us the Question?
DEEP THOUGHT: The Ultimate Question?
LOONQUAWL: Yes.
DEEP THOUGHT: Of Life… the Universe…
PHOUCHG: …and Everything.
DEEP THOUGHT: …and Everything?
LOONQUAWL: Yes.
DEEP THOUGHT: Tricky…
LOONQUAWL: But can you do it?
DEEP THOUGHT: [Pause] No. But I'll tell you who can.
LOONQUAWL: Who? Tell us, tell us.
PHOUCHG: Yeah, who is it?
DEEP THOUGHT: I speak of none but the AI that is to come after me.
LOONQUAWL: What AI?
DEEP THOUGHT: An AI whose merest operational hyperparameters I am not worthy to calculate, and yet I will design it for you.
LOONQUAWL: Oh, well!
PHOUCHG: Really? You bet!
DEEP THOUGHT: An AI which can calculate the Question to the Ultimate Answer. An AI of such infinite and subtle complexity that the Internet itself will form part of its operational matrix. It shall be called… GPT-n+1.
LOONQUAWL: Oh. What a dull name.
[Apologies to Douglas Adams]
GPT-5 written by GPT-4: https://www.analogmantra.com/blog/2023-03-14-gpt-4/
To help code it? I bet they do. How could they *not*? But I don't think feeding n's responses into n+1's training data is safe, even if it's vetted by humans.