There was a story that went around about someone who "cloned" the Angry Birds game just by interacting with a GPT system. The GPT system spat out the code; the person would run it and then interact with the GPT to improve it. I presume the GPT was using Python or another traditional text-based language for the implementation. Humans could understand the code.

Now let's assume that the GPT system created a "spawn of GPT" system that was a
newly formed neural network. The user interacts with the "parent GPT", giving feedback. The parent creates input for the spawn and uses reinforcement learning to "fix" the spawn's behavior to match the required output.

Note that, while the goal was achieved, a human could not understand the result.

This is the most likely future.

I'll have to play with this idea.
A few thoughts. For starters, multiple AIs working together (or even against each other) to create an output is not a new thing. Adversarial networks are already pretty common in image generation for discriminating against ugly or incorrect outputs. In those systems, the generation model competes with a discriminator model that demands higher-quality output (a minimal sketch of that setup follows at the end).

That being said, my understanding is that this approach is quickly being made obsolete by zero-shot solutions. Running multiple models is extremely undesirable because it potentially doubles the size, latency, and cost of your system. In your example, I'd imagine such a system would be outperformed by a single model that does both the parent's and the spawn's jobs.
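For reference, a minimal sketch of that generator-vs-discriminator competition on toy 2-D data (the architectures and hyperparameters are placeholders; real image GANs differ mainly in scale):

    # Sketch: D learns to tell real samples from G's fakes, while G
    # learns to fool D. Toy "real" data is a shifted Gaussian.
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))  # generator
    D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        real = torch.randn(64, 2) + 3.0     # "real" data distribution
        fake = G(torch.randn(64, 4))        # generator maps noise to samples

        # Discriminator step: label real as 1, fake as 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + \
                 bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: try to make D label the fakes as real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Note the detach(): the discriminator trains on the generator's samples without backpropagating into the generator, and the two updates alternate every step. Which is also why it's expensive: you're carrying two models' worth of parameters and optimizer state to get one output.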