Somewhat related, but I can't stop thinking about what's going to happen when GPT-4 reads articles like this.

Articles like this one that discuss GPT-3's output, or comments and articles generated directly by GPT-3, are going to be fed into GPT-4 as part of a hypothetical WebText3 dataset. Will that help or hurt GPT-4's learning? What effect will it have?

In theory you can feed GPT-3's responses back into itself and ask it to introspect on them: whether it was right or wrong, and why it gave the response it did. But I doubt GPT-3 is particularly good at self-introspection. Its training data was collected before GPT-3 existed, obviously, so it was never trained on articles analyzing the output of an AI of its own caliber.

GPT-4, however, is going to be trained on a corpus filled with people analyzing GPT-3's outputs, like this article. We would expect GPT-4 to be able to write an article like this one. So it should be possible to give GPT-4 its own output, ask it to introspect, and get back introspection that is actually insightful.

EDIT: Follow-up thought. It's almost as if the internet is being filled with a training corpus on GPT-3's failings. Every fact that GPT-3 failed to learn from WebText2 is now going to be repeated, alongside the correct answer, in WebText3. Humans are unknowingly working together to build a curated dataset from which GPT-4 can learn from GPT-3's mistakes.
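The feedback loop described above, handing a model its own earlier output and asking for a self-critique, might be sketched like this. This is a minimal sketch: `query_model` is a hypothetical stand-in for whatever completion API you call, and the prompt wording is just one plausible phrasing.

```python
def build_introspection_prompt(original_prompt, model_response):
    """Wrap the model's earlier response in a request for self-analysis."""
    return (
        "You previously received this prompt:\n"
        f"{original_prompt}\n\n"
        "You responded:\n"
        f"{model_response}\n\n"
        "Was your response correct? Explain why you answered the way you did, "
        "and point out any mistakes."
    )


def introspect(query_model, original_prompt):
    """Two-pass loop: answer the prompt, then critique that answer.

    `query_model` is a hypothetical callable (prompt text -> completion text)
    standing in for a real completion endpoint.
    """
    response = query_model(original_prompt)
    critique = query_model(build_introspection_prompt(original_prompt, response))
    return response, critique
```

With a real completion endpoint plugged in as `query_model`, this asks the model to answer and then to judge its own answer; whether that second pass is ever insightful is exactly the open question here.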