LLMs will generate data that already fits the model's learned distribution. You can't create new information out of thin air. But you can use a larger model, or one trained on more data, to generate inputs for a smaller model.

For image-based networks, it's pretty common to increase the amount of effective training data by cropping, rotating, and adding noise to pictures and feeding them in multiple times. The larger the network, the less useful this is: it will quickly start to overfit and memorize the content of inputs it sees multiple times. And I'm not aware of anyone doing something comparable for large language models.
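As a minimal sketch of that kind of augmentation pipeline (assuming PyTorch/torchvision; the specific transforms and parameters are illustrative, not prescriptive):

```python
# Illustrative augmentation pipeline (PyTorch/torchvision assumed).
# Each epoch, the same source image yields a different random
# crop/rotation/noise variant, multiplying the effective dataset size.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),        # small random rotation
    transforms.RandomCrop(size=224, padding=8),   # pad, then random crop
    transforms.ToTensor(),
    # Additive Gaussian noise; 0.05 is an arbitrary illustrative scale.
    transforms.Lambda(lambda x: x + 0.05 * torch.randn_like(x)),
])
# Typically passed as `transform=augment` when building the Dataset,
# so the randomization happens on the fly every epoch.
```

Because the model never sees exactly the same tensor twice, this delays, though as noted doesn't prevent, memorization in very large networks.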
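And for the first point, using a larger model to generate training inputs for a smaller one might look roughly like this (a hedged sketch assuming the Hugging Face transformers library; the model names and prompt are placeholders):

```python
# Rough sketch: sample text from a larger "teacher" LM to use as
# training data for a smaller "student" LM. Model names and the
# prompt below are placeholders, not a recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2-xl")            # larger teacher
teacher = AutoModelForCausalLM.from_pretrained("gpt2-xl")

inputs = tok("The theory of relativity says", return_tensors="pt")
out = teacher.generate(**inputs, max_new_tokens=128,
                       do_sample=True, top_p=0.95)
synthetic_example = tok.decode(out[0], skip_special_tokens=True)
# synthetic_example goes into the smaller model's training corpus;
# note it can only reflect what the teacher already "knows".
```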