I often see synthetic data treated as if it were equivalent to new real-world samples. In reality, when a model generates synthetic data from its own learned distribution, isn’t it just rearranging the information it already captured?
Could an expert explain, using principles of information theory, why synthetic data might still improve a model’s performance despite not providing genuinely ‘novel’ information?
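To make the intuition behind my question concrete, here is a toy sketch (purely illustrative, stdlib only; the variable names and setup are my own invention): a "teacher" fits a Gaussian to a small real sample, draws synthetic samples from that fit, and a "student" re-fits on the synthetic samples. However many synthetic points we draw, the student can only recover the teacher's two parameters:

```python
import random
import statistics

random.seed(0)

# "Real" data: a small sample from an unknown distribution.
real = [random.gauss(5.0, 2.0) for _ in range(20)]

# Teacher: fit a simple parametric model (a Gaussian) to the real data.
mu = statistics.fmean(real)
sigma = statistics.stdev(real)

# Synthetic data: samples drawn from the teacher's own learned distribution.
synthetic = [random.gauss(mu, sigma) for _ in range(10_000)]

# Student: re-fit the same model on the synthetic data alone.
# It approximately recovers the teacher's parameters -- no information
# beyond (mu, sigma) is added, no matter how many samples we generate.
student_mu = statistics.fmean(synthetic)
student_sigma = statistics.stdev(synthetic)

print(student_mu, student_sigma)
```

This is the sense in which the synthetic data seems to contain nothing beyond what the teacher already captured, which is exactly why the empirical gains from synthetic data puzzle me.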
Yes, I don't get this either. Isn't training on synthetic data just weighting the model's output toward the distribution that produced it? Why not reweight the original data directly instead?