I've been trying my hand at developing diffusion models from scratch lately, partly to understand the approach better and partly to compare against similar experiments I previously did with GANs. My impression from reading posts like these was that it would be relatively easy once you understand it, with the advantage that you get a nice, ordinary supervised MSE target to train against instead of having to deal with the instabilities of GANs.

I've found in practice that they don't deliver on this front. The loss curve you get is often just a thick, noisy, flat band, completely devoid of information about whether training is converging. Convergence also seems to depend heavily on the model architecture and on the beta schedule, and it's not at all clear to me how to choose either in a principled way. Do I need 10 timesteps, 100, 1000? Until you train for a *long* time you basically just get noise, so it's hard to know whether to restart an experiment or keep going. Training longer, and longer, and longer does make the samples better and better, very slowly, even though none of it shows up in the loss curve, and there seems to be no point at which the model has "converged" in any meaningful sense. My understanding of why is that, because sampling is an iterative process, even tiny errors in the predicted noise accumulate into large divergences. (I've put minimal sketches of the training step, schedule, and sampler I mean at the end of this post.)

I've also tried conditioning the model on vector-quantization codes, and it doesn't seem to use them nearly as well as VQGAN does; at least I haven't had much success doing it directly in the diffusion model. Reading further, I found that many diffusion-based models actually use an autoencoder trained with an adversarial (GAN-style) loss to build a latent space and a decoder, and the diffusion model generates samples in that latent space (also sketched below). It strikes me that the diffusion model then can never do better than that GAN-trained decoder, which surprised me to realize, since diffusion is usually proposed as an *alternative* to GANs.

So overall I'm failing to grasp what advantages this approach really has over just using a GAN. Obviously it works fantastically for the large-scale generative projects, but I honestly don't understand why it's better, despite having read every article out there telling me the same things about how it works again and again. E.g. DALL-E 1 used a discrete VAE plus an autoregressive transformer, not diffusion, and people were pretty wowed by it. And I'm not sure why DALL-E 2's improvements should be attributed to the switch to a diffusion process if the output is still decoded by a GAN-trained decoder.

Looking for some intuition, if anyone can offer some. I understand that iteratively refining the image lets the model work out large-scale and small-scale features progressively, but it seems to me that the many upscaling layers of a large GAN can do the same thing.
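For concreteness, the "supervised MSE target" I mean is the standard DDPM epsilon-prediction loss. A minimal PyTorch-style sketch of the training step (the `model` and the image shapes are placeholders, not my actual setup):

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear schedule from the DDPM paper
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t = prod_{s<=t} (1 - beta_s)

def training_step(model, x0):
    """One DDPM training step on a batch of images x0 scaled to [-1, 1]."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)        # random timestep per example
    a_bar = alphas_bar.to(x0.device)[t].view(b, 1, 1, 1)
    eps = torch.randn_like(x0)                              # the noise the model must recover
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps    # closed-form forward noising
    eps_pred = model(x_t, t)                                # network predicts the added noise
    return F.mse_loss(eps_pred, eps)                        # the "plain supervised" MSE target
```

As I understand it, part of why the curve is so uninformative is that t is drawn uniformly each batch, and predicting eps is far easier near t = T (where x_t is almost pure noise) than near t = 0 (where the noise is nearly invisible), so the per-batch loss swings for reasons unrelated to learning progress; logging the loss per timestep bucket makes the curve somewhat more readable.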
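On the beta schedule: the linear schedule above is the original DDPM one; the main alternative I know of is the cosine schedule from Nichol & Dhariwal (2021), which is defined through \bar{alpha}_t rather than through the betas directly. A sketch, in case it's useful context:

```python
import math
import torch

# Cosine noise schedule (Nichol & Dhariwal, 2021): define \bar{alpha}_t directly,
# then convert back to per-step betas.
def cosine_alphas_bar(T: int, s: float = 0.008) -> torch.Tensor:
    t = torch.arange(T + 1, dtype=torch.float64) / T
    f = torch.cos((t + s) / (1 + s) * math.pi / 2) ** 2
    return (f / f[0])[1:]                                   # \bar{alpha}_t for t = 1..T

def betas_from_alphas_bar(alphas_bar: torch.Tensor) -> torch.Tensor:
    prev = torch.cat([torch.ones(1, dtype=alphas_bar.dtype), alphas_bar[:-1]])
    return (1 - alphas_bar / prev).clamp(max=0.999)         # clipped as in the paper
```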
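And by the sampling process being "integrative", I mean that ancestral sampling feeds the model's own output back into itself for every one of the T steps, so an error in the predicted noise at any step perturbs the input to all the later ones. Roughly (again with `model` as a placeholder):

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)   # same linear schedule as in the training sketch

@torch.no_grad()
def sample(model, shape, device="cpu"):
    """DDPM ancestral sampling: T model evaluations, each feeding the next."""
    betas_d = betas.to(device)
    alphas = 1.0 - betas_d
    alphas_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape, device=device)                     # start from pure noise
    for t in reversed(range(T)):
        tt = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps_pred = model(x, tt)                                # any error here...
        # posterior mean: subtract the scaled predicted noise, then rescale
        x = (x - betas_d[t] / (1.0 - alphas_bar[t]).sqrt() * eps_pred) / alphas[t].sqrt()
        if t > 0:
            x = x + betas_d[t].sqrt() * torch.randn_like(x)    # ...carries into every later step
    return x
```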
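Finally, to be concrete about the latent-space setup I'm describing (the latent-diffusion / Stable Diffusion arrangement, as I understand it): the autoencoder is trained separately with perceptual and adversarial losses and then frozen, and the diffusion model only ever sees its latents, so every pixel of the final image comes out of that decoder. The module names and methods below are placeholders, not any particular library:

```python
import torch

# Placeholders: a frozen pretrained autoencoder (encoder/decoder) and a
# diffusion model trained purely in its latent space.
def latent_diffusion_training_step(encoder, diffusion, images):
    with torch.no_grad():
        z = encoder(images)               # frozen, adversarially/perceptually trained encoder
    return diffusion.training_loss(z)     # the diffusion model never sees pixels

def generate(diffusion, decoder, latent_shape):
    z = diffusion.sample(latent_shape)    # sampling happens entirely in latent space
    return decoder(z)                     # output fidelity is capped by this decoder
```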