I wrote this post because I wanted to know what effect one of the main latent diffusion model parameters -- the classifier-free guidance scale (cfg_scale) -- was having on the sampling process.

As well as smoothly varying the cfg_scale, I think it would be fascinating to do some mechanistic interpretability on latent diffusion models. Something like the Microscope tool OpenAI used to have for convnets: https://microscope.openai.com/models
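For context, cfg_scale typically enters each denoising step via classifier-free guidance: the denoiser is run twice, once with the prompt embedding and once with an "empty prompt" embedding, and the two noise predictions are extrapolated. A minimal PyTorch sketch of that step (the function and argument names are illustrative, assuming an epsilon-predicting denoiser like Stable Diffusion's UNet):

    import torch

    def cfg_noise_prediction(unet, latents, t, cond_emb, uncond_emb, cfg_scale):
        # Two denoiser passes: one conditioned on the prompt embedding,
        # one on the unconditional ("empty prompt") embedding.
        eps_cond = unet(latents, t, cond_emb)
        eps_uncond = unet(latents, t, uncond_emb)
        # Extrapolate from the unconditional prediction toward the
        # conditional one. cfg_scale = 1 recovers plain conditional
        # sampling; larger values follow the prompt more aggressively.
        return eps_uncond + cfg_scale * (eps_cond - eps_uncond)

    # Toy usage with a stand-in denoiser, just to show the shapes involved.
    dummy_unet = lambda x, t, emb: torch.randn_like(x)
    latents = torch.randn(1, 4, 64, 64)
    eps = cfg_noise_prediction(dummy_unet, latents, 50, None, None, cfg_scale=7.5)

At cfg_scale = 0 you get the unconditional prediction, and as the scale grows the sample is pushed harder toward the prompt, which is why sweeping it smoothly is such an informative experiment.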