TPUs are amazing for Stable Diffusion.<p>We've been doing training (Dreambooth) and inference on TPUs since the beginning of the year at <a href="https://dreamlook.ai" rel="nofollow noreferrer">https://dreamlook.ai</a>.<p>We basically get 2.5x the training speed for Stable Diffusion 1.5 compared to an A100, a very nice "unfair advantage"!
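To put that 2.5x figure in wall-clock terms, here's a rough sketch. Only the 2.5x ratio comes from the comment above; the step count and A100 step time are hypothetical placeholders, not measured values:

```python
# Hypothetical illustration of a 2.5x training-speed advantage.
# Only SPEEDUP is from the comment; steps and a100_s_per_step
# are made-up placeholders for the sake of the arithmetic.

SPEEDUP = 2.5
steps = 1600             # hypothetical Dreambooth fine-tune length
a100_s_per_step = 0.5    # hypothetical A100 time per training step

a100_minutes = steps * a100_s_per_step / 60   # ~13.3 min
tpu_minutes = a100_minutes / SPEEDUP          # ~5.3 min

print(f"A100: {a100_minutes:.1f} min, TPU: {tpu_minutes:.1f} min")
```

Whatever the absolute step time, the ratio is what matters: the same fine-tune finishes in 1/2.5 of the time.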
That's very fast!<p>I just tried it on my RTX 3090, with a riced Linux environment + pytorch/xformers nightly, and 4 images take 36.7 seconds on the ComfyUI backend (used by Fooocus-MRE).<p>...But the issue is, right now, you can either pick high-quality tooling (the ComfyUI/automatic1111 backend UIs) or speed (diffusers-based UIs), not both. InvokeAI and VoltaML do not support SDXL as well as Fooocus at the moment, and all the other UIs use the slow Stability backend with no compilation support.
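For comparison purposes, the quoted timing converts to per-image throughput like this. The 4 images and 36.7 s are from the comment above; everything else is derived, not measured:

```python
# Back-of-envelope throughput from the RTX 3090 / ComfyUI timing
# quoted above: 4 images in 36.7 seconds. All derived numbers
# follow directly from those two measurements.

batch_images = 4
batch_seconds = 36.7

seconds_per_image = batch_seconds / batch_images   # ~9.2 s/image
images_per_minute = 60.0 / seconds_per_image       # ~6.5 images/min

print(f"{seconds_per_image:.1f} s/image, {images_per_minute:.1f} images/min")
```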
With some CPU offloading I'm able to run SDXL at 1.5 it/s on an RTX 2070S w/ 8GB VRAM.<p>When used with ControlNet it still runs, but with more layers offloaded, at 1.4 s/it.
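Note the unit flip between the two readings: "it/s" is iterations per second, "s/it" is seconds per iteration, so 1.4 s/it is quite a bit slower than 1.5 it/s. A small sketch normalizing both to the same unit (only the two speed readings come from the comment):

```python
# Normalize the two speed readings above to a common unit.
# Progress bars (e.g. tqdm) flip between it/s and s/it once an
# iteration takes longer than one second, which makes raw numbers
# easy to misread.

def to_it_per_s(value, unit):
    """Convert a speed reading to iterations per second."""
    if unit == "it/s":
        return value
    if unit == "s/it":
        return 1.0 / value
    raise ValueError(f"unknown unit: {unit}")

base = to_it_per_s(1.5, "it/s")        # plain SDXL
controlnet = to_it_per_s(1.4, "s/it")  # with ControlNet + more offloading

print(f"{base:.2f} it/s vs {controlnet:.2f} it/s")
print(f"ControlNet slowdown: {base / controlnet:.1f}x")
```

So the ControlNet + extra-offloading case runs at roughly 0.71 it/s, about a 2.1x slowdown versus plain SDXL.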