Accelerating Stable Diffusion XL Inference with Jax on Cloud TPU v5e

23 points by rayshan over 1 year ago

3 comments

MasterScrat over 1 year ago
TPUs are amazing for Stable Diffusion.

We've been doing training (Dreambooth) and inference on TPUs since the beginning of the year at https://dreamlook.ai.

We basically get 2.5x the training speed for Stable Diffusion 1.5 compared to A100, a very nice "unfair advantage"!
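For context on what TPU-side inference looks like in code: below is a minimal sketch of the data-parallel JAX/Flax setup the linked article builds on, using the diffusers Flax pipeline. The model ID, prompt, and device count are illustrative assumptions, not details from the article or this comment.

    # Minimal sketch: data-parallel Stable Diffusion inference in JAX/Flax.
    # Assumes diffusers + flax are installed; model ID and prompt are
    # placeholders, not taken from the article or this thread.
    import jax
    import jax.numpy as jnp
    from flax.jax_utils import replicate
    from flax.training.common_utils import shard
    from diffusers import FlaxStableDiffusionPipeline

    # Load weights in bfloat16, which TPUs handle natively.
    pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jnp.bfloat16
    )

    num_devices = jax.device_count()  # e.g. 8 on a TPU v5e-8 host
    prompts = ["a photo of an astronaut riding a horse"] * num_devices

    # Replicate the parameters to every device, shard one prompt per
    # device, and give each device its own RNG key.
    prompt_ids = shard(pipeline.prepare_inputs(prompts))
    params = replicate(params)
    rng = jax.random.split(jax.random.PRNGKey(0), num_devices)

    # jit=True runs the sampling loop under pmap: the first call pays
    # the XLA compilation cost, later calls run at compiled speed.
    images = pipeline(prompt_ids, params, rng, jit=True).images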
brucethemoose2 over 1 year ago
That's very fast!

I just tried it on my RTX 3090, with a riced Linux environment + pytorch/xformers nightly, and 4 images take 36.7 seconds on the ComfyUI backend (used by Fooocus-MRE).

...But the issue is, right now, you can either pick high-quality tooling (the ComfyUI/automatic1111 backend UIs) or speed (diffusers-based UIs), not both. InvokeAI and VoltaML do not support SDXL as well as Fooocus at the moment, and all the other UIs use the slow Stability backend with no compilation support.
[Comment #37758506 not loaded]
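(For readers wondering what "compilation support" means concretely: in diffusers terms it is roughly wrapping the UNet, the hot loop of diffusion sampling, with PyTorch 2.x's torch.compile. A minimal sketch, assuming an SDXL checkpoint and a CUDA GPU; the model ID and compile settings are illustrative.)

    # Sketch of the kind of compilation the diffusers-based backends use.
    # Model ID and compile mode are illustrative assumptions.
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
    ).to("cuda")

    # Compile the UNet. The first image is slow (graph capture and
    # codegen); subsequent generations are substantially faster.
    pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

    image = pipe("a photo of an astronaut riding a horse").images[0]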
iAkashPaul over 1 year ago
With some CPU offloading I'm able to run SDXL at 1.5 it/s on an RTX 2070S w/ 8GB VRAM.

When used with ControlNet it still runs, but with more layers offloaded, at 1.4 s/it.
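(In diffusers terms, the CPU offloading described here is typically enable_model_cpu_offload(), which keeps only the sub-model currently executing on the GPU and parks the rest in system RAM, trading some speed for VRAM. A minimal sketch under that assumption; the model ID is illustrative.)

    # Sketch of CPU offloading for low-VRAM GPUs with diffusers.
    # Assumes the accelerate package is installed; model ID is a placeholder.
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
    )

    # Note: no .to("cuda") here; offloading manages device placement.
    # Whole sub-models (text encoders, UNet, VAE) move to the GPU only
    # while they run. enable_sequential_cpu_offload() is more aggressive
    # (per-layer) and slower, but fits even tighter VRAM budgets.
    pipe.enable_model_cpu_offload()

    image = pipe("a photo of an astronaut riding a horse").images[0]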