
Stable Diffusion 2.0 on Mac and Linux via imaginAIry Python library

234 points by bryced over 2 years ago

17 comments

davely over 2 years ago
I've been working on a web client[1] that interacts with a neat project called Stable Horde[2] to create a distributed cluster of GPUs that run Stable Diffusion. Just added support for SD 2.0:

[1] https://tinybots.net/artbot?model=stable_diffusion_2.0

[2] https://stablehorde.net/
bryced over 2 years ago
Try out the pre-release like this:

`pip install imaginairy==6.0.0a0 --upgrade`

The new 512x512 model is supported with all samplers and inpainting.

The new 768x768 model is supported with the DDIM sampler only.

Upscaling and depth maps are not yet supported.

To be honest I'm not sure the new model produces better images, but maybe they will release some improved models in the future now that they have the pipeline open.
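
For anyone who wants to see it end to end, here is a minimal sketch of generating an image through the library's Python API once the pre-release is installed; the names used (ImaginePrompt, imagine_image_files, the outdir argument) are taken from the project README as best recalled and should be verified against the repo.

```python
# Minimal sketch of the imaginAIry Python API (names per the README,
# worth verifying against the repo before relying on them).
from imaginairy import ImaginePrompt, imagine_image_files

prompts = [
    ImaginePrompt("a photo of a corgi wearing a top hat", seed=1),
]

# Generates the images and writes them into ./outputs
imagine_image_files(prompts, outdir="./outputs")
```
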
greggh over 2 years ago
This is awesome, but I still like using the GUI for M1/M2 Macs, DiffusionBee.

https://github.com/divamgupta/diffusionbee-stable-diffusion-ui
Smaug123 over 2 years ago
Nicely done; this seems to work for me. In my own attempt, I got stock Stable Diffusion 2.0 "working" on M1 using the GPU, but it's producing some of the most cursed (and low-res) images I've ever seen, so I've definitely got it wrong somewhere. The reader can infer the usual rant about dynamic typing causing runtime misconfiguration in Python.
typest over 2 years ago
How much of this is Stable Diffusion 2, and how much is something else? For instance, the text-based masks, the syntax like AND and OR, the face upscaling: are these all part of Stable Diffusion 2 (and can they be used via other Stable Diffusion APIs)?
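
For readers unfamiliar with the features the question refers to, here is a hedged sketch of how imaginAIry's prompt-based masking is typically invoked from Python; the parameter names (init_image, mask_prompt, mask_mode, fix_faces) and the local file portrait.jpg are assumptions based on the project's README and examples, not confirmed signatures.

```python
# Hedged sketch of imaginAIry's prompt-based masking and face fixing.
# Parameter names (init_image, mask_prompt, mask_mode, fix_faces) are
# assumptions based on the README and may differ in the actual API.
from imaginairy import ImaginePrompt, imagine_image_files

prompt = ImaginePrompt(
    "a beach at sunset",
    init_image="portrait.jpg",          # hypothetical local input image
    mask_prompt="background OR sky",    # text mask with boolean logic
    mask_mode="replace",                # repaint only the masked region
    fix_faces=True,                     # run a face-restoration pass
)

imagine_image_files([prompt], outdir="./outputs")
```
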
yreg over 2 years ago
As with previous macOS Stable Diffusion tools, this is Apple Silicon only.
fareesh over 2 years ago
What's the minimum VRAM requirement?
gbighin over 2 years ago
Requirements:

> A decent computer with either a CUDA supported graphics card or M1 processor.

Why so? How does an M1 processor replace CUDA in a way an x86_64 processor can't? Do they use ARM assembly?
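
The likely answer is not ARM assembly but PyTorch's GPU backends: CUDA on NVIDIA cards and the MPS (Metal) backend on Apple Silicon. A plain x86_64 CPU can still run the model, but without an accelerated backend generation is impractically slow. Below is a generic sketch of the usual device-selection logic; this is standard PyTorch, not necessarily how imaginAIry itself picks its backend.

```python
# Standard PyTorch device selection; shown for illustration, not as
# imaginAIry's actual backend logic.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")   # NVIDIA GPU via CUDA
elif torch.backends.mps.is_available():
    device = torch.device("mps")    # Apple Silicon GPU via Metal (MPS)
else:
    device = torch.device("cpu")    # runs, but far too slow for Stable Diffusion

print(f"Using device: {device}")
```
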
anothernewdude over 2 years ago
2.0 is a mixed bag. It has set pixel art generation back entirely. I'm pretty sure this is down to the aesthetic filter: it has a very biased idea of what good images are. It's silly to do that at the training stage; that should be something you do in the prompt.

Fine-tuning is out of reach for me, so I'm sticking with 1.5.
lostintangent over 2 years ago
Wow, this looks awesome! I noticed that the sample notebook doesn't include SD 2.0 by default, and says that it's too big for Colab. Is that a disk size/RAM limitation?

As an aside, it would be cool if you versioned that notebook in the repo, so that it could be easily opened with Codespaces.
egeozcan over 2 years ago
This would have been perfect if it worked on Windows too. I need to look into dual-booting Linux (opening a can of worms) just to give it a try, as WSL doesn't seem to cut it.
underlines over 2 years ago
Is it possible to add VoltaML or xformers for a massive speed increase?

https://github.com/VoltaML/voltaML-fast-stable-diffusion
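
For reference, this is how xformers memory-efficient attention is commonly switched on in the Hugging Face diffusers pipeline; whether imaginAIry exposes an equivalent option is a separate question, so treat this as a sketch of the general technique rather than of this library.

```python
# Enabling xformers attention in Hugging Face diffusers (requires the
# xformers package and an NVIDIA GPU); shown as a general reference,
# not as imaginAIry's own API.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base",
    torch_dtype=torch.float16,
).to("cuda")

# Swap the default attention for xformers' memory-efficient kernels,
# which typically lowers VRAM use and speeds up sampling.
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a scenic mountain landscape").images[0]
image.save("out.png")
```
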
superpope99 over 2 years ago
This seems to work for me. Incredible work turning this around so quickly!
semicolon_storm over 2 years ago
Pretty slick. SD 2.0 performance actually seems to be better than 1.5?
algon33 over 2 years ago
Nice, a friend was looking for something like this.
TekMol over 2 years ago
What is a good VM to try this out?

Something on AWS, Hetzner, etc.?
88stacks over 2 years ago
Awesome library, I haven't seen this before. I just added it to my Stable Diffusion API service so you can query Stable Diffusion 2.0 if you don't have GPUs set up currently: https://88stacks.com