Love it. If you don't have an M1 Mac, or don't want to wait, https://mage.space does unlimited generations currently. (Note: I am the creator!)
AI projects (and maybe all Python projects in general) always seem to be ridiculously tedious and error-prone to get running, so it's a rare, celebratory thing when someone releases something that's this easy to use.
What's the difference between this and Diffusion Bee besides a nicer website?
https://github.com/divamgupta/diffusionbee-stable-diffusion-ui
This is really cool and a fun way to try out this stuff I've been hearing about. One thing that'd be cool is a "retry" button that picks a different seed. My first attempt didn't turn out so great (https://i.imgur.com/zV48hCV.png).
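For anyone poking at this from a script rather than the app, "retry" really just means re-running the identical prompt with a fresh seed. A minimal sketch, assuming the Hugging Face diffusers library and an MPS-enabled PyTorch build (the model ID and prompt are only illustrative, not what this app actually uses):

    import torch
    from diffusers import StableDiffusionPipeline

    # Illustrative setup; not the app's actual internals.
    pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
    pipe = pipe.to("mps")

    prompt = "a watercolor painting of a lighthouse at dawn"

    # "Retry" = same prompt, different random seed.
    for seed in (1, 2, 3):
        generator = torch.Generator().manual_seed(seed)  # CPU generator works fine with diffusers
        image = pipe(prompt, generator=generator).images[0]
        image.save(f"lighthouse_seed_{seed}.png")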
Seeing copyrighted/trademarked icons in the examples (Darth Vader, for example) really makes me wonder how these models are going to play out in the future.
Today, these models are far ahead of the trademark attorneys, but there are powerful interests that are going to want to litigate the inclusion of these entities in the trained models themselves.
Is there some comprehensive source on how to make the most of Stable Diffusion? I find the examples on websites much better than what I've been able to generate; they more closely convey the prompt and have fewer artifacts and clearly messed-up parts.
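In case it helps, a lot of the gap comes down to a handful of knobs rather than the prompt alone. A rough sketch of the usual starting points, assuming the diffusers API (the values shown are common defaults people tweak, not magic numbers):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("mps")

    image = pipe(
        prompt="portrait of an astronaut, studio lighting, highly detailed",
        negative_prompt="blurry, deformed hands, extra fingers, watermark",
        num_inference_steps=50,  # more denoising steps: slower, usually cleaner
        guidance_scale=7.5,      # how strongly the image is pushed toward the prompt
        generator=torch.Generator().manual_seed(42),
    ).images[0]
    image.save("astronaut.png")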
How fast does Stable Diffusion run on an M1 Max? I'm using an M1 Pro and I find it too slow. I'd rather use an online service that costs $0.01 per image but generates an image in a matter of seconds than wait 1 minute for a free one.
Speed has a massive effect on how willing I am to play around and develop better prompts. I can’t wait a full minute for an image, I just can’t.
What kind of computer specs would be required to generate typical SD images in less than a second?
It's awesome to see how much creativity, progress, and community involvement results from truly open AI development.
Congrats to the Stable Diffusion team for their openness and inclusiveness!
I downloaded this and tried out a few prompts like "Mark Twain holding an iPhone", and got back an image of Mark Twain - once in some surrealist nightmare fashion and another more like a 3D render. Neither was holding anything, let alone an iPhone. Cranking up the DDIM slider didn't seem to do much. Trying the same prompt on mage.space (see the creator's comment in this thread) produced exactly what I assumed it would.
Is there a trick to it?
Anyone have an M1 Ultra they can test this on? My 3080 Ti can render a 512x512 image in something like 7 seconds and I'd love to compare against Apple Silicon.
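If anyone wants to compare apples to apples, it helps to time identical settings on each machine. A rough sketch, assuming the diffusers library (model, prompt, and step count are arbitrary choices; just keep them the same everywhere):

    import time
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
    pipe = pipe.to("mps")  # or "cuda" on the 3080 Ti

    prompt = "a photo of an astronaut riding a horse on mars"
    pipe(prompt, num_inference_steps=1)  # warm-up; the first call is not representative

    start = time.perf_counter()
    pipe(prompt, num_inference_steps=50)  # 512x512 is the default output size
    print(f"50 steps took {time.perf_counter() - start:.1f}s")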
I had Stable Diffusion running on M1 and Intel MacBooks within the first few days, but the original repo would have done people some favors if they had either created proper conda lock files for several platforms or just used conda-forge instead of mixing conda and pip unnecessarily (I think there was one dep which actually wasn't on conda-forge, besides their own things)
(and actually made the code independent of CUDA)
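The CUDA-independence part is mostly a matter of not hard-coding .cuda() calls. A minimal sketch of the generic device-selection shim (not what the original repo does, just the usual pattern):

    import torch

    def pick_device() -> torch.device:
        """Prefer CUDA, then Apple's MPS backend, then fall back to CPU."""
        if torch.cuda.is_available():
            return torch.device("cuda")
        if torch.backends.mps.is_available():
            return torch.device("mps")
        return torch.device("cpu")

    device = pick_device()
    x = torch.randn(2, 3, device=device)  # tensors land on whatever backend is available
    print(device, x.device)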
Love the 'we haven't managed to implement the ever so complex version checker logic yet - so give us your e-mail' ruse.
EDIT: I take it back - all the menus are the generic Electron ones, so it is quite possible that the author is finding this part tricky.
When I hit "generate" on my M1 Air with the default options, it just sits saying "initializing...0%" forever. Gave it five minutes, still nothing. Tried twice, same thing.<p>Is it... doing anything? Do I just need to wait 10 minutes? 20?
Can someone please explain how I can run this on my computer but something like GPT-3 is too computationally intensive to do the same? Isn't text easier than images?
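A back-of-the-envelope comparison helps here: the gap is mostly parameter count, not text vs. images. Using commonly cited sizes (Stable Diffusion is roughly 1B parameters across its UNet, text encoder, and VAE; GPT-3 is 175B) and 2 bytes per parameter in fp16:

    BYTES_PER_PARAM = 2  # fp16

    for name, params in [("Stable Diffusion (~1B params)", 1e9),
                         ("GPT-3 (175B params)", 175e9)]:
        gib = params * BYTES_PER_PARAM / 2**30
        print(f"{name}: ~{gib:.0f} GiB just for the weights")
    # ~2 GiB fits on a consumer GPU or an M1's unified memory;
    # ~326 GiB does not fit on any single consumer device.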