Sorry about the bugs; I've just released an update. The site's music should no longer shatter your eardrums until <i>after</i> you touch the unmute button.<p>The videos are generated from random <a href="https://lexica.art" rel="nofollow">https://lexica.art</a> prompts: each video linearly interpolates between two random seeds while holding the prompt fixed, then is looped with an ffmpeg filter_complex reverse/concat. Music is from various Creative Commons / free sources.<p>Source code at <a href="https://github.com/lwneal/duckrabbit/" rel="nofollow">https://github.com/lwneal/duckrabbit/</a><p>Hosted on a single $7 node at <a href="https://www.digitalocean.com" rel="nofollow">https://www.digitalocean.com</a>
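For the curious, the looping step is roughly the following (a minimal sketch, not the exact code from the repo; the file names are placeholders). It plays the clip forward, then reversed, so the loop point is seamless:

    import subprocess

    # [0:v]reverse[r]   -> reversed copy of the video stream
    # [0:v][r]concat=.. -> original clip followed by the reversed copy
    subprocess.run([
        "ffmpeg", "-y", "-i", "clip.mp4",
        "-filter_complex", "[0:v]reverse[r];[0:v][r]concat=n=2:v=1:a=0[out]",
        "-map", "[out]",
        "looped.mp4",
    ], check=True)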
Bug report:<p>Sound stays permanently on after hitting the next button once on Chromium 104.<p>The next button does move to the next video, but it also enables sound. This also breaks internal state: the button still shows the muted symbol even though the sound is now on. Hitting the mute button switches the symbol to unmuted, with the sound still on as before. Hitting it again to mute doesn't work and doesn't change the symbol back to muted.
I know this is silly, but I can't wait for games to have automatically generated "levels" that look like this. I guess 3D training and output are probably minimally researched at this point, and there is NeRF research... at some point all of this research will truly show off its potential beyond pretty pictures.
As an aside, it would be cool if music could also be an input to these kinds of generative models, such that the generated image somehow matches the feeling or mood of the music.
This looks almost silly now. But I'd bet that in a few years, we will see a full movie, created mostly with the equivalent of Stable Diffusion, win an Oscar.<p>My bet is that this will happen 8-9 years from now, but it's just a guess.<p>I think it's hard to challenge the fact that it WILL happen, at some point in our lifetimes.
What if there was a way to “increase frame rate” by adding some kind of logic checker between two generated images? Kind of like a comparison between two generated frames that produces additional in-between frames that mimic movement: a filler between frames that predicts how something got from one shape to another using a set of properties the generated object has (weight, speed, gravity, etc.), depending on what kind of object it is conceptualizing or constructing.
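For contrast, the naive version of that "filler" (no object properties or physics, just a straight per-pixel blend between two generated frames) looks something like the sketch below; the property-aware prediction described above would replace the blend step. File names are hypothetical:

    import numpy as np
    from PIL import Image

    def blend_frames(path_a, path_b, num_inbetween=3):
        """Naive in-betweening: linearly cross-fade between two same-sized frames.
        No notion of objects, weight, or motion; purely a pixel blend."""
        a = np.asarray(Image.open(path_a), dtype=np.float32)
        b = np.asarray(Image.open(path_b), dtype=np.float32)
        frames = []
        for i in range(1, num_inbetween + 1):
            t = i / (num_inbetween + 1)          # 0 < t < 1
            mixed = (1.0 - t) * a + t * b        # per-pixel linear interpolation
            frames.append(Image.fromarray(mixed.astype(np.uint8)))
        return frames

    # Real interpolators (optical-flow or learned methods) estimate motion
    # instead of blending, which is closer to the idea described above.
    for idx, frame in enumerate(blend_frames("frame_000.png", "frame_001.png")):
        frame.save(f"inbetween_{idx}.png")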
I've been following r/StableDiffusion on Reddit for a while and was wondering whether this can also be used for anything that doesn't look like a cheap fantasy or science fiction novel cover.<p>This is an honest question: I haven't seen examples of anything else, so I have to wonder whether the models they are using are specialized for that sci-fi and fantasy "airbrush/digital" style. Why is that?