The infancy period of this technology is fascinating.<p>Think about computer graphics 15 years ago. Beowulf came out in 2007 and was developed in the preceding years, so let's call it 15+ years old. It sat right in the uncanny valley: it didn't look real, but it looked realistic. It was visually interesting, but my brain kept telling me "this isn't correct".<p>And now some modern game engines do more realistic rendering than that in real time.<p>Now look at these generative models. Some state-of-the-art ones with humans in the loop are pretty convincing, but it's slow work. The more general ones, like this, make wonderfully interesting images about which our brains immediately say "that's not correct".<p>But where will this technology be in another 15 years? I think the possibilities for entertainment are really interesting. Imagine a D&D game where the GM vocally tells the AI what to generate, makes small tweaks, and the players see the results.
The related blog entry "AI doesn't understand scale" is hilarious: <a href="https://ai-weirdness.ghost.io/ai-doesnt-understand-scale/" rel="nofollow">https://ai-weirdness.ghost.io/ai-doesnt-understand-scale/</a>
Prompt design / learning how to communicate effectively with AIs is going to be the next decade’s programming superpower.<p>As an aside, are there any good approaches for producing this kind of generative art on a CPU-only system?
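On the CPU question: the PyTorch-based notebooks will generally run if you point them at the CPU device explicitly and shrink the canvas, just expect minutes per iteration instead of seconds. A minimal sketch of the device-selection part (the sizes and learning rate here are illustrative, not from any particular notebook):<p><pre><code>import torch

# Fall back to CPU when no CUDA device is present; the rest of the
# pipeline is unchanged, just much slower.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Smaller canvases keep CPU runtimes tolerable (illustrative size).
width, height = 256, 256
latent = torch.randn(1, 3, height, width, device=device, requires_grad=True)
optimizer = torch.optim.Adam([latent], lr=0.05)

print(f"optimizing a {width}x{height} image on {device}")</code></pre>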
VQGAN+CLIP outputs seem to have this dream-like quality: they are evocative of your prompts but don't actually picture them.<p>I find it fascinating because in some cases it's not as obvious as "lump of white fluffy matter" = "sheep", yet it still manages to evoke the prompt in our brains.<p>I'll sometimes get an unrecognizable blob, but if I quickly ask my SO "what is this?", she gets it... unless she consciously studies it!<p>Fascinating.
My city is in covid lockdown right now, so I have been passing some of the time playing with these notebooks.<p>It is oddly addictive.<p><a href="https://photos.app.goo.gl/t41uLs3Wogmrgn887" rel="nofollow">https://photos.app.goo.gl/t41uLs3Wogmrgn887</a>
The underlying problem these elaborate prompts seem to solve is that the internet contains many pictures, and few of them look very beautiful.<p>If you look at all the internet pictures of sheep, many will be unexciting, depicting a low-contrast sheep in a foggy landscape.<p>So to get a picture with strong saturation and clear lines, it helps to add text that is usually associated with pictures that have those qualities, like "HD wallpaper" or "made with unreal engine". Most "wallpapers" may be of dubious artistic quality, but muted colors and a lack of saturation will generally not be their problem.<p>This is of course not the only problem with the model. It doesn't even produce a clear image of a sheep... but that will probably get better with larger models and more training. Similarly, it doesn't seem to have a sense of overall composition and tends toward fractal or tiling-like images. But those problems are probably orthogonal to the fact that the model doesn't per se try to make good pictures, just average ones for the description you give it.
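To make the mechanism concrete: the guidance signal is just similarity between the image's CLIP embedding and the prompt's CLIP text embedding, so appending style keywords literally moves the optimization target. A rough sketch assuming the openai CLIP package (the prompt wording is illustrative):<p><pre><code>import torch
import clip  # OpenAI's CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Same subject, with and without the style suffix; the suffix is the trick.
prompts = [
    "a sheep in a field",
    "a sheep in a field, HD wallpaper, made with unreal engine",
]
tokens = clip.tokenize(prompts).to(device)

with torch.no_grad():
    text_features = model.encode_text(tokens)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# During generation, the optimizer pulls the image embedding toward the
# chosen row of text_features, so the suffix drags the result toward the
# saturated, high-contrast pictures that tend to carry those labels online.</code></pre>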
I played around with these notebooks a while back, and wondered what you get if you jointly optimize for several different prompts. Has anyone tried this? (Or is this what the article is about?)
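If I remember right, some of the popular notebooks already support multiple prompts (often separated with "|"); under the hood it's typically just a weighted sum of the per-prompt CLIP losses. A minimal sketch with hypothetical names, not the article's code:<p><pre><code>import torch

def joint_loss(image_features, text_features, weights=None):
    # image_features: (1, d) L2-normalized embedding of the current image.
    # text_features:  (n, d) L2-normalized embeddings, one row per prompt.
    sims = image_features @ text_features.T   # (1, n) cosine similarities
    losses = 1.0 - sims                       # lower = closer to that prompt
    if weights is None:
        weights = torch.ones(text_features.shape[0])
    return (weights * losses).sum() / weights.sum()

# Toy usage with random embeddings standing in for real CLIP features:
img = torch.nn.functional.normalize(torch.randn(1, 512), dim=-1)
txt = torch.nn.functional.normalize(torch.randn(3, 512), dim=-1)
print(joint_loss(img, txt))</code></pre>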
Here's a video of an "infinite scroll" depiction of a poem using this technology: <a href="https://twitter.com/mewo2/status/1414649438581268486" rel="nofollow">https://twitter.com/mewo2/status/1414649438581268486</a><p><a href="https://www.youtube.com/watch?v=Jbn1aJuarIU" rel="nofollow">https://www.youtube.com/watch?v=Jbn1aJuarIU</a>
I'm kind of fascinated by how internet hype speech is taking over as labels for categories of image representation and art styles.<p>[filed under: "ultra cool comment trending as a meme on reddit" ;-) ]