>The complaint includes a section attempting to explain how Stable Diffusion works. It argues that the Stable Diffusion model is basically just a giant archive of compressed images (similar to MP3 compression, for example) and that when Stable Diffusion is given a text prompt, it “interpolates” or combines the images in its archives to provide its output. The complaint literally calls Stable Diffusion nothing more than a “collage tool” throughout the document. It suggests that the output is just a mash-up of the training data.<p>I've seen the collage tool argument several times, and I don't agree with it. But I can understand <i>why</i> people believe it.<p>You see, there's a <i>very large</i> number of people who use AI art generators as a tracing tool. Like, to the point where someone who has never touched one might believe that it literally just photobashes existing images together.<p>The reality is that there are three ways to use art generators:<p>- You can tell it to generate an image with a non-copyright-infringing prompt, e.g. "a dog police officer holding a gun"<p>- You can ask it to replicate an existing style by adding keywords like "in the style of <existing artist>"<p>- You can modify an existing image. This replaces the <i>random noise</i> that normally seeds the generation process.<p>That last one is confusing, because it makes people think that the AI itself is infringing when it's only the person using it. But I could see the courts deciding that letting someone chuck an image into the model gives you liability, especially with all of the "you have full commercial rights to everything you generate" messaging people keep slapping onto these.<p>Style prompting is also legally questionable, though for different reasons. As about 40,000 AI art generator users have shouted at me over the past year, you cannot copyright a style.
But at the same time, producing "new" art that's substantially similar to copyrighted art is still infringement. So, say, "a man on a motorcycle in the style of Banksy" might be OK, but "girl holding a balloon in the style of Banksy" might not be, because the latter describes Banksy's actual <i>Girl with Balloon</i>. That prompt is basically asking the AI to regurgitate an existing image, or trace over something it's already seen.<p>I think a better argument would be that, by training the AI to understand style prompts, Stability AI is inducing users to infringe upon other people's copyright.
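The difference between the first and third uses above — starting from random noise versus starting from a user-supplied image — can be sketched in a few lines. To be clear, this is an illustrative toy, not Stable Diffusion's actual code: the function names, the unit-variance latent, and the simple square-root blend are my own assumptions (though the real img2img pipelines do expose a similar "strength" knob controlling how much of the input image survives).

```python
import numpy as np

def txt2img_initial_latent(shape, seed):
    """Text-to-image: generation starts from pure random noise."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

def img2img_initial_latent(image_latent, strength, seed):
    """Image-to-image: generation starts from the user's image, partially
    noised. strength in [0, 1]: 0 keeps the input image untouched, 1 is
    equivalent to starting from pure noise (txt2img).

    The sqrt blend is an illustrative assumption that keeps the result at
    roughly unit variance when the input latent has unit variance.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(image_latent.shape)
    return np.sqrt(1 - strength) * image_latent + np.sqrt(strength) * noise
```

This is why the third use raises the liability question: at low strength, the model's starting point — and therefore much of its output — is the copyrighted image the user supplied, not anything the model "imagined."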