An IP attorney’s reading of the Stable Diffusion class action lawsuit

91 points by spiffage over 2 years ago

18 comments

dns_snek over 2 years ago
> Stability AI has already announced that it is removing users' ability to request images in a particular artist's style and further, that future releases of Stable Diffusion will comply with any artist's requests to remove their images from the training dataset. With that removal, the most outrage-inducing and troublesome output examples disappear from this case, leaving a much more complex and muddled set of facts for the jury to wade through.

How can this possibly be a valid good-faith argument? Either they're in breach of authors' copyright, which extends to *every* piece of art that they included in the dataset without permission, or they're in the clear and aren't obligated to respond to removal requests.

This reads like damage control to me, an effort to temporarily silence the loudest critics.
acomjean over 2 years ago
> "The output represents the model's understanding of what is useful, aesthetic, pleasing, etc. and that, together with data filtering and cleaning that general image generating AI companies do, is what the companies consider most valuable, not the training data."

This didn't make any sense to me. Without the curated training data (images), how are they making the models?

No matter what, putting images into your machine and then selling the output generated with them, without compensating the original creators, is going to be seen as problematic. Machines aren't people.
xeyownt over 2 years ago
I don't understand how using an image as *input* to a model is a copyright infringement.

If the image is freely viewable (say, you can browse to it) and you just look at it, are you violating any rights?

It seems that a violation would only arise if you *used* the model to produce images that are derivative of the original image, the same way a counterfeiter would make a copy of it. Having the skill to copy is not the same as actually copying.
quitit over 2 years ago
Interestingly, it shows the tenuous nature of the plaintiffs' case, even before getting into the plaintiffs' large errors.

Since reasonably simplified information about SD is available, and/or the plaintiffs could have involved an expert to review their claims, it does raise the question of whether the function of the lawsuit is more about rattling chains than about the merits of the argument, i.e. a deliberate ploy to extract a settlement.
sebstefan over 2 years ago
> Stability AI has already announced that it is removing users' ability to request images in a particular artist's style

I hope it returns when they win and get rid of this legal bullying.
kmeisthax over 2 years ago
> The complaint includes a section attempting to explain how Stable Diffusion works. It argues that the Stable Diffusion model is basically just a giant archive of compressed images (similar to MP3 compression, for example) and that when Stable Diffusion is given a text prompt, it "interpolates" or combines the images in its archives to provide its output. The complaint literally calls Stable Diffusion nothing more than a "collage tool" throughout the document. It suggests that the output is just a mash-up of the training data.

I've seen the collage tool argument several times, and I don't agree with it. But I can understand *why* people believe it.

You see, there's a *very large* number of people who use AI art generators as a tracing tool. Like, to the point where someone who has never touched one might believe that it literally just photobashes existing images together.

The reality is that there are three ways to use art generators:

- You can tell it to generate an image with a non-copyright-infringing prompt, i.e. "a dog police officer holding a gun"

- You can ask it to replicate an existing style, by adding keywords like "in the style of <existing artist>"

- You can modify an existing image. This is in lieu of the *random seed image* that is normally provided to the AI.

That last one is confusing, because it makes people think that the AI itself is infringing when it's only the person using it. But I could see the courts deciding that letting someone chuck an image into the model gives you liability, especially with all of the "you have full commercial rights to everything you generate" messaging people keep slapping onto these.

Style prompting is one of those things that's also legally questionable, though for different reasons. As about 40,000 AI art generator users have shouted at me over the past year, you cannot copyright a style. But at the same time, producing "new" art that's substantially similar to copyrighted art is still illegal. So, say, "a man on a motorcycle in the style of Banksy" might be OK, but "girl holding a balloon in the style of Banksy" might not be. The latter is basically asking the AI to regurgitate an existing image, or trace over something it's already seen.

I think a better argument would be that, by training the AI to understand style prompts, Stability AI is inducing users to infringe upon other people's copyright.
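For readers who haven't touched the tooling, the three usage modes listed in the comment above roughly map onto three calls in Hugging Face's `diffusers` library. This is only an illustrative sketch, not anything from the thread: the model id, the input file name, and the prompts are all assumptions.

```python
def style_prompt(subject: str, artist: str) -> str:
    """Build the 'in the style of <artist>' prompt form described above."""
    return f"{subject} in the style of {artist}"


def generate_examples():
    # Heavy imports kept local: running this needs a GPU and downloads
    # several GB of model weights.
    import torch
    from PIL import Image
    from diffusers import (StableDiffusionPipeline,
                           StableDiffusionImg2ImgPipeline)

    model = "runwayml/stable-diffusion-v1-5"  # illustrative model id

    txt2img = StableDiffusionPipeline.from_pretrained(
        model, torch_dtype=torch.float16).to("cuda")

    # Mode 1: plain, non-infringing prompt.
    plain = txt2img("a dog police officer holding a gun").images[0]

    # Mode 2: style prompt targeting a named artist.
    styled = txt2img(style_prompt("a man on a motorcycle", "Banksy")).images[0]

    # Mode 3: img2img -- the user's own image replaces the random
    # starting noise that the sampler would otherwise begin from.
    img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
        model, torch_dtype=torch.float16).to("cuda")
    modified = img2img(prompt="the same scene as an oil painting",
                       image=Image.open("input.png"),  # hypothetical file
                       strength=0.6).images[0]
    return plain, styled, modified
```

Note that only mode 3 takes a user-supplied image at inference time, which is why it is the mode most often mistaken for the model itself "photobashing."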
scotty79 over 2 years ago
> Stability AI has already announced that it is removing users' ability to request images in a particular artist's style and further, that future releases of Stable Diffusion will comply with any artist's requests to remove their images from the training dataset.

This is incredibly disheartening. Who knows how long it will take for the tech to progress to the point where anyone will be able to train and run models unrestricted, without dealing with lawyer nonsense.
consumer451 over 2 years ago
A lawyer who works on the YouTube channel Corridor Crew posted a decent breakdown of this lawsuit recently as well:

https://news.ycombinator.com/item?id=34479857
anigbrowl over 2 years ago
Great write-up. SD's removing the ability to imitate styles will probably go a long way to quell objections, though it will be interesting to see if there's a future legal split over the styles of living versus dead artists. I don't imagine that anyone would object to 'autoseurat', for example.

I can see a future dispute arising over outpainting (beginning with an existing copyrighted work), but there the infringement and the identity of the infringer (the user, not the toolmaker) are clearer.
shanebellone over 2 years ago
I've been saying this since it came out...

Stable Diffusion is equivalent to hip-hop sampling in the '80s and '90s. The outcome is obvious.
cycomanic over 2 years ago
I don't really understand the argument that Danger Mouse's Grey Album is different from just a random "mash-up" because of the artistic merit behind it. Sure, the Grey Album is likely much more pleasant to listen to and would likely be considered worthy of copyright itself, where a random mash-up might not be. That doesn't change the fact that Danger Mouse had to ask permission to use Jay-Z's and the Beatles' work (and likely had to pay), or would otherwise have violated copyright. So how is that argument relevant? Nobody is arguing that composing images via Stable Diffusion prompts (like making a collage) is not a creative process. The argument is whether one has to have the permission/license of the original creators.
philipwhiuk over 2 years ago
It's interesting that the IP attorney cites The Grey Album as an example of something that is legal, when the reality is that the case was never brought because the original artists' wishes made it unattractive for EMI to pursue.
rafale over 2 years ago
I hope the law will converge to this: as a human, I don't need a license to look at and get inspired by art. But I am not allowed to feed that same data to a machine as a training dataset without proper authorization from the owner.
gcoakes over 2 years ago
Am I the only one who thinks this just isn't defined well enough to be decided by the judiciary? It should be legislated. My opinion is that ML training should be treated as distinctly different from human learning.
layer8 over 2 years ago
> future releases of Stable Diffusion will comply with any artist's requests to remove their images from the training dataset.

How does this work? Do they retrain the model from scratch every week? Or is it somehow possible to retroactively remove specific training-set items from an already-trained model?
mensetmanusman over 2 years ago
"LLMs are illegal because anything they see is owned by other people"

The Disney protection act rears its head…
tshadley over 2 years ago
"[The complaint] argues that the Stable Diffusion model is basically just a giant archive of compressed images (similar to MP3 compression, for example) and that when Stable Diffusion is given a text prompt, it 'interpolates' or combines the images in its archives to provide its output. The complaint literally calls Stable Diffusion nothing more than a 'collage tool' throughout the document. It suggests that the output is just a mash-up of the training data."

As noted in the OP, this is an outstandingly bad description of deep neural networks, and the lawsuit should fail once the court hears an explanation from any competent practitioner.

However, a correct description would make the lawsuit far more interesting, IMO. Diffusion models can be compared to a superhumanly talented artist who can be cloned without limit by anyone with the software and hardware. How does this entity affect social well-being? How should existing laws be modified, if at all, with the welfare of humanity in mind?
Animats over 2 years ago
The author makes the point that copyright registration (which you do online with the Library of Congress in the US)[1] is required for copyright enforcement litigation. And, quite possibly, it may be required for DMCA enforcement.

Now, that could work out. Major movie studios and recording companies do file copyright registrations and submit a deposit copy, but few others bother. It seems that you can *send* a DMCA takedown request without a copyright registration, but you can't *enforce it in court* without one.[2] This raises the question: if you as a service receive a DMCA takedown request, should you ask the requestor for proof of copyright registration, and if they don't provide it, ignore the request?

[1] https://www.copyright.gov/registration/

[2] https://www.traverselegal.com/blog/is-a-registered-copyright-necessary-for-a-dmca-takedown/