
High Court rules that Getty vs. Stability AI case can proceed

110 points by AndrewDucker over 1 year ago

7 comments

thorum over 1 year ago
Important to note this is not a ruling about copyright infringement: Stability has not yet even presented a defense on that subject. Instead they argued that although they are a UK-based company, no people or computers residing in the UK were involved in the development of Stable Diffusion, so a UK court has no jurisdiction on the matter. The judge didn’t find the evidence for this (which includes the CEO saying he asked around and is “confident” it’s true) conclusive enough, so things will move forward. It may turn out to be true later, but Getty will get a chance to present counter-evidence, and so on.
visarga over 1 year ago
I don't think copyright has much meaning left in it anymore. Any work can be "extracted" into its elements, ideas and style, and recombined in a million ways.

You could generate a billion images with SD and train the next model on them. Make sure they don't look close to copyrighted works. Being AI generated, they have no copyright. You can still use real data as well if it is in the public domain.

If you do this enough, the initial copyrighted dataset is going to be further removed from the model. The model can't reproduce a copyrighted work because it hasn't seen any of them during training.

But more importantly, this process strictly separates ideas from expression and trains only on ideas without copyrighted expression. If authors complain, it means they want to own ideas and styles.

You can also use copyrighted works to train a classifier to rank the quality of training examples, and apply it to filter your synthetic data to be higher quality.

You can even train an RLHF model to say when two works are "close enough" to constitute an infringement, and double down on safety by ensuring you don't generate or use risky works.

That's why I was saying that I don't think copyright has much meaning left in it anymore. Knowledge wants to be free; it travels, shape-shifts and evolves. It does not belong to any one of us except if we keep it to ourselves.
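A minimal sketch of the filtering step this comment describes: generate synthetic images, then drop anything too close to a reference (copyrighted) set before retraining. The embed() helper and the 0.95 threshold are hypothetical placeholders, not anything the commenter or Stability specifies; a real pipeline would use a learned image encoder and a tuned cutoff.

```python
import numpy as np

def embed(image) -> np.ndarray:
    """Placeholder embedding: flatten the image and L2-normalise it.
    A real pipeline would use a learned image encoder here."""
    vec = np.asarray(image, dtype=np.float32).ravel()
    return vec / (np.linalg.norm(vec) + 1e-8)

def filter_synthetic(synthetic_images, reference_images, threshold=0.95):
    """Keep only synthetic images whose cosine similarity to every
    reference (e.g. copyrighted) image stays below the threshold."""
    refs = np.stack([embed(img) for img in reference_images])
    kept = []
    for img in synthetic_images:
        sims = refs @ embed(img)  # cosine similarities (unit-norm vectors)
        if float(np.max(sims)) < threshold:
            kept.append(img)
    return kept
```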
asperous over 1 year ago
The training-jurisdiction question is interesting. Are future companies going to exclusively train their models in a country that lacks copyright protections?

Seems like jurisdiction would be based on the copyright of the allegedly infringed images, and UK-based users creating copyright-infringing copies in the UK.

But that's apparently not the law or case law in the UK yet.
wokwokwok over 1 year ago
> The idea here is that Stable Diffusion "memorised" the Getty logo, and could place it on outputs on demand.

> This is no longer possible as far as I can tell.

? Why would it no longer be possible?

Stability has exactly zero way of updating the weights once they have made them public. Is the suggestion that this was only possible on 1.4, and that 1.5 and XL don't have the issue?

...or somehow what the model was previously capable of doing is now no longer possible?

That seems like enormously unsubstantiated speculation.

We have proof from multiple independent studies that these models *in general* memorize a small percentage of their training data, to the degree that reasonable reconstructions of the original training images can be recovered from the model.

There is, to my knowledge, no mitigation of this that has been implemented, or even *conceptualized*, by either Stability or anyone else.

Interesting read regarding the *actual ruling* the judge made, but this is some "opinion here" commentary, which seems out of place; what the court *actually rules* is the interesting part of this.
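For context on the memorization claim, a rough sketch of the kind of probe such studies run: sample repeatedly from training captions and count near-duplicates of the originals. The generate callable and the pixel_similarity metric are stand-ins invented for illustration; published work uses a real diffusion sampler and much stronger perceptual metrics.

```python
import numpy as np

def pixel_similarity(a, b) -> float:
    """Crude near-duplicate score in [0, 1] from mean absolute pixel
    difference; real studies use stronger perceptual metrics."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(1.0 - np.mean(np.abs(a - b)) / 255.0)

def memorization_rate(generate, training_pairs, samples_per_prompt=16,
                      threshold=0.95):
    """Fraction of (caption, image) training pairs for which at least one
    generated sample is a near-duplicate of the original image."""
    memorized = 0
    for caption, original in training_pairs:
        samples = (generate(caption) for _ in range(samples_per_prompt))
        if any(pixel_similarity(s, original) >= threshold for s in samples):
            memorized += 1
    return memorized / max(len(training_pairs), 1)
```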
neya over 1 year ago
This is how a company whose failing business model is collapsing fast would act. I never liked stock-photo sites since their inception: they always followed dark patterns and spammed the entire search results with keywords that had nothing to do with them. If you searched for something "free" they would show you images of something that was NOT free. But hey, "royalty free" is close enough, I guess?

They went on to sue Google Images, which embodied the spirit of the open web (where you could simply right-click and save any image), to stop it from doing so.

And now that AI can pretty much produce stellar-quality images with just a single line of text input, they don't have anything else to do other than drag everyone into litigation.

I'm a photographer myself (Sony A7C2/24-70mm G-Master) and I have learned to accept that this is going to be the future. And though there will always be a market for real photographers, AI will take over the bulk of our jobs. And the programmer in me who spent ages trying to find the perfect background image for my website says that's not really a bad thing.
georgebarnett over 1 year ago
Maybe I've got the completely wrong end of the stick here, but why isn't an AI model treated as a fact, given it's essentially a factual summary of the most likely bit sequences to occur given an input sequence?
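To make the "summary of the most likely sequences" framing concrete, here is a toy bigram counter: it records which continuation most often follows a given token. This is only an analogy for illustration, not a claim about how diffusion models work or how courts would classify such statistics.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, how often each next token follows it."""
    counts = defaultdict(Counter)
    for current, nxt in zip(tokens, tokens[1:]):
        counts[current][nxt] += 1
    return counts

def most_likely_next(counts, token):
    """Return the most frequent continuation seen after `token`, if any."""
    if token not in counts or not counts[token]:
        return None
    return counts[token].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat slept".split()
model = train_bigram(corpus)
print(most_likely_next(model, "the"))  # 'cat' (seen twice after 'the')
print(most_likely_next(model, "cat"))  # 'sat' (tied with 'slept'; first seen wins)
```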
rvz over 1 year ago
Very predictable and unsurprising. Stability knew they needed a license for training on Getty's copyrighted images, given the presence of their watermark. OpenAI partnered with Shutterstock for DALL-E to avoid this legal headache.

The fair-use excuses here are absolutely weak in Stability's case, and this will only end with a licensing deal being made.