Stable Attribution

747 points | by mkeeter | over 2 years ago

72 comments

saurik · over 2 years ago
I gave it a photo I had Stable Diffusion 1.4 generate from the prompt "avatar for saurik". If you dig through the CLIP database, you will find that the model was trained on a ridiculously large number of copies of my Twitter profile photo, due to it being included when people screenshot popular tweets I've posted (which, notably, also means it is rather low resolution).

https://www.stableattribution.com/?image=e89f1e94-067b-4ab8-b1f5-d601bb55825d

https://pbs.twimg.com/profile_images/1434747464/square_400x400.png

Given that I only said "saurik" and SD came up with something that not only looks more than quite a bit like me but is also quite similar to the pose of my profile photo, I'd say *clearly* that photo would be one of the most important photographs in the database to show up when asking "which human-made source images were used by AI to generate this image"...

...and yet, whatever algorithm is being used here -- which I'm guessing is merely "(full) images similar to this (full) image" as opposed to "images that were used to make this image" -- isn't finding it. To me, that means this website is adding more noise than signal to the discussion (in that people might learn the wrong lessons or draw the wrong conclusions).
version_five · over 2 years ago
This appears to be just looking for the nearest neighbors of the image in embedding space and calling those the source data. By definition this finds similar-looking images, but it's not strictly correct to call it attribution. To some extent all of the training data is responsible for the result; for example, the model also learns from negative examples. The result here may feel satisfying, but it's overly simplistic, and calling it attribution misrepresents what it is.

(The silly story they have on the site doesn't really score any points either; it reminds me of the RIAA et al.)
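The site has not published its method, but the nearest-neighbor lookup this comment describes can be sketched in a few lines of numpy. The random vectors below are stand-ins for whatever CLIP embeddings and index the site actually uses; this is an illustration of the suspected technique, not the site's code.

```python
import numpy as np

def top_k_neighbors(query: np.ndarray, database: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k database embeddings most cosine-similar to query.

    query:    (d,) embedding of the image being "attributed"
    database: (n, d) embeddings of the training images
    """
    # Normalize so a dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q                      # (n,) cosine similarities
    return np.argsort(sims)[::-1][:k]  # indices of the k most similar images

# Toy demo with random "embeddings"; a real system would use CLIP vectors.
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 64))
query = db[42] + 0.01 * rng.normal(size=64)  # near-duplicate of image 42
print(top_k_neighbors(query, db, k=3))       # image 42 ranks first
```

Note that nothing here inspects the generative model at all, which is the commenters' core objection: similarity in embedding space says nothing about which examples actually shaped the model's weights.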
goldemerald · over 2 years ago
This is a great website, but not in the way the authors intended. Based on some of the examples they explicitly provide, it is clear to me that Stable Diffusion creates novel art. Here's a random example: https://www.stableattribution.com/?image=a2666aee-0a1a-411b-b0f9-0a06e39897c9

I will admit this is a nice tool for verifying that SD's creations aren't pure copies, so I think it will be useful for a time. But as AI-generated images start to taint future datasets, attribution is going to get significantly more complicated.
GaggiX · over 2 years ago
Calling the nearest neighbors of an image's CLIP embedding "attribution" feels really misleading. The model has been influenced by the entire dataset it was trained on; finding the most semantically similar images does not mean the AI is using that specific group of images as references. They probably have almost no influence compared to the entire size of the dataset.

P.S. I'm having fun uploading actual photos and art just to see the site tell me with confidence, "These human-made source images were used by AI to generate this image".

Edit: https://rom1504.github.io/clip-retrieval has always been there to explore the LAION dataset using CLIP image/text embeddings, without the need to mislead the user.

Edit 2: As shown in this tweet, https://twitter.com/kurumuz/status/1622379532958285824, they are just using CLIP L/14 to find the most semantically similar images.
fwlr · over 2 years ago
Uploading works by real human artists gives you a batch of results that resemble a reference board (mood board, inspiration board, etc.) the artist could have been looking at while creating their original work of art. Obviously it's not the actual reference board (the only way to get that is to ask the artist yourself), but it sure looks like what you'd expect their reference board to look like.

This site is grift, of course, and I doubt its creators expect it to sway anyone who knows it's just doing a nearest-neighbor search over image embedding vectors. But it's oddly humanizing to see that the AI uses reference boards too.
simonw · over 2 years ago
Upload a photo you took to prove to yourself that this tool is misleading (if not straight-up fraudulent), then downvote and move on.

You can already run exactly this kind of image similarity search against the Stable Diffusion training set using existing tools; for example: https://rom1504.github.io/clip-retrieval/?back=https%3A%2F%2Fknn.laion.ai&index=laion5B-H-14&useMclip=false
whatshisface · over 2 years ago
To actually accomplish what this purports to (the linked tool only searches for similar images and doesn't tell you anything about how information ended up inside the model), you could try removing individual images, or sets of images from the same artist, from the training dataset to see which outputs the resulting model loses the ability to create. It would be expensive to do for more than a few images, but given how helpful it would be for the debate about copyright and AI, I think it would be great if some researchers could try it.
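The ablation experiment proposed here is easy to state in code for a model small enough to retrain. The sketch below uses ordinary least squares on synthetic data as a stand-in for the (prohibitively expensive to retrain) diffusion model: drop one training example, refit, and measure how much a prediction moves.

```python
import numpy as np

def fit(X, y):
    """Ordinary least squares via the pseudo-inverse."""
    return np.linalg.pinv(X) @ y

def ablation_influence(X, y, x_test, i):
    """How much does the prediction at x_test change if example i
    is removed from the training set and the model is refit?"""
    full = fit(X, y)
    mask = np.arange(len(y)) != i
    reduced = fit(X[mask], y[mask])
    return abs(x_test @ full - x_test @ reduced)

# Synthetic data standing in for (training images, model outputs).
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
x_test = rng.normal(size=3)

scores = [ablation_influence(X, y, x_test, i) for i in range(len(y))]
most = int(np.argmax(scores))
print(most)  # index of the training example whose removal moves the output most
```

For a diffusion model, each call to `fit` would be a full (or partial) retraining run, which is exactly why nobody has ground-truth attribution data at that scale.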
throwaway-blaze · over 2 years ago
I was taught to paint by instructors, and then refined my abilities by studying the paintings of the old masters, right down to their brushwork and the core techniques visible in the paintings to all who see them.

Now I go and create a painting called Sunflowers. Does Van Gogh's estate own some of my work?
nickvincent · over 2 years ago
Yes, this doesn't use attribution techniques like influence functions or Shapley values that are popular in machine learning research, but I am pretty convinced that even a nearest-neighbor search is better than the current baseline offered by "AI art systems": shrug our shoulders and say nothing about the role of human-created training data in producing the outputs.

As far as I know, nobody is even thinking about doing the very expensive experiments needed to get ground-truth data for formal attribution techniques in the generative AI context (for a given prompt, retrain your model so you can see how the output changes when a particular training example or group of examples is omitted or added), so we're nowhere near building true attribution systems for these very large models. Centering the training data will be a net good for public discourse on the topic.

That said, I see why people want to push back on some of the language used here.
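For a toy value function and a tiny training set, the exact data Shapley values mentioned here can be computed directly. Everything below (the 1-nearest-neighbor "model", the four training points, the deliberately mislabeled example) is an illustrative stand-in; doing this for a real diffusion model would require the retraining experiments the comment describes.

```python
import numpy as np
from itertools import combinations
from math import factorial

def value(subset, X, y, X_test, y_test):
    """Value of a training subset = accuracy of a 1-nearest-neighbor
    classifier built from just those examples (0 for the empty set)."""
    if not subset:
        return 0.0
    idx = sorted(subset)  # sort so ties break the same way for every ordering
    correct = 0
    for xt, yt in zip(X_test, y_test):
        dists = np.linalg.norm(X[idx] - xt, axis=1)
        correct += y[idx][np.argmin(dists)] == yt
    return correct / len(y_test)

def shapley_values(X, y, X_test, y_test):
    """Exact Shapley value of each training point (feasible only for tiny n)."""
    n = len(y)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += w * (value(S + (i,), X, y, X_test, y_test)
                               - value(S, X, y, X_test, y_test))
    return phi

# Four training points, two classes; the last point is mislabeled on purpose.
X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 0])          # last label is wrong
X_test = np.array([[0.05], [1.05]])
y_test = np.array([0, 1])
phi = shapley_values(X, y, X_test, y_test)
print(phi)  # the mislabeled point contributes least
```

By the efficiency axiom, the values sum to the full training set's accuracy, so each training point gets a defensible "share" of the model's performance; that is the property a nearest-neighbor search cannot offer.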
armchairhacker · over 2 years ago
This is a really great approach and much better than "ban all AI-generated content because we can't find out who made what it was derived from".

Even if it only finds similar matches and not true attribution, I actually think that is better. Say I come up with a neat design but I'm not very famous, and later someone more famous comes up with the same design on their own. I don't deserve *attribution*, but I would argue I deserve *recognition*. Regardless of whether the popular design was *inspired by* or *derived from* the original, having a model like this match the popular design with the original, see that the original was created earlier, and give it recognition would be vindicating.

In fact, what if we created a neural network like this one to trace out huge DAGs linking every piece of media with its similar-but-earlier and similar-but-later counterparts? It would show the evolution of culture on a large scale: how various memes and pieces of culture get created, and where "artistic geniuses" likely get their inspirations. It would also function as a great recommendation engine.

As for copyright and royalties: the site's intro never mentions them, just "attribution" and "people's identities". And honestly, I don't think people deserve a cut from art an AI generated using their art unless the result is *extremely* similar. Most of the time they are not that similar: the AI takes one artist's work (which would not be enough training data on its own) and mixes it with many others, like humans do, and I don't believe the two are different in a way that makes the AI mixer preserve copyright.
granularity · over 2 years ago
I like the concept, but if you upload a photo you took, the page will tell you:

    "These human-made source images were used by AI ... to generate this image."

where "this image" is your photo.
throwaway69123 · over 2 years ago
I just tried this with an image I took with my phone, and it gave me 10-15 images that the "AI" supposedly used to generate my image, proving this is an absolute fraud of a concept.
convexfunction · over 2 years ago
You ever feel like this specific propaganda war is actually unwinnable? Many people are extremely motivated to bullshit the public (usually sincerely, though I kind of doubt it in this case), and from what I've seen, the public is far more willing to believe the three extremely online artists they've heard an opinion on the topic from than the one software engineer/data scientist who actually knows half a thing about machine learning, let alone the growing cornucopia of papers and high-production-value websites that seem to say "it's just a plagiarism machine" if you don't know anything about the subject, versus the approximately one website I've ever seen that says "no, you are being lied to".

I'd like to believe this isn't one of those things where we can only move on once everyone who believes the various correlated falsities dies, but I don't think I can.
convexfunction · over 2 years ago
Yeah, it's bullshit, but digging into a specific point from their FAQ:

> Usually, the image the model creates doesn't exist in its training data - it's new - but because of the training process, the most influential images are the most visually similar ones, especially in the details.

It would be cool if this were true, but I don't think it is, because the prompt you used and the captions on the training images are being completely ignored. If two different words tend to be used in captions for very visually similar images, and you use just one of those words in your inference prompt, I'm pretty sure the images captioned with the word you used are much more "influential" on your output than the images captioned with the word you didn't use. (Like "equestrian" vs. "mountie" or "cowboy" or something.)
dzink · over 2 years ago
From the beginning of using Stable Diffusion in local and cloud instances, I've been prompting SD to generate objects I know nobody has ever drawn before: "Airplane by Tesla", "Taylor Swift flying in the clouds", "Little girl riding on an iridescent unicorn and chasing butterflies in the clouds", "Turkey as a judge", etc. I highly encourage everyone to try doing that. The results are absolutely atrocious in the beginning, and it takes many, many runs, short and long, with seeds guiding the model closer and closer to what I ask. It took a long time to get one instance of SD to make the invention look plausible, and trying on a new model copy/instance takes the results back to crap. That makes me suspect your guidance trains the instance you are using, and that your prompts and feedback substantially create the work. The model truly generates novel content based on the generator's input, and it takes effort to replicate a work it was trained on, likely by using multiple keywords the original was captioned with in many places. So you can get replicas if you try, but you can also draw replicas with brushes if you have enough skill. To reiterate: try to generate content you know nobody has ever drawn before (and google to verify it is truly original) and see how much effort it takes to get an actually good result. Sub-trained models can be steered in different directions, so it's possible that Midjourney or a heavily sub-trained cloud instance overfits to the originals; this is not universal, but every copy of the model is likely different, molded by the prompts and feedback it has been given.
steponlego · over 2 years ago
This thing is running scripts from 30+ domains; I would classify it as spyware at best. All my fans fired up, canvas inspection, you name it.
geuis · over 2 years ago
I agree with several other commenters on this. I went through maybe 20 different images, and in every case there were no clearly identifiable ties back to the "sources". If anything, I'm more impressed at what SD is able to do.

However, I have definitely found at least a handful of generated images over the last few months that were almost 100% the same as a training image. I don't have the references handy, but it shouldn't be too hard to replicate. I was looking at different artists I liked on ArtStation who had few examples of their work. I then used a fairly standard prompt and only changed the references to the artists I was testing. In several of those instances, the images SD generated were near 1:1 with one of the source images the AI was trained on.
f38zf5vdt · over 2 years ago
This is a company that lets you search for images in a training dataset that have a high cosine similarity with a given image. It appears to be the same as the open source software published by LAION:

https://github.com/rom1504/clip-retrieval

It does not actually show you how images were used to train a generative AI.
minimaxir · over 2 years ago
This seems like https://haveibeentrained.com with counterproductive pretentiousness.
cush · over 2 years ago
What a gorgeous site. Love the aesthetic and the idea!

In Who Owns the Future, Jaron Lanier proposes that the only path to an economy with a sustainable middle class not ruled by Google and Facebook is through this kind of attribution and subsequent micro-royalties. It's a fascinating read.
mattbee · over 2 years ago
I like this because they are trying to show how AI is a copyright laundry.

I can see other commenters picking apart its method of heuristically guessing at source images from the training data. That obviously won't be accurate, or a full picture, but I wonder if it would convince a judge.

An interesting challenge for these heuristics would be to take the picture under test along with its prompt, retrain the model without the training pictures it identifies, and regenerate using the same prompt to see whether the output is remotely similar.

Obviously that would be hilariously expensive and slow for a casual web service like this, but not beyond the realms of possibility for a wealthy copyright holder.

E.g. if a prompt includes "in the style of Kinkade", and you could subtract all of Kinkade's copyrighted images from the training data, would the model still be able to produce anything like his work? If not, Thomas Kinkade might have a copyright case against people who publish AI art "in the style of Kinkade", because he could show that his input was the major contributor to any lucrative output, even if nobody could pin down the cause and effect.
anothernewdude · over 2 years ago
That's not how the AI works. It also ignores all the work by the language model that goes into the art; the language model can fill massive gaps in the image generation.

The negative examples are also instructing the AI how to make an image, not just the most similar images.

This is a bad joke that reinforces poor understanding of how image generation works.
vivegi · over 2 years ago
If the training dataset is truly open, the raw inputs (or URLs to the sources) should be available. Isn't a direct image lookup on the source data a better way to do attribution? (For example, using methods similar to Google Image Search.)

At least from a legal perspective, protection (i.e., indemnification) should be offered to those who clearly attribute their sources (which means they shouldn't be violating copyrights in the first place), and those who don't use clear attribution should bear the burden of proving that they are not violating copyrights.
0xrisk · over 2 years ago
We built https://haveibeentrained.com, which does the same CLIP retrieval process, and arranged for artists to be able to opt out of future trainings with Stability and LAION.

If this is just CLIP retrieval, that raises some ethical problems with the pretense of this site. It could make artists look silly for depending on an overstated claim of provenance, or worse still, have artists pursue AI artists because an image looked kind of like their own artwork, with nothing more behind it.
Thorentis · over 2 years ago
So now we have to address the issue of whether all art is derivative. If I go to art school and learn from the masters, do I need to give attribution to all the artists whose work I studied before creating my own masterpieces?

Underlying this whole push to give attribution via AIs, there seems to be a general understanding that AIs like SD are doing something very different from "creating" art, and are merely "combining" art. I agree with this view, but it doesn't seem to be said explicitly very often.
pk-protect-ai · over 2 years ago
Utter bullshit. Even the image of Pelé on the site's main page shows no actual stylistic dependency on the images the site proposes as its sources.

I uploaded an image of mine made with SD: https://ibb.co/3NxPdNw. The list of proposed sources seems to be very random.

EDIT: https://www.stableattribution.com/?image=541edf3c-6281-4177-b564-a13087e39ce9

Bullshit.

People learning to draw learn every day from other images. They use others' styles, and sometimes they come up with a unique style that was never used before. SD does the same. You can't recover the original images. You can get a similar style or composition, but you will never recover the original image.
margorczynski · over 2 years ago
When writing code at my n-th job, should I pay something to the n-1 companies I worked for before, since my skills were honed working on their proprietary code? If I ever hear that Oracle is searching for executives, I'll point them your way; I'm sure you'll get along just fine.
dang · over 2 years ago
Related ongoing thread:

*Getty Images v. Stability AI – Complaint* - https://news.ycombinator.com/item?id=34668565 - (194 comments and counting)
qwerty456127 · over 2 years ago
Is there still any chance for a human to come up with something new that an AI can't? Haven't all the possible ideas and/or "sub-ideas" already been implemented and fed into "AIs"?

I suspect humanity is omniscient, and its only problem is the lack of any way to keep everything it knows in one mind simultaneously and connect the dots. An "AI" seems to be a solution to this problem.

Even an individual human could be almost omniscient if he could simultaneously put everything he has ever known/thought/seen into his working memory.
sinuhe69 · over 2 years ago
Haha, I just uploaded a photo of mine to test, and SA promptly reported a dozen supposedly human-made "sources" for my photo! I find it hilarious! No, that's not how attribution should work.
okamiueru · over 2 years ago
So, is this just a reverse image search, or does it do anything more clever, like finding stronger matches in the latent space? For example, the "style" could match while the composition is completely different, etc.

Even setting aside whether it correctly attributes source data, I wonder if it doesn't also fail at the concept of attribution. So far it just shows you some pictures with no attribution, saying that whoever made *those* made *that*.

Am I missing something? Why doesn't it know who the "human-made sources" were made by?
hayley-patton · over 2 years ago
Here's a fixed point of Cliff Click's Twitter picture: https://www.stableattribution.com/?image=1e52d4cc-6ad4-4ac2-b034-99ea661d205b

Download the first input picture, search for it, and somehow the picture is AI-generated, ripping off itself, and somehow influenced by other images, despite the first input and the output being bit-identical.
puppycodes · over 2 years ago
I think we need to rethink the concept of attribution and how we can collectivize participation rather than define absolute owners. It feels like there are a lot of old ideas being forced onto new systems; sometimes they really just don't work that way. When art or design is generated, copied, or remixed, it gains new contexts that are often just as meaningful as the original. In my opinion at least, it's what makes the internet beautiful.
HenriTEL · over 2 years ago
> people who saw it knew who made it

Yeah, for a small subset of insiders to a small subset of creators. Really, that was already a fiction prior to Stable Diffusion.

And then, why would human-made artwork not need attribution of the prior work that influenced it while AI-made work would require some? We could extend the right to quote: if we can't make it obvious that more than 10% of a given image came from another one, attribution makes no sense.
wiz21c · over 2 years ago
Attribution is half of the problem. I'm not happy at all with people putting my code behind the closed wall of an AI to regurgitate some other code inspired by it...

I say my code as in "the text of my program", not "the idea of my code" (I have no problem with people reusing my ideas, as they are mostly ideas I learned from someone else).
mnming · over 2 years ago
Off topic, but I really like the scrolling animation on the site. I wonder what tools they used to make it.
jfoster · over 2 years ago
This seems to be a similar-image finder applied to the dataset used to train Stable Diffusion, I think? I uploaded a photo I had taken a few minutes earlier, and it showed me similar photos.

My point is that it doesn't seem to prove that any of the images it finds are actually the source images.
poxrud · over 2 years ago
I find the original images used to generate the art to be much more beautiful and emotion-evoking than the strange and lifeless AI images. Maybe it's just my subconscious, but for me something always seems off with AI-generated art.
nephanth · over 2 years ago
Uh, what? That's not how it works. SD doesn't just take inspiration from a few similar images; it uses the weights trained from every image, every time. If you want attribution, you need to give it to the whole dataset, or not at all.
scotty79 · over 2 years ago
A very nice initiative, but undermined by the fact that it doesn't give attribution. It only asks for one to be provided and depends on the honesty of the users.

Maybe just do a reverse image search in Google for a first approximation of attribution?
mikewarot · over 2 years ago
I would like to be able to take my 300,000 photos and throw them at something like the embedded latent space behind Stable Diffusion, to have it rate them, add keywords, etc. This would let me then put the top 1,000 on Flickr.
can16358p · over 2 years ago
How does it even work? AFAIK SD works in a "convoluted" latent space that is the result of all the training data; it's not like it takes a few images and smashes them together to create a new one.
Hellmitioksldf · over 2 years ago
Apparently AI is held to a double standard?

I have not seen artists doing the same thing very much. Perhaps an "inspired by", but never a list of all the images they have seen that led them to be able to draw a new enough image.
jaimex2 · over 2 years ago
Everything is a remix.

I don't know what this is trying to achieve other than wasting everyone's time.

Those human artists saw art from other artists, who saw art from other artists, who saw art from other artists, etc.
intrasight · over 2 years ago
As one of my best friends told me in 1971 (we were six!), every image and sound we produce has already been produced somewhere else in the infinite universe.
esskay · over 2 years ago
This is a bit flawed. Given a non-AI photo, it just picks up similar photos and claims they were used to generate it.

It's a nice, but, as I say, flawed concept.
waterproof · over 2 years ago
This site doesn't even provide actual attribution to the original artists? And yet they encourage you to share those same images without attribution. Wild.
gfodor · over 2 years ago
Ah, a nice visual proof that these AI systems are actually synthesizing images with a degree of inspiration from prior art, similar to the way humans do.
Imnimo · over 2 years ago
How did this get to the front page of HN? This is so transparently asinine. In what universe does finding a nearest neighbor constitute attribution?
MisterBastahrd · over 2 years ago
I gave it a photo I took of my dog. It gave me a bunch of images that had nothing to do with my dog. It's interesting, but not all that useful.
braingenious · over 2 years ago
I wonder how this Super Altruistic Startup is going to monetize other than by becoming the art equivalent of patent trolls, wildly throwing out lawsuits at anything their algorithm pegs as AI-generated along with a positive "similar" outcome from a reverse image search.

I have yet to hear firsthand from any professional artist a single incident of Stable Diffusion causing them harm or lost revenue, but I have heard from a *lot* of armchair lawyers salivating at the idea of demonizing/criminalizing anybody who uses a piece of novel software.
fromtheabyss · over 2 years ago
Hip hop, rap, and DJ mixes are derivatives of art. Each is legally allowed without attribution. AI will be legally permitted to do the same.
waffletower · over 2 years ago
This site is broken from my vantage point. I uploaded a Stable Diffusion render of a banal trash can sitting on a lawn. It returned a picture of fiery ruins floating in the sky. While the attributions it returned for this image I did not upload looked somewhat similar in style, they were different enough to indicate that Stable Diffusion created something new and different, assuming Stable Attribution is definitive in referencing the relevant training sources. I don't believe it is.
alphatozeta · over 2 years ago
Basically an advert for Chroma, the vector DB company, since this is essentially running a semantic similarity search. I wouldn't be surprised if they're running something like clip-interrogator, or just CLIP itself, and then an approximate vector similarity search over the database of (image, vector) pairs in their dataset.
xyproto · over 2 years ago
Should human artists also strive to attribute every other artist who has inspired them when publishing an image? No.
wellthisisgreat · over 2 years ago
The copyright people are insufferable. In my experience, those who complain the most about copyright and "AI art theft" and whatnot either:

1) have a vested business interest in not automating this kind of stuff (e.g., they churn out low-effort, quick-turnaround copy/design), so the automation is coming for yet another mind-numbing, half-automated job (sorry, non-stop Cmd+C/Cmd+V isn't really "creating" anything novel or of value). Yeah, those SEO spam gigs will be written by robots now; how quaint.

2) are copyright justice warriors who are raging for the sake of it, meaning they aren't even designers, artists, writers, or anything like that.

3) do create some kind of art/text etc., but of a quality that does not put them at risk of earning the badge of honor that is "in the style of <copyright justice warrior>".

At the core of all this rage is just envy at someone being smarter and more successful than you are: in computer science, in statistics, in business, in art, in writing. Yeah, someone did something so good it made it into Stable Diffusion as a "named prompt". Yeah, someone was actually capable of creating Stable Diffusion.

Protectionists are disgusting.
madsmith · over 2 years ago
I gave it a picture of the mahi-mahi and jambalaya I was having in a restaurant. It showed me pictures of food. Fair enough. But it's completely disingenuous to claim it finds the attribution of any image you give it.

Finding similar pictures in an embedding space does not mean any of those pictures are part of the attribution chain any more than anything else is.
pdntspa · over 2 years ago
Interesting... I get zero results for every image I give it, all made with SD 1.4/1.5.
low_tech_punk · over 2 years ago
I wish there were a similar tool for text-based generative models such as GPT-3.
martopix · over 2 years ago
Oops, the random image it chose as an example to show me was pretty NSFW :D
teaearlgraycold · over 2 years ago
Even if this did work, why would people want such a piece of software?
LudwigNagasena · over 2 years ago
The story sounds like either satire or a blatant misrepresentation of reality. The internet was full of images without attribution long before Stable Diffusion appeared. Why do people feel compelled to invent nonsensical narratives to demonize AI?
blitz_skull · over 2 years ago
Am I the only person who looks at this and thinks, "So what?"

I mean, it's technically impressive, I guess. But why would anyone care about, or pay for, this product?
sogen · over 2 years ago
Isn't AI art derivative work? If so, they are infringing on copyrighted work.
spullara · over 2 years ago
This is a great way to show that Stable Diffusion doesn't copy.
EGreg · over 2 years ago
How would it know?
anigbrowl · over 2 years ago
無駄だ ("It's useless.")
breck · over 2 years ago
First, it's a beautiful site.

Second, a rant.

Look, if you are a photographer or artist or writer, your individual contributions to civilization are zero. I'm sorry I have to be the one to break the bad news to you. It's just the mathematical truth.

We are still in the childish (c)opywrong era of civilization, where people born into privilege were brainwashed into thinking they were special snowflakes whose creative contributions to humanity are far more valuable than they actually are.

Your contributions, I don't care if you are a modern-day Da Vinci, are but a grain of sand on a Mount Everest of human creation.

That photo you took? Far, far easier than the immense effort it took to build that camera and get it into your hands. That book you wrote? Far, far easier than the thousands of years it took to evolve the letters and words you used to write it. That song you sang? Trivial compared to the collective efforts of the hundreds of millions of people who pioneered music theory and instruments.

People who clamor for attribution ironically spend relatively little time digging into the histories of all the ideas they are building upon.

I'm not saying go out and plagiarize. I'm not saying stop creating. I am saying wake up and think from root principles about ideas and the absolute stupidity of the (c)opywrong regime. All these big AI models are ignoring (c)opywrong law, and you should too. Even better, contribute to the fight to pass an amendment abolishing (c)opywrong once and for all.

</endrant>
dqpb · over 2 years ago
This is complete and utter bullshit.
sweetrobot2k · over 2 years ago
Zzzzzz
raydiatian · over 2 years ago
Rather than retroactively telling the community to self-police, maybe we could ask our lawmakers to implement attribution legislation.
strangescript · over 2 years ago
I am influenced by everything I have ever seen, read, or heard. No one asks me to attribute where my influences came from when I create something.

Yes, AI is still a bit crude now, but in 10 years this is going to look like an old man yelling at the wind.

https://www.gettyimages.co.uk/detail/news-photo/an-unrestrained-demon-a-lightbulb-demon-is-displayed-as-a-news-photo/517351124?adppopup=true