
Why we chose not to release Stable Diffusion 1.5 as quickly

298 points | by dwynings | over 2 years ago

47 comments

machina_ex_deus over 2 years ago
I'm not a data hoarder, but from the moment Stable Diffusion was released I had a gut feeling that I should download everything available while it's there.

Somewhat similar gut feeling to when Popcorn Time was released, although it might not be exactly the same.

While I really wish I'm wrong, my gut tells me that broadly trained machine learning models available to the general public won't last, and that intellectual property hawks are one day going to cancel and remove these models and code from all convenient access channels.

That somehow international legislation will converge on the strictest possible interpretation of intellectual property, and those models will become illegal by the mere fact that they were trained on copyrighted material.

So, a reminder to everyone: Download! Get it and use it before they try to close the Stable doors after the horses have Diffused. Do not be fooled by the illusion that just because it's open source it will be there forever! Popcorn Time lost a similar battle.

Get it now while there are trustworthy sources. Once these kinds of things go underground, it gets much harder to get a trustworthy version.
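In that spirit, a minimal sketch of archiving a full model snapshot locally, assuming the Hugging Face huggingface_hub client; the repo id and target directory are illustrative, not an endorsement of any particular mirror:

```python
# Archive a full copy of the model weights locally while public mirrors exist.
# Assumes the huggingface_hub client; repo id and local_dir are illustrative.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="runwayml/stable-diffusion-v1-5",  # example archival target
    local_dir="./sd-v1-5-archive",             # keep a plain on-disk copy
)
print(f"Model snapshot saved to {local_path}")
```
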
Satam over 2 years ago
Based on a Reddit post [1], the author of this is Stability AI's chief information officer.

My very rough take on the situation: the company gained its notoriety by building on OpenAI's pioneering research, but with an important twist of releasing their models as unneutered open source. Now their openness is starting to falter due to strong pressure from outside forces.

If they're unable to keep playing the hardball game they themselves invented, I think their glory days will end as fast as they started. The competitive advantage was always their boldness. If they lose that, others will quickly take their place.

In general, I don't think tech that's as open, powerful and easily reproducible as these language models can be stopped. Sure, maybe regulations will delay it a bit, but give it a few years and any decent hacker or tinkerer will be dabbling with 5x better tech with 5x less effort.

[1] https://archive.ph/Z5sU3
pr337h4m over 2 years ago
"We've heard from regulators and the general public that we need to focus more strongly on security to ensure that we're taking all the steps possible to make sure people don't use Stable Diffusion for illegal purposes or hurting people."

"What we do need to do is listen to society as a whole, listen to regulators, listen to the community."

"So when Stability AI says we have to slow down just a little it's because if we don't deal with very reasonable feedback from society and our own communities then there is a chance open source AI simply won't exist and nobody will be able to release powerful models."

Looks like someone is leaning on them :(
thorum over 2 years ago
The author (Stability.AI's CIO) did an impromptu AMA on Reddit:

https://reddit.com/r/StableDiffusion/comments/y9ga5s/stability_ais_take_on_stable_diffusion_15_and_the/

His comments regarding RunwayML's release of 1.5 were especially interesting:

> "No they did not. They supplied a single researcher, no data, no compute and none of the other researchers. So it's a nice thing to claim now but it's basically BS. They also spoke to me on the phone, said they agreed about the bigger picture and then cut off communications and turned around and did the exact opposite, which is negotiating in bad faith."

> "I'm saying they are bad faith actors who agreed to one thing, didn't get the consent of other researchers who worked hard on the project and then turned around and did something else."
icelancer over 2 years ago
His answers on Reddit are downvoted, and the redditors are correctly pointing out that most of these "protections" smack of the fact that his investors want to stop giving things away and to close up source / resources for better monetization strategies.
minimaxir over 2 years ago
> At Stability, we see ourselves more as a classical democracy, where every vote and voice counts, rather than just a company.

After taking $100M in venture capital and two distinct drama events due to disorganization, this is unlikely to last.
aortega over 2 years ago
Powerful people are pulling strings to control AI everywhere. OpenAI is exactly the opposite of open. Now someone is pushing on Stability AI to close it up. I believe those models are more powerful or dangerous than they seem, and it got some people scared in some way.

I read that when some guys from 4chan started running the leaked NovelAI model, they generated porn non-stop for 20 hours or more, no sleep, no eating.
fsociety999 over 2 years ago
While they frame the post as if this is a positive and something they *want* to do, reading between the lines, it sounds to me like something has them rattled.

They mentioned regulators here, and I would be curious to hear the story behind that.

Don't want to go too tin-foil-hat, but it makes you wonder if a certain other AI company that claims to be "open" may be afraid of a company that actually *is* open and is applying political pressure.
Roark66 over 2 years ago
As always in such cases, this is 100% bull**. Either something is not working out for them and they have to delay, in which case they could've just said so, or this is some sort of pretense to show how "responsibility minded" they are.

The reality is that bad actors have the resources to train their own Stable Diffusion on a dataset of whatever they want to deepfake, and such delays do not slow them down one bit.

What it does slow down is normal people using those models.

From the smallest thing like MobileNetV3 through Whisper, Stable Diffusion, CodeGen, and BLOOM, these are huge productivity equalisers between the huge corpos and the little guy.

The same thing can be said about frameworks like Hugging Face's. Just recently I was looking for a way to classify image type (photo or not photo [clip art, cartoon, drawing]) in an Android app. Of course, the first hits on Google steer towards Microsoft Azure's paid API service. I was unhappy with having to use an over-the-Internet API (with potentially sensitive end users' private pictures), so in one day of work I managed to download a pretrained MobileNetV3, grab a couple of 10k+ image datasets, and write <50 lines of Python to tweak the last layer and fine-tune the network. On an RTX 2070, training took 10 minutes. Resulting accuracy on real data? 90%+. The model loads and infers in a few hundred ms on modern phones (instantiating and loading takes longer than the inference, BTW). This is priceless and 100% secure for end users. For those interested in the details: I use ncnn and Vulkan for GPU (mobile!) inference.

Every commercial model maker's wet dream is to expose the model through an API, lock it behind a firewall and have people pay for access. This is not just hugely inefficient, it is insecure by design.

Take Copilot, for example. I'm perfectly happy for all my hobby-grade code to be streamed to Microsoft, but no chance in hell I'll use it on any of my commercial projects. FauxPilot run locally, however, is on my list of things to try.

The first AI revolution was the creation of those super powerful models; the second is the ability to run them on edge devices.
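A minimal sketch of the kind of last-layer fine-tune described above, assuming torchvision's pretrained MobileNetV3 and an ImageFolder-style dataset; the paths, class layout and hyperparameters are illustrative, not the commenter's actual setup:

```python
# Fine-tune only the final classifier layer of a pretrained MobileNetV3
# for a 2-class "photo vs. non-photo" task. Paths and settings are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Expects ./data/train/photo/*.jpg and ./data/train/other/*.jpg (hypothetical layout)
train_set = datasets.ImageFolder("./data/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

model = models.mobilenet_v3_small(weights=models.MobileNet_V3_Small_Weights.DEFAULT)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
in_features = model.classifier[3].in_features
model.classifier[3] = nn.Linear(in_features, 2)   # swap in a 2-class head
model = model.to(device)

optimizer = torch.optim.Adam(model.classifier[3].parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

torch.save(model.state_dict(), "photo_classifier.pt")  # convert to ncnn separately for mobile
```
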
fxtentacle over 2 years ago
I think the most important part is this comment:

https://danieljeffries.substack.com/p/why-the-future-of-open-source-ai/comment/9884263

The people that he discredits as "leak the model in order to draw some quick press to themselves" are the researchers that are named in the Stable Diffusion paper. Yes, Stability.AI gave them lots of money. But no, they are not leaking the model, they are publishing their own work. It's university researchers, after all. And Stability.AI does NOT own the model.
13of40 over 2 years ago
Two thoughts I've had about Stable Diffusion:

1. The web UIs I have used take advantage of the same mental pathways as an electronic slot machine. Just like you can max out your bet on a slot machine and mash a button until you run out of credits, you can do the same on the hosted Stable Diffusion apps until you get a shareable hit.

2. Just like the dream you had last night, nobody wants to hear about it at breakfast, no matter how epic it was, because it's not backed by any meaning.

That said, I love Stable Diffusion and am addicted to it almost every day.
notacanofsoda over 2 years ago
1) Who is Daniel Jeffries? There's no explanation of how he's related to Stability.

2) StabilityAI gave RunwayML compute time for them to train Stable Diffusion (they're also the creators of the original model). It's weird to categorize them as "other groups [that] leak the model". They're the ones that created the model! (Source: https://huggingface.co/runwayml/stable-diffusion-v1-5/discussions/1#6351a36ca9a9ae18220726c7)
lairv over 2 years ago
The discourse has already changed quite a bit since the first release, which was only 2 months ago, and is getting alarmingly close to OpenAI's "we must delay release of XXX for safety reasons". It was probably to be expected; OpenAI are not just morons who decided to freeze open-source progress, there are likely legal reasons behind it. But adding to that last week's dramas, I am not very bullish on StabilityAI. Hope I'll be proven wrong.
Beaver117 over 2 years ago
So you want it to be open source, but not too open, because then bad people will use it. Good luck with that. If you want to filter everything behind a SaaS like OpenAI, go ahead, but then you can't call it open source. And maybe that would have been the right choice. But Pandora's box is open now.
dang over 2 years ago
We replaced the title, which has a whiff of corporate press release about it, with what appears to be a representative phrase from the article body. If there's a more representative phrase, we can change it again.
p3opl3s over 2 years ago
You can't comment unless you pay to subscribe... lol - isn't that a company blog post?

Anyway, this shit grinds me: yet another "open source" AI project pretending to be for the people... they finally get a massive valuation and now it's all "we must be security conscious".

Hypocrites. And here is an interview with the founder of Stable Diffusion stating the exact opposite approach, by "having faith in people"!

https://youtu.be/YQ2QtKcK2dA?t=704
jackblemming over 2 years ago
Guys, I'm going to release an invention called the car, but my security team needs to make sure it's safe and won't be abused by drunk drivers. Next I plan to release an invention called the gun, but please hold your horses, because it could be abused. I need to double-check and make sure it's safe to release this piece of equipment.
machinekob over 2 years ago
All this is PR talk after a few dramas around immoral activities.

They got $100M USD in funding, and I feel like the pressure is squeezing them hard as they try to monetise models. But how do you monetise open source models when someone can just fine-tune your weights and make a better/faster/cleaner model and software without losing $10M+ on training the original?

You are always a few million behind rivals, and after the past few weeks, which were a PR nightmare, they lost most of their "community driven" advantage.

I feel like they are either extremely desperate for attention (drama artificially created because it gets clicks *conspiracy*) or just so chaotic and lacking proper leaders that everything is burning.
imhoguy over 2 years ago
Well, models will be taken down anyway (or at least it will be attempted), so save whatever you can put your hands on. It is happening; the government is just catching up with this rapidly moving situation:

https://www.federalregister.gov/documents/2022/10/13/2022-21658/implementation-of-additional-export-controls-certain-advanced-computing-and-semiconductor (AI mentioned 4 times)

https://eshoo.house.gov/sites/eshoo.house.gov/files/9.20.22LettertoNSCandOSTPonStabilityAI.pdf (at the very end, "export controls" are mentioned multiple times)
charcircuit over 2 years ago
> NSFW policies

Ugh. It feels like so many of these models are trying to censor NSFW material.
f0e4c2f7 over 2 years ago
Once upon a time there was a company called OpenAI that was going to do for AI what open source did for software.

I think OpenAI changing their revenue model and corporate structure to better reflect how much money they were about to make really left a mark on the internet around trust in the AI space.

The default is going to be to assume that AI companies like Stability have sold out; to that end, it would not surprise me if even this minor incident leads to a split and a new open model that becomes popular.

I understand the point the author is trying to make. I understand what OpenAI is getting at with safety. I understand what the regulators are getting at.

But it is too late. The genie is already out of the bottle and granting wishes. What are you going to ban at this point? Math education?

It's time to accept that it's not that hard to come up with a few A100s and train models for harm, if that's your goal. You can write code that harms people too. The answer is not to ban code. The answer is not to heavily regulate AI (not all countries will regulate it; it would be like banning gunpowder or electricity).

As for this particular release - what is it implied they were going to wait for? Figuring out the model? Regulation? The internet to start acting calm and reasonable? We don't even know what these models fully do yet. It's hard to imagine what you could know in 6 months vs. now that would allow you to release with a big thumbs up.

More and more I'm realizing how politically controversial AI will become. Already today we're starting to see that on various axes. I think, weirdly, in a few years it may be a top issue.
jerpint over 2 years ago
Isn't the whole point of open source, so long as licenses and attributions are respected, that anyone is free to do whatever they please with these models and their redistribution?
jaimex2 over 2 years ago
Looks like the fun police arrived.

Seriously, fire any coward lawyers erring on the side of caution and get some that are versed in the NRA playbook.
ok123456 over 2 years ago
I'm really tired of this infantilizing garbage. People are always going to use new technologies in ways people didn't anticipate.

So, they're going to delay their release so that if you type a naughty word it won't make a naughty image. You know what happens within hours? Someone releases a modified version of the weights that over-corrects it back and makes it even more naughty.
roel_v over 2 years ago
Is there any work being done on trustless (or maybe trust-web) distributed model training? The main problem today is that training the model is gatekept (is that a word?) by actors with hundreds of thousands of dollars. If there were a way to run a client, like SETI@home, that trains models, then a few thousand unsophisticated users with 30x0s and some weeks/months of time would do to model training what BitTorrent did to mp3 distribution. But for this to work, you need some way to feed images to users, ensure that images aren't re-used, somehow guard against malicious actors injecting faulty data, etc.
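As a rough illustration of what such a volunteer client could look like: a minimal sketch of a worker loop that fetches a data shard from a coordinator, computes gradients locally, and reports them with a shard hash so re-used shards can be detected. The coordinator URL, its /shard and /gradients endpoints, and the dedup/verification scheme are entirely hypothetical, not an existing protocol:

```python
# Hypothetical volunteer-compute training worker, in the spirit of SETI@home.
# The coordinator service and its endpoints are illustrative assumptions.
import hashlib
import io

import requests
import torch

COORDINATOR = "https://example.org/train"  # hypothetical coordinator service


def fetch_shard():
    """Download a batch of (inputs, targets) assigned by the coordinator."""
    resp = requests.get(f"{COORDINATOR}/shard")
    resp.raise_for_status()
    payload = resp.content
    shard_id = hashlib.sha256(payload).hexdigest()  # lets the server detect re-used shards
    inputs, targets = torch.load(io.BytesIO(payload))
    return shard_id, inputs, targets


def train_step(model, loss_fn, inputs, targets):
    """Compute gradients for one shard on the volunteer's GPU."""
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    return {name: p.grad.clone() for name, p in model.named_parameters()}, loss.item()


def submit(shard_id, grads, loss):
    """Upload gradients; a real system would add redundancy and outlier checks server-side."""
    buf = io.BytesIO()
    torch.save(grads, buf)
    requests.post(
        f"{COORDINATOR}/gradients",
        files={"grads": buf.getvalue()},
        data={"shard_id": shard_id, "loss": loss},
    ).raise_for_status()
```

The hard parts the comment raises (duplicate work, poisoned gradients) would live on the coordinator side, e.g. by assigning each shard to several workers and discarding submissions that disagree.
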
muaytimbo over 2 years ago
This guy already sounds, in his own words, "neutered". "We have to listen to regulators, we have to listen to the community," etc. There are no regulations, and even if there were, imagine if Uber, Lyft, Airbnb, Tesla, or other startups had taken this position. Listening to regulators / the community / anyone without a stake in the company is literally the quickest way to get killed by regulators captured by incumbent competitors.
moneycantbuy over 2 years ago
Download while you can. I really hope this isn't the beginning of the end for Stable Diffusion or truly open AI. It's too good not to piss off powerful people. We must keep real open source AI alive, otherwise it'll only be billionaires like Zuck and Elon force-feeding us poisonous saccharine.
fleddr over 2 years ago
"So when Stability AI says we have to slow down just a little it's because if we don't deal with very reasonable feedback from society and our own communities then there is a chance open source AI simply won't exist..."

Yeah, you can stop pretending that the neutering is the right thing to do; clearly it's something you are somehow forced to do, due to some serious threat you received.
Retr0id over 2 years ago
Depending on how the legislation plays out, I can foresee a "pirate bay for ML models" popping up.
yellow_lead over 2 years ago
> We are forming an open source committee to decide on major issues like cleaning data, NSFW policies and formal guidelines for model release.

I don't see how NSFW images can easily be stopped from being generated when the model is open source. Maybe the training data could be heavily pre-filtered to remove any photos that could possibly be used for NSFW images.
ggm over 2 years ago
This appears to be BOTH an IPR statement and a social policy statement.

I tend to think they are conjoined, but clarity helps.

On the social harms side, I think they need to be careful to under-promise and over-deliver. The likelihood of preventing social harms is frankly close to zero; what they can do is make it more complicated.

Think of it like this: use Stable Diffusion to make one "actor" dance a lambada in the left field and save it. In a new state, make a different "actor" dance a lambada in the "right" field. Now use alpha masks to combine the two actors. Can this represent sexy dancing? You bet your sweet bippy.

Promising not to release "two person sexy dancing" in this situation would be over-promising. Sure, it was done outside of the AI, with masks. Will the lawmakers care?

(For "actor", "lambada" and "sexy dance", substitute whatever contextually means "harm" in a two-actor situation, semantically.)
julienreszka over 2 years ago
I really want to be kind, but all I see in this article is corpo speak.
diebeforei485 over 2 years ago
> We've heard from regulators

Who are these regulators?
obert over 2 years ago
All these companies being responsible and protecting us from "bad AI" are just delaying the inevitable.

With hardware prices going down and new GPUs and better algorithms coming to light, it's only a matter of a few years until anybody will be able to train custom versions as powerful as today's AI, without protections, probably biased, etc.

Sure, they will be 5-10 years behind big corps, but it won't matter once poor man's AI is good enough to matter.
marmada over 2 years ago
My hope (for Codex, Stable Diffusion, etc.) is that the models become so popular that it will be impossible to legislate them over issues like copyright. I think there might be a limited window before legal repercussions start happening -- so hopefully the models are in extremely widespread use by then.
yieldcrv over 2 years ago
I think Daniel Jeffries believes everything they just wrote.

Their new handlers can do anything to the contrary, and are incentivized to curb releases as well. The market is saying their new handlers are going to do exactly that.

So we'd enjoy you proving us wrong!
c7b over 2 years ago
Realistically, those guys are facing a choice between an option for a very comfortable early retirement and getting roadblocked / litigated into oblivion. Can you really blame them?
pabs3 over 2 years ago
I wouldn't call Stable Diffusion "Open Source AI"; the training data isn't publicly released under open source licenses. I like the Debian Deep Learning Team's Machine Learning policy for evaluating these things:

https://salsa.debian.org/deeplearning-team/ml-policy
seydor over 2 years ago
They have to find a way for content makers to make money/jobs through the system. Google search solved that by providing ad revenue to content makers, or else they'd have removed all their content by now.
TheArcane over 2 years ago
> Help us make AI truly open, rather than open in name only.

That has to be a dig at OpenAI.
habibur over 2 years ago
This should have been expected.

Open source or not, they are funded. And that funding needs to generate profit one way or the other.

This first release gave them the popular attention they needed. It was successful.
can16358p over 2 years ago
> Help us make AI truly open, rather than open in name only.
oth001 over 2 years ago
Still don't see them trying to use a dataset that they own the licenses to...
rafaelero over 2 years ago
And so it starts...
bgi_909 over 2 years ago
seeks
chatterhead over 2 years ago
So if someone buys the rights to an artist's work and that artist is dead, can they start using Stable Diffusion to create new works of art that they can claim are "by the artist"?
isitmadeofglass over 2 years ago
It's weird they don't mention their horrendous failures or their attempts to take over all the independent social media groups. I expect that slowed them down quite a bit as well.