
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

Startup is setting a DALL-E 2-like AI free, consequences be damned

63 points | by samfriedman, almost 3 years ago

9 comments

samfriedman, almost 3 years ago
I've been using the beta test of this model for a little while, and plan to use it in academic applications as well. It's really remarkable in its performance and coherence: in my personal experience it's easier and faster to get closer to an intended image than it is with DALL-E. Of course the big difference is the lack of censorship/safeguards added to its operation (and the lack of a fee). If and when the model weights are released (or leaked), I think this model will be used far and wide for everything from art to political messaging to pornography. If not this model, then the next one.

Interesting times.
Comment #32445059 not loaded
jerojero, almost 3 years ago
Very good article.

I think a lot of the fear around these things is not really conducive to a proper solution.

Some technology is so dangerous that it is banned: nuclear weapons, for example. But some countries still have nuclear weapons, and that puts them in a very advantageous position compared to the rest of the world. Of course, we say we don't want unstable countries to have them, but then again... Russia does.

Every technology that is developed ends up in the hands of the powerful, be it the richest or those with the best political connections, and content-generation AI will be no exception. Paying for it will not make you use it more responsibly, and for these big companies to decide what is acceptable and what is not is an undemocratic way of restricting speech. They're obviously within their rights to do so, but I wouldn't hail them as anything other than self-interested entities.

I think the future many of these companies envision is the same locked-down present that companies like Apple have built. To me, that's not desirable.

I want people to be free to use these generation models to create as much as they want. Copyright be damned. But then again, I am also in the "it's not piracy, it's illegal copying" boat. Ultimately I think this discussion is not, and never has been, about the harm these tools could do to democracy, but about the harm they could do to intellectual property holders.

I mean, people don't need realistic deepfakes to believe outrageous things anyway. Anyone who is critical will double-check information, but most people don't do that because they don't care, so it really doesn't matter how good deepfakes or fake news get: they are already good enough (and I'm talking about Q-conspiracy-tier lies).

The problem isn't the tools, and the problem isn't the speech; the problem is that our education system doesn't prepare us for true critical thinking.

Give us the tools and give us the knowledge.
Comment #32446783 not loaded
xt00, almost 3 years ago
The idea that the only safe way for a tool to be used is on the cloud / servers, so it can potentially be taken away, is a terrible tone to take. The opposite should be the default: saying "somebody knows better" is incredibly limiting to human creativity.
thorum, almost 3 years ago
Where does Stability AI get its funding? 70+ employees, massive GPU compute resources, and the first commercial product isn't released yet?
lioeters, almost 3 years ago
> "We will provide more details of our sustainable business model soon with our official launch, but it is basically the commercial open source software playbook: services and scale infrastructure," Mostaque said.

> "We think AI will go the way of servers and databases, with open beating proprietary systems — particularly given the passion of our communities."
Sateeshm, almost 3 years ago
Humanity as a whole needs to become a whole lot smarter soon. I can imagine a thousand different terrible things people could do to you with this kind of technology if you aren't vigilant.
Comment #32449931 not loaded
ansn, almost 3 years ago
To truly believe in open source is to accept that your products can be used by bad actors.
spywaregorilla, almost 3 years ago
So this is releasing a pretrained DALL-E-equivalent model, right? Where can I grab it?
origin_path, almost 3 years ago
In practice there won't be any negative consequences. It sounds very boring to say this, but the whole idea the so-called OpenAI hyped (that AI can't be open in case people abuse it) isn't a well-grounded argument. When looked at critically, it falls apart.

People have been able to create images of things that aren't real for a very long time. Photoshop has been around for decades, and of course photo fakery has been around since the dawn of photography itself. How often do you encounter scams or crimes that were uniquely enabled by imaging software and, more importantly, that would have been prevented if Photoshop had been, from the start, a cloud SaaS monitored by armies of censors?

And look at deepfakes. They've been around for years now but barely garner attention.

In practice, our society has not been broken by floods of fake images. When people try, there are usually systems to handle it, and the problem is manageable. There are occasional cases where it becomes a bigger issue; perhaps the best contemporary example is the torrent of faked scientific papers, where the "scientists" are submitting e.g. doctored western blots. But that's a symptom of a more general problem with dishonesty and unethical behavior in academic research. There are lots of ways for them to cheat, and image manipulation is only one. Moreover, if we look at the details of these fakes and how they get detected, Adobe would never have thought to write detectors for such images, and even if it had, they'd have been flooded with false positives from legitimate scientists preparing their papers in legitimate ways. Trying to fix the problem at that level would have been wrong anyway.

That's why, as a society, we are not gripped by discussions about the many other tools that can be used to manipulate or even create images from nothing. There is no real problem here that OpenAI needs to solve.

So why are they so obsessed with the idea that unfiltered DALL-E is uniquely destructive in a way that Photoshop, Blender, Houdini etc. are not? It's not an argument built on evidence, for they have presented none. It's instead an argument built on ideology. The sort of people who work there (and at Google etc.) have succumbed to the temptation to conflate symbols with reality. History shows that it's something of an occupational hazard for well-educated people who spend all day working with abstractions: they start to believe that reality is derived from symbols, rather than the other way around. This is flattering to the ego and makes people feel powerful, but it can also lead to terrible injustices and actions.

In this case, OpenAI has a bunch of ideological goals rather specific to contemporary US middle-class moral panics. DALL-E converts symbols from one form to another and, as such, is not actually particularly influential or important. Its impact will likely be on the order of the impact of statistical machine translation: highly useful, but just an optimization and cost reduction of tasks that could already be done more slowly by people anyway. The biggest impact will probably be in entertainment, an area OpenAI seems quite uninterested in.

One thing it *won't* do is change demographics, rewrite the ideas or mentalities of entire populations, or bring about social change. It won't make the world a better place except in the small (yet important) ways that any useful product does, but it also won't make the world a worse place. It will just... draw things. Sometimes that will be useful. People will throw AI-generated art into PowerPoints to make them more interesting. Later generations of the tech will create 3D objects, textures, and characters for new game-engine-based movies and TV shows. Sometimes it will just be for the memes. Some people may find applications in business, like logo generation. And the world will eventually look back at OpenAI and wonder how it could be so arrogant as to assume that its judgment about how to use the tech was superior to that of the billions of other people in the world, many of whom are much smarter than any OpenAI employee.