In practice there won't be any negative consequences. It sounds boring to say this, but the whole idea that the so-called OpenAI hyped - that AI can't be open in case people abuse it - isn't a well-grounded argument. Looked at critically, it falls apart.<p>People have been able to create images of things that aren't real for a very long time. Photoshop has been around for decades and, of course, photo fakery has existed since the dawn of photography itself. How often do you encounter scams or crimes that were uniquely enabled by imaging software and, more importantly, that would have been prevented if Photoshop had been, from the start, a cloud SaaS monitored by armies of censors?<p>And look at deepfakes. They've been around for years now but barely garner attention.<p>In practice, our society has not been broken by floods of fake images. When people try, there are usually systems to handle it and the problem is manageable. There are occasional cases where it becomes a bigger issue - perhaps the best contemporary example is the torrent of faked scientific papers, where the "scientists" submit e.g. doctored western blots. But that's a symptom of a more general problem with dishonesty and unethical behavior in academic research. There are lots of ways to cheat, and image manipulation is only one of them. Moreover, if we look at the details of these fakes and how they get detected, Adobe would never have thought to write detectors for such images, and even if it had, they'd have been flooded with false positives from legitimate scientists preparing their papers in legitimate ways. Trying to fix the problem at that level would have been the wrong approach anyway.<p>That's why, as a society, we are not gripped by discussions about the many other tools that can be used to manipulate images, or even create them from nothing.
There is no real problem here for OpenAI to solve.<p>So why are they so obsessed with the idea that an unfiltered DALL-E is uniquely destructive in a way that Photoshop, Blender, Houdini etc. are not? It's not an argument built on evidence, for they have presented none. It's an argument built on ideology. The sort of people who work there (and at Google etc.) have succumbed to the temptation to conflate symbols with reality. History shows this is something of an occupational hazard for well-educated people who spend all day working with abstractions - they start to believe that reality is derived from symbols, rather than the other way around. This is flattering to the ego and makes people feel powerful, but it can also lead to terrible injustices and actions.<p>In this case, OpenAI have a set of ideological goals rather specific to contemporary US middle-class moral panics. DALL-E converts symbols from one form to another and, as such, is not actually particularly influential or important. Its impact will likely be on the order of that of statistical machine translation: highly useful, but just an optimization and cost reduction of tasks that people could already do more slowly anyway. The biggest impact will probably be in entertainment - an area OpenAI seems quite uninterested in.<p>One thing it <i>won't</i> do is change demographics, rewrite the ideas or mentalities of entire populations, or bring about social change. It won't make the world a better place except in the small (yet important) ways that any useful product does, but it also won't make the world a worse place. It will just ... draw things. Sometimes that will be useful: people will drop AI-generated art into PowerPoints to make them more interesting, and later generations of the tech will create 3D objects, textures and characters for new game-engine-based movies and TV shows. Sometimes it will just be for the memes. Some people may find business applications, like logo generation.
And the world will eventually look back at OpenAI and wonder how they could be so arrogant as to assume that their judgement about how to use the tech was superior to that of the billions of other people in the world, many of whom are smarter than any OpenAI employee.