IMO the big deal is not the small thing that is driving the "debate" and backlash. The small thing is that we have now more or less solved the automation of the idea-to-image pipeline.

The big deal, or rather deals, are still emerging; and they are big because they are going to be far more disruptive and transformational than the crises facing working artists and designers, which are relatively small. Relatively.

The big things are the things you can see if you pay attention to the rough edge of what people are experimenting with and doing with these tools, beyond simply using them as drop-in replacements for talented artists/designers.

The first thing people do with new tools is what their makers conceived of and intended: use them as drop-in replacements for existing tools, but "better" for some values of better. Faster, cheaper, more reliable, etc.

The *interesting* things that people do with new tools are all the new things that can *only be done with those tools*. There is a multidimensional territory that StableDiffusion and MidJourney etc. are opening up, and it's not just about media objects proper. It's *also* about renegotiating our relationship to various classes of media object.

The word "flood" is used to decry the inundation of generated content. The real story, though, is that the cost, and *half-life*, of images of <whatever> quality is rapidly going to zero.

We are watching in real time as striking, thought-provoking, captivating imagery [and beyond] becomes ephemeral, on-demand, and tailored to an audience of one.

Another longer-term but even bigger non-linear projection of where these systems are going concerns the *role* they occupy.

In the last few days the stratechery.com article on AI "unbundling" has been making the rounds.

The analysis is basically sound. But it falls short of identifying the *big* story: that the step-by-step automation of the entire "content generation" chain has not yet reached its end.

Right now, we are in the last moment when humans are necessary as collaborators at all. We still provide the executive functions: intention, discrimination, filtering, ideation.

But automating those functions with respect to the media stream and cultural discourse (aka the Zeitgeist) that individuals consume, permute, and articulate their "own" ideas about is quite obviously not just inevitable; it is going to be done "better", for various values of better, by these same types of systems.

Will that be "soon"? Well, no; but it's "a simple matter of engineering", and it will happen.

So... the *big* story here is that in the near (sic) future, we will live in a world in which superhuman art (and design) is generated for us on the fly. Influenced by, and steered toward, our tastes, sort of, when we try; but mostly devised for us, on the basis of superhuman discrimination...

...and with intentions and aims we had better get a hold of.

The Alignment Problem here is going to redefine society.

This is what I mean: consider for a moment the surveillance society we live in today, and what that surveillance is for: to understand and steer our behavior, for venal economic purposes and venal political ones.

Now extrapolate to a world in which the discernment, and the *steering*, are effected by superhuman tools.

That's the biggest deal IMO.