I keep racking my brain trying to discern what the implications of hyper-advanced generative models like this will be. It's a double-edged sword. While there are obvious tangible benefits from such models, such as democratising art, the flip side seems like pure science-fiction dystopia.<p>In my mind, the main eras of content on the internet look something like this:<p><i>Epoch 1</i>: Pure, unblemished user-generated content. Message boards and forums rule.<p><i>Epoch 2</i>: More user-generated content + a healthy mix of recycled user-generated content. e.g. Reddit.<p><i>Epoch 3</i> (Now): Fake user-generated content (limited in volume, because humans still have to generate it). e.g. Amazon reviews, Cambridge Analytica.<p><i>Epoch 4</i>: Advanced generative models mean (essentially) zero friction for creating picture and text content. e.g. GPT-3, DALL·E 2.<p><i>Epoch 5</i>: Generative models for video. Game over.<p>IMO, the future of the internet feels like a totally disastrous (un)reality. If the addictive content recommended by the likes of TikTok has proven anything, it's that users ultimately don't care _what_ the content is, as long as it keeps their attention. It doesn't matter whether it comes from a human or a machine. The difference is that in a world where the marginal cost of generating content is essentially zero, that content can and will be created and manipulated by large malicious actors to sway public opinion.<p>The Dead Internet Theory [1] will fast become reality. This terrifies me.<p>[1] <a href="https://www.theatlantic.com/technology/archive/2021/08/dead-internet-theory-wrong-but-feels-true/619937/" rel="nofollow">https://www.theatlantic.com/technology/archive/2021/08/dead-...</a>