I keep seeing comments online claiming that AI models, especially generative ones, are just fancy summarizers: they feed on humanity's past works, and once that well runs dry, they'll be stuck in a loop churning out derivative junk. As an AI engineer, I find this pretty amusing, because we humans only create "new" stuff by constantly reworking past ideas.<p>Take artists, for example. Before they develop their own style, they spend years studying and imitating others. Their "new" ideas aren't really new, just different enough from what we remember to feel fresh. Sure, there are painting elephants who aren't "standing on the shoulders of giants," but their "art" is about as impressive as saying, "Wow, GPT-2 finally wrote a decent email."<p>Even before AI, people were saying, "All the good music's been written," "All the great films have been made," "All the best books..." You get the idea. We're mostly just mixing and matching now, but that's exactly what AI does too.<p>So either we humans can't create anything truly "new" without building on our "old" stuff (which was itself "new" once upon a time), or generative models can create "new" things just like we do. Your thoughts?