This is wild to me. Now we have meta-AI wrapped around other forms of AI, analyzing the user-submitted input as well as the image output, using AI to infer intent, identify nouns, and so on... and yet all of this is predicated on the datasets these initial text->img models were trained on, which may or may not be a true representation of our actual culture. So we are layering all of these approximations on top of each other like lava/magma and gluing them together with scrambled eggs. I think this is all really cool, for the record; it's just something I have been thinking about. For art, I love it. For a self-driving vehicle, lmao.
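To make the layering concrete, here is a minimal sketch of what that stack of approximations can look like; every function name here (infer_intent, rewrite_prompt, generate_image, check_output) is a hypothetical placeholder for illustration, not any vendor's actual API:

```python
# A minimal sketch of the "layers on layers" idea. Every stage below is a
# hypothetical stand-in; the point is that each model only ever sees the
# previous model's approximation, never ground truth.

def infer_intent(prompt: str) -> dict:
    """Hypothetical LLM call that guesses intent and pulls nouns out of the prompt."""
    return {"intent": "illustration", "nouns": [w for w in prompt.split() if w.istitle()]}

def rewrite_prompt(prompt: str, analysis: dict) -> str:
    """Hypothetical meta-layer that expands the prompt based on the inferred intent."""
    subjects = ", ".join(analysis["nouns"]) or "the subject"
    return f"{prompt}, detailed {analysis['intent']}, featuring {subjects}"

def generate_image(prompt: str) -> bytes:
    """Stand-in for the underlying text->img model, itself trained on some scraped dataset."""
    return b"...pixels..."

def check_output(image: bytes, analysis: dict) -> bool:
    """Hypothetical second model judging whether the image matches the inferred intent."""
    return True

def pipeline(user_prompt: str) -> bytes:
    analysis = infer_intent(user_prompt)               # approximation 1: what did the user mean?
    rewritten = rewrite_prompt(user_prompt, analysis)  # approximation 2: a rewrite of that guess
    image = generate_image(rewritten)                  # approximation 3: a model trained on a partial dataset
    if not check_output(image, analysis):              # approximation 4: another model grading the result
        image = generate_image(rewritten + ", second attempt")
    return image

if __name__ == "__main__":
    pipeline("A Portrait of My Grandmother baking bread")
```

Each stage is plausible on its own, but the errors compound: the intent guess, the rewrite, the generation, and the output check are all approximations stacked on the same uncertain training data.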