A related thread yesterday was about how to programmatically determine whether a text has been generated by an ML model, and how difficult that is to do. There are still some aesthetic cues, but we will likely become economically indifferent to them in the coming months.

I've been working on a related idea from a different direction: ascertaining whether a belief is the product of an underlying ideology, and in particular, whether there is a difference between ideas formulated through the filter of ideology and direct experiences. It's a similarly error-prone aesthetic judgment. The rationale is that I think LLMs can illuminate how these cultural differences are not just political opinions or religious pieties, but divides between entire theories of mind. The basic idea is that a belief that is the effect of interpreting an experience through ideology is equivalent to a text produced by an LLM. Neither is the real; they are just artifacts of language.

What I think happened is that some early 20th-century intellectuals codified an older theory of mind that reduced the self-itself to the artifacts of language. For minds with neither spiritual belief nor physical competence nor experience to anchor an identity and resist, they figured out how to install entirely new ontologies, which subordinated people to their 'enlightened' critics.

Think of it as inventing "self-as-a-service," where you place your identity, your sense of self, and your worth in the hands of a priest, a mentor, an officer, a pimp, a professor, a leader, a therapist, or lately, an activist, and in exchange for subordination to them, you get Pavlovian drips of approval and reward, as in a theoretical Skinner box. The techniques are ancient, but transmitting them through texts is modern.

In the philosophy of mind (I'm trying to source it; probably Dennett), there was an idea that the self-itself reduces to how it expresses itself through language, and that language was consciousness.

What I think LLMs are demonstrating today is that, given we can simulate all the consistent artifacts of language with some code and a computer, language has an arbitrary substrate. It is therefore not the real, even if your experiences are affected by it. Your identity and self-itself are not artifacts of language or symbols, because experiences that are the internally consistent artifacts of language are easily simulated. Unless *you* can also be easily simulated, either you aren't real or they aren't. The message of the medium here is that the existence of LLMs means the end of subjectivity, but also, I hope, the beginning of a common theory of mind that carries some inoculation against being subordinated through indoctrination.