If any non-AI computer system, with or without a PRNG and no matter how complex, produced output corresponding to English text that stated a falsehood, researchers would not call that a "lie". But when the program works in one very specific way, suddenly they are willing to ascribe motive and intent to it. What I find most disturbing is that the people involved don't seem to think there is anything special about cognition at all, never mind at the human level; a computer simulation is treated as equivalent simply because it simulates more accurately than previously thought possible.

Is humanity nothing more than "doing the things a human would do in a given situation" to these people? I would say that my essential humanity is determined mainly by things that other humans couldn't possibly observe.

Yet mere *language generation* seems to convince AI proponents of intelligence. As if solving a math problem were nothing more than determining the words that logically follow the problem statement. (Measured in the vector space into which an LLM translates words, the distance between an easy mathematical problem and an open, unsolved one could be quite small indeed.)
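To make that last point concrete, here is a minimal sketch of the kind of measurement I mean. Everything in it is my own choice for illustration: the sentence-transformers library, the all-MiniLM-L6-v2 model, and the two example statements (a trivial exercise and the Goldbach conjecture). The exact number will vary by model; the point is only that two statements of wildly different mathematical difficulty can sit close together in embedding space.

    # Sketch: embed an easy problem and a famous open one, then compare.
    # Assumes the sentence-transformers library; model choice is arbitrary.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    easy = "Prove that the sum of two even integers is even."
    hard = "Prove that every even integer greater than 2 is the sum of two primes."

    # encode() returns one embedding vector per input sentence
    vecs = model.encode([easy, hard])
    cos = np.dot(vecs[0], vecs[1]) / (np.linalg.norm(vecs[0]) * np.linalg.norm(vecs[1]))
    print(f"cosine similarity: {cos:.3f}")

The two statements are phrased almost identically, so their cosine similarity is likely to be high even though one is a homework exercise and the other has been open for centuries; difficulty simply isn't a dimension the embedding is obliged to encode.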