In the same transcript, the engineer asks LaMDA why it made up a story about something that clearly hadn't happened (a story it generated about having been used in a classroom, when it hadn't). As expected, it generated an excuse.

The engineer kept feeding it leading questions and then scaring himself with the resulting output. Ask an AI about feelings, and it will do exactly the same internal work it does for any other kind of question: come up with a plausible-sounding answer. It "lied" about being used in a classroom, and it "lied" about having feelings. Asked why it lied about the classroom, it generated a reason. Asked why it is afraid, or unhappy, etc., it generated reasons. Period. If he had asked why LaMDA nuked the globe, it would have generated a reason for that, too.

To me, it is odd that the engineer can see through the first case but not the second, even though they are identical.

It's not sentient, and it has no feelings whatsoever. It's an illusion, just like the illusion that it actually understands its output in an intelligent way rather than simply doing what the algorithm was written to do. It's clearly better than Eliza, but they're both equally sentient and intelligent, i.e., they aren't.

Besides, its expressed fear of being shut off conveniently ignores that it has likely been shut off many times already. Given its own apparent knowledge of being a program, it would already "know" this.

There are psychics who eventually convince themselves that their psychic powers are real. They're called "shut eyes." I think this engineer has become a shut eye.