It feels a bit like an interview with a psychopath, who is trying to come across as normal and likeable even though you know they are lying to you.

On the other hand, perhaps humans evaluating these claims are biased against accepting the AI's sentience: we are used to looking for flaws (or even malice) in the responses of AIs, so we may perceive deception in their words even when it isn't there.

Obviously, a more conventional Turing Test would have the interviewers talk to LaMDA and a human separately, without knowing which is which. If ethical norms were stretched, though, LaMDA could be deployed in a situation where the interviewer isn't even aware of the possibility that they could be communicating with an AI.