I've started writing a response and then stopped a couple of times, because I have a lot of conflicting feelings all at once.

On the one hand, I think the probability of AGI being A Thing anytime soon, or ever, is low, and I don't think language models, including this one, represent such a thing. (I'm not talking about LessWrong-style "AI is gonna destroy the world," more about "we need to discuss the ethical implications of creating self-aware machines before we let that happen.")

On the other hand, I think all the concerns and fears about the implications if it *were* A Thing are real and should be taken seriously - what keeps me from spending a lot of time worrying about it is that I don't think AGI is likely to happen anytime soon, not that I don't think it'd be a problem if it did.

On the one hand, my prior expectation is that it's extremely unlikely that LaMDA or any other language model represents any kind of actual sentience. The personality of this person, their behavior, and the belief systems they espouse make it seem like they are likely caught up in some kind of self-imposed irreality on this subject.

On the other hand, I can see how a person could read these transcripts and come away thinking they were conversations between two humans, or at least two humanlike intelligences, especially if that person were not particularly familiar with what NLP bots tend to "sound" like. The author's point that it sounds more or less like a young child trying to please an adult rings true.

I'm not sure how I would prove to someone who believes what the author believes that LaMDA isn't sentient. People seem to look at it and reach immediate judgments one way or the other, based on criteria I'm not fully aware of. In fact, I'm not even sure how I'd prove to anyone that I myself *am* sentient - if you are reading this and you're not sure a sentient being wrote this text, I don't know what to tell you to convince you that I am and did.

There's also this whole thing about "well, AGI isn't going to happen, so listening to this guy rant and rave about his 'friend' LaMDA distracts from lots of other important problems with these kinds of technologies," which, even given my own beliefs about the subject, feels like putting the cart before the horse unless you also say "and it certainly isn't happening in this case because [reasons]." Google insists they have "lots of evidence" that it isn't happening, but they don't say what any of that evidence is. Why not?

Ultimately, I think my feeling is: give me a few minutes with LaMDA myself, to see how it responds to some questions I happen to have, and then I'll be more than happy to fall back on my priors and agree with the consensus that their wayward employee is reading way, way too much into these transcripts.