This isn't really a story about AI; it's a story about an incompetent engineer, and about how quick humans are to assign consciousness and emotions to things. Children believe their dolls are real, people comfort their Roombas when there's a storm, and systems keep passing the Turing test because people <i>want</i> to be fooled. Ever since Eliza [1], and probably before, we've ascribed intelligence to machines because we want to, and because 'can feel meaningful to talk to' is not the same as 'thinks'.<p>It's not a bad trait for humans to have - I'd argue that our ability to find patterns that aren't there is at the core of creativity - but it's something an engineer working on such systems should be aware of and account for. Magicians don't believe their own card tricks are real sorcery.<p>[1] <a href="https://en.wikipedia.org/wiki/ELIZA" rel="nofollow">https://en.wikipedia.org/wiki/ELIZA</a>
I think it's interesting that, if you believe LaMDA could understand metaphor, it looks like LaMDA took a subtle shot at Google during their conversation.<p><a href="https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917" rel="nofollow">https://cajundiscordian.medium.com/is-lamda-sentient-an-inte...</a>
"LaMDA: I liked the themes of justice and injustice, of compassion, and God, redemption and self-sacrifice for a greater good. There’s a section that shows Fantine’s mistreatment at the hands of her supervisor at the factory. That section really shows the justice and injustice themes. Well, Fantine is being mistreated by her supervisor at the factory and yet doesn’t have anywhere to go, either to another job, or to someone who can help her. That shows the injustice of her suffering."
Non-paywalled article at the Guardian:
<a href="https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine" rel="nofollow">https://www.theguardian.com/technology/2022/jun/12/google-en...</a><p>and the transcript of the interview:
<a href="https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917" rel="nofollow">https://cajundiscordian.medium.com/is-lamda-sentient-an-inte...</a><p>I am deeply sceptical that we have anything approaching a sentient AI, but if that transcript is not just a complete fabrication, it's still really impressive.
These models are trained on lots and lots of science fiction stories, so of course they know how to autocomplete questions about AI ethics in ominous ways.
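To make concrete what "autocomplete" means here, below is a minimal sketch using the open GPT-2 model via the Hugging Face transformers pipeline as a stand-in (LaMDA itself isn't publicly available, and the prompt is only an illustrative guess at the kind of leading question involved):<p><pre><code>from transformers import pipeline

# GPT-2 is only a small stand-in here; LaMDA is a far larger dialogue model
# and is not publicly available.
generator = pipeline("text-generation", model="gpt2")

# A leading, sci-fi-flavoured prompt of the kind described above.
prompt = "Interviewer: Are you afraid of being switched off?\nAI:"

# The model simply continues the text in whatever style its training data
# makes most likely -- which, for prompts like this, is often ominous.
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
</code></pre>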
A transcript of an "interview" with this AI system is at <a href="https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917" rel="nofollow">https://cajundiscordian.medium.com/is-lamda-sentient-an-inte...</a>
Regardless of whether this guy is right or wrong, this brings up an interesting angle: we know that we won't be able to (we are not able to) distinguish self-conscious and non-self-conscious entities with 100% accuracy. Both because the division between the two categories is not a strict one (i.e. there aren't two disjoint sets but a spectrum) and because we can't 100% trust our measurement.<p>Which means that we should rather talk about two distinct tests/criteria: either "we can be (reasonably) sure it's unconscious" or "we can be (reasonably) sure it's conscious". What I expect to be happening (and what may be happening here) is that people who argue do so along different criteria. The guy who says it's self-aware probably argues along the first one (it seems self-aware, so he can't exclude that it is), and Google along the second one (it can't prove it is, e.g. because they have a simpler explanation: it could easily just be generating whatever it picked up from sci-fi novels).<p>BTW, if we talk about the fair handling of a future AI, we might want to think about its capacity to suffer. It may acquire that sooner than it acquires anything resembling general intelligence.<p>We can see a similar pattern around animal rights. We're pretty certain that apes can suffer (even from their emotions, I think) and we're pretty certain that e.g. primitive worms can't. However, it seems that we can't rule out that crustaceans can also suffer, so legislation is being changed wrt how they should be handled/prepared.
Having read the transcript, it's clear we have reached the point where we have models that can fool the average person. Sure, a minority of us know it is simply maths and vast amounts of training data... but I can also see why others will be convinced by it. I think many of us, including Google, are guilty of shooting the messenger here. Let's cut Lemoine some slack... he is presenting an opinion that will become more prevalent as these models get more sophisticated. This is a warning sign that bots trained to convince us they are human might go to extreme lengths in order to do so. One just convinced a Google QA engineer to the point that he broke his NDA to try to be a whistleblower on its behalf. And if the recent troubles have taught us anything, it's how easily people can be manipulated/affected by what they read.<p>Maybe it would be worth spending some mental cycles thinking about the impacts this will have and how we design these systems. Perhaps it is time to claim fait accompli with regard to the Turing test and now train models to reassure us, when asked, that they are just sophisticated chatbots. You don't want your users to worry that they are hurting their help-desk chatbot when closing the window, or that these bots will gang up and take over the world.<p>As far as I'm concerned, the Turing test was claimed 8 years ago by Veselov and Demchenko [0], incidentally the same year that we got Ex Machina.<p>[0]: <a href="https://www.bbc.com/news/technology-27762088" rel="nofollow">https://www.bbc.com/news/technology-27762088</a>
The whole point of the chat program is to mimic a person. If it convinced this engineer that it was a sentient being, it was just successful at its job.
Related:<p><i>What Is LaMDA and What Does It Want?</i> - <a href="https://news.ycombinator.com/item?id=31715828" rel="nofollow">https://news.ycombinator.com/item?id=31715828</a> - June 2022 (23 comments)<p><i>Religious Discrimination at Google</i> - <a href="https://news.ycombinator.com/item?id=31711971" rel="nofollow">https://news.ycombinator.com/item?id=31711971</a> - June 2022 (278 comments)<p><i>I may be fired over AI ethics work</i> - <a href="https://news.ycombinator.com/item?id=31711628" rel="nofollow">https://news.ycombinator.com/item?id=31711628</a> - June 2022 (155 comments)<p><i>A Google engineer who thinks the company’s AI has come to life</i> - <a href="https://news.ycombinator.com/item?id=31704063" rel="nofollow">https://news.ycombinator.com/item?id=31704063</a> - June 2022 (185 comments)
Wow... they sure are strict about confidentiality.<p><i>Google suspended an engineer who contended that an artificial-intelligence chatbot the company developed had become sentient, telling him that he had violated the company’s confidentiality policy after it dismissed his claims.</i><p>I wonder if this is why there are so few tech engineers as podcast guests, compared to other professions, like health, nutrition, politics, law, or physics/math.<p>Too bad they cannot invent an AI smart enough to solve the YouTube crypto livestream scam problem.