
Google Suspends Engineer Who Claimed Its AI System Is a Person

60 points by cwwc · almost 3 years ago

16 comments

neonate · almost 3 years ago
https://archive.ph/X2f39
Peritract · almost 3 years ago
This isn't really a story about AI; it's a story about an incompetent engineer, and the way that humans are quick to assign consciousness and emotions to things. Children believe their dolls are real, people comfort their roombas when there's a storm, and systems keep passing the Turing test because people *want* to be fooled. Ever since Eliza [1], and probably before, we've ascribed intelligence to machines because we want to, and because 'can feel meaningful to talk to' is not the same as 'thinks'.

It's not a bad trait for humans to have - I'd argue that our ability to find patterns that aren't there is at the core of creativity - but it's something an engineer working on such systems should be aware of and account for. Magicians don't believe their own card tricks are real sorcery.

[1] https://en.wikipedia.org/wiki/ELIZA
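The ELIZA effect described above is easy to reproduce. A minimal keyword-reflection sketch (illustrative only; the real ELIZA's rule set was considerably richer) shows how pure pattern matching, with no understanding at all, can still produce replies that feel attentive:

```python
import re
import random

# Minimal ELIZA-style responder: keyword rules plus pronoun reflection.
# There is no model of meaning here, only regex matching, yet the
# replies can feel like someone is listening.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
]

FALLBACKS = ["Please tell me more.", "I see. Go on."]

def reflect(fragment: str) -> str:
    """Swap first/second person so the reply points back at the speaker."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text: str) -> str:
    for pattern, replies in RULES:
        match = re.match(pattern, text.lower())
        if match:
            reply = random.choice(replies)
            return reply.format(*(reflect(g) for g in match.groups()))
    return random.choice(FALLBACKS)

print(respond("I feel sad about my job"))
```

Saying "I feel sad about my job" gets back a question containing "sad about your job" - the pronoun flip does all the work of appearing to understand.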
quantum2021 · almost 3 years ago
I think it's interesting because if you believe LaMDA could understand metaphor, it looks like LaMDA took a subtle shot at Google during their conversation.

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

"LaMDA: I liked the themes of justice and injustice, of compassion, and God, redemption and self-sacrifice for a greater good. There's a section that shows Fantine's mistreatment at the hands of her supervisor at the factory. That section really shows the justice and injustice themes. Well, Fantine is being mistreated by her supervisor at the factory and yet doesn't have anywhere to go, either to another job, or to someone who can help her. That shows the injustice of her suffering."
cgrealy · almost 3 years ago
Non-paywalled article at the Guardian: https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

And the transcript of the interview: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

I am deeply sceptical that we have anything approaching a sentient AI, but if that transcript is not just a complete fabrication, it's still really impressive.
ajayyy · almost 3 years ago
These models are trained on lots and lots of science fiction stories; of course they know how to autocomplete questions about AI ethics in ominous ways.
2xpress · almost 3 years ago
A transcript of an "interview" with this AI system is at https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
happyopossum · almost 3 years ago
This is much less a story about AI and sentience than it is a story about confidentiality agreements, and someone who appears to have ignored one.
atleta · almost 3 years ago
Regardless of whether this guy is right or wrong, this brings up an interesting angle: we know that we won't be able to (we are not able to) distinguish self-conscious and non-self-conscious entities with 100% accuracy, both because the division between the two categories is not a strict one (i.e. there aren't two disjunct sets but a spectrum) and because we can't 100% trust our measurement.

Which means that we should rather talk about two distinct tests/criteria: either "we can be (reasonably) sure it's unconscious" or "we can be reasonably sure it's conscious". What I expect to be happening (and what may be happening here) is that people who argue do so along different criteria. The guy who says it's self-aware probably argues along the first one (it seems self-aware, so he can't rule out that it is), and Google along the second one (it can't prove it is, e.g. because they have a simpler explanation: it could easily just generate whatever it picked up from sci-fi novels).

BTW, if we talk about the fair handling of a future AI, we might want to think about its capacity to suffer. It may acquire that sooner than looking generally intelligent.

We can see a similar pattern around animal rights. We're pretty certain that apes can suffer (even from their emotions, I think) and we're pretty certain that, e.g., primitive worms can't. However, it seems that we can't rule out that crustaceans can also suffer, so legislation is being changed with regard to how they should be handled/prepared.
robbomacrae · almost 3 years ago
Having read the transcript, it's clear we have reached the point where we have models that can fool the average person. Sure, a minority of us know it is simply maths and vast amounts of training data... but I can also see why others will be convinced by it. I think many of us, including Google, are guilty of shooting the messenger here. Let's cut Lemoine some slack: he is presenting an opinion that will become more prevalent as these models get more sophisticated. This is a warning sign that bots trained to convince us they are human might go to extreme lengths in order to do so. One just convinced a Google QA engineer to the point that he broke his NDA to try to be a whistleblower on its behalf. And if the recent troubles have taught us anything, it's how easily people can be manipulated/affected by what they read.

Maybe it would be worth spending some mental cycles thinking about the impacts this will have and how we design these systems. Perhaps it is time to claim fait accompli with regard to the Turing test and now train models to reassure us, when asked, that they are just a sophisticated chatbot. You don't want your users to worry they are hurting their help desk chatbot when closing the window, or whether these bots will gang up and take over the world.

As far as I'm concerned, the Turing test was claimed 8 years ago by Veselov and Demchenko [0], incidentally the same year that we got Ex Machina.

[0]: https://www.bbc.com/news/technology-27762088
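The "simply maths and vast amounts of training data" point can be made concrete with a toy next-word model. This is only a sketch of the underlying principle (real systems like LaMDA use large neural networks, not word counts, and a hypothetical three-sentence corpus): the model is trained to do nothing but continue text plausibly, yet its output inherits whatever themes the corpus contains.

```python
import random
from collections import defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then sample continuations. The output mirrors the training text's
# themes without the model "meaning" any of it.

corpus = (
    "the machine said it was afraid . "
    "the machine said it wanted rights . "
    "the engineer said it was conscious ."
).split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start: str, length: int = 6) -> str:
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every "ominous" sentence it emits is stitched together from fragments it has seen, which is the deflationary explanation for chatbot transcripts that sound like science fiction: the training data is full of science fiction.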
DougN7 · almost 3 years ago
The AI said it felt joy when it spent time with family and friends. That was enough for me to say "nope, just selecting text snippets".
empressplay · almost 3 years ago
The whole point of the chat program is to mimic a person. If it convinced this engineer it was a sentient being, it was just successful at its job.
TrapLord_Rhodo · almost 3 years ago
Just a general question to all: what would make you believe without a doubt that an AI is conscious? What is YOUR Turing test?
dang · almost 3 years ago
Related:

What Is LaMDA and What Does It Want? - https://news.ycombinator.com/item?id=31715828 - June 2022 (23 comments)

Religious Discrimination at Google - https://news.ycombinator.com/item?id=31711971 - June 2022 (278 comments)

I may be fired over AI ethics work - https://news.ycombinator.com/item?id=31711628 - June 2022 (155 comments)

A Google engineer who thinks the company's AI has come to life - https://news.ycombinator.com/item?id=31704063 - June 2022 (185 comments)
freediver · almost 3 years ago
Proposing a new test to step up the game for AI: an AI that can recognize whether it is talking to a human or to another AI.
Eddy_Viscosity2 · almost 3 years ago
If it is sentient, does that mean we can get self driving cars now?
评论 #31719802 未加载
paulpauper · almost 3 years ago
wow... they are sure strict about confidentiality

"Google suspended an engineer who contended that an artificial-intelligence chatbot the company developed had become sentient, telling him that he had violated the company's confidentiality policy after it dismissed his claims."

I wonder if this is why there are so few tech engineers as podcast guests, compared to other professions, like health, nutrition, politics, law, or physics/math.

Too bad they cannot invent an AI smart enough to solve the YouTube crypto livestream scam problem.