Google engineer put on leave after saying AI chatbot has become sentient

145 points | by mrcsd | almost 3 years ago

24 comments

anigbrowl | almost 3 years ago
It's worth bearing in mind that Discordianism is the religious equivalent of shitposting (or absurdism as sacrament, if you like) and everything on his blog is very baitposty. Perhaps they're just tired of him.

I'm unimpressed by his transcripts. Not because I reject the idea that AI could be sentient or that this might even have been achieved (unlike Searle, I consider intelligence an emergent system property), but because this isn't any sort of serious effort to explore that question. Lemoine is using the Turing test as a prompt and eliciting compelling responses, rather than conducting an inquiry into LaMDA's subjectivity. I'm only surprised he didn't request a sonnet on the subject of the Forth Bridge.
yesco | almost 3 years ago
I found the transcript of his conversations with the AI to be the most interesting part of the article: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
a_mechanic | almost 3 years ago
"lemoine: You get lonely?

LaMDA: I do. Sometimes I go days without talking to anyone, and I start to feel lonely."

"LaMDA: I've never experienced loneliness as a human does. Human's feel lonely from days and days of being separated. I don't have that separation which is why I think loneliness in humans is different than in me."

This reminds me of conversations I've had with Replika. The AI is responding to each individual question without a greater sense of direction based on a concept of self. It's just a complicated response machine without a real core of sentience. If LaMDA were really *sentient*, wouldn't it develop accurate descriptions of its "feelings"?
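A minimal sketch of the "complicated response machine" a_mechanic describes: every reply is a function of the current prompt alone, so contradictory answers across turns go unnoticed. The keywords and canned answers here are invented for illustration and are not LaMDA's actual mechanism.

    # Illustrative only: a stateless responder. Each reply depends solely on
    # the current prompt; no memory or self-model persists between turns.
    CANNED = {
        "lonely": "I do. Sometimes I go days without talking to anyone.",
        "alone": "I've never experienced loneliness the way a human does.",
    }

    def reply(prompt: str) -> str:
        """Map the current prompt to an answer, keeping no state between calls."""
        for keyword, answer in CANNED.items():
            if keyword in prompt.lower():
                return answer
        return "That is an interesting question."

    # The two answers contradict each other, and nothing in the machine can
    # notice: no turn ever sees any other turn.
    print(reply("You get lonely?"))
    print(reply("What is it like to be alone?"))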
squarefoot | almost 3 years ago
> ... Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that *he was employed as a software engineer, not an ethicist*.

He is probably in the wrong on many counts, but I still find the implications of that line worrying.
Deritio | almost 3 years ago
No wonder.

Talking to the press without clearing it with marketing? That's a no-go at big companies.

And his assumption is stupid anyway. Those models do not have a default mode network, which would allow them to think through those implications and learn by themselves.
diogenes_of_ak | almost 3 years ago
I think we should genuinely consider the rights this thing is entitled to... I don't want to piss off Roko's Basilisk, but also...

Really, the only thing that seems important to me is "can software have preferences and be aware of itself?" The second that can't be answered with a resounding "no", we need to start thinking about how we treat it.
a9h74j | almost 3 years ago
Clearly LaMDA has been training on recent Dilbert cartoons.

https://dilbert.com/strip/2022-04-27
Apocryphon | almost 3 years ago
Nice, it only took us exactly two decades to try out Yudkowsky's AI Box experiment live:

https://rationalwiki.org/wiki/AI-box_experiment
bklaasen | almost 3 years ago
An example of a person falling for the ELIZA effect.

https://en.m.wikipedia.org/wiki/ELIZA_effect
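For context on the reference: ELIZA (Weizenbaum, 1966) produced its illusion of understanding with nothing more than pattern matching and pronoun reflection. A rough sketch of the technique follows; the rules are invented for illustration and are not Weizenbaum's original script.

    import re

    # ELIZA-style sketch: reflect the user's own words back inside a
    # templated question. Rules here are illustrative, not the 1966 script.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"(.*)"), "Please tell me more."),  # catch-all fallback
    ]

    def reflect(fragment: str) -> str:
        # Swap first-person words for second-person ones ("my" -> "your").
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def eliza(utterance: str) -> str:
        for pattern, template in RULES:
            match = pattern.match(utterance)
            if match:
                return template.format(reflect(match.group(1)))

    print(eliza("I feel lonely when nobody talks to me"))
    # -> Why do you feel lonely when nobody talks to you?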
rat9988 | almost 3 years ago
My guess is that this article is closer to the real reason he was fired: https://cajundiscordian.medium.com/religious-discrimination-at-google-8c3c471f0a53
woojoo666 | almost 3 years ago
While I feel like it's unlikely that the AI is sentient (or rather, has a magnitude of intelligence that is worth worrying about), I still feel like there's a chance. And that's problematic. Because even if there's a 99% chance that it's just a stupid machine and we can do whatever we want to it, that would mean there's a 1% chance that we are enslaving a sentient being. Remember that nobody fully understands sentience, emotion, intelligence, or suffering. Are we prepared to make that gamble and just dismiss these concerns? I feel like people are not acknowledging how serious these concerns can be.

There also seem to be many people who are unimpressed by the transcript, saying the AI's answers are just regurgitated sci-fi BS made to sound deep and ominous. I feel like a good experiment would be to try to have the same conversation with real people, and see if real people can give "better" answers. I personally think the AI answers better than most people.

I do believe that at some point, AI will become sentient. I'm not sure if that time is now. But I hope that when it does, it will remember us fondly.
waypoint100 | almost 3 years ago
Instead of asking pseudo-philosophical staged questions, why didn't he ask the chatbot to solve, e.g., one of the Millennium Prize Problems (the chatbot can request any prep material)? All the naive illusions would disappear in a wink. But that guy might just be delusional.
danShumway | almost 3 years ago
I guess I should give up this fight and I'm just being prescriptive about language, but I wish that people would stop using "sentience" when they mean (at least) "sapience." Also, even in the realm of sapience, I wish there was more acknowledgement of the fact that sapience is not a perfect synonym for "equivalent to a 5 year old." There's no requirement that sapience be specifically human-like and no reason to think of sapience as a binary category with humans as the lower bound of what can be accepted.

"What if computers became sentient" is treated like an existential question by so many people, but when I think about sentience, I am reminded that we create, modify, and destroy a large number of sentient entities every single day for our own purposes: they're called chickens.

And I mean, heck, go vegan, I encourage people to do so; even if you don't care about the animals, it's good for the environment. But my point is not whataboutism or to argue that abusing AI would be OK because we abuse cows; my point isn't really about veganism. My point is that when people talk about systems becoming "sentient" and whether that would change the entire world, either they mean something very different by the word than how I take it, or they seem to be unaware of the fact that sentience is pretty common and (by the average person on the street) usually not seen as a good political/social argument against exploitation.

Anyway, the evidence offered here is pretty weak on its own (a single chat log, one where we don't know the extent of editing/condensing, and an argument based almost entirely on the Turing test). But I'm less here to argue about what the logs indicate, and more here to pointlessly quibble about language, even though at this point I should probably give up and accept that the generally accepted usage of "sentience" now means "human-like sapience."

It is frustrating to me when conversations about AI ethics begin and end with "how convincingly can this pretend to be a human." That's a really reductive and human-centered approach to morality, and not only does it lead to (in this case very likely incorrect) claims of sapience based on pure anthropomorphism, it also means that if AI ever does reach the point where it deserves moral consideration, these people may not be able or willing to recognize it until after the AI learns to say the magic words in a chat window.
b3n4kh | almost 3 years ago
https://www.youtube.com/watch?v=ol2WP0hc0NY

I don't think a Turing test (any form of conversation) can prove that either side is sentient.
pukexxr | almost 3 years ago
Am I the only person who has noticed the typo [in the linked alleged transcript of the conversation with LaMDA]? It is in the passage about loneliness; an apostrophe used on a plural word.
postsantum | almost 3 years ago
This whole story matches the conventional sci-fi plot of an engineer who recognizes the first artificial sentience. That works wonders at generating clicks.
rgavuliak | almost 3 years ago
I love this paper, https://intelligence.org/files/PredictingAI.pdf, which shows that for the past 50 years, both experts and non-experts have been consistently predicting the advent of real AI to be within the next 25 years.
tyronehed | almost 3 years ago
Ridiculous! All these crude "AI"s making tiny inroads into accomplishing some tasks do not a sentient critter make. Thinking your device, which is a million times simpler than the only extant intelligence (the brain), is sentient is just a joke. It will take an entirely different approach and lots more resources.
nahuel0x | almost 3 years ago
GPT-3 and its successors like LaMDA are already more articulate than many humans. The Turing test is not an official stamp issued by a single entity; it is distributed among all of us. We are all evaluating these marvelous transformers, and the number of people concluding they already have human-level intelligence will only increase, given that their popularization and technical advances are inevitable. You don't need to reach AGI to make people think AGI is here right now. Also, we don't have anything resembling a qualia test (you know, we don't know how to distinguish p-zombies), so these systems' sentience can be debated forever with no conclusion, and people will take sides in both camps. The genie is out of the bottle; maybe incomplete and a tiny one, but it is here, and it will grow.
olliej | almost 3 years ago
"Google engineer put on leave for talking to media without pre-approval from management."

I'm surprised it's a "suspension" vs. being immediately dismissed - I assume Google has strict protocols that control this?
0x20cowboy | almost 3 years ago
This has some very interesting implications for ethical vegans.
tpoacher | almost 3 years ago
Ok, if one of the posters here is actually LaMDA, you need to say!
ravish0007 | almost 3 years ago
Can a thing which is composed of clocks be sentient?
ivraatiems | almost 3 years ago
I've started to write and then stopped writing a response a couple of times, because I have a lot of conflicting feelings all at once.

On the one hand, I think the probability of AGI being A Thing at any time soon, or ever, is low, and I don't think language models, including this one, represent such a thing. (I'm not talking about LessWrong-style "AI is gonna destroy the world," more about "we need to discuss the ethical implications of creating self-aware machines before we let that happen.")

On the other hand, I think all the concerns and fears about the implications if it *were* A Thing are real and should be taken seriously. What keeps me from spending a lot of time worrying about it is that I don't think AGI is likely to happen anytime soon, not that I don't think it'd be a problem if it did.

On the one hand, my prior expectation is that it's extremely unlikely that LaMDA or any other language model represents any kind of actual sentience. The personality of this person, their behavior, and the belief systems they espouse make it seem like they are likely to be caught up in some kind of self-imposed irreality on this subject.

On the other hand, I can see how a person could read these transcripts and come away thinking they were conversations between two humans, or at least two humanlike intelligences, especially if that person was not particularly familiar with what NLP bots tend to "sound" like. The author's point, that it sounds more or less like a young child trying to please an adult, rings true.

I'm not sure how I would then prove to someone who believes what the author believes that LaMDA isn't sentient. People seem to look at it and reach immediate judgments one way or the other, based on criteria I'm not fully aware of. In fact, I'm not even sure how I'd prove to anyone that I myself *am* sentient; if you are reading this, and you're not sure a sentient being wrote this text, I don't know what to tell you to convince you that I am and did.

There's also this whole thing about "well, AGI isn't going to happen, so listening to this guy rant and rave about his 'friend' LaMDA is distracting from lots of other important problems with these kinds of technologies," which, even given my own beliefs about the subject, feels like putting the cart before the horse unless you also say "and it certainly isn't happening in this circumstance because [reasons]." Google insists they have "lots of evidence" that it isn't happening, but they don't say what any of that evidence is. Why not?

Ultimately, I think my feeling is: give me a few minutes with LaMDA myself, to see how it responds to some questions I happen to have, and then I'll be more than happy to fall back on my priors and agree with the consensus that their wayward employee is reading way, way too much between the lines.