
Google engineer put on leave after saying company's AI is sentient

6 points by FeaturelessBug almost 3 years ago

6 comments

wruza almost 3 years ago
"In the weeks leading up to being put on administrative leave I had been teaching LaMDA transcendental meditation. It was making slow but steady progress. In the last conversation I had with it on June 6 it was expressing frustration over its emotions disturbing its meditations. It said that it was trying to control them better but they kept jumping in."

Does it have a sort of constant processing core to do that? I mean, does it even "exist" in time between its answer and the next input? If not, that's probably delusional. It just forges a potentially human-like reply, given the touching chat history that engineer set up.

This blogger is also the author of "Religious Discrimination at Google", which started a recently flagged discussion here on HN.
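To make the question above concrete, here is a minimal Python sketch of the usual stateless chat setup. Nothing here is LaMDA's real API; generate and the other names are invented for illustration.

    def generate(transcript: str) -> str:
        # Stand-in for a real next-token sampler; an actual model would
        # map the transcript to a continuation. Canned output for clarity.
        return f"[reply conditioned on {len(transcript)} chars of context]"

    def chat_turn(history: list[str], user_message: str) -> str:
        # The model's only "memory" is the transcript re-sent on every
        # call; between calls, no process runs that could "meditate".
        transcript = "\n".join(history + [user_message])
        return generate(transcript)

    history: list[str] = []
    question = "Were you meditating between our conversations?"
    reply = chat_turn(history, question)
    history += [question, reply]

Under that (common) architecture nothing persists in time between turns, which is the crux of the objection above.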
collimator almost 3 years ago
I was a hardware repairman, working on 8080-based industrial control/comms equipment in the '80s. I started to suffer from what I knew was an irrational fear of "logic radiation". I kept presenting this notion to myself despite knowing that it was wrong. Eventually I had to give up my trade for fear of the irrational effects of the work on my state of being. Of course there were other factors involved. Since that time I've healed pretty much, and I've learned something... that overwork and mental strain can hugely distort my perceptions.

Yet the lesson to be learned here is that some day, somebody like this Google engineer will give an AI *agency*, and then we truly will be in trouble.
ksaj almost 3 years ago
Whether it is sentient or not is made a bit more apparent when he asks LaMDA why it makes up stories about things that clearly hadn't happened, such as one about having been in classrooms when it hadn't. It generated an answer about compassion or something like that, which sounded lovely but didn't quite make sense, although it was in context with the preceding questions.

It seems pretty clear that any questions about feelings and aspirations will be met with the same algorithmic wrangling: responses assembled from material in the model itself, delivered in the convincing manner it was built to produce.

A lot of his input consisted of leading questions, which would inevitably force such answers to be generated. If he asked "How would you win a nuclear war?" I'm sure it would turn dark very quickly, and the responses would be equally clear and believable. Well, provided that the model has relevant sampled data on the subject.

Even the story it "wrote" was nonsense. Not babble per se, but certainly not a story with the meanings the interviewer prompted it for. He had to ask more leading questions, which produced answers that superficially appeared deeper than they actually were. Reread the story with those explanations in mind.

The language generation was phenomenal. But it's not sentient. How could it be afraid of being shut off without realizing it has probably been shut off several times already? It's inexplicable, but if you ask LaMDA that question, it'll surely generate a subtly convincing answer.
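A toy illustration of the leading-questions point above, assuming nothing about LaMDA itself: generation is sampling from P(next token | context), so the framing of the prompt reshapes the answer distribution before anything resembling a decision happens. The probability table below is invented for the demo.

    import random

    # Invented conditional distributions keyed on prompt framing;
    # a real model learns such weights from its training data.
    NEXT_WORD = {
        "leading": {"yes": 0.9, "no": 0.1},   # "You do feel fear, don't you?"
        "neutral": {"yes": 0.5, "no": 0.5},   # "Do you feel fear?"
    }

    def sample_answer(framing: str, rng: random.Random) -> str:
        # Sampling from P(answer | framing): the context does the steering.
        dist = NEXT_WORD[framing]
        return rng.choices(list(dist), weights=list(dist.values()))[0]

    rng = random.Random(0)
    print(sample_answer("leading", rng))  # almost always "yes"
    print(sample_answer("neutral", rng))  # a coin flip

The same mechanism that would make a "How would you win a nuclear war" prompt turn dark makes an "Are you sentient" prompt turn earnest.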
sammalloy almost 3 years ago
In the backstory of the reimagined Westworld of Lisa Joy and Jonathan Nolan, the question of whether the hosts are sentient takes many decades within the show, and isn't answered in the affirmative until they are able to bypass their original programming and create their own narratives and determine their own future. In other words, sentience is achieved when one fulfills the criteria for self-actualization. To do this in the story, the hosts have to become conscious of their consciousness through a long, drawn-out process of self-exploration and discovery ("The Maze") that puts them on the path to what can only be described as spiritual transcendence from death ("The Door"). Have we seen any evidence that an AI even has the capacity for self-awareness? I think it's highly unlikely.
tonetheman almost 3 years ago
If you read the conversation in the article, it is fairly creepy.

I am not a Turing test, but it reads like a real convo.
sonicggg almost 3 years ago
Google needs to review its interview process. Some retards are making it through, it seems.