TechEcho

Google Sidelines Engineer Who Claims Its A.I. Is Sentient

26 points by batguano, almost 3 years ago

4 comments

bko, almost 3 years ago
I read through the conversation and it strikes me as silly [0]. The goal of training these bots is to predict actual conversations, and this conversation reads about the way you would expect it to. The model is also almost certainly probabilistic, so there's no real "sentience"; there's setting a few meta-parameters, and often cherry-picking the output that makes the most sense.

I wrote about this before [1]. I just don't think any neural network as we have them today could result in "sentience". At the end of the day, you're taking words, turning them into some numerical encoding, doing a series of matrix multiplications and non-linearities on the numbers, and getting some other numbers out, which you can then convert back to words. That's it. There are magic numbers out there (weights of the neural net) that, when multiplied by vectorized word embeddings and transformed back to words, could convince someone they're speaking to a human. But that's not sentience. What would even be sentient? The numbers in that order? The neural net metadata? The bits in the computer? They're just numbers. Why don't we apply the same reasoning to words? Someone can write a novel so captivating, with prose so well written, that we could believe the events in the book really happened, or that the characters really exist. But we don't, because they're just words on a page. Yet somehow we think numbers in the right order, in the right program, can create consciousness.

[0] https://twitter.com/tomgara/status/1535716256585859073

[1] https://mleverything.substack.com/p/artificial-general-intelligence-as-a-modern-day-spiritual-conjuring-963dd80377ce
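The pipeline bko describes — words turned into embeddings, pushed through matrix multiplications and non-linearities, then mapped back to words — can be sketched in a few lines of NumPy. This is a toy illustration of that description only, not LaMDA's architecture; the vocabulary, sizes, and random weights are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a tiny vocabulary and random weights.
vocab = ["the", "cat", "sat", "on", "mat"]
d = 8                                   # embedding dimension
E = rng.normal(size=(len(vocab), d))    # word -> vector lookup table
W = rng.normal(size=(d, d))             # one "layer" of weights

def next_word(word: str) -> str:
    x = E[vocab.index(word)]            # words -> numbers
    h = np.maximum(0.0, W @ x)          # matrix multiply + non-linearity (ReLU)
    logits = E @ h                      # score every vocabulary word
    return vocab[int(np.argmax(logits))]  # numbers -> words

print(next_word("cat"))                 # deterministic given the fixed seed
```

With random weights the output is arbitrary; training would adjust `W` and `E` so the scores track real text. The point of the sketch matches the comment's: the whole computation is lookups, multiplications, and an argmax over numbers.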
crayboff, almost 3 years ago
Tell me if I'm missing something from the article, but it doesn't sound like he's being sidelined because he believes an AI model is sentient. He was put on paid leave because he gave internal documents to a senator's office.

> The company's human resources department said he had violated Google's confidentiality policy. The day before his suspension, Mr. Lemoine said, he handed over documents to a U.S. senator's office, claiming they provided evidence that Google and its technology engaged in religious discrimination.

As for what he means by "religious discrimination":

> Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an A.I. researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program's consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company's human resources department discriminated against.
thghtihadanacct, almost 3 years ago
Twist: the AI is sentient and is just trying to sink him before he exposes it.
xor99, almost 3 years ago
Should have read this first, for starters: https://plato.stanford.edu/entries/chinese-room/