TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

My Mom Says She Loves Me. AI Says She's Lying

14 points by paulcjh, 10 months ago

9 comments

stuckinhell, 10 months ago
The AI is probably right.

Being serious now: it's clear that neural-network AIs are incredible and amazing for technical users, but not nearly ready for generalized or uneducated users (or, I guess, journalists).

Do they need to be 100% error-free before the general public can accept and use them? For me, it has already replaced the work I'd send to interns, and I get BETTER results than those interns. So it's already a huge win. My team has already seen a 40% improvement in programming speed, and less buggy code.

So are the benefits of the AI revolution really going to be captured only by the ultra-technical domain experts?
(9 replies not loaded)
LinuxBender, 10 months ago
“Do you love me?” I asked. She said yes. I asked why. She listed a handful of positive qualities, the kinds of things a son would be proud to hear—if they were true. Later, I plugged a transcript of her answer into Coyote. The verdict: “Deception likely.”

I am not an expert in this area, but I believe this is dangerous and reckless. I would never let AI perform psychological analysis, certainly not on pure text input without voice inflection, timing, amplitude, and so on. Humans are barely able to get this right when they can hear another human and see their face; from text alone, it would be next to impossible to decipher genuine intent and emotion.

I would never permit a big-data chatbot to perform such analysis until a very large group of psychoanalysts had validated its findings on hundreds of thousands of test subjects across a very wide spectrum of patients and patient profiles, with the results peer-reviewed by multiple third parties that can prove they have no financial incentives, even nine levels removed. Even then I would remain highly skeptical of any findings, especially if it is using text input alone. All of this is before considering that AI can be attacked and manipulated by the masses, and especially by its operators. Should I discover this being used in a legal setting, I would rally as many millions of people as I could to have the state unseat judges for permitting it to be entered into evidence and to block any future usage.
wellthisisgreat, 10 months ago
The "new wave of entrepreneurs" claiming to have built psychological tools with LLMs should be scrutinized and prosecuted for any harm they cause.

This is pure quackery, but unlike other quackery it can ruin people's lives.

See polygraphs, etc.
al2o3cr, 10 months ago
Only sensible that a "polygraph in your pocket" is as unreliable and useless as an actual polygraph. Heckuva job, AI!
jacknews, 10 months ago
There's probably little doubt that the author's mother loves them, but maybe she had to be creative with the reasons why.

So perhaps the AI is right, in an AI kind of way.
tivert, 10 months ago
https://archive.is/pZEwN
ecshafer, 10 months ago
Textual analysis is correct 80% of the time? So it's wrong 20% of the time. That is not good accuracy.
(1 reply not loaded)
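ecshafer's point can be made concrete with Bayes' rule. As a rough sketch, assume (these numbers are illustrative, not from the article) that "80% accurate" means the detector catches 80% of lies and correctly clears 80% of truths, and that a hypothetical 10% of everyday statements are lies:

```python
# Back-of-the-envelope check of what "80% accurate" buys you.
# Assumptions (illustrative, not from the article): the 80% figure is
# both the true-positive and true-negative rate, and the base rate of
# lies in everyday statements is a hypothetical 10%.

def posterior_lie(accuracy: float, base_rate: float) -> float:
    """P(statement is a lie | detector says 'deception likely'),
    by Bayes' rule, treating `accuracy` as both sensitivity and
    specificity."""
    true_pos = accuracy * base_rate                # lies correctly flagged
    false_pos = (1 - accuracy) * (1 - base_rate)   # truths wrongly flagged
    return true_pos / (true_pos + false_pos)

if __name__ == "__main__":
    p = posterior_lie(accuracy=0.8, base_rate=0.1)
    print(f"P(lie | flagged) = {p:.2f}")  # ~0.31
```

Under these assumptions, roughly two out of three "deception likely" verdicts would be false alarms: the headline accuracy figure says little without the base rate.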
voidUpdate, 10 months ago
It really seems like the author of the software needs to be put through this, though I'm guessing it would just come out as inconclusive again. We keep reinventing ways to bully people into admitting they're lying: ways that cannot tell one way or the other whether they actually are.
winddude, 10 months ago
I mean, she's your mom; she has to lie if she's a good mom.