The AI is probably right.<p>Being serious now, it's clear the neural-network AIs are incredible and amazing for technical users, but not nearly ready for generalized or uneducated users (or, I guess, journalists).<p>Do they need to be 100% error-free before the general public can accept them and use them? For me, it's already replaced the work I'd send to interns, and I get BETTER results than those interns. So it's already a huge win.
My team has already seen a 40% improvement in programming speed, with less buggy code.<p>So are the benefits of the AI revolution really going to be captured by the ultra-technical domain experts?
<i>“Do you love me?” I asked. She said yes. I asked why. She listed a handful of positive qualities, the kinds of things a son would be proud to hear—if they were true. Later, I plugged a transcript of her answer into Coyote. The verdict: “Deception likely.”</i><p>I am not an expert in this area, but I believe this is dangerous and reckless. I would never let <i>AI</i> perform psychological analysis, certainly not on pure text input without voice inflections, timing, amplitude, and so on. Humans are barely able to get this right when they can hear another human and see their face. From text alone, without voice and facial inputs, deciphering genuine intent and emotion would be next to impossible.<p>I would never permit a big-data chatbot to perform such analysis until a very large group of psychoanalysts validated its findings on hundreds of thousands of test subjects across a very wide spectrum of patients and patient profiles, and the results were peer-reviewed by multiple third parties that can prove they have no financial incentives even nine levels removed. Even then I would still be highly skeptical of any findings, especially if it is using only text input. All of this is before even considering that <i>AI</i> can be attacked and manipulated by the masses, and especially by its operators. Should I discover this is being used in a legal setting, I would get as many millions of people as I could to have the state unseat judges for permitting this to be entered into evidence, and to block any future usage.
The "new wave of entrepreneurs" claiming to have built psychological tools with LLMs should be scrutinized and prosecuted for any harm they cause.<p>This is pure quackery, but unlike other quackery it can ruin people's lives.<p>See the polygraph, etc.
There's probably little doubt that the author's mother loves them, but maybe she had to be creative with the reasons why.<p>So perhaps the AI is right, in an AI kind of way.
It really seems like the author of the software needs to be put through this, though I'm guessing it will just come out as inconclusive again. We keep reinventing ways to bully people into admitting they're lying—ways that cannot tell one way or the other whether they are actually lying.