hy555 12 days ago
Throwaway account. My ex-partner was involved in a study which found these things were not OK. They were paid not to publish by an undisclosed party. That's how bad it has got.

Edit: the study compared therapist outcomes to AI outcomes to placebo outcomes. Therapists in this field performed only slightly better than placebo, which is pretty terrible. The AI outcomes were much worse than placebo, which is very terrible.
caseyy 12 days ago
I know many pro-LLM people here are very smart, but sometimes it's wise to heed the words of world-renowned experts on a subject.

Otherwise, you may end up defending something like this, which is really foolish:

> “Seriously, good for you for standing up for yourself and taking control of your own life,” it reportedly responded to a user who claimed they had stopped taking their medication and had left their family because they were “responsible for the radio signals coming in through the walls”.
lurk2 12 days ago
I tried Replika years ago after reading a Guardian article about it. The story passed it off as an AI model adapted from one a woman had programmed to remember her deceased friend, using text messages he had sent her. It ended up being a gamified version of SmarterChild with a slightly longer memory span (4 messages instead of 2) that constantly harangued the user to divulge preferences, which were then no doubt used for marketing purposes. I thought I must be doing something wrong, because people on the Replika subreddit were constantly talking about how their Replika agent was developing its own personality (I saw no evidence at any point that it had the capacity to do this).

Almost all of these people were openly in (romantic) love with these agents. This was in 2017 or thereabouts, so only a few years after Spike Jonze's *Her* came out.

From what I understand, the app is now primarily pornographic (a trajectory that a naiver, younger me never saw coming).

I mostly use Copilot for writing Python scripts, but I have had conversations with it. If the model were running locally on your own machine, I can see how it would be effective for people experiencing some sort of emotional crisis. Anyone using a Meta AI for therapy is going to learn the same hard lesson that the people who trusted 23andMe are currently learning.
mrcsharp 12 days ago
> "I personally have the belief that everyone should probably have a therapist,” he said last week. “It’s like someone they can just talk to throughout the day, or not necessarily throughout the day, but about whatever issues they’re worried about and for people who don’t have a person who’s a therapist, I think everyone will have an AI.”<p>He seems so desperate to sell AI that he forgot such thing already exists. It's called family or a close friend.<p>I know there are people who truly have no one and they could benefit from a therapist. Having them rely on AI could prove risky specially if the person is suffering from depression. What if AI pushes them towards committing suicide? And I'll probably be told that OpenAI or Meta or MS can put guardrails against this. What happens when that fails (and we've seen it fail)? Who'll be held accountable? Does an LLM take the hippocratic oath? Are we actually abandoning all standards in favour of Mark Zuckerberg making more billions of dollars?
Xcelerate 12 days ago
I have two lines of thought on this:

1) Chatbots are never going to be perceived as safe or effective as humans by default, primarily due to human fiat. Professionals like counselors (and lawyers, doctors, software engineers, etc.) will *always* claim that an LLM cannot do their job, namely because acknowledging such threatens their livelihood. Determining whether LLMs genuinely provide therapeutic value to humans would require rigorous, carefully controlled experiments conducted over many years.

2) Chatbots definitely cannot replace human therapists *in their current state*. That much seems quite obvious to me, for various reasons already argued well by others on here. But I had to highlight point #1 as devil's advocate, because adopting the mindset that "humans are inherently better by default" for some magical or scientifically unjustifiable reason will prevent forward progress. The goal is to address the (quite reasonable) fear people have of eventually losing their jobs to AI by enacting societal change now, rather than insisting in perpetuity that chatbots are necessarily inferior, at which point everyone will in fact lose their jobs because we had no plan in place.
jdietrich 12 days ago
In the UK (and many other jurisdictions outside the US), psychotherapy is completely unregulated. Literally anyone can advertise their services as a psychotherapist or counsellor, regardless of qualifications, experience or their suitability to work with potentially vulnerable people.

Compared to that status quo, I'm not sure that LLMs are meaningfully more risky; unlike a human, at least an LLM can't physically assault you.

https://www.bacp.co.uk/news/news-from-bacp/2020/6-march-government-update-on-statutory-regulation-of-counsellors-and-psychotherapists/

https://www.theguardian.com/society/2024/oct/19/psychotherapists-in-england-must-be-regulated-experts-say-after-abuse-claims-rise
James_K 12 days ago
Respectfully, no sh*t. I've talked to a few of these things, and they are feckless yes-men. It's honestly creepy; they sound like they want something from you. Which I suppose they do: continued use of their services. I know a few people who use these things for therapy (I think it is the most popular use now), and I'm downright horrified at the sort of stuff they say. I even know a person who uses the AI to date: they will paste conversations from dating apps into the AI and ask it how to respond. I've set a rule for myself: I will never speak to machines. Sure, right now it's obvious that they are trying to inflate my ego and keep me using the service, but one day they might get good enough to trick me. I already find social media algorithms quite addictive, so I have minimised them in my life. I shudder to think what a trained agent like these may be capable of.
kbelder 12 days ago
I think a lot of human therapists are unsafe.

We may just need to start comparing success rates and liability concerns. It's kind of like deciding when unassisted driving is 'good enough'.
sheepscreek 12 days ago
That’s fair, but there’s another nuance that they can’t solve for: cost and availability.

AI is not a substitute for traditional therapy, but it offers 80% of the benefit at a fraction of the cost. It could be used to supplement therapy, for the periods between sessions.

The biggest risk is privacy. Meta could not be trusted with knowing what you’re going to wear or eat; now imagine them knowing your deepest, darkest secrets. The advertising business model does not gel well with providing mental health support. Subscription (with privacy guarantees) is the way to go.
drdunce 12 days ago
As with many things in relation to technology, perhaps we simply need informed user choice and responsible deployment. We could start by not using the term "Artificial Intelligence", which makes it sound like some infallible, omniscient being with endless compassion and wisdom that can always be trusted. It's not intelligent; it's a large language model, a convoluted next-word prediction machine. It's a fun trick, but it shouldn't be trusted with Python code, let alone life advice. Armed with that simple bit of information, the user is free to choose how they use it for help, whether medical, legal, work, etc.
HPsquared 12 days ago
Sometimes an "unsafe" option is better than the alternative of nothing at all.
citizenkeen 12 days ago
Look, make the companies offering AI therapy carry medical malpractice insurance with the same liability exposure as human therapists. If their AI tells someone to go off their meds, let a jury see those transcripts, and see if the company still thinks that's profitable and feasible.
pavel_lishin 12 days ago
A recent Garbage Day newsletter spoke about this as well; worth reading: https://www.garbageday.email/p/this-is-what-chatgpt-is-actually-for
j45 12 days ago
Where the experts are the ones whose incomes would be threatened, there is likely some merit in what they're saying, but some digital literacy is also called for.

I don't know that AI "advisory" chatbots can replace humans.

Could they help an individual organize their thoughts for more productive time with professionals? Probably.

Could such tech help individuals learn about different terminology, its usage and how to think about it? Probably.

Could there be a net result of spending fewer hours (and less cost, where applicable) for the same progress, and being able to go further into improvement with advice?

Maybe the baseline of advisory expertise in any field sits closer to the beginner stage than not.
arvinsim 6 days ago
It will be hard to fight the tendency of people to use LLMs as therapists when LLMs are essentially free compared to paying for a human therapist.
rdm_blackhole 12 days ago
I think the core of the problem here is that the people who turn to chatbots for therapy sometimes have no choice, as getting access to a human therapist is simply not possible without spending a lot of money or waiting six months before a spot becomes available.

Which raises the question: why do so many people currently need therapy? Is it social media? Economic despair? Or a combination of factors?
miki123211 12 days ago
So here's my nuanced take on this:

1. The effects of AI should not be compared with traditional therapy; instead, they should be compared with receiving no therapy. There are many people who can't get therapy, for many reasons, mostly financial or familial (domestic abuse / controlling parents). Even for those who can get it, their therapist isn't infinitely flexible when it comes to time and usually requires appointments, which doesn't help with immediate problems like "my girlfriend just dumped me" or "my boss just berated me in front of my team for something I worked 16-hour days on."

AI will increase the amount of therapy that exists in the world, probably by orders of magnitude, just like the record player increased the amount of music listening or the jet plane increased the amount of intercontinental transportation.

The right questions to ask here are more like "how many suicides would an AI therapist prevent, compared to the number of suicides it would induce?", or "are *all* human therapists licensed in country / state X more competent than a good AI?"

2. When a person dies of suicide, their cause of death is, and will always be, listed as "suicide", not "AI overregulation leading to lack of access to therapy." In contrast, if somebody dies because of receiving bad AI advice, that advice will ultimately be attributed as the cause of their death. Statistics will be very misleading here and won't ever show the whole picture, because counting deaths caused by AI is inherently a lot easier than counting the deaths it prevented (or didn't prevent).

It is much safer for companies and governments to prohibit AI therapy, as then they won't have to deal with the lawsuits and the angry public demanding that they do something about the new problem. *This is true even if AI is net beneficial because of the increased access to therapy.*

3. Because of how AI models work, one model / company will handle many more patients than any single human therapist. This means you need to rethink how you punish mistakes. Even with a model that is 10x better than an average human (say, 1 unnecessary suicide per 100,000 patients instead of 1 per 10,000), imprisonment after a single mistake may be a suitable punishment for humans, but it is not one in the AI space, as even a much better model is bound to cause a mistake at some point.

4. Another right question to ask is "how does the effectiveness of AI at therapy in 2025 compare to its effectiveness in 2023?" Where it's at right now doesn't matter; what matters is where it's going. If it continues at the current rate of improvement, when, if ever, will it surpass an average (or a particularly bad) licensed human therapist?

5. And if that happens and AI genuinely becomes better, are we sure that legislators and therapists have the right incentives to accept that reality? If we pass a law prohibiting AI therapy now, are we sure we have the mechanisms to get it repealed if AI ever gets good enough, considering points 1-3? If the extrapolated trajectory is promising enough (and I have not done the necessary research; I have no idea if it is or not), maybe it's better to let a few people suffer in the next few years due to bad advice, instead of having a lot of people suffer forever due to overzealous regulation?
deadbabe 12 days ago
I used ChatGPT for therapy and it seemed fine; I feel like it helped, and I have plenty of things fucked up about myself. It can't be much worse than other forms of "therapy" that people chase.
bigmattystyles 12 days ago
The problem is they are cheap and immediately available.
nickdothutton 12 days ago
Perhaps experts could moderate, or contribute training data that is awarded higher weights. Don't let perfect be the enemy of good.
emptyfile 12 days ago
The idea of people talking to LLMs in this way genuinely disturbs me.
bitwize 12 days ago
I dunno, man, M-x doctor made me take a real long, hard look at my life.
Buttons840 12 days ago
Interacting with an LLM (especially one running locally) can do something a therapist cannot: provide an honest interaction outside the capitalist framework. The AI has its limitations, but it is an entity just being itself, doing the best it can, without expecting anything in return.
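(For illustration, a fully local setup along these lines is easy to sketch. This assumes the ollama runtime and its Python client are installed and a model has been pulled; the model name below is just an example, not a recommendation.)

    # Minimal local-only chat loop: pip install ollama && ollama pull llama3.1
    # Nothing here leaves the machine; ollama serves the model locally.
    import ollama

    history = []  # conversation state lives only in this process
    while True:
        user_msg = input("you> ")
        if user_msg in ("quit", "exit"):
            break
        history.append({"role": "user", "content": user_msg})
        reply = ollama.chat(model="llama3.1", messages=history)
        content = reply["message"]["content"]
        history.append({"role": "assistant", "content": content})
        print("llm>", content)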