So here's my nuanced take on this:

1. The effects of AI therapy should not be compared with traditional therapy; they should be compared with receiving no therapy at all. Many people can't get therapy, for many reasons, mostly financial or familial (domestic abuse, controlling parents). Even those who can get it find that their therapist isn't infinitely flexible with time and usually requires appointments, which doesn't help with immediate problems like "my girlfriend just dumped me" or "my boss just berated me in front of my team for something I worked 16-hour days on."

AI will increase the amount of therapy that exists in the world, probably by orders of magnitude, just like the record player increased the amount of music listening or the jet plane increased the amount of intercontinental travel.

The right questions to ask here are more like "how many suicides would an AI therapist prevent, compared to the number it would induce?" or "are *all* human therapists licensed in country/state X more competent than a good AI?"

2. When a person dies by suicide, their cause of death is, and will always be, listed as "suicide," not "AI overregulation leading to lack of access to therapy." In contrast, if somebody dies after receiving bad AI advice, that advice will ultimately be attributed as the cause of death. Statistics will be very misleading here and will never show the whole picture, because counting the deaths caused by AI is inherently much easier than counting the deaths it prevented (or failed to prevent).

It is much safer for companies and governments to prohibit AI therapy, because then they won't have to deal with the lawsuits and the angry public demanding that they do something about the new problem. *This is true even if AI is net beneficial because of the increased access to therapy.*

3. Because of how AI models work, one model / company will handle many more patients than any single human therapist. That means we need to rethink how mistakes are punished. Suppose a model is 10x better than an average human, say 1 unnecessary suicide per 100,000 patients instead of 1 per 10,000. Imprisonment after a single mistake may be a suitable punishment for humans, but it doesn't work in the API space, because even a much better model, serving that many more patients, is bound to make a mistake at some point (see the rough sketch at the end of this comment).

4. Another right question to ask is "how does the effectiveness of AI at therapy in 2025 compare to its effectiveness in 2023?" Where it's at right now doesn't matter; what matters is where it's going. If it continues at the current rate of improvement, when, if ever, will it surpass an average (or a particularly bad) licensed human therapist?

5. And if that happens and AI genuinely becomes better, are we sure that legislators and therapists have the right incentives to accept that reality? If we pass a law prohibiting AI therapy now, are we sure we have the mechanisms to get it repealed if AI ever gets good enough, considering points 1-3? If the extrapolated trajectory is promising enough (and I have not done the necessary research, so I have no idea whether it is), maybe it's better to let a few people suffer in the next few years due to bad advice, instead of having a lot of people suffer forever due to overzealous regulation?
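To make point 3 concrete, here's a minimal back-of-the-envelope sketch. The per-patient error rates are the ones from the example above; the caseload numbers are my own assumptions, purely for illustration:

```python
# Back-of-the-envelope sketch for point 3 (illustrative numbers only):
# even with a 10x lower per-patient error rate, sheer scale makes at least
# one error near-certain, so "punish every single mistake" stops working.

def p_at_least_one_error(per_patient_rate: float, patients: int) -> float:
    """Probability of at least one error, assuming independent patients."""
    return 1 - (1 - per_patient_rate) ** patients

# A human therapist: 1 error per 10,000 patients, a few thousand patients
# over an entire career (assumed caseload).
print(p_at_least_one_error(1 / 10_000, 3_000))       # ~0.26

# An AI provider: 10x better per patient (1 per 100,000), but serving
# millions of patients (assumed caseload).
print(p_at_least_one_error(1 / 100_000, 5_000_000))  # ~1.0, essentially certain
```

Under these assumptions the individual human therapist has roughly a one-in-four chance of ever making such a mistake, while the AI provider is all but guaranteed to, despite being 10x safer per patient, which is why a one-strike punishment regime doesn't transfer.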