I think risk zero, and its mitigations, are poorly analyzed.<p>Describing deep problems as if the AI has empathy is a very effective way to gain insight into those problems. It responds the way a caring person would respond if you had self-disclosed a complex situation - often suggesting effective solutions.<p>Without the self-disclosure and empathic context, those solutions don't appear.<p>Am I aware that I'm disclosing sensitive information to a company?<p>Yes, of course.<p>Do I have another choice?<p>Yes: let the problem languish without any external assistance, usually to my detriment.<p>Is it a reasonable mitigation to not use AI to solve problems that require empathy and disclosure of sensitive info?<p>Not for me. I am too mentally marginal and economically poor to have people who relate well to me, or to afford advocates. For me, self-advocating with the help of AI is, at times, the only thing keeping me from being homeless.<p>This has led to an incredible increase in my independence and quality of life as an autistic person. It helped me get sober and find stable housing. And it has helped me reconnect with social life, family, and friends.<p>I can't mitigate by not disclosing or by not acting as if it has effective empathy. My personal issues and interpersonal conflicts just don't get resolved that way.<p>Instead, I believe it's important to recognize that some users, like me, actually depend on AI anthropomorphism. And we must be honest - the main risk of anthropomorphism is that we'll disclose secrets to companies.<p>That means users like me could mitigate the anthro risk by building confidential models which are safe to confide in.<p>There's nothing wrong with acting like an AI can hear you and give you empathic and effective responses. Acting otherwise is a huge loss of AI usage efficacy.
There are people with real problems they could solve easily by treating AI like a caring person, but those people will continue to suffer because they think AI is just a useless parrot.<p>Thus the way to mitigate anthro risk is not to make people cold and unfeeling in conversations with AI. The real mitigation is trustworthy AI that won't tell secrets to third parties.<p>And until we have that, we have to accept that for some people, the benefit of using untrusted commercial AI for personal problems far outweighs the privacy risk, because the cost of not using AI can include things like homelessness, inability to self-advocate, or unresolved critical quality-of-life issues.