
Key risks using Generative AI

1 point by mpesce about 1 year ago

2 comments

mpesce about 1 year ago
"De-Risking AI", Wisely AI's latest white paper, highlights and explains five new risks of Generative AI tools: anthropomorphising; malicious and commercially protected training data; hallucinations; privacy, data security and data sovereignty; and prompt attacks. The white paper addresses each in detail and suggests strategies to mitigate these risks. It's part of our core mission to "help organisations use AI safely and wisely."
barfbagginus about 1 year ago
I think risk zero, and its mitigations, are poorly analyzed.

Describing deep problems as if the AI has empathy is a very effective way to gain insight into those problems. It responds the way a caring person would respond if you had self-disclosed a complex situation, often suggesting effective solutions.

Without the self-disclosure and empathic context, those solutions don't appear.

Am I aware that I'm disclosing sensitive information to a company?

Yeah, of course.

Do I have another choice?

Yes: let the problem languish without any external assistance, usually to my detriment.

Is it a reasonable mitigation to not use AI to solve problems that require empathy and disclosure of sensitive info?

Not for me. I am too mentally marginal and economically poor to have people who relate well to me, or to afford advocates. For me, self-advocating with the help of AI is, at times, the only thing keeping me from being homeless.

This has led to an incredible increase in my independence and quality of life as an autistic person. It helped me get sober and find stable housing. And it has helped me reconnect with social life, family, and friends.

I can't mitigate by not disclosing, or by not acting as if it has effective empathy. My personal issues and interpersonal conflicts just don't get resolved that way.

Instead, I believe it's important to recognize that some users, like me, actually depend on AI anthropomorphism. And we must be honest: the main risk of anthropomorphism is that we'll disclose secrets to companies.

That means users like me could mitigate the anthro risk by building confidential models which are safe to confide in.

There's nothing wrong with acting like an AI can hear you and give you empathic and effective responses. It is a huge loss of AI usage efficacy when we act otherwise. There are people with real problems they could solve easily by treating AI like a caring person, but those people will continue to suffer because they think AI is just a useless parrot.

Thus the way to mitigate anthro risk is not to make people cold and unfeeling in conversations with AI. The real mitigation is trustworthy AI that won't tell secrets to third parties.

And until we have that, we have to accept that for some people, the benefit of using untrusted commercial AI for personal problems far outweighs the cost, because the risks of not using AI can include things like homelessness and an inability to self-advocate or resolve critical quality-of-life issues.
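The "confidential models" this commenter points to can be approximated today by running an open-weights model locally, so disclosures never leave the user's machine. Below is a minimal sketch of that pattern, assuming Ollama (https://ollama.com) is installed and serving its default local API; the model name, prompt, and the confide() helper are illustrative assumptions, not anything from the thread above.

    # Minimal sketch: keep sensitive disclosures on-device by querying a
    # locally hosted open-weights model instead of a commercial API.
    # Assumes Ollama is running locally and a model has been pulled,
    # e.g. `ollama pull llama3`. Model name and prompt are illustrative.
    import requests

    OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

    def confide(prompt: str, model: str = "llama3") -> str:
        """Send a private prompt to a local model; nothing leaves this machine."""
        response = requests.post(
            OLLAMA_URL,
            json={
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
                "stream": False,  # ask for one complete JSON reply, not a stream
            },
            timeout=120,
        )
        response.raise_for_status()
        return response.json()["message"]["content"]

    if __name__ == "__main__":
        print(confide("Help me draft a polite letter to my landlord about a needed repair."))

Whether a small local model matches a frontier commercial model in empathic quality is an open question, but the privacy property (no third party ever sees the conversation) is exactly the mitigation the comment describes.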
I think risk zero, and it&#x27;s mitigations, is poorly analyzed.<p>Describing deep problems as if the AI has empathy is a very effective way to gain insight into those problems. It responds the way a caring person would respond if you had self disclosed a complex situation - often suggesting effective solutions.<p>Without the self disclosure and empathic context, those solutions don&#x27;t appear.<p>Am I aware that I&#x27;m disclosing sensitive information to a company?<p>Yeah, of course.<p>Do I have another choice?<p>Yes, let the problem languish without any external assistance, usually to my detriment.<p>Is it a reasonable mitigation to not use AI to solve problems that require empathy and disclosure of sensitive info?<p>Not for me. I am too mentally marginal and economically poor to have people that relate well to me, or to afford advocates. For me, self advocating with the help of AI is, at times, the only thing keeping me from being homeless.<p>This has led to an incredible increase to my Independence and quality of life as an autistic person. It helped me get sober, and find stable housing. And it has helped me reconnect with social life, family, and friends.<p>I can&#x27;t mitigate by not disclosing or not acting as if it has effective empathy. My personal issues and interpersonal conflicts just don&#x27;t get resolved that way.<p>Instead, I believe it&#x27;s important to recognize that some users, like me, actually depend on AI anthropomorphism. And we must be honest - the main risk of anthropomorphism is that we&#x27;ll disclose secrets to companies.<p>That means users like me could mitigate the anthro risk by building confidential models which are safe to confide in.<p>There&#x27;s nothing wrong with acting like an AI can hear you and give you empathic and effective responses. It is a huge loss of AI usage efficacy when we act otherwise. There are people with real problems they could solve easily by treating AI like a caring person, but those people will continue to suffer because they think AI is just a useless parrot.<p>Thus the way to mitigate anthro risk is not to make people cold and unfeeling in conversations with AI. The real mitigation is trustworthy AI that won&#x27;t tell secrets to third parties.<p>And until we have that, we have to accept that for some people, the benefit of using untrusted commercial AI for personal problems far outweighs the risks of not using AI, which can include things like homelessness and inability to send advocate or solve critical quality of life issues.