Are you worried about your infosec with ChatGPT, or other AI apps?

1 point by r_thambapillai about 2 years ago
TLDR:

1. Are you worried about sensitive or restricted data making it to AI providers through employee errors or integration mistakes?

2. If so, how are you stopping sensitive data from reaching your AI providers?

We've heard concerns at two levels of seriousness:

1. Legally controlled data:

- PHI (i.e., health information), SSNs, anything attorney-client privileged, etc. Here, no matter how good the MSA you have in place is or how reputable the AI provider, you legally cannot share that data.

2. Sensitive/restricted internal data:

- Strategic planning documents, financials, customer records, trade secrets, transcripts of sensitive meetings, etc. Here, with the right agreements in place with the provider, it's probably OK to share, but only if all usage goes through a centralized account with firm agreements in place, and even then you want to be careful.

Now, it's easy to have a rule that says "Don't put this into AI," but with employees everywhere using AI for things like meeting-notes summarization, a user can paste an hour-long meeting transcript, not realize it contains sensitive data, and then, in the worst case, when the provider retrains its model, that information ends up in the hands of other customers/users.

If you have any legally controlled data, or your employees use tools like ChatGPT on personal accounts that haven't opted out of model training, this seems to be a serious risk.

For example: one of our prospects was using Vowel.com, whose terms of service state that they can train models on top of your meeting recordings with practically no restrictions. This could easily end up producing completions that regurgitate customer names, strategies, even financials to other companies. Vowel doesn't have to be malicious for this to happen; at scaled usage of AI across many vendors, we think it's inevitable. That's why security-conscious companies like [REDACTED] turned off ChatGPT today.

Curious how HN is thinking about maintaining infosec in the scary new world of AI!
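The mitigation the question gestures at is scrubbing data before it leaves the network. Below is a minimal sketch in Python of redacting a transcript before it is sent to any AI provider; the regex patterns and sample text are illustrative assumptions only, and a real deployment would use a purpose-built DLP tool or library rather than ad-hoc regexes:

import re

# Hypothetical patterns for illustration; real PII detection needs
# far more coverage (names, addresses, account numbers, etc.).
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

transcript = "Call Jane at 555-867-5309, SSN 123-45-6789, jane@example.com"
print(redact(transcript))
# -> Call Jane at [REDACTED-PHONE], SSN [REDACTED-SSN], [REDACTED-EMAIL]

The point of the sketch is where the redaction happens: before the text reaches the provider's API, so a careless paste of a meeting transcript cannot leak anything the patterns catch.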

1 comment

bob1029 about 2 years ago
We are looking at building an internal web wrapper for these models so that we can audit and restrict employee access.

This week, we will be discussing an AI policy that describes what is and is not permissible with public tools (such as ChatGPT's web UI) vs. internal tools.

Long term, we may be a good candidate for the foundry model with regard to sensitive customer knowledge.
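A minimal sketch of the kind of internal wrapper bob1029 describes, assuming Flask and the requests library; the upstream URL, the X-Employee-Id header, and the allow-list are placeholder assumptions standing in for a real identity provider and deployment:

import logging
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

UPSTREAM_URL = "https://api.openai.com/v1/chat/completions"  # assumed endpoint
ALLOWED_USERS = {"alice", "bob"}  # stand-in for a real identity check

@app.route("/chat", methods=["POST"])
def chat():
    user = request.headers.get("X-Employee-Id", "")
    if user not in ALLOWED_USERS:
        return jsonify({"error": "not authorized for AI tools"}), 403

    payload = request.get_json(force=True)
    # Audit trail: record who sent what before it leaves the network.
    logging.info("user=%s payload=%s", user, payload)

    upstream = requests.post(
        UPSTREAM_URL,
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        timeout=60,
    )
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=8080)

Routing all usage through one chokepoint like this gives you the centralized account, the audit log, and a place to bolt on redaction, all at once.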
Comment #35060012 not loaded.