TLDR:

1. Are you worried about sensitive or restricted data making it to AI providers through employee errors or integration mistakes?

2. If so, how are you stopping sensitive data from reaching your AI providers?

We've heard concerns at two levels of seriousness:

1. Legally controlled data:

- PHI (i.e. health information), SSNs, anything attorney-client privileged, etc. Here, no matter how good your MSA is or how reputable the AI provider, you legally cannot share that data.

2. Sensitive/restricted internal data:

- Strategic planning documents, financials, customer records, trade secrets, transcripts of sensitive meetings, etc. Here, with the right agreements in place with the provider, it's probably OK to share, but only if all usage goes through a centralized account under firm agreements, and even then you want to be careful.

It's easy to have a rule that says "Don't put this into AI", but with employees everywhere using AI for things like meeting-notes summarization, a user can paste an hour-long meeting transcript, not realize it contains sensitive data, and then, worst case, when the provider retrains their model, that information ends up in the hands of other customers/users.

If you have any legally controlled data, or your employees use tools like ChatGPT on personal accounts that haven't opted out of model training, this seems like a serious risk.

For example: one of our prospects was using Vowel.com, whose terms of service state they can train models on top of your meeting recordings, with practically no restrictions. This could easily end up producing completions that regurgitate customer names, strategies, even financials to other companies. Vowel doesn't have to be malicious for this to happen - at scaled usage of AI across many vendors, we think this is inevitable. That's why security-conscious companies like [REDACTED] turned off ChatGPT today.

Curious how HN is thinking about maintaining infosec in the scary new world of AI!
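Since the question above is "how are you stopping sensitive data from reaching your AI providers?", here is one partial approach: a pre-send check in whatever integration layer actually calls the provider. This is a minimal sketch only; `send_fn` is a hypothetical stand-in for your provider call, the regex patterns are illustrative, and a couple of regexes are nowhere near sufficient for PHI or trade secrets - they only catch the most obvious cases.

    import re

    # Illustrative patterns only; real PHI/PII detection needs a proper
    # DLP or NER pipeline, not a handful of regexes.
    PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    }

    def scan_for_sensitive_data(text: str) -> dict[str, list[str]]:
        """Return matches for the illustrative patterns above, keyed by pattern name."""
        return {
            name: pattern.findall(text)
            for name, pattern in PATTERNS.items()
            if pattern.search(text)
        }

    def guarded_prompt(text: str, send_fn):
        """Block (or redact) before handing text to an AI provider.

        `send_fn` is whatever function actually calls your provider;
        it is a placeholder here, not a real API.
        """
        findings = scan_for_sensitive_data(text)
        if findings:
            raise ValueError(f"Refusing to send prompt; possible sensitive data: {list(findings)}")
        return send_fn(text)

    if __name__ == "__main__":
        transcript = "Meeting notes: Jane's SSN is 123-45-6789, Q3 revenue target is $4M."
        print(scan_for_sensitive_data(transcript))  # {'ssn': ['123-45-6789']}

Even a check like this only works if all AI traffic goes through that one integration point - it does nothing about an employee pasting a transcript into a personal ChatGPT account, which is the harder half of the problem.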