I've recently seen some teams claim to use third-party AI assistants like Claude or ChatGPT for coding. Don't they consider it a problem to feed their proprietary commercial code into the services of these third-party companies?

If you feed the most critical parts of your project to an AI, wouldn't that introduce security vulnerabilities? The AI would then have an in-depth understanding of your project's core architecture. Couldn't other AI users then gain easy access to these underlying details and breach your security defenses?

Furthermore, couldn't other users easily copy your code without any attribution, making it seem no different from open-source software?
In theory, these companies all claim they don't use data from API calls for training. Whether or not they adhere to that is… TBD, I guess.

So far I've decided to trust Anthropic and OpenAI with my code, but not Deepseek, for instance.
Especially under the current US administration and geopolitical climate?

Yeah, we're not doing that.

We also moved our private git repos and CI to self-managed hosting.