Ask HN: Manage risks of AI usage in corporate environments

2 points by ahaneo about 2 years ago
With the recent deluge of GPT- and ChatGPT-enabled products, I am wondering how regulated industries are managing their usage given the multitude of associated risks: copyright, model risk, data and privacy risk, etc.

The embedding of these capabilities within corporate SaaS products is also becoming a major issue. One example is Outlook starting to ingest confidential content and potentially making it available to employees who may not be privy to such information. Our organization has blocked direct access to it, but given how pervasive this is becoming, what are others doing in this scenario? The closest analogy I can think of is enabling safe cloud usage through CASBs like Netskope, but I'm not sure whether any such technology for AI usage exists at the moment.
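For what it's worth, the "CASB for AI" idea amounts to an egress gateway that inspects prompts before they leave the network. The sketch below is only a rough illustration of that pattern, not any vendor's product; the pattern names, regexes, and `screen_prompt` function are hypothetical and a real deployment would sit in a proxy in front of the LLM API.

```python
import re

# Hypothetical patterns an organization might treat as confidential.
# Illustrative only; real policies would be far more extensive.
CONFIDENTIAL_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marking": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, redacted_prompt).

    Blocks the request outright if an internal confidentiality marking
    is present; otherwise redacts recognized patterns and lets the
    redacted text through to the upstream AI service.
    """
    if CONFIDENTIAL_PATTERNS["internal_marking"].search(prompt):
        return False, ""
    redacted = prompt
    for name, pattern in CONFIDENTIAL_PATTERNS.items():
        redacted = pattern.sub(f"[REDACTED {name.upper()}]", redacted)
    return True, redacted

if __name__ == "__main__":
    ok, text = screen_prompt("Summarize this note for jane.doe@example.com")
    print(ok, text)  # True, with the email address replaced by a placeholder
```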

1 comment

sharemywin about 2 years ago
Here's an article about OpenAI's privacy policy: https://gizmodo.com/open-ai-chatgpt-api-bing-google-ai-1850174901