The article says the code is "now in the wild after being leaked," but then it says "the data is impossible to retrieve as it is now stored on the servers belonging to OpenAI." So did the source code leak out of OpenAI into the wild, or are they saying that OpenAI itself is "the wild"? As far as I can tell from the article, it's not accessible to the general public.
I think the market is going to explode (if it hasn't already) for on-prem, or at least private, LLMs on par with ChatGPT. This could be served by companies building their own, by open-source projects, or by OpenAI or OpenAI's competitors.

As a side effect, this feels like a bright spot in the potentially authoritarian trajectory that AI could take as labor becomes less and less valuable. It encourages development of LLMs that compete with the current default option and can be run on more and more limited hardware. Enterprises might even want separate departments, or separate individuals, to be able to run their own models to prevent leakage.
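To make the on-prem idea concrete, here is a minimal sketch of local inference using the open-source Hugging Face transformers library. The model choice and prompt are illustrative stand-ins, not a recommendation:

    # Minimal local-inference sketch: the prompt is processed entirely on
    # this machine and is never sent to a third-party API.
    # "gpt2" is a small stand-in model; an enterprise would swap in a larger
    # instruction-tuned checkpoint downloaded once and hosted internally.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator("Summarize our internal design doc:", max_new_tokens=50)
    print(result[0]["generated_text"])

The quality gap with ChatGPT is the open question, but the deployment shape (model weights and prompts both staying inside the firewall) is exactly what would prevent the kind of leakage described in the article.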
Confusing article. It appears the company discovered employees were pasting confidential information into ChatGPT and is assuming that data is now compromised, given OpenAI's policy that conversations are periodically reviewed and used for training. The data doesn't appear to be directly accessible to the public.
How did they gain access to ChatGPT from their offices?

I worked in the ATX factory about a decade ago and the network was *very* locked down at the time. You couldn't even get your phone into the building without a security guard doing things to it. Taking basic stuff like paper in or out was also disallowed.

I would have expected a total ban on personal computing devices leaving the parking lot if this had happened during my time there.
This is exactly why Wall St. and the major banks have banned their employees from using ChatGPT [0][1][2]. It is called regulation and compliance.

Some companies drinking the AI Kool-Aid just seem to love learning the hard way.

[0] https://www.wsj.com/articles/jpmorgan-restricts-employees-from-using-chatgpt-2da5dc34

[1] https://tech.co/news/wall-street-banks-ban-ai-chatgpt

[2] https://www.bloomberg.com/news/articles/2023-02-24/citigroup-goldman-sachs-join-chatgpt-crackdown-fn-reports?leadSource=uverify%20wall
Is there any written policy from OpenAI about how they protect chat data? How long do they retain it? How do they prevent one user's chats from resurfacing, hallucination-style, in another user's session?

OpenAI should be worried and self-regulate before the gov’t steps in and does it for them.