I've worked with/for a lot of orgs over the past few decades, and personal experience tells me there are a _lot_ of incidents that go unreported.<p>The usual pattern: if there are no logs saying something bad actually happened, there's certainly nothing to say that <i>it did</i>, even though some terribly guessable credentials were used for ages on something publicly exposed. I know, they know, but I was told in no uncertain terms to drop it.<p>Nothing to see here, move along. Work to be done, money to be made.
It's hard enough to report issues <i>to</i> OpenAI. Not surprising that information coming out of the company is equally constrained.<p>Right now my ChatGPT-4 history is full of chats I didn't create, on subjects ranging from corporate governance to Roblox scripting to somebody's math homework. It's only a matter of time before this bug leaks someone's sensitive personal data. I spent ten minutes looking for a way to report it, but they have successfully insulated themselves from any contact with their (paying) customers.<p>Pretty annoying, and not something you'd expect from a supposedly security-savvy company... although that expectation is certainly changing.
Yeah, I hope people are not putting any sensitive information into ChatGPT. Anything that can get stolen will get stolen; it's a matter of when, not if. On-device LLMs with no network transmission are the only way to keep things safe if you really care.
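<p>For the curious, here's a minimal sketch of what fully local inference can look like, assuming llama-cpp-python and a GGUF model file already downloaded to disk (the model path and parameters below are hypothetical, not a recommendation):

    # Runs entirely on-device: no API keys, no network calls at inference time.
    # Assumes: pip install llama-cpp-python, plus a local GGUF model file.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)
    out = llm("Summarize this sensitive document: ...", max_tokens=128)
    print(out["choices"][0]["text"])

Nothing leaves the machine unless you ship it somewhere yourself.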
The post headline has been editorialised, yet it's still terrible clickbait.
> OpenAI’s internal messaging systems early last year, stealing details of how OpenAI's technologies work from employees. Although the hacker did not access the systems housing key AI technologies, […]
Enough said. It's completely normal not to disclose a breach when there's no evidence or great likelihood that customers were affected.<p>A poorly written article regurgitating the NYT story with uninformed, alarmist, shitty podcast-tier 'analysis'.<p>Jog on.
As someone who hoped OpenAI would be consistently candid, I find this disappointing.<p>If the internal culture is to keep problems under wraps to maintain appearances, that seems likely to backfire at some point.
> OpenAI's systems, where the company keeps its training data, algorithms, results, and customer data, were not compromised<p>The article just rambles about unnamed, uninformed AI-phobes being concerned about US national security vis-à-vis China because of some unspecified OpenAI internal information that might have leaked.