Ok so to summarize: Credentials were stolen using a rather common piece of malware from some people who didn't protect their computers properly. A subset of those credentials were related to OpenAI - while at the same time this malware (or malware like it) is used to steal Gmail, Outlook, Amazon, Facebook, and all other kinds of credentials for services where potentially sensitive information is often entered.<p>Wow, we really are at the point where you just need to insert "ChatGPT" into some boring random headline to make it news :)
I can admit to smirking a little when I read the article, knowing that some bad actor in the world has spent even a little time and attention poring over my collection of stupid chats, looking for valuable corporate secrets, and finding that each and every one of my conversations ends with "now express this as a limerick."
It seems to me that these are credentials harvested by malware on people's machines, not credentials stolen from OpenAI. The relevance to ChatGPT is only that people use the chats for personal/business info and the logs are retained.
While yes, an annoyance, we should reflect on how far we've come when this type of thing causes little to no disruption.<p>Most of us use unique passwords, a smaller portion uses unique emails per account, and in the future we will use public keys (passkeys).<p>Security is getting better; I'm optimistic.<p>However, we have to keep pushing to provide as little information to these companies as possible (i.e. they don't need my name, DOB, etc.). And I look forward to a future where I store this information myself and provide it just in time as needed for specific use cases (i.e. it might be processed and checked by a 3rd party, but it's never stored).
A ChatGPT account has your email address, and a password, so unless folks are using ChatGPT to discuss their personal information against the warning you get every single time you log in, this is mostly just more proof that everything you log into will be hacked. Which isn't really news. Unfortunately, what it's <i>not</i> is an article that explains how "what they got" translates into "and this is what they could do with that data", so I'm not sure I understand the value of this data. What ramifications would this have for folks whose account got compromised?
I feel like I am the only person not using ChatGPT due to privacy concerns. My conversations becoming public is half my concern. The other half is the information being used against me by companies/government.
It’s unclear from the article because it does not directly state the vector of attack, just the tools used. But it looks like this is <i>not</i> a breach of OpenAI systems, and is instead the product of malware on user machines that happened to pick up ChatGPT credentials, among whatever else it deemed valuable on the user’s machine. Is this a correct understanding?
One feature OpenAI really needs is the ability to force logout accounts across all devices. It doesn't have that currently, at least not with ChatGPT.<p>As of a month ago, sessions were still staying active even after a password change.<p>A little device/session management portal would be nice. Pretty standard these days.
Has OpenAI started notifying people about the breach? I haven’t received anything. Does this mean my creds were not part of the leak or does it mean OpenAI isn’t disclosing anything?
Would be nice if users could actually register for 2FA.<p>> <i>As of Monday, June 12 2023, new 2FA/MFA enrollments are temporarily paused.</i><p><a href="https://help.openai.com/en/articles/7967234-does-openai-offer-multi-factor-authentication-mfa-two-factor-authentication-2fa" rel="nofollow noreferrer">https://help.openai.com/en/articles/7967234-does-openai-offe...</a>
It's as if this is a trap to see who actually read and comprehended the article. 3rd paragraph in...<p>> "Logs containing compromised information harvested by info stealers are actively traded on dark web marketplaces," Group-IB said.<p>Though the 4th paragraph makes it more obvious.
I wonder if these dark web account credential people ever get access to LexisNexis. That seems like a really sensitive data source that could be leaking a lot of information on people.
Not entirely surprising. OpenAI had been lurking around the startup realm despite having the funding and the people: for years they had no product bringing money in, and they had to do something about it. ChatGPT was the perfect way to turn things around, and they did. But often when you rush a product to market, you are forced to cut corners. And in my experience, the most common corners to cut are tests and security. "This is well more than enough" is a very convenient way to lie to yourself and call it a job well done.