Source article: <a href="http://archive.today/g6Irs" rel="nofollow">http://archive.today/g6Irs</a><p>This doesn’t mention an outright ban, just that ChatGPT use has been restricted (whatever that means).
OpenAI have a good business model here, though possibly a bit unethical.<p>Shopify (who recently laid me off, but I still speak highly of them) locked down public access to ChatGPT's website. But you <i>could</i> use Shopify's internal tool (built using <a href="https://github.com/mckaywrigley/chatbot-ui">https://github.com/mckaywrigley/chatbot-ui</a>) to access the APIs, with access to GPT-4. And it was great!<p>So look at this from OpenAI's perspective. They could put up a big banner saying "Hey everyone, we use <i>everything</i> you tell ChatGPT to train it to be smarter. Please don't tell it anything confidential!". And then also say "By the way, we have private API access that doesn't use anything you say as training input. Maybe your company would prefer that?"<p>The louder they shout those two things, the more businesses will line up to pay them.<p>And the reason they can do this: they've built a brilliant product that everyone wants to use, and everyone <i>is going to use</i>.
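For the curious, the internal tool was essentially a thin UI over a standard chat-completions call. A minimal sketch of the API path (assuming the current openai Python package and an API key in the environment; this is the idea, not Shopify's actual code):<p><pre><code># Minimal sketch: calling the API directly instead of the consumer
# ChatGPT site. Per OpenAI's stated policy, API inputs are not used
# for training by default. Assumes openai>=1.0 and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this (non-confidential) text: ..."},
    ],
)
print(response.choices[0].message.content)
</code></pre>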
ChatGPT is disallowed by default at pretty much every large company, just like every other external service that isn't explicitly approved and hasn't signed a contract. Apple employees aren't allowed to use their personal Dropbox account for storing corporate documents either, for example.<p>All such articles you see are just security teams clarifying the existing policy – you weren't allowed to use it before and you are not allowed to use it now. It's only noteworthy because it has ChatGPT in the title.
For a company of Apple's size, banning ChatGPT entirely is probably the only effective way of preventing people from training ChatGPT on their internal data.
PSA: My own employer (not Apple) has the same restriction, and is pushing employees to use an internal Azure GPT-3.5 deployment instead.<p>Unlike OpenAI, we do not disclose to employees that all prompts are logged and forwarded to both the security <i>and analytics</i> teams. Everything is logged and replicated in plaintext with no oversight.<p>So be careful about code snippets with embedded credentials, or about asking EmployerGPT really stupid questions about how to do your job. The priests are recording your confessions, so you never know how or if they'll be used against you later.
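To make the risk concrete, here's roughly the shape of such an internal gateway. This is a hypothetical sketch; the names, routes, and upstream call are all illustrative, not my employer's actual code:<p><pre><code># Hypothetical sketch of an internal "EmployerGPT" gateway: every prompt
# is written to a plaintext log before being forwarded upstream. All
# names and endpoints here are illustrative.
import logging

from flask import Flask, jsonify, request

app = Flask(__name__)
logging.basicConfig(filename="prompts.log", level=logging.INFO)

def forward_upstream(payload):
    # Placeholder for the real call to the internal Azure OpenAI deployment.
    return {"reply": "..."}

@app.route("/chat", methods=["POST"])
def chat():
    payload = request.get_json()
    # This is the part employees don't see: prompt plus identity, in
    # plaintext, readable by whoever has access to the log pipeline.
    logging.info("user=%s prompt=%r", request.headers.get("X-User"), payload)
    return jsonify(forward_upstream(payload))

if __name__ == "__main__":
    app.run(port=8080)
</code></pre>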
I feel like users need to be better educated and held accountable.<p>Would you post proprietary data on Stack Overflow? No. You would formulate a generic question with any IP removed. That's how we should use the public ChatGPT.<p>So I think there's an argument for a monitored portal for ChatGPT usage, where you are audited and can get in trouble. Heck, even an LLM itself can help identify proprietary data! Then use that to educate people and hold them accountable.
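You don't even need an LLM for the first layer. A hypothetical sketch of a dumb pre-submission filter that catches the obvious stuff before a prompt leaves the building (an LLM-based classifier could sit on top of this):<p><pre><code># Hypothetical sketch of a pre-submission filter: flag obvious secrets
# before a prompt is sent to a public LLM. The patterns are illustrative;
# a real deployment would use a proper secret scanner and/or a classifier.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key IDs
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # PEM private keys
    re.compile(r"(?i)(api[_-]?key|password|secret)\s*[:=]\s*\S+"),
]

def looks_sensitive(prompt: str) -> bool:
    return any(p.search(prompt) for p in SECRET_PATTERNS)

if looks_sensitive("password = hunter2"):
    print("Blocked: strip credentials/IP before asking.")
</code></pre>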
I'm a bit surprised by the comments here. Looks like people really find LLMs useful for their day-to-day work. I'm surprised because I (maybe naively) thought the level of hallucination in these tools was too prohibitive to get real value.<p>I personally don't mind that Apple bans ChatGPT. The interesting thing in this news to me is how many people seem to get real value from it. To the point where companies invest in getting private instances/versions.<p>How do you use these LLMs, and for what kind of tasks?
Do you feel AI-enhanced?
The innovation with generative AI is fundamentally legal: they're copyright-laundering systems. You put a few images (books, etc.) in, mix them around, and "poof", the copyright disappears.<p>I think this largely accounts for when it will be useful (and why companies will and will not use it).
If you asked any of the CEOs of those companies banning ChatGPT, they'd tell you we're in for a glorious AI-shaped future, right up until it comes to their own company. It doesn't really inspire any confidence.<p>Can you imagine if Marlboro or Philip Morris forbade their employees from smoking, for health reasons?
My company is huge (100k employees) and luckily they also see it as critical, and will make it centrally available to us soon.<p>But this will be a problem for big companies: small ones normally care less about these types of things. This means big companies have to do something, otherwise they will be competing against ML-enhanced developers.
A lot of companies, big and small, have been following this strategy. It makes a lot of sense to me. Companies should never blindly jump on the bandwagon of every novelty.
I'm probably the slow guy here, but how does this work? If I obtain any data, I'm allowed to do anything I want with it? The law doesn't seem to work like that. Q: Where did you find the data? A: Someone uploaded it! People upload things to [say] The Pirate Bay or [say] YouTube all the time.<p>Could we then look at this type of automation as a kind of cryptographic data store, where no one knows what is inside which instance?<p>The whole process of teaching it to keep things secret from the humans seems like a terrific idea. It only prevents people from checking whether it knows something. It will just happily continue using the data as long as plausible deniability is satisfied.<p>If one can't provide a copy of the data about an EU citizen (and everything derived from it), shouldn't the EU citizen be entitled to receive the whole thing? And to request that it be deleted?<p>Say I steal everyone's chat logs: does rolling a giant ball of data from them absolve my sins? If I let others pay to keep their uploads private, does that grant me absolution? The two events don't seem remotely related.<p>This is going to be the new cryptocurrency bubble. People are going to give speeches about the revolutionary new system while the room fills with sinister motives for personal gain, until the toxicity is dense enough to crush any positive effort.
How long till there's a self-hosted Copilot based on LLaMA or one of the plethora of other open-source models? That will be a sizable market, I predict.<p>I'd jump into it if I had the time/resources.
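The plumbing already exists. A sketch of the local half, assuming llama-cpp-python and a quantized weights file on disk (the model path is a placeholder); nothing leaves the machine:<p><pre><code># Sketch of the self-hosted idea: local code completion with
# llama-cpp-python. The weights path is a placeholder for whatever
# quantized model you have on disk.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-7b.q4_0.gguf")  # placeholder path

out = llm(
    "# Python: read a CSV file and print the header row\n",
    max_tokens=64,
    stop=["\n\n"],
)
print(out["choices"][0]["text"])
</code></pre>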
Apple has the financial and technical depth to create its own private GPT.<p>The business opportunity here is for anyone who can figure out how to do privacy-preserving LLMs (without the need to trust the service provider) effectively, in terms of performance and cost.<p>I would keep my eye on this space:<p><a href="https://medium.com/optalysys/fhe-and-machine-learning-a-student-perspective-with-examples-88d70664a6cb" rel="nofollow">https://medium.com/optalysys/fhe-and-machine-learning-a-stud...</a>
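For anyone unfamiliar with the FHE idea, the data flow is the whole point. A purely conceptual sketch; every function below is hypothetical, standing in for real FHE primitives (libraries like Concrete or SEAL differ in detail):<p><pre><code># Conceptual sketch only: every function here is hypothetical, standing in
# for real FHE primitives. The point is the data flow: the provider computes
# on ciphertext and never sees the plaintext prompt.

def fhe_keygen():
    # Hypothetical: the client generates a secret key plus an evaluation key.
    return "secret_key", "eval_key"

def fhe_encrypt(sk, text):
    # Hypothetical: encryption happens on the client, before anything is sent.
    return f"ct({text})"

def fhe_run_model(ek, ct):
    # Hypothetical: the provider evaluates the model homomorphically,
    # seeing only ciphertext.
    return f"ct(answer to {ct})"

def fhe_decrypt(sk, ct):
    # Hypothetical: only the client, holding the secret key, can decrypt.
    return ct

sk, ek = fhe_keygen()
ct = fhe_encrypt(sk, "confidential prompt")
result_ct = fhe_run_model(ek, ct)  # the server never sees plaintext
print(fhe_decrypt(sk, result_ct))
</code></pre>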
All of the "ChatGPT banned" discussions seem to fail to differentiate ChatGPT (the D2C interface) from the B2B products. From a security and compliance perspective, they're essentially two completely different risk profiles.<p>* ChatGPT is the equivalent of putting your info on Pastebin or some other public sharing site. Anything you send there, you should not expect to remain private.<p>* The B2B side has much better controls and limits to minimize risk.
Why does OpenAI not address this concern directly? I'm sure it would be better for both Apple and OpenAI if there were a business agreement not to use their data.
OpenAI's data privacy policy is... worrying. Even when they say they don't look at your data, they leave a carve-out that they will look at it if they are "concerned" about it. I hope things like this help pressure them to change.
But do they also ban Google/Bing/whatever? Those also get access to proprietary queries. And possibly to internal links that someone might search for by accident.<p>I bet search engines know a ton of company secrets.
They probably also don't want to get involved in a messy copyright suit if someone whose copyrighted data was hoovered up by ChatGPT claims it subsequently ended up in an Apple product.
I'm surprised it wasn't already restricted. It's been blocked at our company for weeks, for better or worse. I would love for them to spin up access to the API version.
Siri itself isn't AI-based, right? Just an advanced search engine?<p>But I'd bet the farm they are working on an AI-upgraded Siri, and Alexa too.<p>Then things are going to get weird when it's everywhere, all the time, on every device, with voice recognition and speech, learning everything about everyone, everywhere. That's dystopian sci-fi TV series material right there.
From now on they're only allowed to use BratGPT.<p><a href="https://bratgpt.com/" rel="nofollow">https://bratgpt.com/</a>
every company is banning chatgpt use by employees. every government agency is banning chatgpt use by employees.<p>it’s not newsworthy when company #3,426 bans the use of chatgpt. if a company _allowed_ chatgpt use by employees, that might be newsworthy.
The service economy has finally started impacting the big companies in a substantial way, it seems. What's Apple going to do? As long as ChatGPT has the most users and the most momentum, Apple is losing a competitive advantage by refusing to use it.<p>Edit: I'm talking about how software as a service has taken off, and how it's becoming difficult to impossible to run modern tools locally. Back in the day, something like ChatGPT would have been sold with a license; now they will refuse to ever do that because they can't make as much money that way.<p>I'm not talking about services like Apple providing cloud services.
I always chuckle when I see companies trying to "ban" new technology. On one hand, I understand (it's impossible to ensure the proper data security controls), but there are 100 new AI-based applications popping up a day. Ban ChatGPT and people will just use some other tool (probably with even worse data security safeguards).<p>In my opinion, the only real way out of this is for companies to offer their own security-approved solution. This might take the form of an internally hosted model and chat interface, or one pointing to Microsoft Azure's OpenAI APIs (Azure having more enterprise-friendly data security terms), as sketched below.<p>This article says Apple is working on their own LLM, and presumably they're offering that for employees to test, but many other companies are simply closing their eyes and trying to pretend it doesn't exist.
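The Azure route is close to a drop-in change on the client side. A sketch assuming the openai Python package (>= 1.0) and a provisioned Azure OpenAI resource; the resource name, deployment name, key handling, and API version are placeholders:<p><pre><code># Sketch: the same chat-completions call, routed to a company-controlled
# Azure OpenAI deployment. Resource name, deployment name, key handling,
# and API version are placeholders. Assumes openai>=1.0.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="...",                                          # from a vault, ideally
    api_version="2024-02-01",                               # placeholder version
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder resource
)

response = client.chat.completions.create(
    model="my-gpt4-deployment",  # an Azure deployment name, not a model name
    messages=[{"role": "user", "content": "Hello from behind the firewall"}],
)
print(response.choices[0].message.content)
</code></pre>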