If your data is stored in a database that a company can freely read and access (i.e. not end-to-end encrypted), the company will eventually update their ToS so they can use your data for AI training — the incentives are too strong to resist
Zoom is just disappointed the ToS change went viral, and that their reputation is privacy friendly enough for that to even matter.<p>I wonder if Teams would face similar uproar, assuming that bit isn't already in the ToS.
Related:<p><i>Zoom's TOS Permit Training AI on User Content Without Opt-Out</i> - <a href="https://news.ycombinator.com/item?id=37038494">https://news.ycombinator.com/item?id=37038494</a> - Aug 2023 (35 comments)<p><i>How Zoom’s terms of service and practices apply to AI features</i> - <a href="https://news.ycombinator.com/item?id=37037196">https://news.ycombinator.com/item?id=37037196</a> - Aug 2023 (177 comments)<p><i>Ask HN: Zoom alternatives which preserve privacy and are easy to use?</i> - <a href="https://news.ycombinator.com/item?id=37035248">https://news.ycombinator.com/item?id=37035248</a> - Aug 2023 (16 comments)<p><i>Not Using Zoom</i> - <a href="https://news.ycombinator.com/item?id=37034145">https://news.ycombinator.com/item?id=37034145</a> - Aug 2023 (194 comments)<p><i>Zoom terms now allow training AI on user content with no opt out</i> - <a href="https://news.ycombinator.com/item?id=37021160">https://news.ycombinator.com/item?id=37021160</a> - Aug 2023 (510 comments)
Zoom definitely has several AI models (as do Teams, Google Chat, etc.)<p>They do automatic captioning/transcription of meetings, so there is a model for that; they do automatic background blur/cutout, so there is a model for that; and they are probably working on a "meeting summarization" product, so there will be a model for that too.<p>Those are features that people love and use all the time. I would be curious to know how anyone expects Zoom to improve on these features <i>without</i> collecting data from real users on the platform.
Have been working on a list of AI companies that train on user data: <a href="https://github.com/skiff-org/skiff-org.github.io/blob/main/blogs/aitraining.md">https://github.com/skiff-org/skiff-org.github.io/blob/main/b...</a>. Will update the Zoom section, but it's still suspect.
It's again disappointing that big companies just try to push these policies in the hope they will go unnoticed. What kind of person thinks this is OK? Is it just a money-grabbing exec? We need to be better than this.
So what's the deal with something like employers requiring use of this? Is there any limit to what terms you must agree to in order to be employed somewhere?<p>It seems pretty weird that if your office uses Zoom, you would need to agree to all these terms that aren't part of your employment contract in order to actually be employed.
I see nothing indicating collection of data won't happen.<p>I see nothing indicating data won't be provided to third parties.<p>I see nothing indicating third parties will be prevented from using acquired data to train AI.<p>I see nothing indicating Zoom will not acquire trained models from third parties that use Zoom-harvested data in training.
Why should I believe that they're telling the truth? What's to stop any unethical company from doing it anyway, and not telling anyone?<p>There is no such thing as a training model auditor.
Streaming Data vs Batch Data.<p>You can't expect to train AI models without some sort of storage mechanism to train on. If they made a 'ninja edit' to their TOS, does this mean they've also backtracked on their data collection?
Screw Zoom for such blatant tactics & for asking their employees to work from the office. How bad or horrible does your product have to be that not even your own employees would use it to get work done lol
Zoom said: "we won't use your data to train AI <i>without your consent</i>", but given that they require your consent to join a Zoom call, you can see what such a statement is worth.
The recent messaging offensive from its CEO tried to cast the previous change as a lapse in process, but he refused to elaborate further when pressed on more subtle points. All in all, it does smell like BS, but I am glad there is clearly a level of scrutiny companies appear to face lately.
The cynic in me says they'll just pass the data to a brand-new company that'll spin off of them. I'll always assume you're trying to do it now. I have even less reason to trust them than I previously did.
Ok, what if they change their mind just like they did now?<p>Also what if they break the law? Who is monitoring that? If detected, who is enforcing it?
To be clear, I don't mind Zoom using data from their service to train "their AI models", particularly where these are transparent and specific.<p>I was more concerned about the wording, which implied they would give themselves the right to use the data to train "AI models" more generally.<p>I have few problems with them building a better noise-cancelling solution for their platform, but lots of problems with them selling it for improving third-party surveillance and fingerprinting.
Too late.<p>Now, are you willing to abandon the rest of the other companies using your information to train their AI models? (Looking at Google, Microsoft (GitHub), Meta, Instagram, etc.)<p>Now is the time to self-host, then. Whether it is a GitLab or Gitea instance for Git, or a typical Mastodon server with a single user that controls the instance for full ownership.
This is the company that thought it was OK to install an always-on web server on my Mac. Apple pushed a special fix, just to remove it. I already have zero trust in them, and this does not change that.<p><a href="https://infosecwriteups.com/zoom-zero-day-4-million-webcams-maybe-an-rce-just-get-them-to-visit-your-website-ac75c83f4ef5" rel="nofollow noreferrer">https://infosecwriteups.com/zoom-zero-day-4-million-webcams-...</a>
I still think it's odd that Zoom is forcing people back into the office. The only thing I'm hearing is that they don't truly believe in their product. Given that stance, even though they're saying this today, when push comes to shove, they'll do it. I think the reality is they don't have the tech in place to do it today, but are working towards it.
Next up: Slack uses customer data to train AI models.<p>Companies are happily exposing all their data to those services; I don't understand why anybody would pretend to be surprised by the results.
Why is this bad?<p>Honestly, I don't understand why you wouldn't want the most accurate AI models available. An LLM is only as good as the data set it's trained on, and the more I read about LLMs and the advent of AI evolving from them, the more I'm starting to think that if we don't jump into the pool with both feet, we'll never get to the promised land of:<p>"AI model, spin me up a T-shirt company that's scaled to 10mm users a month, and spin it down after 6mo. if sales don't increase by n% month-over-month"<p>or<p>"AI model, get me [A,B,C, ...n] groceries so I can throw a housewarming party on Friday. I can only accept the delivery Tuesday or Thursday. I don't care which store(s) those ingredients come from or how the internals are orchestrated."<p>What's the threat model here, specifically? What nefarious things would happen by using customer data? Most companies exist to make money, which honestly is a pretty benign objective, all things considered.