> "Scanning every user’s privately stored iCloud data would create new threat vectors for data thieves to find and exploit"<p>> "It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types."<p>Yes, and it was patently obvious from the onset. Why did it take a massive public backlash to actually reason about this? Can we get a promise that future initiatives will be evaluated a bit more critically before crap like this bubbles to the top again? Come on you DO hire bright people, what's your actual problem here?
I'm curious about the new parental control features they announced at the same time as the iCloud photo scanning. My recollection is that when they withdrew the iCloud scanning they also withdrew the new parental controls.<p>I'm curious why they also withdrew those. For those who don't remember the parental controls, which were largely overshadowed by the controversy over the cloud stuff, they were to work like this:<p>1. If parents had enabled them on their child's device, they would scan incoming messages for sexual material. The scan would be entirely on-device. If such material was found it would be blocked, and the child would be notified that the message contained material that their parents thought might be harmful, and asked if they wanted to see it anyway.<p>2. If the child said no, the material would be dropped and that would be the end of it. If the child said yes, what happened next depended on the age of the child.<p>3. If the child was at least 13 years old the material would be unblocked and that would be the end of it.<p>4. If the child was not yet 13 they would be given another warning that their parents think the material might be harmful, and again asked if they want to go ahead and see it. They would be told that if they say "yes" their parents will be notified that they viewed the material.<p>5. If they say no the material remains blocked and that is the end of it.<p>6. If they say yes it is unblocked, but their parents are told.<p>There wasn't a lot of discussion of this, and I only recall seeing one major privacy group object (the EFF, on the grounds that if it reaches step 6 it violates the privacy of the person sending sex stuff to your pre-teen, because they probably did not intend for the parents to know).
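As described above, the feature amounts to a simple age-gated decision tree running entirely on the device. A minimal sketch of that flow, with the caveat that Apple never published the actual implementation, so every name and type here is hypothetical:<p><pre><code>// Hypothetical sketch of the announced parental-control flow; the real
// on-device classifier and UI were never published by Apple.
enum ViewDecision {
    case blocked                // child declined, nothing more happens
    case shown                  // child accepted, no one is notified
    case shownWithParentNotice  // under-13 child accepted after the second warning
}

func handleFlaggedMessage(childAge: Int,
                          acceptedFirstWarning: Bool,
                          acceptedSecondWarning: Bool) -> ViewDecision {
    // Steps 1-2: the material is blocked and the child is warned;
    // if they decline, that is the end of it.
    guard acceptedFirstWarning else { return .blocked }

    // Step 3: children 13 or older see it with no further action.
    if childAge >= 13 { return .shown }

    // Steps 4-6: under 13, a second warning explains that the parents will
    // be notified; accepting unblocks the material and notifies the parents.
    return acceptedSecondWarning ? .shownWithParentNotice : .blocked
}
</code></pre>The notable design point is that nothing ever leaves the device or reaches Apple; the only possible disclosure is to the parents, and only at the last step.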
The article keeps saying that Apple has responded or that Apple has clarified and then linking other Wired articles. Is there an Apple press release somewhere? If so, I'd rather read that.<p>ETA: looks like they directly provide documents from Apple at the bottom of the article
Sarah Gardner, the author of the letter to Apple and CEO of the Heat Initiative, worked for 10 years, until earlier this year, as a VP at Thorn [1]. Thorn sells a "comprehensive solution for platforms to identify, remove and report child sexual abuse material." [2]<p>She's using PR to pressure Apple into implementing the kind of solution her previous company is selling. Won't someone think of the children??<p>[1] <a href="https://www.linkedin.com/in/sarah-gardner-aba90013/" rel="nofollow noreferrer">https://www.linkedin.com/in/sarah-gardner-aba90013/</a>
[2] <a href="https://www.thorn.org/our-work-to-stop-child-sexual-exploitation/" rel="nofollow noreferrer">https://www.thorn.org/our-work-to-stop-child-sexual-exploita...</a>
It's nice that Apple have clarified this. I think the original initiative was a misstep, possibly the result of an internal political situation they had to deal with. I can see that a number of people would be on each side of the debate, with advocacy throughout the org.<p>There is only one correct answer, though, and that is what they have now clarified.<p>I would immediately leave the platform if they progressed with this.
I’m not sure I understand Apple’s logic here. Are iCloud Photos in their data centers not scanned? Isn’t everything by default for iCloud users sent there automatically to begin with? Doesn’t the same slippery-slope logic also apply to cloud scans?<p>This is not to say they should scan locally, but my understanding of the CSAM proposal was that photos would only be scanned on their way to the cloud anyway, so users who didn’t use iCloud would never have been scanned to begin with.<p>Their new proposed set of tools seems like a good enough compromise from the original proposal in any case.
Part of the reason why this was (and is) a terrible idea is how these companies operate and the cost and stigma of a false positive.<p>Companies don't want to employ people. People are annoying. They make annoying demands like wanting time off and having enough money to not be homeless or starving. AI should be a tool that enhances the productivity of a worker rather than replacing them.<p>Fully automated "safety" systems <i>always</i> get weaponized. This is really apparent on TikTok, where reporting users you don't like is clearly brigaded, because a certain number of reports in a given period triggers automatic takedowns and bans regardless of assurances that there is human review (there isn't). It's so incredibly obvious when you see a duet with a threatening video get taken down while the original video doesn't (with reports showing "No violation").<p>Additionally, companies like to just ban your account with absolutely no explanation, accountability, right to review or right to appeal. Again, all those things would require employing people.<p>False positives can be incredibly damaging. Not only could this result in your account being banned (possibly with the loss of all your photos on something like iCloud/iPhotos) but it may get you in trouble with law enforcement.<p>Don't believe me? Hertz falsely reported its rental cars as stolen [1], which created massive problems for the customers affected. In a better world, Hertz executives would be in prison for making false police reports (which, for you and me, is a crime) but that will never happen to executives.<p>It still requires human review to identify offending content. Mass shootings have been live streamed. No automatic system is going to be able to accurately differentiate between that and, say, a movie scene. I guarantee you any automated system will have similar problems differentiating between actual CSAM and, say, a child in the bath or at the beach.<p>These companies don't want to solve these problems. They simply want legal and PR cover for appearing to solve them, consequences be damned.<p>[1]: <a href="https://www.npr.org/2022/12/06/1140998674/hertz-false-accusation-stealing-cars-settlement" rel="nofollow noreferrer">https://www.npr.org/2022/12/06/1140998674/hertz-false-accusa...</a>
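The weaponizable part is exactly the naive thresholding described above. A toy sketch of that failure mode, purely as an assumption about how such systems appear to behave from the outside (this is not any platform's published moderation logic):<p><pre><code>import Foundation

// Toy model of threshold-based auto-moderation; an assumption about how such
// systems appear to behave, not any platform's documented behavior.
struct ReportTracker {
    let reportThreshold: Int        // e.g. 50 reports...
    let window: TimeInterval        // ...within a 24-hour rolling window
    private var reportTimes: [Date] = []

    /// Records a report and returns true if an automatic takedown would fire.
    mutating func recordReport(at time: Date = Date()) -> Bool {
        reportTimes.append(time)
        // Keep only reports inside the rolling window.
        reportTimes.removeAll { time.timeIntervalSince($0) > window }
        // The takedown fires on volume alone: a coordinated brigade of
        // `reportThreshold` accounts is indistinguishable from genuine reports.
        return reportTimes.count >= reportThreshold
    }
}
</code></pre>Volume-only triggers like this are cheap precisely because they remove the humans, which is the point the comment above is making.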
False positives would constitute a huge invasion of privacy. Even actual positives could be: think of a mom taking a private picture of her naked baby; how can you report that? They did well dropping this insane plan. The slippery slope argument is also a solid one.
I haven't forgotten about the guy who sent photos of his child to his doctor and was investigated for child pornography. With these systems, in my humble opinion, you are just one innocent photo at the beach away from having your life turned upside down.
Pretty ridiculous idea. Bad actors simply won't use the platform if this were in place. It would only end up scanning the private data of all the people who aren't committing crimes.
I think they likely also considered the lawsuit exposure. If just 0.0001% of users sued over false positives, Apple would be in serious trouble.<p>And there's another dynamic where telling your customers you're going to scan their content for child porn is the same as saying you suspect your customers of having child porn. And your average non-criminal customer's reaction to that is not positive for multiple reasons.
<i>> “Scanning every user’s privately stored iCloud data would create new threat vectors for data thieves to find and exploit," Neuenschwander wrote. "It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types.”</i><p>Both of these arguments are absolutely, unambiguously, correct.<p>The other side of the coin is that criminals are using E2EE communication systems to share sexual abuse material in ways, and at rates, that were not previously possible. This is, I argue, a bad thing. It is bad for the individuals who are re-victimised on every share. It is also bad for the fabric of society at large, in the sense that if we don't clearly take a stand against abhorrent behaviour then we are in some sense condoning it.<p>Does the tech industry have any alternate solutions that could functionally mitigate this abuse? Does the industry feel that it has any responsibility at all to do so? Or do we all just shout "yay, individual freedom wins again!" and forget about the actual problem that this (misguided) initiative was originally aimed at?
The who is often interesting with these stories.<p>> a new child safety group known as Heat Initiative<p>Doesn't even have a website or any kind of social media presence; it literally doesn't appear to exist apart from the reporting on Apple's response to them, which is entirely based on Apple sharing their response with media, not the group interacting with media.<p>> Sarah Gardner<p>on the other hand previously appeared as the VP of External Affairs (i.e. Marketing) of Thorn (formerly DNA Foundation): <a href="https://www.thorn.org/blog/searching-for-a-child-in-a-private-world-thorn-vp-of-external-affairs-speaks-at-tedxwarwick/" rel="nofollow noreferrer">https://www.thorn.org/blog/searching-for-a-child-in-a-privat...</a><p>So despite looking a bit fishy at first, this doesn't seem to come from a christofascist group.
Speculation: they did a trial on random accounts from all over the world and found so much illegal content that it would have forced them to do an enormous amount of policing at scale and lose troves of customers.
The vast majority (99%+) of iCloud Photos are not e2ee and are readable to Apple.<p>You can rest assured that they are scanning all of it serverside for illegal images presently.<p>The kerfuffle was around clientside scanning, which has been reported as dropped. I have thus far seen no statement from Apple that they actually intended to stop the deployment of clientside scanning.<p>Serverside scanning has been possible (and likely) for a long time, which exposes their "slippery slope" argument as farce (unless they intend to force-migrate everyone to e2ee storage in the future).
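For what it's worth, serverside scanning for known illegal images is usually described as perceptual-hash matching against a database of known-CSAM hashes (the PhotoDNA model), not a classifier judging arbitrary photos. A minimal sketch of that idea; Apple has not published any such pipeline, so every name, type, and threshold below is hypothetical:<p><pre><code>// Hypothetical sketch of PhotoDNA-style serverside matching; Apple has not
// published its pipeline, so all names here are illustrative only.

/// Placeholder for a perceptual hash: visually similar images map to
/// hashes with a small Hamming distance.
struct PerceptualHash: Hashable {
    let bits: UInt64

    /// Hamming distance between two hashes.
    func distance(to other: PerceptualHash) -> Int {
        (bits ^ other.bits).nonzeroBitCount
    }
}

/// Matcher holding hashes of known illegal images supplied by a
/// clearinghouse (e.g. NCMEC). A real system would queue matches for
/// human review rather than act on them automatically.
struct KnownImageMatcher {
    let knownHashes: Set<PerceptualHash>
    let maxDistance: Int  // tolerance for re-encoded or resized copies

    func isLikelyMatch(_ uploaded: PerceptualHash) -> Bool {
        knownHashes.contains { $0.distance(to: uploaded) <= maxDistance }
    }
}
</code></pre>The distance threshold exists because perceptual hashes are designed to survive resizing and re-encoding; it is also exactly where false positives creep in, which is the failure mode other comments in this thread worry about.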
> Child sexual abuse material is abhorrent and we are committed to breaking the chain of coercion and influence that makes children susceptible to it.<p>It is amazing that so much counter-cultural spirit remains in Apple. They are probably going to ban likes and other vanity features in all iOS applications, prohibit access to popular media, put “pop stars” into rehab, and teach their users to disobey (the hardest of all tasks).<p>A lot of people try really hard not to see that “unusual” abuse of children is the same as “usual” abuse of everyone. Conveniently, the need for a distinction creates “maniacs” who are totally, totally different from “normal people”, and cranks up the sensation level. The discussion of “external” evil can then continue ad infinitum without dealing with the status quo of “peaceful, normal life”.