> Can non-CSAM images be “injected” into the system to flag accounts for things other than CSAM?
>
> Our process is designed to prevent that from happening. The set of image hashes used for matching are from known, existing images of CSAM that have been acquired and validated by child safety organizations. Apple does not add to the set of known CSAM image hashes.

The problem is not that Apple can't add images to the database, but that outside organizations can inject arbitrary hashes into the new, constantly scanning system at the heart of iOS, iPadOS, and macOS. Apple has no way to verify those, or any other hashes, before they get injected into the database.

If the system detects a match, the only thing standing between you and a SWAT team crashing through your front door at 3am and killing your barking dog is some overworked and underpaid content reviewer in Bangladesh. And who knows whether those outsourced operations are even trustworthy.
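To make the matching concern concrete, here is a minimal, hypothetical sketch of content-agnostic hash matching. The type names, the `HashBlocklist` structure, and the use of SHA-256 in place of Apple's NeuralHash are assumptions for illustration only, not Apple's implementation; the point is simply that the client checks set membership, so nothing at this layer distinguishes a hash of CSAM from any other hash that ends up in the list.

```swift
import Foundation
import CryptoKit

// Hypothetical, highly simplified sketch; not Apple's implementation.
// The real system uses NeuralHash and private set intersection. SHA-256
// stands in here only so the sketch is runnable.

typealias PerceptualHash = Data

/// An opaque blocklist shipped to the device. The client only ever sees
/// hashes; it cannot tell which organization an entry came from or what
/// image it corresponds to.
struct HashBlocklist {
    let entries: Set<PerceptualHash>

    func contains(_ hash: PerceptualHash) -> Bool {
        entries.contains(hash)
    }
}

/// Stand-in for a perceptual hash function.
func imageHash(of imageData: Data) -> PerceptualHash {
    Data(SHA256.hash(data: imageData))
}

/// The matching step itself is content-agnostic: it flags whatever is in
/// the list, regardless of how the hash got there.
func shouldFlag(_ imageData: Data, against blocklist: HashBlocklist) -> Bool {
    blocklist.contains(imageHash(of: imageData))
}
```

In this framing, the only protection against someone slipping non-CSAM hashes into the blocklist is procedural (who curates the list), not technical, which is exactly the worry voiced above.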
There is one question missing: if China asks Apple to flag users who have Winnie-the-Pooh images on their devices, or else leave the Chinese market, what will Apple choose?
There are apps, like WhatsApp, that allow you to save photos you receive to your camera roll instantly.

If somebody, or another compromised device, sends a large collection of CSAM to your device, it will be uploaded to iCloud, probably before you get a chance to remove it: the equivalent of "swatting".

Besides the apps that you give permission to store photos in your Photos app, what about malware such as Pegasus, which we've seen again and again?

I wonder if a year from now we'll start hearing about journalists, political dissidents, or even candidates running for office going to jail for being in possession of CSAM. It would be much easier to take out your opponents when you know Apple will report it for you.

I guess all this does is disincentivize anyone who cares about their privacy from using iCloud Photos, which is sadly ironic, since privacy is what Apple was going for.
Feels like they messed up the comms on this in a quite un-Apple-like way.

My understanding (at a high level) is that their system is designed to improve user privacy: it means they don't need to be able to decrypt photos on iCloud (which, if I understand correctly, is how other cloud providers do this scanning, as they are required to by law?) and can instead do it on the device. Without getting into the upsides and downsides of either approach, I'm surprised they didn't manage to communicate more clearly in the initial messaging that this is a "privacy" feature and why they are taking this approach, and are instead left dealing with some quite negative press.
Expectation: political rivals and enemies of powerful people will be taken out because c*ild pornography will be found on their phones. Pegasus can already monitor and exfiltrate every ounce of data right now; it won't be that hard to insert compromising images onto an infected device.
> Why is Apple doing this now?

I find the answer to this question unconvincing.

If we think very selfishly from the company's perspective: Apple already had one of the most secure, private, and trusted platforms, and they must have anticipated the backlash against the new feature. So I still don't get why a company like Apple would consider the marginal benefit from this to be worth the cost.
> CSAM detection for iCloud Photos is built so that the system only works with CSAM image hashes provided by NCMEC and other child safety organizations. This set of image hashes is based on images acquired and validated to be CSAM by child safety organizations.

How is Apple validating the datasets from non-US child safety organisations?
Something I'd missed before:
"By design, this feature only applies to photos that the user chooses to upload to iCloud Photos"<p>This is not about what people have on their own phones. This is about what people are uploading to iCloud, because Apple does not want CSAM on their servers!
Everyone has already said everything there is to say about what's wrong with it. Nevertheless, Apple can sugarcoat it as much as they like: there is no *technical* control, actual or even possible, that makes this exclusively about targeting CSAM.
It's frustrating (though not at all surprising) to see Apple continue to be so tone-deaf. They clearly think, "If only we could make people understand how it works, they wouldn't be so upset; in fact, they'd thank us."

This is not the case: we do understand how it works, and we think it's a bad idea.
The question and answer I'm missing:

Will Apple notify a user once an image has been flagged (possibly erroneously) and will be inspected by Apple employees?