This article raises serious privacy concerns. Just building the infrastructure for on-device scanning and reporting is extremely troubling; a slippery slope, or a break in the dam, as others have said. My view is that we've been on that slope for a long time, and this changes little. We just have to trust Apple, as has always been the case.

When I look at the technical details, it seems to me to be a reasonable compromise: it lets Apple and governments do something about the worst offenders, while having essentially no impact on anyone else.

Three technical reasons for this:

- The name "neuralMatch" suggests some sort of computer-vision model, whereas the article talks about matching against hashes of known images. My guess is that the actual technology is something like Microsoft's PhotoDNA (https://news.microsoft.com/on-the-issues/2018/09/12/how-photodna-for-video-is-being-used-to-fight-online-child-exploitation/). Hash collisions aside, that should only produce matches against images that are already in a government database. It will match manipulated images (e.g., rotated or cropped), but it won't match new images. (There's a toy sketch of this kind of fuzzy hash matching at the end of this comment.)

- As described here, the scanning only applies to images that are also uploaded to iCloud ("[the] algorithm will continuously scan photos that are stored on a US user’s iPhone and have also been uploaded to its iCloud back-up system"). We don't know whether that description is accurate, but it suggests that if you don't back up images to iCloud, your device won't do any scanning locally. Apple already holds the keys to your iCloud backups, so if you value privacy you're probably not backing up to iCloud anyway.

- It sounds like a single matching image isn't enough to flag an account; multiple hash hits are required.

So, if you want to feel better about this, understand that it is a system that will flag people who are downloading and storing known child-exploitation images on their devices and naively backing them up to iCloud.

This is the sort of privacy compromise that works in practice. Serious offenders are either caught or diverted to other channels. Minor offenders are probably not caught. The risk to non-offenders is zero, or close to it.

Apple has the ability to do all sorts of invasive things to our privacy. They could be scanning and reporting all kinds of things already, and we might not even know; or they could start doing so tomorrow. From that point of view we're already trusting them to do right by us as their customers, and this feature doesn't change that.
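
For anyone curious what "matching against hashes of known images, with a multiple-hit threshold" could look like mechanically, here is a toy sketch in Python. PhotoDNA and whatever Apple actually ships are proprietary and far more robust; the difference-hash below, the distance cutoff, and the match threshold are all made up purely to illustrate the idea of fuzzy matching against a fixed database rather than classifying new images.

    # Toy sketch only: a simple "difference hash" (dHash) stand-in for a real
    # perceptual hash such as PhotoDNA. Thresholds below are illustrative.

    def downsample(gray, out_w, out_h):
        """Nearest-neighbour shrink of a 2D list of 0-255 grayscale values."""
        src_h, src_w = len(gray), len(gray[0])
        return [[gray[y * src_h // out_h][x * src_w // out_w]
                 for x in range(out_w)]
                for y in range(out_h)]

    def dhash(gray, size=8):
        """64-bit difference hash: shrink to (size+1) x size, then set one bit per
        pixel recording whether it is brighter than its right-hand neighbour.
        This survives small edits (re-compression, slight brightness changes);
        real perceptual hashes are built to also survive rotation and cropping."""
        small = downsample(gray, size + 1, size)
        bits = 0
        for y in range(size):
            for x in range(size):
                bits = (bits << 1) | (1 if small[y][x] > small[y][x + 1] else 0)
        return bits

    def hamming(a, b):
        """Number of differing bits between two hashes."""
        return bin(a ^ b).count("1")

    def flagged(photo_hashes, known_hashes, max_distance=4, min_matches=3):
        """Flag only when several photos land 'near' entries in the known-image
        database, mirroring the multiple-hit threshold described in the article.
        max_distance and min_matches are invented values for illustration."""
        hits = sum(1 for h in photo_hashes
                   if any(hamming(h, k) <= max_distance for k in known_hashes))
        return hits >= min_matches

    # Example: a small gradient image and a slightly brightened copy hash alike.
    img  = [[(x * 7 + y * 3) % 256 for x in range(32)] for y in range(32)]
    near = [[min(255, v + 2) for v in row] for row in img]
    print(hamming(dhash(img), dhash(near)))   # small distance -> counts as the same image

    known_db = {dhash(img)}            # stand-in for the database of known images
    photos   = [dhash(near)] * 3       # three slightly edited copies on a device
    print(flagged(photos, known_db))   # True once the hit threshold is met

The point is just that a perceptual hash tolerates small edits to an image already in the database, while a genuinely new image produces an unrelated hash and never matches anything, which is why a system like this catches people holding known material but says nothing about everyone else's photos.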