This simple site is a far better demo and explanation of the extreme danger of Apple's proposal than any of the long articles written about it.

Thank you for caring enough to put this together and publish it.
Seems like this CSAM tech could be super useful in China for detecting Winnie the Pooh or other evidence of thoughtcrime against the regime. Even if Apple doesn't end up rolling it out, I'm sure Huawei is taking careful notes.
This page directly links to the EFF:
<a href="https://act.eff.org/action/tell-apple-don-t-scan-our-phones" rel="nofollow">https://act.eff.org/action/tell-apple-don-t-scan-our-phones</a><p>Please spend a few bucks on supporting them.<p>A bit of a background on <i>why</i> apple did this (this was flagged, but I don't know why):
<a href="https://news.ycombinator.com/item?id=28259622" rel="nofollow">https://news.ycombinator.com/item?id=28259622</a>
I think Apple may have figured out that the best way to get people to accept backdoored encryption is simply to not call it backdoored, and to claim that it's a privacy feature...

...as if having a trillion-dollar corporation playing Batman and going on a vigilante crusade to scan your private files is a situation we should already be comfortable with.
Imagine hiring a young-looking 18-year-old model to duplicate the photos in the database and create a hash collision. Now you have a photo that is perfectly legal for you to possess but can rain down terror on anyone you can distribute this file to.
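A minimal sketch of how such a collision could actually be forged, assuming you have a differentiable stand-in for the hashing network (`SurrogateHashNet` below is a made-up toy, not Apple's extracted NeuralHash; the technique, gradient descent on the input pixels, is the same basic approach the public collision demos reportedly used):

```python
# Sketch: nudge an innocuous image until a surrogate neural hash matches a target hash.
# SurrogateHashNet is a hypothetical toy stand-in for a neural perceptual hash model.
import torch
import torch.nn as nn

class SurrogateHashNet(nn.Module):
    """Toy stand-in: CNN features -> 96 logits -> sign() gives the hash bits."""
    def __init__(self, bits: int = 96):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(32, bits)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.features(x))  # raw logits; sign() gives hash bits

model = SurrogateHashNet().eval()

target_hash = torch.sign(torch.randn(96))                 # pretend: hash of a database image
image = torch.rand(1, 3, 360, 360, requires_grad=True)    # the innocuous source photo
optimizer = torch.optim.Adam([image], lr=1e-2)

for step in range(500):
    optimizer.zero_grad()
    logits = model(image)
    # Push each logit toward the sign of the target bit; hinge margin keeps changes small.
    loss = torch.relu(1.0 - logits * target_hash).mean()
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)  # keep it a valid image

print("bits matching target:", int((torch.sign(model(image)) == target_hash).sum()))
```

In practice you would also add a penalty on the size of the perturbation so the forged image still looks like the original innocuous photo to a human.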
The argument against this tech is a slippery-slope argument: that this technology will eventually be expanded to prevent copyright infringement, censor obscenity, limit political speech, or reach into other areas.

I know this is a controversial take (in HN circles), but I no longer believe this will happen. This kind of tech has existed for a while, and it simply hasn't been misapplied that way. I now think this technology has proved to be an overall net good.
So 30+ images get flagged, they get run against the real CSAM database, and they don't match? Or say someone somehow manages to make an image that gets flagged by both, and a reviewer looks at it and it isn't CSAM? Nothing happens.
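For reference, the layered checks Apple has described publicly look roughly like the sketch below; every helper is a hypothetical stub standing in for machinery outside public view, not Apple's actual API:

```python
# Rough sketch of the publicly described flow: matches are only revealed past a threshold,
# then re-checked with an independent server-side hash and a human review.
from typing import Callable

THRESHOLD = 30  # Apple's stated threshold before any voucher can be decrypted

def handle_account(
    matched_vouchers: list,
    decrypt_derivative: Callable,     # hypothetical: decrypts the low-res "visual derivative"
    second_hash_matches: Callable,    # hypothetical: independent server-side perceptual hash
    reviewer_confirms_csam: Callable, # hypothetical: human review step
) -> str:
    if len(matched_vouchers) < THRESHOLD:
        return "nothing happens: below threshold, vouchers stay undecryptable"
    derivatives = [decrypt_derivative(v) for v in matched_vouchers]
    if not all(second_hash_matches(d) for d in derivatives):
        return "nothing happens: server-side hash disagrees"
    if not reviewer_confirms_csam(derivatives):
        return "nothing happens: reviewer sees it isn't CSAM"
    return "account disabled and report filed"

# Example: 31 flagged images, but the reviewer rejects them.
print(handle_account([object()] * 31, lambda v: v, lambda d: True, lambda ds: False))
```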
Each image on the left has a blob vaguely similar to the highlights in the dog image on the right. Likely the "perceptual" algorithm isn't "perceiving" contrast the same way human eyes and brains do.
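As an illustration of that gap between algorithmic and human "perception", here is a minimal DCT-based hash in the style of classic pHash (not NeuralHash, just a simpler classical relative): it keeps only coarse low-frequency structure, so a blob in roughly the right place can register as similar even when a human sees two unrelated pictures.

```python
# Minimal pHash-style perceptual hash: keeps only coarse low-frequency structure.
import numpy as np
from PIL import Image
from scipy.fft import dctn

def phash(path: str, hash_size: int = 8) -> np.ndarray:
    img = Image.open(path).convert("L").resize((32, 32))   # drop colour and fine detail
    coeffs = dctn(np.asarray(img, dtype=float), norm="ortho")
    low = coeffs[:hash_size, :hash_size]                   # keep the lowest frequencies
    return (low > np.median(low)).flatten()                # 64 bits

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

# Usage (paths are placeholders):
# print(hamming(phash("dog.png"), phash("noise_blob.png")))  # small distance == "similar"
```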
Here's a web demo[0] where you can try out any two images and see the resulting hashes, and whether there's a collision. You can also try your own transformations (rotation, adding a filter, etc) on the image. Demo was built using Gradio[1].

[0]: https://huggingface.co/spaces/akhaliq/AppleNeuralHash2ONNX
[1]: https://gradio.dev
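For anyone curious how little glue such a demo needs, a minimal Gradio comparison app looks roughly like the sketch below, with a trivial average-hash standing in for the NeuralHash ONNX model that the Space above actually wraps:

```python
# Minimal Gradio app comparing two uploaded images by a trivial average-hash.
# The real Space wraps the extracted NeuralHash model; this only shows the Gradio plumbing.
import gradio as gr
import numpy as np
from PIL import Image

def average_hash(img: Image.Image, size: int = 8) -> str:
    pixels = np.asarray(img.convert("L").resize((size, size)), dtype=float)
    bits = (pixels > pixels.mean()).flatten()
    return "".join("1" if b else "0" for b in bits)

def compare(img_a: Image.Image, img_b: Image.Image) -> str:
    h_a, h_b = average_hash(img_a), average_hash(img_b)
    distance = sum(x != y for x, y in zip(h_a, h_b))
    verdict = "collision (identical hashes)" if distance == 0 else f"differ in {distance} bits"
    return f"{h_a}\n{h_b}\n{verdict}"

demo = gr.Interface(
    fn=compare,
    inputs=[gr.Image(type="pil"), gr.Image(type="pil")],
    outputs="text",
    title="Perceptual hash comparison (toy stand-in)",
)

if __name__ == "__main__":
    demo.launch()
```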
> For example, it's possible to detect political campaign posters or similar images on users' devices by extending the database.

So who controls the database?
Can somebody please explain to me how one can go about finding images that have colliding hashes? Or how one can create an image that has a specific hash?
Apple have stated that they will make the database of hashes that their system uses auditable by researchers. Does anyone know if that has happened yet? Is it possible to view the database, and if so, in what form? Can the actual hashes be extracted? If so, that would obviously open up the kind of attack described in the article. Otherwise, it would be interesting to know how Apple expects the database to be auditable without revealing the hashes themselves.
Irrespective of whether or not NeuralHash is flawed, should Apple scan user data or should they not?

If not, what is going to convince them to stop at this point?

I believe that they should scan user data in *some* capacity, because this is about data that causes harm to children.

However, I believe that they should *not* run the scan on the device, because that carries significant drawbacks for personal privacy.
Now let's create one for the hash matching that Google, Microsoft, and other cloud providers use.

If your problem with Apple's proposal is the hash matching itself (rather than the fact that it runs on your device), why is the criticism reserved for Apple instead of being directed at everyone who does hash matching to find CSAM? It seems like a lot of the backlash is because Apple is being open and honest about this process. I worry that this will teach companies that they need to hide this type of functionality in the future.