Pixelating and blurring images have long been known to be insufficient for completely obscuring information [1]. In fact, a lot of computer vision work deliberately reduces image resolution anyway, to suppress noise and lighten the workload for the algorithms. Completely destroying the information by blacking it out is preferable.<p>[1] <a href="https://dheera.net/projects/blur" rel="nofollow">https://dheera.net/projects/blur</a>
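To make the contrast concrete, here is a minimal numpy sketch (not taken from the linked write-up; the 64x64 image and 8-pixel block size are illustrative assumptions). Pixelation by block averaging keeps a low-resolution copy of the region that still correlates strongly with the original, whereas blacking out leaves nothing to recover.

```python
import numpy as np

def pixelate(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Replace each block x block tile with its mean value (a classic mosaic)."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = out[y:y + block, x:x + block].mean()
    return out

def black_out(img: np.ndarray) -> np.ndarray:
    """Overwrite the region with zeros; no trace of the original remains."""
    return np.zeros_like(img)

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
print(np.corrcoef(img.ravel(), pixelate(img).ravel())[0, 1])  # noticeably > 0
print(black_out(img).any())                                   # False: nothing left to recover
```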
I’m not terribly surprised by this. Downsampling is a really effective way to create a “fingerprint” that identifies the underlying data, and that’s essentially how image-search services work.
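As a hedged illustration of that fingerprint idea (the gallery, block size, and mean-squared-error metric below are my own assumptions, not a description of any particular image-search service): a pixelated region is effectively a thumbnail, so downsampling each candidate the same way and taking the nearest neighbour is often enough to identify the source.

```python
import numpy as np

def thumbnail(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Block-average an HxW image down to (H/block) x (W/block)."""
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def identify(pixelated: np.ndarray, gallery: dict, block: int = 8) -> str:
    """Return the gallery entry whose thumbnail is closest to the pixelated input."""
    target = thumbnail(pixelated, block)
    return min(gallery,
               key=lambda name: np.mean((thumbnail(gallery[name], block) - target) ** 2))

# Hypothetical gallery of candidate images (random stand-ins for real photos).
rng = np.random.default_rng(1)
gallery = {f"person_{i}": rng.integers(0, 256, (64, 64)).astype(float) for i in range(5)}

# Pixelate one gallery image and see whether the mosaic still gives it away.
mosaic = thumbnail(gallery["person_3"]).repeat(8, axis=0).repeat(8, axis=1)
print(identify(mosaic, gallery))  # "person_3": the pixelated version still matches its source
```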
Can journalists stop writing 'AI' everywhere when it's just neural nets? It's all starting to look ridiculous. If you need a popular-science-friendly term, what's wrong with 'image recognition program'?
Reminds me of <a href="https://en.wikipedia.org/wiki/Christopher_Paul_Neil" rel="nofollow">https://en.wikipedia.org/wiki/Christopher_Paul_Neil</a>