For anyone interested in this area, the IEEE Transactions on Information Forensics and Security (TIFS) [1] is full of papers on image forensics: the identification of processing that has been performed on an image, including resampling, rotation, JPEG compression, block processing, and more.

Consequently, another hot research area is image anti-forensics: the obfuscation of such operations to avoid detection. (Shameless self-promotion: one example paper on anti-forensics of JPEG compression can be found here [2].)

[1] http://www.signalprocessingsociety.org/publications/periodicals/forensics/

[2] http://www.mirlab.org/conference_papers/International_Conference/ICASSP%202010/pdfs/0001694.pdf
This is fantastic, and just a week ago I was wondering if something like this existed. Sometimes I need to find the original picture behind a thumbnail on the internet, so I use Google reverse image search to look for other sizes of my uploaded image. However, some websites simply upscale the low-quality image in order to appear at the top of the search results. I wonder if Google could use a technology like this to rank those results lower.
You can detect downsampling by looking at chroma frequencies. The Bayer color filter array on a camera sensor means chroma resolution is never going to be as high as pixel resolution (assuming a 1:1 mapping of image pixels to sensor pixels), so if you see chroma frequency content above roughly half the pixel Nyquist rate, some downsampling has gone on.
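Here's a minimal sketch of that check, assuming a demosaiced RGB image as a NumPy float array; the BT.601 chroma weights are standard, but the 0.25 cycles/pixel cutoff (half Nyquist) and taking the max over Cb/Cr are my own guesses, not a calibrated detector:

    import numpy as np

    def chroma_highfreq_ratio(rgb):
        """Fraction of chroma (Cb/Cr) spectral energy above half Nyquist.

        A Bayer sensor resolves chroma at roughly half the pixel
        resolution, so a large ratio suggests the image was downsampled
        after demosaicing.
        """
        rgb = rgb.astype(np.float64)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        cb = -0.1687 * r - 0.3313 * g + 0.5 * b      # BT.601 chroma
        cr = 0.5 * r - 0.4187 * g - 0.0813 * b

        def highfreq_ratio(chan):
            spec = np.abs(np.fft.fft2(chan)) ** 2
            fy = np.abs(np.fft.fftfreq(chan.shape[0]))[:, None]  # cycles/pixel
            fx = np.abs(np.fft.fftfreq(chan.shape[1]))[None, :]
            high = (fy > 0.25) | (fx > 0.25)         # above half Nyquist
            return spec[high].sum() / spec.sum()

        return max(highfreq_ratio(cb), highfreq_ratio(cr))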
"Specifically, this project was born out of a yet-unpublished image deduplication framework, while attempting to identify whether duplicates were scaled versions of one another."<p>A humble but fun origin story for a very cool project.
Embarrassing side note: I'd never made the mistake of trying to add an empty directory to a git repo before; if you tried to build and it failed, it's been fixed.
Could you do this over a sliding window and calculate a score for each region? That would let you see whether something has been composited from upscaled images.
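The windowing part is straightforward; a rough sketch assuming you already have some whole-image detector score_fn (the high-frequency-energy placeholder below is just an illustration, not the project's actual method):

    import numpy as np

    def region_scores(gray, win=64, step=32, score_fn=None):
        """Slide a window over a grayscale image and score each region.

        Regions whose score diverges sharply from the rest of the image
        are candidate composites built from upscaled material.
        """
        if score_fn is None:
            # Illustrative placeholder: fraction of spectral energy
            # above half Nyquist. A real resampling detector goes here.
            def score_fn(patch):
                spec = np.abs(np.fft.fft2(patch)) ** 2
                fy = np.abs(np.fft.fftfreq(patch.shape[0]))[:, None]
                fx = np.abs(np.fft.fftfreq(patch.shape[1]))[None, :]
                return spec[(fy > 0.25) | (fx > 0.25)].sum() / spec.sum()
        h, w = gray.shape
        return {(y, x): score_fn(gray[y:y + win, x:x + win])
                for y in range(0, h - win + 1, step)
                for x in range(0, w - win + 1, step)}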
You might be able to detect whether downsampling was done incorrectly, since many programs just average the gamma-encoded pixel values instead of the actual light intensity (i.e., they skip converting to linear light first).
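To make the distinction concrete, here's a minimal sketch of the two behaviors, assuming sRGB values in [0, 1] and a simple 2x2 box filter standing in for whatever kernel a given program uses:

    import numpy as np

    def srgb_to_linear(c):
        """sRGB decoding: gamma-encoded values -> linear light."""
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(c):
        """sRGB encoding: linear light -> gamma-encoded values."""
        return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

    def downsample_2x(img, linear_light=True):
        """2x2 box downsample, either naively on encoded values
        (linear_light=False, the common mistake) or correctly in
        linear light. The difference between the two outputs is
        itself a potential forensic cue."""
        img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2]
        if linear_light:
            img = srgb_to_linear(img)
        out = (img[0::2, 0::2] + img[0::2, 1::2] +
               img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
        return linear_to_srgb(out) if linear_light else out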
Does anyone know of something similar for automatic analysis of audio using the FFT? There are visual ways to do it (eyeballing a spectrogram), but it would be nice to have an automated script.
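Not aware of an off-the-shelf tool, but a crude version is only a few lines; a sketch assuming a WAV file, where the -60 dB threshold and the single analysis frame are arbitrary choices (a hard energy cutoff well below Nyquist often points to upsampling from a lower rate or lossy encoding):

    import numpy as np
    from scipy.io import wavfile

    def spectral_cutoff(path, frame=1 << 14):
        """Estimate the highest frequency with meaningful energy (Hz)."""
        rate, data = wavfile.read(path)
        if data.ndim > 1:
            data = data.mean(axis=1)           # mix down to mono
        data = data.astype(np.float64)
        n = min(frame, len(data))
        windowed = data[:n] * np.hanning(n)    # reduce spectral leakage
        spec = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(n, d=1.0 / rate)
        db = 20 * np.log10(spec / (spec.max() + 1e-12) + 1e-12)
        above = freqs[db > -60.0]              # arbitrary threshold
        return above.max() if above.size else 0.0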