I remember this from when it was first published.<p>Good article. One thing about fakes is that, in many cases, they don't need to be super high quality. They just need to be good enough to reinforce a narrative for a receptive audience.<p>An example is that Kerry/Fonda fake. Just looking at it as a thumbnail on my phone, it was easy to see that it was a composite. Also, I have seen both photos in their original contexts; they are actually fairly well-known images in their own right.<p>That didn't stop a whole lot of folks from thinking it was real. They were already primed.<p>The comment below, about using an AI "iterative tuner", is probably spot-on. It's only a matter of time before fake photos, videos, and audio are par for the course.
<a href="https://twitter.com/JackPosobiec/status/1434581638923620360?s=20" rel="nofollow">https://twitter.com/JackPosobiec/status/1434581638923620360?...</a><p>These days you don’t even need to fake the photo, you can just attach the fake drama to a photo of something else and no one will bat an eyelid.
Is this a problem that could be solved by requiring camera manufacturers to cryptographically sign photos and videos created on their devices? If that were in place, it could serve as the basis for a chain of custody for journalistic images, backed by a blockchain. This seems like the only viable solution to me, since any AI-powered approach would just be a cat-and-mouse game.
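A minimal sketch of what the signing step could look like, assuming an Ed25519 device key and Python's cryptography library (the key handling and file names here are hypothetical; a real camera would keep the key in a secure element):

    # On-device: sign the captured image bytes with the camera's private key.
    # Downstream: anyone with the manufacturer-published public key can verify them.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    device_key = Ed25519PrivateKey.generate()   # stand-in for a hardware-held key
    public_key = device_key.public_key()

    image_bytes = open("photo.jpg", "rb").read()
    signature = device_key.sign(image_bytes)    # done at capture time

    try:
        public_key.verify(signature, image_bytes)   # done by any later verifier
        print("image matches the device signature")
    except InvalidSignature:
        print("image was altered after capture")

Note this only proves the bytes are unchanged since capture; it says nothing about whether the scene itself was staged.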
With GANs, any fake image detection technique you could derive based on visual data could probably be learned by the discriminator given the right architecture choice.
I tried to prove that crops which do not preserve the photographic centre are detectable: <a href="https://physics.stackexchange.com/a/367981/3194" rel="nofollow">https://physics.stackexchange.com/a/367981/3194</a><p>This was after photographers seemed not to believe it was the case: <a href="https://photo.stackexchange.com/q/86550/45128" rel="nofollow">https://photo.stackexchange.com/q/86550/45128</a><p>In any case, detecting cropped photos could be a way to detect that something has been intentionally omitted after the fact (a sketch of one such check is below).
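One way to check this, assuming the scene contains three mutually orthogonal sets of parallel lines: their vanishing points form a triangle whose orthocenter is the principal point (a standard single-view calibration result, assuming square pixels and no skew). If that point lands far from the geometric centre of the frame, the image has likely been cropped asymmetrically (or shot with a shift lens). The coordinates in this sketch are purely illustrative:

    import numpy as np

    def orthocenter(a, b, c):
        # Intersect two altitudes of the triangle abc.
        a, b, c = map(np.asarray, (a, b, c))
        perp = lambda v: np.array([-v[1], v[0]])
        d1, d2 = c - b, c - a                    # sides opposite a and b
        # solve a + t1*perp(d1) = b + t2*perp(d2)
        A = np.column_stack([perp(d1), -perp(d2)])
        t = np.linalg.solve(A, b - a)
        return a + t[0] * perp(d1)

    # Hypothetical vanishing points measured in pixel coordinates
    vp_x, vp_y, vp_z = (2500.0, 900.0), (-400.0, 1100.0), (1000.0, -3000.0)
    print("estimated principal point:", orthocenter(vp_x, vp_y, vp_z))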
There are also misleading photos - not fake images, but a more subtle attempt to manipulate viewers.<p><i>A mundane example</i>: You're browsing a property website, look through the pictures, and then visit a property only to discover the rooms are tiny, matchbox-sized spaces. They looked so much more spacious when you viewed them online. You've just discovered <i>wide-angle photography</i> for real estate - it purposely distorts perspective to make a space look more spacious than it is.<p><i>A 'fake' news example</i>: During the coronavirus lockdown, a Danish photo agency, Ritzau Scanpix, commissioned two photographers to shoot the same scenes of socially distanced people from two different perspectives. Were people observing the rules? Or did the choice of lens (wide-angle vs. telephoto) give an intentionally misleading impression?<p>The pictures are here - the article is in Danish, but the photos tell the story:<p><a href="https://nyheder.tv2.dk/samfund/2020-04-26-hvor-taet-er-folk-paa-hinanden-disse-billeder-er-taget-samtidig-men-viser-to" rel="nofollow">https://nyheder.tv2.dk/samfund/2020-04-26-hvor-taet-er-folk-...</a>
It's been really interesting to see another recent uptick in media (and HN) coverage of deepfakes, modified media, etc.<p>There are virtually endless ways to generate ("deepfake") or otherwise modify media. I'm convinced that we're (at most) a couple of software and hardware advancements away from anyone being able to generate or modify media to the point where it's undetectable (certainly by average media consumers).<p>This comes up so often on HN that I'm beginning to feel like a shill, but about six months ago I started working on a cryptographic approach to 100% secure media authentication, verification, and provenance with my latest startup, Tovera[0].<p>With traditional approaches (SHA-256 checksums) plus blockchain (for truly immutable, third-party verification), we have an approach[1] that I'm confident can solve this issue.<p>[0] <a href="https://tovera.com" rel="nofollow">https://tovera.com</a><p>[1] <a href="https://www.tovera.com/tech" rel="nofollow">https://www.tovera.com/tech</a>
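For the checksum half of that, the core check is straightforward; a minimal sketch in Python (file names are illustrative, and this says nothing about where or how the digest is anchored for third-party verification):

    import hashlib

    def media_checksum(path, chunk_size=1 << 20):
        # Stream the file so large videos don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    original = media_checksum("published_photo.jpg")   # recorded at publication
    later = media_checksum("downloaded_copy.jpg")      # checked by a reader
    print("unmodified" if original == later else "differs from published version")

The hard part is not the hash itself but binding it to a trustworthy record of who published what and when.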
Cameras often have non-linear radial image distortions. For example, OpenCV's camera calibration process computes these radial distortion terms along the way [1]. They may not be very significant, but they exist.<p>[1] <a href="https://docs.opencv.org/master/dc/dbb/tutorial_py_calibration.html" rel="nofollow">https://docs.opencv.org/master/dc/dbb/tutorial_py_calibratio...</a><p>Aligning points on a photo outside of the more-or-less linear center region will certainly result in crossing lines, which is exactly what we see in the alignment attempt in the article: the points being aligned are close to the center and close to the edge (where distortion is largest).<p>There is no mention of distortion anywhere in the article.<p>But some of the other points are interesting to think about.
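For reference, a sketch of the standard OpenCV flow for estimating and removing those radial terms, roughly following the tutorial linked above (the checkerboard size and file names here are placeholders), so that any straight-line or alignment analysis runs on an undistorted image:

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)                               # inner corners of the checkerboard
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for name in glob.glob("calib_*.jpg"):          # hypothetical calibration shots
        gray = cv2.cvtColor(cv2.imread(name), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # dist holds the radial (and tangential) distortion coefficients
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)

    img = cv2.imread("photo_to_check.jpg")
    cv2.imwrite("photo_undistorted.jpg", cv2.undistort(img, K, dist))

Of course, for a random photo from the internet you don't have the camera to calibrate, which is part of why this kind of forensic line-fitting is so fragile.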
Discussed at the time:<p><i>Signs that can reveal a fake photo</i> - <a href="https://news.ycombinator.com/item?id=14670670" rel="nofollow">https://news.ycombinator.com/item?id=14670670</a> - June 2017 (18 comments)
I wonder why the article has '20170629-the-hidden-signs-that-can...' in the URL. That date would suggest it's from June 29, 2017 (while the date below the headline says 2020), well before the current breakthroughs in deepfakes.