"Whenever someone subsequently edits that photo, the changes are recorded to an updated manifest, rebundled with the image, and updated in the Content Credentials database whenever it is reshared on social media. Users who find these images online can click on the CR icon in the [pictures'] corner to pull up all of this historical manifest information as well, providing a clear chain of providence, presumably, all the way back to the original photographer."<p>Fat chance. I can think of exactly zero examples where a photo shared on social media, or even on Whatsapp, has its metadata intact. This is frustrating to me because often it's the only way to get a photo from computer illiterates, and I like my EXIF data, specifically, the exact time/date the picture was taken.
You can probably build a rig to feed false data to the sensor, or take a photo of a sufficiently high-resolution display, or use side-channel power analysis to extract the cryptographic keys. I have serious doubts this kind of provenance metadata will actually work against a sophisticated attacker (someone willing to spend more than 100k on making fake images), but I'm sure it will be used as an excuse to further lock down computing platforms.
Worth noting that the high price isn't down to the metadata feature - it's a new version of the Leica M11, which is $8995 itself (Leicas are just crazy expensive)
Content Credentials is just one/the latest standard.<p>Canon (maybe others) offered a similar feature from the very early days of digital, with a module for the 1Dx at least that would cryptographically sign files the camera generated to say "this is what the sensor saw". Typically it was marketed to law enforcement, because there was a wariness around digital photography as evidence even in the early days.
One issue is that, because there are so few CR cameras out there, most people would sign their photos in Lightroom. So the provenance starts from software rather than the capture device.<p>So you can sign a fake photo that you modified, and its provenance from that point on is traceable. But you still wouldn’t know whether the photo was captured authentically.<p>I guess it doesn’t matter — this is about traceability rather than authenticity.
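To make that distinction concrete, here's a toy Python sketch of manifest chaining. The HMAC stands in for a real certificate-backed signature, and none of the field names are the actual C2PA manifest format; the point is that each manifest only vouches for the state of the file when it was signed, not for how the pixels were captured.

```python
import hashlib
import hmac
import json

EDITOR_KEY = b"lightroom-demo-key"  # hypothetical signer key, not a real one

def sign_manifest(image_bytes, prev_manifest, action):
    """Append a manifest that hashes the previous one, forming a chain."""
    manifest = {
        "action": action,
        "image_hash": hashlib.sha256(image_bytes).hexdigest(),
        "prev_hash": hashlib.sha256(
            json.dumps(prev_manifest, sort_keys=True).encode()
        ).hexdigest() if prev_manifest else None,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    # HMAC stands in for a certificate-backed signature
    manifest["sig"] = hmac.new(EDITOR_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

fake = b"pixels of a staged or AI-generated image"
m1 = sign_manifest(fake, None, "opened_in_editor")  # chain starts in software
m2 = sign_manifest(fake, m1, "cropped")             # this edit is traceable

print(m2["prev_hash"] is not None)  # True: edits after signing are traceable
print(m1["prev_hash"] is None)      # True: nothing vouches for capture itself
```

Everything from m1 onward is verifiable, but m1 itself is just "software attests it saw these bytes" - exactly the traceability-without-authenticity gap described above.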
I can imagine this becoming pretty standard in the near term, particularly in new smartphones, and also done for video too.<p>As ever, yes, it can be faked (by taking a picture of a screen, for example), but it raises the cost of faking by a massive amount from where it is currently
Uh, deja vu? Didn't Canon have a system very similar to this, and it famously got cracked over a decade ago?<p><a href="https://it.slashdot.org/story/10/12/03/2133218/canons-image-verification-system-cracked" rel="nofollow noreferrer">https://it.slashdot.org/story/10/12/03/2133218/canons-image-...</a>
About a year ago, when AI was getting really big, I was really interested in C2PA. Since then I've come to the conclusion that actual artists don't care about any of that crap
Content Credentials (<a href="https://contentauthenticity.org/" rel="nofollow noreferrer">https://contentauthenticity.org/</a>) appears to use a Certificate Authority (CA) system to authenticate information, and they believe it can help fight misinformation: <a href="https://rd.nytimes.com/projects/using-secure-sourcing-to-combat-misinformation" rel="nofollow noreferrer">https://rd.nytimes.com/projects/using-secure-sourcing-to-com...</a> Does anyone here have insight into the PKI behind it?
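My understanding is that the verification side is conceptually similar to TLS: the manifest's signing certificate has to chain up to an issuer on a curated trust list. A toy sketch of that chain walk follows (all names are made up, and real verification would also check signatures, key usage, validity periods, and revocation):

```python
# Toy sketch of a trust-list chain walk; NOT real C2PA verification.
TRUSTED_ROOTS = {"Example Camera CA"}  # hypothetical curated trust list

def chains_to_trusted_root(chain):
    """chain: leaf-first list of {'subject': ..., 'issuer': ...} dicts."""
    for child, parent in zip(chain, chain[1:]):
        if child["issuer"] != parent["subject"]:
            return False  # broken link in the chain
    return chain[-1]["issuer"] in TRUSTED_ROOTS

leaf = {"subject": "Camera serial 123", "issuer": "Example Intermediate CA"}
inter = {"subject": "Example Intermediate CA", "issuer": "Example Camera CA"}

print(chains_to_trusted_root([leaf, inter]))  # True
print(chains_to_trusted_root([leaf]))         # False: unknown issuer
```

The hard part isn't the crypto, it's governance: who curates that trust list, and what stops a bad actor from getting a signing certificate.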