Hi all

As I scanned the daily news, full of yet more impressive image generations from the new kid on the block, along with the usual doom-and-gloom posts about the AI future (fake images, fake videos, etc.), I couldn't help but wonder why we still don't use reliable cryptographic means to verify an image's authenticity.

A quick Google search turned up papers as old as 1998 [1] proposing the use of public keys for exactly this purpose.

Picture this: you see a shocking photo of Elon Musk on a date with a humanoid robot (I actually saw this photo yesterday). But now you have a tool to submit that photo for verification. Sources like Getty, AP, and CNN would publish their public keys so that anyone could cross-check the authenticity of their images, much as we do today with PGP/GPG.

Perhaps a whole new image format could even be developed to facilitate this (or to require such keys). And there would be no gatekeeping: everyone could have their own private/public key pair, just as anyone can today. Well-known photographers could publish their public keys on their websites so that people can verify their work.

If AI-generated images are such a problem (and will become a bigger one), why is this not being done?

[1] https://ieeexplore.ieee.org/document/723526
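To make the idea concrete, here is a toy sketch of the sign/verify flow in Python. It uses textbook RSA with tiny hard-coded numbers purely for illustration — nothing here is secure, and all the key values are made up; a real publisher would use GnuPG or a vetted crypto library with proper key sizes:

```python
# Toy illustration: a publisher signs an image's hash with a private key;
# anyone holding the matching public key can verify the signature.
# Textbook RSA with tiny hypothetical parameters -- NOT secure, a sketch only.
import hashlib

p, q = 61, 53
n = p * q    # public modulus (part of the published public key)
e = 17       # public exponent (part of the published public key)
d = 413      # private exponent, kept secret (e*d = 1 mod lcm(p-1, q-1))

def sign(image_bytes: bytes) -> int:
    """Publisher signs the image's hash using the private key."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n
    return pow(h, d, n)

def verify(image_bytes: bytes, signature: int) -> bool:
    """Anyone can check the signature using only the public key (n, e)."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n
    return pow(signature, e, n) == h

original = b"\x89PNG...fake image bytes"  # stand-in for real image data
sig = sign(original)

print(verify(original, sig))              # -> True  (signature checks out)
print(verify(original, (sig + 1) % n))    # forged signature -> False
```

The point of the sketch is the asymmetry: only the holder of `d` can produce a valid signature, but anyone with the published `(n, e)` can check it — which is exactly what would let readers cross-check a photo against, say, AP's public key.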