I set up this super simple ‘Which Face Is Real?’ (http://www.whichfaceisreal.com/) style challenge. Click a row to show the answers. You might need to zoom out.

https://veedrac.github.io/stylegan2-real-or-fake/game.html

There's a harder version as well, where the image is zoomed in.

https://veedrac.github.io/stylegan2-real-or-fake/game_cropped.html?x

I reliably get 100% on the first link (game.html), and so far I've gotten 4/5 on the cropped version (game_cropped.html).
Only watched the video so far, but one of the interesting things is the potential method for telling a generated image from a real one: namely, if you take a generated image, it's possible to find latent parameters that generate exactly the same image, whereas if you take a real image, it's generally *not* possible to reproduce it exactly, only to get a similar one.

The exact point in the video:

https://youtu.be/c-NJtV9Jvp0?t=208
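To make the projection idea concrete, here's a minimal sketch in PyTorch, assuming a hypothetical generator G that exposes a latent_dim attribute. (The actual StyleGAN2 projector is more involved; it optimizes in the intermediate W space with a perceptual loss, so treat this as the skeleton of the idea, not their implementation.)

    import torch
    import torch.nn.functional as F

    def projection_residual(G, target, steps=1000, lr=0.01):
        """Optimize a latent vector until G reproduces `target`, then
        return the remaining reconstruction error. A GAN-generated
        image should project back almost exactly (near-zero residual);
        a real photo usually leaves a noticeably larger one."""
        z = torch.randn(1, G.latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = F.mse_loss(G(z), target)  # pixel-space reconstruction error
            loss.backward()
            opt.step()
        with torch.no_grad():
            return F.mse_loss(G(z), target).item()

    # Decision rule: a residual below some empirically calibrated
    # threshold suggests the image came from this generator; a larger
    # residual suggests a real photograph.

Note that this test only works against the specific generator you have in hand, and the threshold has to be calibrated on known real and generated samples.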
The demo in the official video is mind-blowing.
https://www.youtube.com/watch?v=c-NJtV9Jvp0
I wonder when we will see full movies, made with deep learning, that are indistinguishable from real ones.
The part of the video showing the location bias in phase artifacts (straight teeth on angled faces) is really interesting, and very clear in retrospect if you look at StyleGAN v1 outputs.

Their “new method for finding the latent code that reproduces a given image” is intriguing, and I'm curious to see whether it plays a role in the new $1 million Kaggle DeepFakes detection competition.

It feels like we're almost out of the uncanny valley. It's worth placing this in context and thinking about where the technology will be a few years from now - see this tweet by Ian Goodfellow on 4.5 years of GAN progress in face generation: https://twitter.com/goodfellow_ian/status/1084973596236144640?lang=en
I'm surprised to see Nvidia hosting [1] the pre-trained networks on Google Drive, which has already been blocked for going over the quota:

> Google Drive download quota exceeded -- please try again later

1. https://github.com/NVlabs/stylegan2#using-pre-trained-networks
A bit off-topic, but the license [0] is interesting. IIUC, if anyone using this code decides to sue Nvidia, the grants are revoked, and Nvidia can sue back for copyright infringement?

Also, it's interesting that even such "short" licenses contain trivial mistakes (section 2.2 is missing, though it is referenced from 3.4 and 3.6 - I wonder what it was...).

[0] https://nvlabs.github.io/stylegan2/license.html
Of course we’ll hit a wall at some point, but when this repo dropped the other night and I saw the rotating faces in the video, I realized that in the future, VR experiences might be generated with nets rather than modeled with traditional CG.
Cracking up at the car images. I made a Twitter bot that tweets them out with fake make/model names: https://twitter.com/bustletonauto
It makes me feel ill to see computers doing things like this. AI Dungeon was difficult to stomach as well. GANs were invented on a whim by a single person; nobody thought they would work when applied to this kind of problem, and they came out of nowhere. Pretty soon someone will try something on a higher-order task and it's going to work. We are opening Pandora's box, and I'm not sure that we should.