Amazing. Now all one needs is a random article generator that describes marriages, breakups, mishaps, accidents, and plain random appearances of these celebrities. Tada: random fake celebrity news, which is probably even better at wasting people's time than the real thing. This is going to happen really soon, because text generation is much easier than what the Nvidia guys did.
The thumbnail-sized version of creating fake celeb faces was one of the projects in the Deep Learning Udacity nanodegree. (These get changed over time, so it may be different now.)<p>The dataset of 200,000 celeb photos, with the face nicely centered at a known location, is a nontrivial part of making the exercise feasible.<p>I trained on Windows with a 6GB GTX 1060 and went off script from the DCGAN paper by using upscaling rather than transpose convolutions. Once all the fiddly details are set correctly, the results are quite amazing. It didn't even require a complete single pass over that dataset.
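For readers unfamiliar with that swap, here is a minimal PyTorch sketch (not the commenter's actual code; the layer sizes and block names are illustrative). Both blocks double the spatial resolution: one via a stride-2 transposed convolution as in the DCGAN paper, the other via nearest-neighbor upsampling followed by a plain convolution, which tends to reduce checkerboard artifacts.<p><pre><code>
# Illustrative sketch, assuming PyTorch; not the original project's code.
import torch
import torch.nn as nn

def transpose_block(in_ch, out_ch):
    # DCGAN-paper style: a stride-2 transposed convolution doubles H and W.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

def upsample_block(in_ch, out_ch):
    # The "off script" variant: upscale first, then apply a plain convolution.
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

if __name__ == "__main__":
    x = torch.randn(1, 128, 8, 8)             # a batch of 8x8 feature maps
    print(transpose_block(128, 64)(x).shape)  # torch.Size([1, 64, 16, 16])
    print(upsample_block(128, 64)(x).shape)   # torch.Size([1, 64, 16, 16])
</code></pre>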
Really interesting article, seeing how I'm finishing up my own DCGAN project!<p>Generative models like GANs are fascinating but very temperamental to train. Some of the findings in this paper mirror my own observations: increasing the complexity of the GAN adds a lot of instability. My solution was to keep things as simple as possible. I spent a lot of effort trying to increase the size of the network to get better results, but in the end my smallest implementation worked the best.<p>This bit is interesting: "Without progressive growing, all layers of the generator and discriminator are tasked with simultaneously finding succinct intermediate representations for both the large-scale variation and the small-scale detail. With progressive growing, however, the existing low-resolution layers are likely to have already converged early on, so the networks are only tasked with refining the representations by increasingly smaller-scale effects as new layers are introduced."
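To make the quoted idea concrete, here is a hypothetical PyTorch sketch of the fade-in step used when a new resolution is added during progressive growing; the class and argument names are assumptions, not the paper's code. The old low-resolution output is upscaled and blended with the new layer's output, with alpha ramping from 0 to 1 over training.<p><pre><code>
# Hypothetical sketch of the progressive-growing fade-in (PyTorch assumed).
import torch.nn as nn
import torch.nn.functional as F

class FadeInGenerator(nn.Module):
    def __init__(self, low_res_blocks, new_block, to_rgb_old, to_rgb_new):
        super().__init__()
        self.low_res_blocks = low_res_blocks  # already-converged layers
        self.new_block = new_block            # freshly added high-res layer
        self.to_rgb_old = to_rgb_old          # 1x1 conv at the old resolution
        self.to_rgb_new = to_rgb_new          # 1x1 conv at the new resolution

    def forward(self, z, alpha):
        low = self.low_res_blocks(z)
        # Path 1: the old output, naively upscaled to the new resolution.
        skip = F.interpolate(self.to_rgb_old(low), scale_factor=2, mode="nearest")
        # Path 2: the new layer's output at the new resolution.
        new = self.to_rgb_new(self.new_block(low))
        # Blend: at alpha=0 the new layer has no effect; at alpha=1 it is fully in.
        return (1 - alpha) * skip + alpha * new
</code></pre>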
I read the paper, but I did not understand a thing. What is the path to follow (for example, books or papers to read) in order to at least understand what the paper is talking about? My background is in computer science.
From the demonstration video [0] in the link, very interesting, but also very noticeable where the GAN attempts glasses or other "items" (see around 0:58 and 2:35, respectively). Has any research been done into GANs which aim for realistic artificial objects, instead of faces?<p>[0] <a href="https://www.youtube.com/watch?v=G06dEcZ-QTg&feature=youtu.be" rel="nofollow">https://www.youtube.com/watch?v=G06dEcZ-QTg&feature=youtu.be</a>
"We recommend NVIDIA DGX-1 with 8 Tesla V100 GPUs."<p><a href="https://www.nvidia.com/en-us/data-center/dgx-1/#order-now-dgx-1" rel="nofollow">https://www.nvidia.com/en-us/data-center/dgx-1/#order-now-dg...</a><p>The NVIDIA DGX is available for purchase in select countries and is priced at:<p><pre><code> DGX with P100 at $129,000*
DGX with V100 at $149,000*
DGX support plan is required and must be purchased separately.</code></pre>
It is really interesting to study the two images on the GitHub page. My first thought was: wow, amazing! But having looked at them for a bit longer, they've fallen straight into the uncanny valley for me.<p>The female face's left and right eyes are different shapes, as are her eyebrows. And the male face's ears appear to be in different places on the left and right sides of his head. His eyes are creepily different too.
The accompanying video reminds me of the Godley and Creme video ‘Cry’ from 1985.<p>Nvidia article: <a href="https://youtu.be/G06dEcZ-QTg" rel="nofollow">https://youtu.be/G06dEcZ-QTg</a><p>Cry: <a href="https://youtu.be/KxtPRF6NG7I" rel="nofollow">https://youtu.be/KxtPRF6NG7I</a>