This is so cool, and I can't help feeling I'm missing something important that's taking place and has huge potential.<p>As a busy programmer who is exhausted at night from the mental effort my day job requires, I feel like I will never be able to catch up at this rate.<p>Are there any introductory materials for this field? Something I can read slowly on weekends that gives an overview of the fundamental concepts (primarily) and basic techniques (secondarily) without overwhelming the reader with the more advanced/complicated techniques (at least at the beginning).<p>I'd really appreciate any recommendations.
Brief summary: a nice intro to what generative models are and the current popular approaches/papers, followed by descriptions of recent work by OpenAI in the space. Quick links to the papers mentioned:<p>Improving GANs <a href="https://arxiv.org/abs/1606.03498" rel="nofollow">https://arxiv.org/abs/1606.03498</a><p>Improving VAEs <a href="http://arxiv.org/abs/1606.04934" rel="nofollow">http://arxiv.org/abs/1606.04934</a><p>InfoGAN <a href="https://arxiv.org/abs/1606.03657" rel="nofollow">https://arxiv.org/abs/1606.03657</a><p>Curiosity-driven Exploration in Deep Reinforcement Learning via Bayesian Neural Networks <a href="http://arxiv.org/abs/1605.09674" rel="nofollow">http://arxiv.org/abs/1605.09674</a><p>Generative Adversarial Imitation Learning <a href="http://arxiv.org/abs/1606.03476" rel="nofollow">http://arxiv.org/abs/1606.03476</a><p>The last one seems very exciting; I expect imitation learning would be a great approach for many robotics tasks.
Very cool. As you're thinking about unsupervised or semi-supervised deep learning, consider medical datasets as a potential domain.<p>ImageNet has 1,034,908 labeled images. In a hospital setting, you'd be lucky to get 1000 participants.<p>That means those datasets really show off the power of unsupervised, semi-supervised, or one-shot learning algorithms. And if you set up the problem well, each increment in ROC translates into lives saved.<p>Happy to point you in the right direction when the time comes—my email is in my HN profile.
Have these techniques been used to generate realistic-looking test data for testing software? I have had ideas along these lines, but people think I'm talking about fuzz testing when I try to describe it.<p>I'm imagining something where you take a corporate db and reduce it to a model. That model could then be shared with third parties and used to generate unlimited amounts of test data that looks like real data, without revealing any actual user info.
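A minimal sketch of that idea, with a simple Gaussian model standing in for a real generative model and made-up column names (age, salary) standing in for a real corporate table: fit summary statistics on the real data, share only the fitted model, and let the third party sample as many synthetic rows as they like.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a real corporate table: two numeric columns (age, salary).
real = np.column_stack([
    rng.normal(40, 10, 500),          # age
    rng.normal(70_000, 15_000, 500),  # salary
])

# "Reduce it to a model": here just the empirical mean and covariance.
# A production system would use a richer generative model (e.g. a VAE or GAN)
# and would need care around privacy leakage from the fitted parameters.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# A third party holding only (mean, cov) can now generate unlimited
# synthetic rows with the same joint statistics, never seeing real rows.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print(synthetic.shape)  # (1000, 2)
```

The same shape of pipeline (fit on private data, ship the model, sample downstream) is exactly what a GAN- or VAE-based version would do; the Gaussian is just the simplest possible "model".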
I really like that they used TensorFlow and published their code on GitHub. It will help a lot of people like me who are new to the field and want to learn more about generative models. Amazing work by the OpenAI team!
The actual outputs look grotesque. Disembodied dog torsos with seven eyeballs and such. It's cool, but to me this clearly shows the local nature of convolutional nets; it's a limitation that will have to be overcome to truly generate lifelike images from scratch.
The generated images look like the stuff nightmares are made of. Which is to say, they're extremely aesthetically unpleasant. So what exactly have these networks learned?
Interesting topic, tedious article. Paraphrasing:<p>Q: What's a generative model?<p>A: Well, we have these neural nets and...<p>Ugh. I understand the excitement for one's own research, but if the point is to make these results accessible to a wider audience, then it's important not to get lost in the details, at least not right away. IMO, there's very little here in the way of high-level intuition. If I did not already have a PhD, and some exposure to ML (not my area), I would probably find this article entirely indecipherable. Again, paraphrasing:<p>Q: OK, so I understand you want to create pictures that resemble real photos. And you really like this DCGAN method, right?<p>A: Yes! See, it takes 100 random numbers and...<p>Come on, guys. You can do better.