
StyleGAN2

117 points | by rolux | over 5 years ago

12 comments

Veedrac · over 5 years ago

I set up this super simple 'Which Face Is Real?' (http://www.whichfaceisreal.com/) style challenge. Click the row to show the answers. You might need to zoom out.

https://veedrac.github.io/stylegan2-real-or-fake/game.html

There's a harder version as well, where the image is zoomed in.

https://veedrac.github.io/stylegan2-real-or-fake/game_cropped.html?x

I get 100% reliably with the first link (game.html), and got 4/5 on the cropped version (game_cropped.html) so far.
gwd · over 5 years ago

Only watched the video, but one of the interesting things is the potential method to tell a generated image from a real one: namely, if you take a generated image, it's possible to find parameters which will generate exactly the same image. But if you take a real image, it's generally *not* possible to get exactly the same image, but only a similar one.

The exact point in the video:

https://youtu.be/c-NJtV9Jvp0?t=208
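The observation above can be sketched with a toy model. Here a linear map stands in for StyleGAN2's generator, and a least-squares solve stands in for its optimization-based projector; both are illustrative stand-ins, not the actual method. An image that came out of the generator lies in its range and is inverted essentially exactly, while an arbitrary "real" image generally only projects to a nearby output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generator": maps a 16-dim latent code to a 64-dim "image".
W = rng.standard_normal((64, 16))

def g(z):
    return W @ z

def project(x):
    """Find the latent code whose output best matches x (least squares)."""
    z, *_ = np.linalg.lstsq(W, x, rcond=None)
    return z

# A generated image: projecting and re-generating recovers it almost exactly.
z_true = rng.standard_normal(16)
fake = g(z_true)
err_fake = np.linalg.norm(g(project(fake)) - fake)

# A "real" image (random, so outside the generator's 16-dim range):
# projection only finds a similar image, not the same one.
real = rng.standard_normal(64)
err_real = np.linalg.norm(g(project(real)) - real)

print(err_fake < 1e-8, err_real > 1.0)  # reconstruction error separates the two
```

The gap in reconstruction error is the proposed tell: near-zero for generated images, clearly nonzero for real ones.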
resiros · over 5 years ago

The demo in the official video is mind blowing. https://www.youtube.com/watch?v=c-NJtV9Jvp0 I wonder when we will see full movies, made with deep learning, that are indistinguishable from real ones.
alexcnwy · over 5 years ago

The part of the video showing the location bias in phase artifacts (straight teeth on angled faces) is really interesting and very clear in retrospect if you look at StyleGAN v1 outputs.

Their "new method for finding the latent code that reproduces a given image" is really interesting and I'm curious to see if it plays a role in the new $1 million Kaggle DeepFakes Detection competition.

It feels like we're almost out of the uncanny valley. It's interesting to place this in context and think about where this technology will be a few years from now - see this Tweet by Ian Goodfellow on 4.5 years of GAN progress for face generation: https://twitter.com/goodfellow_ian/status/1084973596236144640?lang=en
anonfunction · over 5 years ago

I'm surprised to see Nvidia hosting[1] the pre-trained networks on Google Drive, which has already been blocked for going over the quota:

> Google Drive download quota exceeded -- please try again later

1. https://github.com/NVlabs/stylegan2#using-pre-trained-networks
anyzen · over 5 years ago

A bit off-topic - the license [0] is interesting. IIUC, if anyone who is using this code decides to sue NVidia, the grants are revoked, and they can sue back for copyright infringement?

Also, it's interesting that even with such "short" licenses there are trivial mistakes in it (section 2.2 is missing, though it is referenced from 3.4 and 3.6 - I wonder what it was...)

[0] https://nvlabs.github.io/stylegan2/license.html
tiborsaas · over 5 years ago

Imagine when these faces start talking with a perfectly synthesized voice and tracking objects with their eyes, all generated in real time.
gdubs · over 5 years ago
Of course we’ll hit a wall at some point, but when this repo dropped the other night and I saw the rotating faces in the video, it made me realize that in the future, VR experiences might be generated with nets rather than modeled with traditional CG.
sails · over 5 years ago
Any good resources for using GANs to generate synthetic tabular data?
narsk · over 5 years ago

ctfu at the car images. I made a Twitter bot to tweet them out with fake make/model names: https://twitter.com/bustletonauto
nalllar · over 5 years ago

urgh, custom CUDA ops now.

Original StyleGAN worked on AMD cards; this won't without porting those.

):
jdkdnfndnfjd · over 5 years ago

It makes me feel ill to see computers doing things like this. AI Dungeon was difficult to stomach as well. GANs were invented on a whim by a single person. Nobody thought it would work when applied to this kind of problem. It came out of nowhere. Pretty soon someone will try something on a higher-order task and it's going to work. We are opening Pandora's box and I'm not sure that we should do that.