While the technology is amazing, I am a bit bothered by all these picture- and video-modifying algorithms.<p>The issue is that we can't know what's real any more. It used to be that if you saw a video or a photo depicting an event, you could be pretty sure that what you were looking at actually happened.<p>Now, if you see a video of a prominent politician saying something awful in your Twitter timeline (or wherever), they may never actually have said anything remotely close. It could be a completely fictional video that looks perfectly realistic[0], made by some teen in Macedonia.[1]<p>I realize photography and video have always been used to trick people into believing things that aren't true, but this technology enables nuclear-grade deception.<p>I am wondering: is there a use case for such an algorithm that is practical and good for the world?<p>PS: I know an eye-rolling algorithm is quite innocuous, but I've had this thought on my mind about these in general and needed to air it out.<p>[0] <a href="https://www.youtube.com/watch?v=ohmajJTcpNk" rel="nofollow">https://www.youtube.com/watch?v=ohmajJTcpNk</a>
[1] <a href="https://www.wired.com/2017/02/veles-macedonia-fake-news/" rel="nofollow">https://www.wired.com/2017/02/veles-macedonia-fake-news/</a>
We had it here a few months ago (see <a href="https://news.ycombinator.com/item?id=12164728" rel="nofollow">https://news.ycombinator.com/item?id=12164728</a>), but now there seems to be a demo available online.<p>Sample result: <a href="http://imgur.com/a/nyG4Z" rel="nofollow">http://imgur.com/a/nyG4Z</a><p>Abstract: <a href="http://sites.skoltech.ru/compvision/projects/deepwarp/" rel="nofollow">http://sites.skoltech.ru/compvision/projects/deepwarp/</a>
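For anyone curious about the mechanics: the DeepWarp abstract describes predicting a per-pixel warping field for the eye region and resampling the input image through it, rather than synthesizing pixels from scratch. Here is a minimal sketch (not the authors' code; all names are mine, and the network that predicts the flow is omitted) of the core resampling step, i.e. applying a displacement field with bilinear interpolation:

```python
import numpy as np

def warp_bilinear(img, flow):
    """Warp a grayscale image with a per-pixel displacement field using
    bilinear sampling -- the basic operation behind warping-based gaze
    redirection. img: (H, W) float array; flow: (H, W, 2) array of
    (dy, dx) offsets: output pixel (y, x) samples img at (y+dy, x+dx)."""
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    # Source coordinates for each output pixel, clamped to the image.
    sy = np.clip(ys + flow[..., 0], 0, H - 1)
    sx = np.clip(xs + flow[..., 1], 0, W - 1)
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, H - 1)
    x1 = np.minimum(x0 + 1, W - 1)
    wy = sy - y0
    wx = sx - x0
    # Blend the four neighbouring pixels.
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# Toy example: a single bright "iris" pixel in a 5x5 eye patch.
eye = np.zeros((5, 5))
eye[2, 2] = 1.0
# A uniform flow of dx=+1 makes every output pixel sample one pixel to
# its right, shifting the image content one pixel to the left.
flow = np.zeros((5, 5, 2))
flow[..., 1] = 1.0
shifted = warp_bilinear(eye, flow)  # bright pixel is now at (2, 1)
```

In the real system a trained network would output a different `flow` per input, conditioned on the desired gaze angle; because outputs are resampled from real input pixels, the results tend to stay photorealistic, though sampling artifacts at occlusion boundaries (like the eyelid/iris clipping noted below) are a known weakness of pure warping.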
I find the side-to-side motion nearly natural. However, the up-down movement seems to introduce clipping between the lower eyelid and the iris, in addition to a slightly smudged upper lid. Still very cool technology. And here I thought Skolkovo was long dead.