Video showing its capabilities: <a href="http://www.youtube.com/watch?v=0XwaARRMbSA" rel="nofollow">http://www.youtube.com/watch?v=0XwaARRMbSA</a>
Very interesting approach.<p>One interesting aspect is that it couples spatial and temporal interpolation: you get a higher resolution as well as a higher framerate, but on the downside it seems to introduce additional artifacts depending on how these two interpolations interact.<p>I have not yet read the technical paper and only watched the video without sound, but from the video it seems that moving sharp edges introduce additional artifacts (visible on the features of the houses in peripheral vision at 5:11). This is roughly what you would expect if both pixel grids try to display a sharp edge but, due to their staggered update, one of the two edges is always in the wrong position.<p>This problem could probably be somewhat alleviated by an algorithm with some knowledge of the next frames, but that would introduce additional lag (bad for interactive content, horrible for virtual reality, not so bad for video).<p>I intend to read the paper later, but can anyone who has already read it comment on whether the shown examples already require knowledge of the next frame or half-frame?
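The staggered-edge artifact described above can be illustrated with a toy sketch (my own illustration, not from the paper): naively blending two frames in which a sharp edge has moved does not produce an edge at an intermediate position, it produces two half-strength edges, i.e. ghosting.

```python
# Toy sketch (not the paper's method): why per-pixel blending of two
# staggered frames smears a moving sharp edge instead of moving it.

def make_edge(width, edge_pos):
    """1-D 'frame': black (0.0) left of edge_pos, white (1.0) from it on."""
    return [0.0 if x < edge_pos else 1.0 for x in range(width)]

def blend(frame_a, frame_b, t):
    """Naive temporal interpolation: per-pixel linear blend."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

frame0 = make_edge(10, 3)   # edge at x=3
frame1 = make_edge(10, 6)   # edge has moved to x=6
half = blend(frame0, frame1, 0.5)
# A true intermediate frame would have a sharp edge near x=4 or 5;
# instead pixels 3..5 all end up at 0.5 -- two half-strength edges.
print(half)  # → [0.0, 0.0, 0.0, 0.5, 0.5, 0.5, 1.0, 1.0, 1.0, 1.0]
```

Motion-compensated interpolation avoids this, but estimating the motion reliably is exactly where knowledge of the next frame (and thus lag) would help.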
Unfortunately, this will be yet another proprietary Nvidia technology that nobody else will use, which means it won't see mass adoption, which means it's ultimately pointless (unless someone else creates an open-source version of it).