John Resig used a ConvNet to upscale Japanese prints, waifu2x<p><a href="http://ejohn.org/blog/using-waifu2x-to-upscale-japanese-prints/" rel="nofollow">http://ejohn.org/blog/using-waifu2x-to-upscale-japanese-prin...</a>
Interesting stuff! It would certainly benefit from a comparison to other super-resolution techniques, e.g.<p>Glasner et al. "Super-resolution from a single image"
Freeman et al. "Example-based super-resolution"
What's intriguing about this is that the output isn't really real. The best place to see this is the bark patterns on the trees in the last 3-way comparison. The output is convincing and yet not quite right. The neural net didn't <i>know</i>, so it <i>guessed plausibly</i>. Keep scaling it up and I bet you'd see Google Inceptionism-style dream details slipping in.
I'll admit that I skimmed the article, but I have the feeling this CNN didn't learn what they intended it to learn. Judging from the examples shown, they started with a full-resolution image and applied some downsampling algorithm to produce the low-resolution input for their algorithm. So their algorithm has learned to undo the specific downsampling they applied. That doesn't mean it will perform well on images that were never downsampled, or on images that were downsampled in a different way.
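To make the concern concrete, here's a minimal sketch of how such (low-res, high-res) training pairs are typically built. The box filter used here is an assumption for illustration (real pipelines often use bicubic or similar kernels); the point is that a network trained on these pairs learns to invert <i>this particular</i> kernel:

```python
import numpy as np

def make_training_pair(hr, factor=2):
    """Build an (LR, HR) training pair by box-filter downsampling.

    Assumed illustration only: a net trained on pairs generated this
    way learns to undo this specific kernel, and may not generalize
    to images degraded some other way (blur, JPEG, sensor noise).
    """
    h, w = hr.shape[:2]
    # Crop so the dimensions divide evenly by the downsampling factor.
    h, w = h - h % factor, w - w % factor
    hr = hr[:h, :w]
    # Average each factor x factor block to get the low-res image.
    lr = hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return lr, hr

hr = np.arange(16, dtype=float).reshape(4, 4)
lr, hr_crop = make_training_pair(hr)  # lr is 2x2, hr_crop is 4x4
```

Testing on images degraded by a different process (camera blur, JPEG artifacts, a different resampling filter) would show whether the model generalizes or has merely memorized the inverse of its training-time downsampler.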