For context, a different colorization model with about the same results: <a href="http://richzhang.github.io/colorization/" rel="nofollow">http://richzhang.github.io/colorization/</a><p>Another model previously posted on HN, with (IMO) worse results than these two models: <a href="http://tinyclouds.org/colorize/" rel="nofollow">http://tinyclouds.org/colorize/</a>
This is amazing to me. My major was Digital Imaging Technology in 2005, and I remember doing this by hand in Photoshop, wondering whether one day there would be a button for it.
Tried it with some historical photos and my own B&W images. It's missing the global image prior for most images except vegetation, and it has a hard time even with people. I've seen similar problems with local features. My guess is that they trained it on too small a dataset, and the spectacular samples come from overfitting. While the idea looks promising, the current implementation is far from general.
A small sampling of how this performs on some B&W images I had lying around (all my family):<p>My grandfather & brothers: <a href="http://adam.gs/v/IMG_0090.jpg" rel="nofollow">http://adam.gs/v/IMG_0090.jpg</a> <a href="http://adam.gs/v/IMG_0090.color.jpg" rel="nofollow">http://adam.gs/v/IMG_0090.color.jpg</a><p>My grandfather, my mother, and my aunt: <a href="http://adam.gs/v/IMG_4629.jpg" rel="nofollow">http://adam.gs/v/IMG_4629.jpg</a> <a href="http://adam.gs/v/IMG_4629.color.jpg" rel="nofollow">http://adam.gs/v/IMG_4629.color.jpg</a><p>My grandfather and my grandmother: <a href="http://adam.gs/v/IMG_6868.jpg" rel="nofollow">http://adam.gs/v/IMG_6868.jpg</a> <a href="http://adam.gs/v/IMG_6868.color.jpg" rel="nofollow">http://adam.gs/v/IMG_6868.color.jpg</a><p>From my perspective, these are decent results considering what the model has to work with; I think it did a very good job.
Impressive stuff. I especially like the style transfer that can be done by using the global features of one image and the local features of another (Fig. 7).<p>What I find somewhat annoying is that while they show some examples from their validation set and a couple of examples of model failures, they don't appear to show a <i>random</i> selection of cases from their validation set.
I'm surprised by how apt Lua is for these kinds of algorithms. From the architecture diagram I expected to be hit by a large blob of code, but found that most things are taken care of by the language/framework itself!
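To illustrate what that looks like (a minimal sketch in Torch's nn package, not the authors' actual network — the layer sizes here are made up), a convolutional stack is just a handful of declarative lines:<p>

    -- Minimal Torch7/nn sketch of a small convolutional stack.
    -- Layer sizes are illustrative, not the paper's architecture.
    require 'nn'

    local net = nn.Sequential()
    -- input: a 1-channel grayscale image
    net:add(nn.SpatialConvolution(1, 64, 3, 3, 2, 2, 1, 1))   -- 1 -> 64 maps, 3x3 kernel, stride 2
    net:add(nn.ReLU(true))
    net:add(nn.SpatialConvolution(64, 128, 3, 3, 1, 1, 1, 1)) -- 64 -> 128 maps, stride 1
    net:add(nn.ReLU(true))

    -- forward a dummy 224x224 grayscale image
    local out = net:forward(torch.randn(1, 224, 224))
    print(out:size())  -- 128 x 112 x 112

Most of the bookkeeping (weight storage, gradients, the forward/backward passes) lives in the framework, which is why the repo itself can stay so small.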
Can someone please put the file colornet.t7 on the torrent network or a high-volume service somewhere? I'm probably not the only one having a hard time downloading that file.
I would really like to see approaches like this applied to movie scenes, especially to see how inconsistencies between independently colorized frames of the same scene would be handled.
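The naive baseline would be to colorize frames independently and then temporally smooth the predicted chrominance so the same scene doesn't flicker between frames. A rough Torch sketch of that idea (the `colorize` function here is a placeholder standing in for a per-frame model like this one, not this project's API):<p>

    -- Hedged sketch: per-frame colorization plus exponential smoothing
    -- of the predicted chrominance to reduce frame-to-frame flicker.
    require 'torch'

    local function colorize(lum)
      -- placeholder: pretend this returns 2 chrominance channels (e.g. ab in Lab)
      return torch.randn(2, lum:size(2), lum:size(3))
    end

    local alpha = 0.8     -- weight on the current frame's prediction
    local smoothed = nil  -- running average of chrominance

    local frames = {}     -- stand-in for a decoded clip: 1xHxW luminance tensors
    for i = 1, 5 do frames[i] = torch.randn(1, 64, 64) end

    for i, lum in ipairs(frames) do
      local chroma = colorize(lum)
      if smoothed == nil then
        smoothed = chroma:clone()
      else
        -- s_t = alpha * c_t + (1 - alpha) * s_{t-1}
        smoothed:mul(1 - alpha):add(alpha, chroma)
      end
      frames[i] = torch.cat(lum, smoothed, 1)  -- 3xHxW: L plus smoothed chrominance
    end

This only papers over flicker, of course; scene cuts and fast motion would need something smarter than a running average.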
Not sure if the title would be too long, but I'll be honest and say that I thought this was about the news organization for a minute.<p>CNN = Convolutional Neural Networks in this context.