Terminology note: there is a difference between Style Transfer and <i>Fast</i> Style Transfer.<p>Fast Style Transfer takes a <i>very</i> long time to train a style model (and the output models can be somewhat large; the pre-trained models in the Dropbox are 20MB), but once trained, the style can be applied quickly. Fast Style Transfer is the technique used by Prisma/Facebook. This repo is the first I've seen that uses TensorFlow instead of Lua/Torch and its dependency shenanigans, and as a result it should be much easier to set up. (This code release also beats Google's TF release of their own implementation: <a href="https://research.googleblog.com/2016/10/supercharging-style-transfer.html" rel="nofollow">https://research.googleblog.com/2016/10/supercharging-style-...</a> )<p>EDIT: Playing around with this repository, I can stylize a 1200x1200px image in about 30 seconds on the dual-core CPU of a 2013 rMBP.<p>Normal Style Transfer optimizes a single output image directly against the style image and the content image; it only produces one image at a time, but that one-off optimization is still faster than training a model for Fast Style Transfer (either way, it's infeasible on mobile devices).
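The fast/slow distinction above can be sketched like this: Fast Style Transfer pays its cost once at training time and stylizes with a single forward pass, while classic (Gatys-style) transfer re-runs an optimization loop on the pixels of every new image. A toy numpy sketch, where `model_fn` and `grad_fn` are hypothetical stand-ins for the trained transform network and the content/style loss gradient (not the repo's actual API):

```python
import numpy as np

def stylize_fast(image, model_fn):
    """Fast Style Transfer at inference time: one feed-forward pass
    through a network already trained for a single style, which is
    why it runs in seconds even on a laptop CPU.
    `model_fn` is a stand-in for the trained transform network
    (the repo would load a ~20MB checkpoint instead)."""
    return model_fn(image)

def stylize_slow(image, grad_fn, steps=100, lr=0.01):
    """Classic (Gatys-style) transfer: iteratively optimize the
    pixels of one output image against a combined content/style
    loss, so every new image pays the full optimization cost.
    `grad_fn` is a stand-in for the loss gradient w.r.t. pixels."""
    x = image.astype(float).copy()
    for _ in range(steps):
        x -= lr * grad_fn(x)   # plain gradient descent on the pixels
    return x
```

With a real network, `grad_fn` would come from backpropagating through a pre-trained VGG; here it is just whatever callable you pass in.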
Nice work! Have you seen this? It could help reduce some of those checkerboard artifacts:<p><a href="http://distill.pub/2016/deconv-checkerboard/" rel="nofollow">http://distill.pub/2016/deconv-checkerboard/</a>
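For context: the checkerboard artifacts come from strided transposed convolutions whose kernel size isn't divisible by the stride, so output pixels receive unequal numbers of kernel contributions. The fix the Distill article suggests is to replace the transposed convolution with a nearest-neighbor resize followed by an ordinary convolution. A single-channel numpy sketch of that idea (illustrative only, not the repo's code):

```python
import numpy as np

def resize_conv(x, kernel, scale=2):
    """Upsample by nearest-neighbor resize, then convolve.

    Because the resize distributes every input pixel evenly before
    the convolution runs, each output pixel gets the same number of
    kernel contributions -- avoiding the uneven-overlap pattern that
    produces checkerboard artifacts in strided transposed convs."""
    # Nearest-neighbor upsampling: repeat rows and columns.
    up = np.repeat(np.repeat(x, scale, axis=0), scale, axis=1)
    # 'Same'-style convolution via explicit zero padding (odd kernels).
    kh, kw = kernel.shape
    padded = np.pad(up, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(up, dtype=float)
    for i in range(up.shape[0]):
        for j in range(up.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out
```

In a TensorFlow model this would be a resize op followed by a stride-1 conv layer in place of each `conv2d_transpose`.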
Utterly amazing what people are achieving with neural nets. The idea that 'style transfer' can be fit into an algorithm is slightly blowing my mind right now.<p>The jumping fox video does look a bit 'off' though, I think because the animation is kept the same and so it ends up looking too realistic for that style. Still, these are early days!
There has been a lot of cool image manipulation/transformation/synthesis work coming out over the past couple of years using NNs and such. I'm curious if any of these techniques have started worming their way into products? Will new effects in Photoshop (or whatever) get better over point releases as companies train up better and better NNs?
It would be nice if this algorithm could be applied in a more localised form, so that you could take a photo and apply the effect with different intensities as if you were using a brush.<p>Mainly because it sometimes gets things wrong, and I think a normal human could correct it if they had more control over the process.
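One simple way to get that brush-like control is to run the transfer on the whole image and then blend the stylized result back into the original with a per-pixel mask, where the mask value is the "brush strength". A minimal numpy sketch (assumes the stylized image has already been produced; `blend_by_mask` is a hypothetical helper, not part of the repo):

```python
import numpy as np

def blend_by_mask(original, stylized, mask):
    """Blend a stylized image back into the original with a
    per-pixel 'brush' mask in [0, 1] (1.0 = full style strength,
    0.0 = untouched original). Works for HxW or HxWxC images."""
    if original.ndim == 3:
        mask = mask[..., None]   # broadcast the mask over channels
    return mask * stylized + (1.0 - mask) * original
```

Painting with soft-edged brush values (e.g. a Gaussian falloff) would give smooth transitions between stylized and untouched regions.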
So you can transfer style from image A to image B. What I want to know is, can you use style transfer to "amplify" image A's style to, say, 500%?
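The closest analogue of "amplifying" a style in the standard (Gatys et al.) formulation is cranking up the style/content weight ratio: the output minimizes a weighted sum of a content loss and a style loss, so scaling the style term 5x pushes the result further toward the style. A toy illustration of that weighting (the function and its parameter names are hypothetical, not an API from the linked repo):

```python
def weighted_transfer_loss(content_loss, style_loss,
                           content_weight=1.0, style_weight=5.0,
                           amplification=1.0):
    """Total loss in the usual style-transfer formulation:
        content_weight * L_content + style_weight * L_style.
    Setting amplification=5.0 weights the style term 5x more
    heavily, which is roughly what '500% style' would mean."""
    return (content_weight * content_loss
            + amplification * style_weight * style_loss)
```

Note this amplifies the style relative to image B's content; amplifying image A's style "against itself" (A as both content and style) with a large style weight is another way people have exaggerated a single image's texture.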