I think the approach is really cool, but the processing time required is too much for this to be very useful at the moment.<p>On a 1080 Ti it takes 45-90 minutes to train networks for the various tasks on 256px images (depending on some quality parameters and which task). Each task also requires training individually, so if you'd like to try them all for a given image you'll need to train 6 times.<p>Also, the pyramid-of-GANs approach is very memory hungry. I was only able to get up to 724px images with 11 GB of VRAM, and only with a higher scale factor (a sparser pyramid), which sacrifices a lot of quality and is incredibly noticeable at larger image sizes. I only tried larger sizes with the animation task, though; perhaps there is a way to combine the super-resolution and animation tasks and achieve better results. Training on larger sizes was taking upwards of 6-8 hours.<p>All of this was tested with the official repo[1] about a month ago.<p>[1] <a href="https://github.com/tamarott/SinGAN" rel="nofollow">https://github.com/tamarott/SinGAN</a>
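To make the memory point concrete: SinGAN trains a separate generator/discriminator pair at each pyramid level, so the step between levels directly controls how many networks you hold in VRAM. Here's a rough sketch of that geometry (the 25px floor and the `step` parameter are illustrative assumptions, not necessarily SinGAN's actual defaults):

```python
def pyramid_sizes(full_size, min_size=25, step=4/3):
    """Image sizes at each pyramid level, coarsest first.

    Each level is `step` times larger than the one below it;
    levels stop once the image would fall below `min_size`.
    Illustrative only -- SinGAN's real scale schedule may differ.
    """
    sizes = []
    s = full_size
    while s >= min_size:
        sizes.append(round(s))
        s /= step
    return sizes[::-1]  # coarsest first

# Dense pyramid (small step between levels) vs. sparse pyramid:
dense = pyramid_sizes(724, step=4/3)   # 12 levels
sparse = pyramid_sizes(724, step=2)    # 5 levels
```

A bigger step means far fewer levels (and GAN pairs) to train, which is why bumping the scale factor was the only way I could fit 724px in 11 GB, at the cost of the coarse-to-fine refinement that the dense pyramid buys you.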
I haven't read the paper in detail yet, but it reminds me of Deep Image Prior (<a href="https://sites.skoltech.ru/app/data/uploads/sites/25/2018/04/deep_image_prior.pdf" rel="nofollow">https://sites.skoltech.ru/app/data/uploads/sites/25/2018/04/...</a>).
Having spent some time trying to do style transfers, this looks very promising.<p>The harmonization aspect of the paper is what actually makes it very useful. There certainly are cases where you want to introduce an image component as an overlay and have its style blend in.<p>Really cool stuff, and with code!