> The model also struggles to make a recognisable reconstruction when the scene is very low contrast, especially with faces.<p>It could be getting this wrong if his error function is computed directly on the raw pixel values, which are in the decidedly non-linear sRGB colorspace. That would make it badly underestimate any error in a dark image.<p>A quick check of the PIL docs doesn't mention gamma compensation, so they probably forgot about it. People usually do.
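For reference, here's a minimal sketch of what a gamma-aware error check could look like in Python with PIL and numpy (the filenames are hypothetical, and this is not the author's code):<p><pre><code>import numpy as np
from PIL import Image

def srgb_to_linear(c):
    # invert the sRGB transfer curve: gamma-encoded [0, 1] -> linear light
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

a = np.asarray(Image.open("original.png").convert("RGB"), dtype=np.float32) / 255.0
b = np.asarray(Image.open("reconstruction.png").convert("RGB"), dtype=np.float32) / 255.0

mse_gamma  = np.mean((a - b) ** 2)                                  # error on raw sRGB values
mse_linear = np.mean((srgb_to_linear(a) - srgb_to_linear(b)) ** 2)  # error in linear light
</code></pre>The two numbers diverge most in dark regions, which is exactly where the reconstructions fall apart.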
The autoencoder converts an image down to a compact code and then back to an approximation of the original image. The idea is similar to lossy compression, but tuned specifically to the dataset it's trained on.<p>According to the defaults in the code, it uses float32 arrays of the following sizes:<p><pre><code> image: 144 x 256 x 3 = 110,592
code: 200
</code></pre>
Note that the sequence of codes that the movie is converted to could possibly be further compressed.
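For illustration, here's a minimal dense autoencoder in Keras with that same 110,592 -> 200 bottleneck. This is only a sketch of the shape of the thing, not the author's actual architecture:<p><pre><code>import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

IMG_SHAPE = (144, 256, 3)   # 110,592 values per frame
CODE_SIZE = 200             # bottleneck code per frame

inputs  = keras.Input(shape=IMG_SHAPE)
code    = layers.Dense(CODE_SIZE, activation="relu")(layers.Flatten()(inputs))
decoded = layers.Dense(int(np.prod(IMG_SHAPE)), activation="sigmoid")(code)
outputs = layers.Reshape(IMG_SHAPE)(decoded)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(frames, frames, ...) trains it to reproduce its own input
</code></pre>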
Correct me if I'm wrong, as I haven't looked into this closely, but one glaring problem is that all the results are from the training set. So it's not surprising that you get something movie-ish by running the network over a movie <i>it was trained on</i>; the network has already seen what the output should look like.
This is incredible, and sounds almost like the middle-out algorithm dreamed up by Pied Piper. An impressive application of machine learning.
I'm a little confused by the article: it appears to me that the input to the neural net is a series of frames, and the output is a series of frames. So it works as a filter? Or is the input keyframes, with the net extrapolating the intermediary frames from them?<p>[ed: it does indeed appear from the GitHub page that the input is a series of PNG frames and the output is the same number of PNG frames, passed through the neural net. So not compression, but rather a filter operation?]
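If that reading is right, the whole thing is a per-frame map. A rough sketch under that assumption (the model file and directory names are made up), reusing a trained Keras model like the one sketched upthread:<p><pre><code>import glob, os
import numpy as np
from PIL import Image
from tensorflow import keras

model = keras.models.load_model("autoencoder.h5")  # hypothetical trained autoencoder
os.makedirs("frames_out", exist_ok=True)

for path in sorted(glob.glob("frames_in/*.png")):
    frame = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    recon = model.predict(frame[None, ...])[0]      # encode, then decode, one frame
    out   = Image.fromarray((recon * 255).astype(np.uint8))
    out.save(os.path.join("frames_out", os.path.basename(path)))
</code></pre>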
What I found most interesting was that A Scanner Darkly, which is rotoscoped, looked like live action in several of the coherent frames that had been filtered through his Blade Runner-trained network.
Correct me if I am wrong, but this is not so much "reconstruction" as "compression". (Or I've got it wrong. Or the description is simply unclear about what is being reconstructed from what.)<p>If it is the compression case, I am curious about the size of the compressed movie.
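A back-of-the-envelope answer, assuming the float32 defaults quoted upthread and ignoring the decoder weights (which you'd also have to ship):<p><pre><code>frame_vals = 144 * 256 * 3            # 110,592 float32 values per raw frame
code_vals  = 200                      # float32 values per encoded frame
bytes_per  = 4                        # sizeof(float32)

raw_frame  = frame_vals * bytes_per   # 442,368 bytes
code_frame = code_vals * bytes_per    # 800 bytes
print(raw_frame / code_frame)         # ~553x smaller per frame
</code></pre>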
I'm a developer who knows nothing about AI but is fascinated by the recent painting/music/"dreaming" applications of it.<p>What would be some good resources for 1. getting the bare minimum knowledge required to use existing libraries like TensorFlow, and 2. going a bit further and gaining at least a basic understanding of how the most popular ML/AI algorithms work?
What is the difference between using a neural network to do this and using a filter that achieves the same or a similar effect by distorting the input frames randomly?<p>I guess I feel like there's no practical result here; it's only interesting from an aesthetic point of view.<p>Am I being unfair?
It's a cool thing this guy did. It would be interesting to see how small the files generated in this process are. Just low-pass filtering the video, as somebody else suggested, would probably achieve a similarly lossy image.
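For comparison, the low-pass baseline is nearly a one-liner with PIL (the radius and filename here are arbitrary):<p><pre><code>from PIL import Image, ImageFilter

# crude low-pass baseline: blur a frame and eyeball it against the autoencoder output
Image.open("frame.png").filter(ImageFilter.GaussianBlur(radius=4)).save("frame_lowpass.png")
</code></pre>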
I guess what I take away from this is that maybe the way we store information in our brains looks kind of like this: fuzzy versions of the real thing?<p>It would be interesting if somebody could one day reconstruct, with high fidelity, what our brain is "seeing". I bet it would look something like this.
On a side note, I had never heard this voice-over version before; I am only used to the Harrison Ford voice-over. This one lets me understand why many didn't like the VO.<p>Now, back to the article: can someone explain roughly how many passes it takes before the output gets to near film quality? Can it eventually extrapolate missing frames?
The article lacks some details (I guess many can be found in the cited papers), but it definitely seems to be a giant step toward usable large-scale image analysis (providing a meaningful description). Maybe this could benefit Google's new CPU...
What are the potential applications for an autoencoder, aside from being an exercise in neural networks?<p>Lossy compression with super high compression ratios?