Autoencoding Blade Runner: reconstructing films with artificial neural networks

119 points by arto almost 9 years ago

14 comments

astrange almost 9 years ago
> The model also struggles to make a recognisable reconstruction when the scene is very low contrast, especially with faces.

It could be getting this wrong if his error function is calculating linear data from the given image pixels, which are in the totally not linear sRGB colorspace. That would make it badly underestimate any error in a dark image.

Quick check of the PIL docs doesn't mention gamma compensation, so they probably forgot about it. People usually do.
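For context on the gamma point: sRGB is a nonlinear encoding, so a pixelwise loss weights dark regions very differently depending on whether it is computed on the encoded values or on linearized ones. The article does not show the actual loss function, so the snippet below is only meant to make the colourspace issue concrete.

    import numpy as np

    def srgb_to_linear(x):
        """Decode sRGB values in [0, 1] to linear light (IEC 61966-2-1)."""
        return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

    def mse(a, b):
        return float(np.mean((a - b) ** 2))

    # The same small pixel offset in a dark patch produces very different
    # losses depending on the space the error is measured in.
    dark_patch = np.full((16, 16, 3), 0.05)   # sRGB-encoded dark region
    reconstruction = dark_patch + 0.02        # slightly-off reconstruction

    print("MSE on sRGB values:      ", mse(dark_patch, reconstruction))
    print("MSE on linearized values:", mse(srgb_to_linear(dark_patch),
                                           srgb_to_linear(reconstruction)))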
leecho0 almost 9 years ago
The autoencoder converts an image to a reduced code then back to the original image. The idea is similar to lossy compression, but it's geared specifically for the dataset that it's trained on.

According to the defaults in the code, it uses float32 arrays of the following sizes:

    image: 144 x 256 x 3 = 110,592
    code:  200

Note that the sequence of codes that the movie is converted to could possibly be further compressed.
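A shape-only illustration of the sizes quoted above, using untrained random weights; the project's real network is trained and considerably more elaborate, so this shows nothing beyond the 110,592 → 200 → 110,592 mapping.

    import numpy as np

    IMAGE_DIM = 144 * 256 * 3   # 110,592 float32 values per frame
    CODE_DIM = 200

    rng = np.random.default_rng(0)
    W_enc = rng.standard_normal((IMAGE_DIM, CODE_DIM)).astype(np.float32)
    W_dec = rng.standard_normal((CODE_DIM, IMAGE_DIM)).astype(np.float32)

    frame = rng.random(IMAGE_DIM, dtype=np.float32)   # stand-in for one frame
    code = frame @ W_enc            # shape (200,): the compressed code
    reconstruction = code @ W_dec   # shape (110592,): back to image size
    print(code.shape, reconstruction.shape)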
argonaut almost 9 years ago
Correct me if I haven't looked into this closely, but one glaring problem is that all the results are from the training set. So it's not surprising you get something movie-ish by running the network over a movie *it was trained on*; the network has already seen what the output of the movie should look like.
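A sketch of the check argonaut is describing: compare reconstruction error on frames the network was trained on against frames it has never seen. The encode/decode functions and the random stand-in frames below are placeholders, not the project's code.

    import numpy as np

    FRAME_DIM = 144 * 256 * 3
    rng = np.random.default_rng(0)

    def encode(frame):                        # placeholder "encoder"
        return frame[:200]

    def decode(code):                         # placeholder "decoder"
        return np.resize(code, FRAME_DIM)

    frames = rng.random((100, FRAME_DIM), dtype=np.float32)  # stand-in frames
    seen, unseen = frames[:90], frames[90:]                  # 90/10 split

    def mean_error(batch):
        return float(np.mean([np.mean((decode(encode(f)) - f) ** 2) for f in batch]))

    print("seen frames:  ", mean_error(seen))
    print("unseen frames:", mean_error(unseen))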
DennisAleynikov almost 9 years ago
This is incredible, and almost sounds like the algorithm dreamed up by Pied Piper's middle-out. Incredible application of machine learning technology.
e12e almost 9 years ago
I'm a little confused by the article: it appears to me that the input to the neural net is a series of frames, and the output is a series of frames? So it works as a filter? Or is the input key-frames, so that the net extrapolates intermediary frames from the keyframes?

[ed: it does indeed appear from the GitHub page that the input is a series of PNG frames, and the output is the same number of PNG frames, filtered through the neural net. No compression, but rather a filter operation?]
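A sketch of the per-frame workflow e12e describes, with the directory names and the identity "model" as placeholders (the real repo loads its trained autoencoder at that point):

    import glob
    import os

    import numpy as np
    from PIL import Image

    def reconstruct(frame):
        """Placeholder for encode-then-decode through the trained network."""
        return frame

    os.makedirs("out", exist_ok=True)
    for path in sorted(glob.glob("frames/*.png")):   # hypothetical input dir
        frame = np.asarray(Image.open(path), dtype=np.float32) / 255.0
        out = np.clip(reconstruct(frame), 0.0, 1.0)
        out_path = os.path.join("out", os.path.basename(path))
        Image.fromarray((out * 255).astype(np.uint8)).save(out_path)
    # Same number of PNGs out as in: a filter pass, not a compressed file.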
failrate almost 9 years ago
What I found most interesting was that A Scanner Darkly, which is rotoscoped, looked live action in several of the coherent frames that had been filtered through his Blade Runner-trained network.
stared almost 9 years ago
Correct me if I am wrong, but it is not so much "reconstruction" as "compression". (Or I got it wrong. Or the description is utterly unclear: reconstruct what from what.)

If it is the compression case, I am curious about the size of the compressed movie.
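A rough answer using the 200-float code size leecho0 quotes above, assuming roughly a two-hour film at 24 fps (both assumptions, not figures from the article), and ignoring the decoder weights that would also be needed for playback:

    frames = 2 * 60 * 60 * 24        # ~172,800 frames
    code_bytes = 200 * 4             # 200 float32 values per frame
    raw_bytes = 144 * 256 * 3 * 4    # one uncompressed float32 frame

    print(f"code sequence: {frames * code_bytes / 1e6:.0f} MB")   # ~138 MB
    print(f"raw frames:    {frames * raw_bytes / 1e9:.1f} GB")    # ~76 GB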
renaudg almost 9 years ago
I'm a developer who knows nothing about AI but is fascinated by the recent painting/music/"dreaming" applications of it.

What would be some good resources for 1. getting the bare minimum knowledge required to use existing libraries like TensorFlow, and 2. going a bit further and having at least some basic understanding of how the most popular ML/AI algorithms work?
aardshark almost 9 years ago
What is the difference between using a neural network to do this and using a filter that obtains the same or similar effect by distorting the frames of the input randomly?

I guess I feel like there's no practical result here. It's only interesting from an aesthetic point of view.

Am I being unfair?
xt00 almost 9 years ago
It's a cool thing this guy did. It would be interesting to see how small the files generated in this process are. Just low-pass filtering the video, as somebody else suggested, would probably achieve a similarly lossy image. I guess what I take away from this is that maybe the way we store info in our brains looks kind of like this? Kinda fuzzy versions of the real thing?

It would be interesting if somebody one of these days could actually reconstruct with high fidelity what our brain is "seeing". I bet it would look kind of like this.
Shivetya almost 9 years ago
On a side note, I had never heard this voice-over version before; I am only used to the Harrison Ford voice-over. This one lets me understand why many didn't like the VO.

Now back to the article: can someone explain how many passes it takes before it gets to near film quality? Can it extrapolate missing frames eventually?
alivarys almost 9 years ago
The article lacks some details (I guess many can be found in the cited papers), but it definitely seems to be a giant step toward usable large scale image analysis (providing a meaningful description). Maybe this could benefit Google's new CPU...
listic almost 9 years ago
What are the potential applications for an autoencoder, aside from an exercise in neural networks?

Lossy compression with super high compression rates?
bcheung almost 9 years ago
Very interesting. This makes me wonder if a similar technique can be used for compression?