
Image Compression with Neural Networks

232 points by hurrycane · over 8 years ago

19 comments

emcq · over 8 years ago
This is pretty neat. But is it just me, or does the dog picture look better in JPEG?

When zoomed in, the JPEG artifacts are quite apparent and the RNN produces a much smoother image. However, to my eye, when zoomed out the high-frequency "noise", particularly in the snout area, looks better in JPEG. The RNN produces a somewhat blurrier image that reminds me of the soft-focus effect.

richard_todd · over 8 years ago
JPEG 2000 had about a 20% reduction in size over typical JPEG, while producing virtually no blocking artifacts, 16 years ago [1]. Almost no one uses it, though. Now in 2016 we are using neural networks to get a similar reduction, except the dog's snout looks blurry, and with a process that I assume is much more resource-intensive. It's interesting for sure, but if people didn't care about JP2, they would have to be drinking some serious AI Kool-Aid to want something like this.

[1]: https://en.m.wikipedia.org/wiki/JPEG_2000

starmole · over 8 years ago
Important quote from the paper:

"The next challenge will be besting compression methods derived from video compression codecs, such as WebP (which was derived from VP8 video codec), on large images since they employ tricks such as reusing patches that were already decoded."

Beating block-based JPEG with a global algorithm doesn't seem that exciting.

the8472 · over 8 years ago
Why does a blog page showing static content do madness like this? I'd think Google engineers of all people would know better. The site doesn't even work without JavaScript from a third-party domain.

https://my.mixtape.moe/klvzip.png

Static mirror: https://archive.fo/yyozl

wyldfire · over 8 years ago
> Instead of using a DCT to generate a new bit representation like many compression schemes in use today, we train two sets of neural networks - one to create the codes from the image (encoder) and another to create the image from the codes (decoder).

So instead of implementing a DCT on my client, I need to implement a neural network? Or are these encoder/decoder steps merely used for the iterative "encoding" process? It seems like the representation of a "GRU" file is different from any other.

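The fixed-transform-vs-learned-transform distinction can be made concrete with a minimal numpy sketch (this is not the paper's GRU method): a hand-designed orthonormal DCT basis versus a linear encoder/decoder pair fitted to the data via PCA. The toy 1-D "patches" and the choice of keeping 4 of 8 coefficients are purely illustrative.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis: a fixed, hand-designed transform."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

rng = np.random.default_rng(0)
# Toy "image patches": 8-sample signals with smooth (random-walk) structure.
patches = np.cumsum(rng.normal(size=(500, 8)), axis=1)

# Fixed codec: encode = project onto the DCT basis, keep 4 of 8 coefficients.
D = dct_matrix(8)
codes_dct = patches @ D.T
codes_dct[:, 4:] = 0.0              # crude "compression": drop high frequencies
recon_dct = codes_dct @ D           # decode with the same fixed basis

# Learned codec: encoder/decoder basis fitted to the data (PCA via SVD).
mean = patches.mean(axis=0)
_, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
codes_pca = (patches - mean) @ Vt[:4].T   # keep 4 learned components
recon_pca = codes_pca @ Vt[:4] + mean

err_dct = np.mean((patches - recon_dct) ** 2)
err_pca = np.mean((patches - recon_pca) ** 2)
print(err_dct, err_pca)  # the learned basis fits this data at least as well
```

The client-side cost question stands, though: the "decoder" here is an 8x4 matrix, whereas the paper's decoder is a full recurrent network.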
jpambrun · over 8 years ago
It's fun and scientifically interesting, but the decoder model is 87 MB by itself.

ilaksh · over 8 years ago
I asked about the possibility of doing this type of thing on CS Stack Exchange two years ago.

http://cs.stackexchange.com/questions/22317/does-there-exist-a-data-compression-algorithm-that-uses-a-large-dataset-distribu

They basically ripped me a new one, said it was a stupid idea, and that I shouldn't make suggestions in a question. Then I took the suggestions and details out (but left the basic concept in there) and they gave me a lecture on the basics of image compression.

Made me really not want to try to discuss anything with anyone after that.

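The shared-dataset idea the question describes does in fact exist in conventional codecs as a "preset dictionary": both sides hold a common corpus, and the compressor emits back-references into it. A small sketch using Python's zlib (`zdict` on `compressobj`/`decompressobj`); the shared corpus here is just illustrative text.

```python
import zlib

# A "large dataset" both sides already share (here, representative text).
shared = (b"neural network image compression encoder decoder "
          b"quantization entropy coding bitrate distortion ") * 50

msg = b"neural network image compression with encoder and decoder networks"

# Baseline: compress the message alone.
plain = zlib.compress(msg, 9)

# With a preset dictionary: the compressor may reference the shared data,
# so the message itself costs far fewer bits to transmit.
c = zlib.compressobj(level=9, zdict=shared)
with_dict = c.compress(msg) + c.flush()

# Decompression needs the same dictionary on the receiving side.
d = zlib.decompressobj(zdict=shared)
out = d.decompress(with_dict)
print(len(plain), len(with_dict))  # dictionary-backed stream is smaller
```

A learned codec takes the same idea further: the "dictionary" becomes the statistics of natural images, baked into network weights.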
ChrisFoster · over 8 years ago
It's quite exciting to see progress on a data-driven approach to compression. Any compression program encodes a certain amount of information about the correlations of the input data in the program itself. It's a big engineering task to determine a simple and computationally efficient scheme which models a given type of correlation.

It seems to me like the data-driven approach could greatly outperform hand-tuned codecs in terms of compression ratio by using a far more expressive model of the input data. Computational cost and model size are likely to be a lot higher, though, unless that's also factored into the optimization problem as a regularization term: if you don't ask for simplicity, you're unlikely to get it!

Lossy codecs like JPEG are optimized to permit the kinds of errors that humans don't find objectionable. However, it's easy to imagine that this is not the *right kind* of lossiness for some use cases. With a data-driven approach, one could imagine optimizing for compression which only loses information irrelevant to a (potentially nonhuman) process consuming the data.

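The "ask for simplicity" point can be written as a single objective: charge for distortion, for code size, and for model size all at once. A toy sketch; the function name and the λ/μ weights are hypothetical and purely illustrative.

```python
import numpy as np

def codec_objective(original, reconstruction, code, weights,
                    lam_rate=0.01, mu_model=0.001):
    """Toy rate-distortion-complexity objective (weights are illustrative).

    distortion: how far the reconstruction is from the input
    rate:       proxy for code size (L1 encourages sparse, cheap codes)
    complexity: proxy for model size (penalizes heavyweight decoders)
    """
    distortion = np.mean((original - reconstruction) ** 2)
    rate = np.sum(np.abs(code))
    complexity = sum(np.sum(np.abs(w)) for w in weights)
    return distortion + lam_rate * rate + mu_model * complexity

x = np.linspace(0, 1, 16)
dense_code = np.ones(16)
sparse_code = np.zeros(16)
sparse_code[:2] = 1.0
w = [np.ones((4, 4))]

# Same distortion, sparser code => lower objective.
print(codec_objective(x, x, dense_code, w), codec_objective(x, x, sparse_code, w))
```

Without the μ term, the optimizer is free to return an 87 MB decoder, which is exactly the failure mode mentioned elsewhere in this thread.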
Houshalter · over 8 years ago
This seems so overly complicated, with the RNN learning to do arithmetic coding and image compression all at once. Why not do something like autoencoders to compress the image? Then you need only send a small hidden state. You can compress an image to many fewer bits like that. Then you can clean up the remaining error by sending the smaller delta, which itself can be compressed, either by the same neural net or with standard image compression.

The idea of using NNs for compression has been around for at least two decades. The real issue is that it's ridiculously slow. Performance is a big deal for most applications.

It's also not clear how to handle different resolutions or aspect ratios.

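The two-stage scheme above can be sketched in a few lines of numpy, with 2x2 mean-pooling standing in for the autoencoder's small hidden state and a coarsely quantized residual as the delta. The pooling, the step size, and the int8 dtype are illustrative assumptions, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((8, 8))

# Stage 1: a tiny "hidden state" (4x4 mean-pooling stands in for an
# autoencoder bottleneck).
hidden = img.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# Decode: nearest-neighbour upsample back to 8x8.
coarse = np.repeat(np.repeat(hidden, 2, axis=0), 2, axis=1)

# Stage 2: quantize the residual coarsely and send it as the "delta".
step = 0.1
delta_q = np.round((img - coarse) / step).astype(np.int8)  # small, compressible
recon = coarse + delta_q * step

err_coarse = np.abs(img - coarse).max()
err_final = np.abs(img - recon).max()
print(err_coarse, err_final)  # delta correction tightens the error bound
```

The delta plane is mostly small integers, so a standard entropy coder would shrink it much further.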
Lerc · over 8 years ago
I see there being a number of paths for neural network compression.

The simplest is a network with inputs of [X,Y] and outputs of [R,G,B], where the image is encoded into the network weights. You have to train the network per image. My guess is it would need large, complex images before you could get compression rates comparable to simpler techniques. An example of this can be seen at http://cs.stanford.edu/people/karpathy/convnetjs/demo/image_regression.html

In the same vein, you could encode video as a network of [X,Y,T] -> [R,G,B]. I suspect that would be getting into lifetime-of-the-universe scales of training time to get high quality.

The other way to go is a neural net decoder. The network is trained to generate images from input data. You could theoretically train a network to do an IDCT, so it is also within the bounds of possibility that you could train a better transform that has better quality/compressibility characteristics. This is one network for all possible images.

You can also do hybrids of the above techniques, where you train a decoder to handle a class of image and then provide an input bundle.

I think the place where neural networks would excel would be as a predictive+delta compression method. Neural networks should be able to predict based upon the context of the parts of the image that have already been decoded.

Imagine a neural network image upscaler that doubled the size of a lower-resolution image. If you store a delta map to correct any areas where the upscaler guesses excessively wrong, then you have a method to store arbitrary images. Ideally you can roll the delta encoding into the network as well. Rather than just correcting poor guesses, the network could rank possible outputs by likelihood. The delta map then just picks the correct guess, which, if the predictor is good, should result in an extremely compressible delta map.

The principle is broadly similar to the approach of wavelet compression, only with a neural network the network can potentially go "That's an eye/frog/egg/box, I know how this is going to look scaled up".

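The simplest path described above, a per-image [X,Y] -> [R,G,B] network like the convnetjs demo, fits in a short numpy sketch: a tiny MLP trained by plain gradient descent so a synthetic image "soaks into" its weights. One grayscale channel instead of three here, and the layer sizes, learning rate, and iteration count are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 16x16 "image": a smooth gradient plus a bright disc.
n = 16
ys, xs = np.mgrid[0:n, 0:n] / (n - 1)
img = 0.5 * xs + 0.5 * ((xs - 0.5) ** 2 + (ys - 0.5) ** 2 < 0.08)

coords = np.stack([xs.ravel(), ys.ravel()], axis=1)   # inputs: [X, Y]
target = img.ravel()[:, None]                         # output: 1 channel

# Tiny 2-layer MLP; the "compressed image" is just these weights.
h = 32
W1 = rng.normal(0, 1.0, (2, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, 1)); b2 = np.zeros(1)

def forward(x):
    z = np.tanh(x @ W1 + b1)
    return z, z @ W2 + b2

_, out0 = forward(coords)
loss0 = np.mean((out0 - target) ** 2)

lr = 0.01
for _ in range(2000):                  # plain gradient descent on MSE
    z, out = forward(coords)
    g = 2 * (out - target) / len(target)
    gz = (g @ W2.T) * (1 - z ** 2)     # backprop through tanh
    W2 -= lr * z.T @ g; b2 -= lr * g.sum(axis=0)
    W1 -= lr * coords.T @ gz; b1 -= lr * gz.sum(axis=0)

_, out1 = forward(coords)
loss1 = np.mean((out1 - target) ** 2)
print(loss0, loss1)
```

The weight count here (2·32 + 32 + 32 + 1 = 129 parameters) already exceeds half the pixel count of the 16x16 image, which illustrates the guess above: this only pays off for large, complex images.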
concerneduser · over 8 years ago
That neural network technology is all fine and good for compressing images of lighthouses and dogs - but what about other things?
rdtsc · over 8 years ago
Now that Google is fully on the neural network deep learning train with their Tensor Processing Units, we'll be seeing NNs applied to everything. There was an article about translation; now imagine compression. It is a bit amusing, but nothing wrong with it - this is great stuff, and I am glad they are sharing all this work.

sevenless · over 8 years ago
I've been wondering when neural networks might be able to compress a movie back down to the screenplay.

acd · over 8 years ago
Is there any image compression that uses eigenfaces? It would use the fact that your face may look similar to someone else's face.

What if you used uniqueness and an eigenface lookup table for compression?

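The eigenface idea in a nutshell: learn principal components from a training set of faces, then store a new face as a handful of coefficients in that basis rather than raw pixels. Real eigenface pipelines work on aligned photographs; the randomly generated "faces" below only mimic the low-dimensional structure that makes the trick work.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins for aligned face images: each "face" is a mix of a few
# shared basis patterns plus a little noise.
n_faces, dim, k = 200, 64, 8
basis = rng.normal(size=(k, dim))
faces = rng.normal(size=(n_faces, k)) @ basis \
        + 0.01 * rng.normal(size=(n_faces, dim))

# "Eigenfaces": principal components of the training faces.
mean = faces.mean(axis=0)
_, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
eigenfaces = Vt[:k]

# Compress a new face to k coefficients instead of dim pixel values.
new_face = rng.normal(size=(1, k)) @ basis
coeffs = (new_face - mean) @ eigenfaces.T       # k numbers to store
recon = coeffs @ eigenfaces + mean              # decoded image

err = np.mean((new_face - recon) ** 2)
print(dim, "->", k, "coefficients, MSE", err)
```

This is essentially a linear, face-specific ancestor of the learned codecs in the article: the training set plays the role of the shared knowledge, and only coordinates within it are transmitted.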
zump · over 8 years ago
Compression engineers shaking in their boots.
aligajani · over 8 years ago
I knew this was coming. Great stuff.
rasz_pl · over 8 years ago
You could probably reach a 20% reduction just by building a custom quantization table (DQT) per image.

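For reference, the mechanics being tweaked here: standard JPEG encoders derive the DQT by scaling the Annex K luminance table with the IJG quality formula, one global scale for every image. A per-image scheme like the one suggested would instead search for table entries tuned to that image's statistics; the sketch below shows only the standard scaling step.

```python
import numpy as np

# Standard JPEG luminance quantization table (Annex K of the JPEG spec).
BASE_DQT = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def scaled_dqt(quality):
    """IJG-style quality scaling of the base table (libjpeg's formula)."""
    quality = min(max(quality, 1), 100)
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    table = (BASE_DQT * scale + 50) // 100
    return np.clip(table, 1, 255).astype(int)

# Lower quality -> larger divisors -> more DCT coefficients quantized to zero.
print(scaled_dqt(90)[0, :4], scaled_dqt(50)[0, :4], scaled_dqt(10)[0, :4])
```

A custom table replaces `BASE_DQT * scale` with entries chosen per image, e.g. by measuring where that image's DCT energy actually sits.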
samfisher83 · over 8 years ago
Was this inspired by Silicon Valley?

joantune · over 8 years ago
Nice!! They should call it Pied Piper :D (I can't believe I was the first one with this comment)