78% MNIST accuracy using GZIP in under 10 lines of code

363 points by js98, over 1 year ago

26 comments

montebicyclelo, over 1 year ago

I tried replacing the distance function in the code with some simpler distance measures:

    Gzip distance:         ~3 minutes,   78% accuracy
    Euclidean distance:    ~0.5 seconds, 93% accuracy
    Jaccard distance *:    ~0.7 seconds, 94% accuracy
    Dice dissimilarity *:  ~0.8 seconds, 94% accuracy

    * after binarising the images

So, as a distance measure for classifying MNIST digits, GZIP has lower accuracy and is much more computationally demanding than the other measures.

I'm not that familiar with how the GZIP algorithm works; it's kind of interesting that it's so much lower. I wonder whether image-focused compression algorithms might do better?

(Edit: Btw, I enjoyed the post; it's a creative idea, the writing and code were great, and it's sparked some good discussion. But after having a closer look, I think the baselines above provide some context to the gzip scores.)

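For reference, a minimal sketch of the simpler measures listed above, written for 28x28 uint8 MNIST images (the binarisation threshold and function names are my own illustrative choices, not the commenter's code); each can be dropped into a nearest-neighbour loop in place of the gzip-based distance:

    # Sketch: the non-gzip distance measures from the comparison above.
    import numpy as np

    def euclidean(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.sqrt(np.sum((a.astype(float) - b.astype(float)) ** 2)))

    def jaccard(a: np.ndarray, b: np.ndarray, thresh: int = 127) -> float:
        a, b = a > thresh, b > thresh          # binarise first, as noted above
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return 1.0 - inter / union if union else 0.0

    def dice(a: np.ndarray, b: np.ndarray, thresh: int = 127) -> float:
        a, b = a > thresh, b > thresh
        inter = np.logical_and(a, b).sum()
        total = a.sum() + b.sum()
        return 1.0 - 2.0 * inter / total if total else 0.0
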
GaggiX, over 1 year ago

For a comparison with other techniques:

    Linear SVC (best performance): 92%
    SVC rbf (best performance): 96.4%
    SVC poly (best performance): 94.5%
    Logistic regression (prev assignment): 89%
    Naive Bayes (prev assignment): 81%

From this blog page: https://dmkothari.github.io/Machine-Learning-Projects/SVM_with_MNIST.html

Also, it seems from reading online articles that people are able to obtain much better results just by using k-NN, so I imagine the author just made his job harder by using gzip, but I could be wrong about this.

tdr2d, over 1 year ago

The code may be elegant and compact, but 78% accuracy is considered very, very bad for MNIST.

A dummy model written with TensorFlow easily reaches 90% accuracy. The best models are ranked at 99.87%; see the benchmark: https://paperswithcode.com/sota/image-classification-on-mnist

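For scale, a minimal sketch of the kind of "dummy" TensorFlow model the comment alludes to - a single softmax layer on raw pixels (the architecture and epoch count are my own illustrative choices, not taken from the comment):

    # Sketch: a tiny Keras baseline for MNIST; even this single dense layer
    # typically lands above 90% test accuracy after a few epochs.
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, verbose=0)
    print(model.evaluate(x_test, y_test, verbose=0))  # [loss, accuracy]
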
tobeyey, over 1 year ago

Replacing the NCD

    distances = [(compute_ncd(x1, x), label) for x, _, label in compressed_lengths]

with the Euclidean distance

    distances = [(np.sqrt(np.sum(np.square(x1 - x))), label) for x, _, label in compressed_lengths]

gives you +15% test accuracy and saves you a lot of compute.

benreesman, over 1 year ago

My favorite book about the deep connections between information theory, compression, and learning algorithms is MacKay (most probably know about it, but I didn't for a long time, so maybe some will benefit from the mention).

I gather this is common knowledge (if not sufficiently emphasized at times) among those with serious educations, but as a self-taught, practical application-type ML person, this profound thread running through all these topics (and seemingly into heavy-ass particle physics and cosmology and stuff like that) was a blinding "Aha!" moment. I'll venture this comment in the hope that even one other person gets to have that unforgettable moment.

jszymborski, over 1 year ago

In fairness, you can run MNIST through UMAP and get near-perfect separation. I'm of the belief that you have to try pretty hard not to do well on MNIST these days.

https://github.com/lmcinnes/umap_paper_notebooks/blob/master/UMAP%20MNIST.ipynb

EDIT: I should add, unless it isn't clear, that we really should retire the dataset. Something like the QuickDraw dataset makes a lot more sense to me.

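A minimal sketch of the kind of UMAP embedding the linked notebook demonstrates, using the umap-learn and scikit-learn packages (the subsample size and parameters are my own choices for speed, not from the comment):

    # Sketch: embed MNIST with UMAP; plotting the 2-D embedding colored by
    # label shows the near-perfect cluster separation described above.
    import umap
    from sklearn.datasets import fetch_openml

    mnist = fetch_openml("mnist_784", version=1, as_frame=False)
    X, y = mnist.data[:10000], mnist.target[:10000]   # subsample for speed

    embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(X)
    print(embedding.shape)   # (10000, 2); scatter-plot it colored by y
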
wayeq, over 1 year ago

I'm merely a hobbyist in this domain, but isn't highly compressed data (like encrypted data) also high entropy? If this is finding patterns in the compressed data to figure out which digit the uncompressed data represents, shouldn't those patterns be exploitable for better compression?

tjungblut, over 1 year ago

I can't immediately find it, but a couple of years back there was a "meta-feature" which was the size of the MNIST image. I think that scored about 90-ish% accuracy on its own - without even looking at the image.

3abiton, over 1 year ago

Didn't it turn out that the authors of that paper had made mistakes that catapulted their results to the top of the benchmark charts? I thought the theory was inconsistent after that incident. 78% accuracy from just GZIP is impressive.

jp57, over 1 year ago

Leaving aside whether this problem is a good application of this compression trick, I want to say that everyone experimenting with this should stop using `gzip` and start using `zlib`.

If you change the first line from `gzip.compress` to `zlib.compress`, you should get the same classifier performance with a 3x speedup.

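A quick way to sanity-check that on your own machine (buffer size and iteration count are illustrative); the gap presumably comes from the gzip format's extra framing plus Python-level overhead in the gzip module, on top of the same underlying DEFLATE code:

    # Sketch: time gzip.compress vs. zlib.compress on an MNIST-sized buffer.
    import gzip, timeit, zlib
    import numpy as np

    img = (np.random.rand(28, 28) * 255).astype(np.uint8).tobytes()

    t_gzip = timeit.timeit(lambda: gzip.compress(img), number=10_000)
    t_zlib = timeit.timeit(lambda: zlib.compress(img), number=10_000)
    print(f"gzip: {t_gzip:.2f}s   zlib: {t_zlib:.2f}s")
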
bob1029, over 1 year ago

General-purpose compressors and information distance measures have become super interesting to me while I've been investigating alternative language models.

I've been playing around with an attention mechanism that combines the idea of using normalized compression distance (gzip) with discrete convolution between candidate sequences (a sliding window of N bytes over each). Another round of normalization over the convolution outputs - accommodating varying lengths - allows us to compare candidate sequences for relevant information on equal grounds.

The NCD formula I am using right now:

    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))

No weird parameters or other things to tune. The only parameters are the source documents and the input context/query.

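That formula transcribes directly into Python with gzip standing in for the compressor C (the function name and toy strings below are mine, purely illustrative):

    # Sketch: normalized compression distance with gzip as the compressor C.
    import gzip

    def ncd(x: bytes, y: bytes) -> float:
        cx = len(gzip.compress(x))
        cy = len(gzip.compress(y))
        cxy = len(gzip.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Roughly 0 for near-identical inputs, approaching 1 for unrelated ones.
    print(ncd(b"the quick brown fox " * 50, b"the quick brown fox " * 50))
    print(ncd(b"the quick brown fox " * 50, b"lorem ipsum dolor sit " * 50))
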
yannccc2, over 1 year ago

All these ideas date back to at least 2010, probably earlier. It was called "information distance": https://arxiv.org/abs/1006.3520

tysam_and, over 1 year ago

If you'd like to play around with MNIST yourself, I wrote a PyTorch training implementation that gets ~99.45%+ val accuracy in <13.6 seconds on a V100, est. <6.5 seconds on an A100. Made to be edited/run in Colab: https://github.com/tysam-code/hlb-CIFAR10

It's originally kitted for CIFAR10, but I've found the parameters to be quite general. The code is very easy to read and well-commented, and is a great starting place for exploration.

Min-cut deltas to run MNIST:

    .datasets.CIFAR10('  ->  .datasets.MNIST('   (both occurrences)
    'whiten': Conv(3,    ->  'whiten': Conv(1,
    crop_size = 28       ->  crop_size = 28

Compute for the project was funded by Carter Brown and Daniel Gross; my appreciation to them both for helping make this possible. Their support via encouragement has been very helpful as well. <3 :)

m00x, over 1 year ago

It goes along the same lines as the recent paper from DeepMind stating that DL (language modeling in their case) is compression.

https://arxiv.org/pdf/2309.10668.pdf

thomasahle, over 1 year ago

Similar result (2019) using ZIP, getting 74%: https://www.blackhc.net/blog/2019/mnist-by-zip/

0xDEF, over 1 year ago

Has anyone tried comparing how different compression algorithms do for NCD classification?

gzip first does LZ77 and then Huffman coding. ANS is a more modern alternative to Huffman coding that can achieve higher compression ratios.

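Swapping the compressor is a one-line change once the NCD is factored out; here is a sketch using only standard-library codecs (an ANS-based coder such as zstd's would need a third-party package like zstandard - that's my addition, not something the comment names):

    # Sketch: compare NCD values across standard-library compressors.
    import bz2, gzip, lzma, zlib

    def ncd(x: bytes, y: bytes, compress) -> float:
        cx, cy = len(compress(x)), len(compress(y))
        return (len(compress(x + y)) - min(cx, cy)) / max(cx, cy)

    a, b = b"0101" * 200, b"0110" * 200
    for name, compress in [("gzip", gzip.compress), ("zlib", zlib.compress),
                           ("bz2", bz2.compress), ("lzma", lzma.compress)]:
        print(name, round(ncd(a, b, compress), 3))
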
Lerc, over 1 year ago

I'm sure there was something like this mentioned in "Managing Gigabytes" or thereabouts. It might have been using bitwise arithmetic compression, which makes sense for the problem at hand.

benob, over 1 year ago

What explains the huge difference in performance with the approach from 2019 mentioned at the end?

For image classification, couldn't one use the residual from applying a JPEG codebook learned on a training example as a metric?

SomewhatLikely, over 1 year ago

Wouldn't it make more sense to create ten streams of images of the same class and then see which one results in the smallest compressed size for a test image? That is, if I gzip a hundred '6's plus my test image and get a compressed size for my test image of 10 bytes, but doing the same for the other digits gives me, say, 15 bytes, then I conclude the test image is a '6'.

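A minimal sketch of that scheme, assuming train_x / train_y / test_img are numpy uint8 arrays (the per-class sample size of one hundred comes from the comment; the rest is illustrative):

    # Sketch: pick the class whose byte stream the test image compresses
    # best against, per the idea described above.
    import gzip
    import numpy as np

    def classify_by_class_stream(test_img, train_x, train_y, per_class=100):
        best_label, best_cost = None, float("inf")
        for label in np.unique(train_y):
            sample = train_x[train_y == label][:per_class]
            stream = b"".join(img.tobytes() for img in sample)
            base = len(gzip.compress(stream))
            with_test = len(gzip.compress(stream + test_img.tobytes()))
            cost = with_test - base   # extra bytes to encode the test image
            if cost < best_cost:
                best_label, best_cost = label, cost
        return best_label

One caveat: DEFLATE's 32 KB sliding window means only the tail of a long per-class stream can actually be referenced when the appended test image is compressed.
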
yieldcrv, over 1 year ago

I want someone to show me how to use compression to compile exploits on emulated logic gates, like NSO Group keeps doing.

It seems like a sufficiently novel and important advancement that it should be taught in universities at this point, since we need to harden software around this kind of possibility.

petters, over 1 year ago

I wonder why you started with a code-golfed version? That seems orthogonal to the main point of the post.

great_psy, over 1 year ago

Having an artificial intelligence algorithm that is completely understood and tunable, like gzip is, would be great for many uses.

I think it's pretty hard to just improve a NN without more data or some non-trivial amount of effort.

tysam_and, over 1 year ago

Additionally, try flipping your images and averaging the size before comparing the distance between them. I'd expect a boost of about 78% -> 84% or so, based on how this typically works as TTA (test-time augmentation).

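One possible reading of that suggestion as code - averaging a gzip-based distance over the original and horizontally flipped images (the choice of flip axis, and of averaging distances rather than raw compressed sizes, is my guess at the intent, not the comment's wording):

    # Sketch: a test-time-augmentation style average of the gzip distance
    # over original and flipped versions of 2-D image arrays.
    import gzip
    import numpy as np

    def gzip_dist(a: np.ndarray, b: np.ndarray) -> float:
        ca = len(gzip.compress(a.tobytes()))
        cb = len(gzip.compress(b.tobytes()))
        cab = len(gzip.compress(a.tobytes() + b.tobytes()))
        return (cab - min(ca, cb)) / max(ca, cb)

    def tta_dist(a: np.ndarray, b: np.ndarray) -> float:
        return 0.5 * (gzip_dist(a, b) + gzip_dist(np.fliplr(a), np.fliplr(b)))
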
tripzilch, over 1 year ago

What they mean by "intelligence is compression" is that there's actually an equivalence between them.

See, an "intelligence" is a predictor. Like an LLM, it predicts the next char/token depending on what it's seen before.

In order to turn this into a compressor, you store the "difference" between the predicted token and the actual token. If the predictor is good, this stream of data will be very low entropy, which can be compressed with arithmetic encoding or something similar.

In the case of arithmetic encoding you get lossless compression. (Additionally, because arithmetic encoding is based on probabilities (frequencies), if the predictor outputs probabilities instead of a single prediction, this can also be used to crunch the entropy.)

Now look at, for instance, the speech codecs in GSM. They use Linear Predictive Coding, which has the same concept of using a predictor and storing the error stream, except the latter's coefficients are rounded down, making it a form of lossy compression.

And yes, you can probably make a pretty good (lossless, or perhaps even lossy, but I don't think you want that) text compressor by using an LLM to (deterministically) predict (the likelihood of) tokens and storing only the errors/differences. It should be able to outperform zip or gzip, because it can make use of language knowledge to make predictions.

There's a catch, however, which is that in the case of LLM compression you also need to store all those weights somewhere, because you need the predictor to decode. This is always the catch with compression: there is always some kind of "model" or predictor algorithm implicit to the compressor. In the case of gzip it's a model that says "strings tend to repeat in patterns", which is of course "stored" (in some sense) in the gzip executable code. But we don't count the size of gzip toward our compressed files either, because we get to compress a lot of files with one model. Similarly for this hypothetical LLM-text compression scheme - you just need to compress a whole lot more text before it's worth it.

All that said, however, like many others pointed out, 78% isn't a great score for MNIST.

Then again, I also don't think that gzip compression is the best predictor for grayscale image similarity. For one, if you have two pixels of grayscale values 65 and 66, gzip will see them as "different bytes", regardless of them being very similar in grayscale level. You might even be able to increase the score by thresholding the training set to B&W 0/255.

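A small sketch of the predictor-to-compressor link described above: under arithmetic coding, a symbol the model assigns probability p costs about -log2(p) bits, so the ideal compressed size is just the model's summed negative log-likelihood (the toy bigram "predictor" here is purely illustrative, not anything from the comment):

    # Sketch: ideal compressed size under a predictive model.
    # A better predictor -> lower cross-entropy -> smaller arithmetic-coded output.
    import math
    from collections import Counter

    text = "the quick brown fox jumps over the lazy dog " * 20

    # Toy predictor: character-bigram frequencies estimated from the text itself,
    # with add-one smoothing over a fixed 256-symbol alphabet.
    pairs = Counter(zip(text, text[1:]))
    ctx_totals = Counter(text[:-1])

    def prob(prev: str, ch: str) -> float:
        return (pairs[(prev, ch)] + 1) / (ctx_totals[prev] + 256)

    bits = sum(-math.log2(prob(p, c)) for p, c in zip(text, text[1:]))
    print(f"ideal size: {bits / 8:.0f} bytes for {len(text)} characters")
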
_a_a_a_, over 1 year ago

"MNIST"?

Accuracy - but of what?

What's this about?

Karellen, over 1 year ago

You solved some problem, *including implementing the GZIP algorithm*, in less than 10 lines of code?

...

Oh, ok, not that.