This test was done COMPLETELY WRONG.

Look at the JPEG settings recorded in the images themselves.

Subsampling is turned off for some images and on for others, which leaves the target filesize far fewer bytes to work with.

This is a common problem with Photoshop users: they use the highest quality settings, which turn off subsampling, but then reduce the filesize allotment, which gives the encoder less room to work with. If you have a target filesize, you get better results by turning off subsampling first, which Photoshop does not do by default until you drop the quality target very low.

This entire test has to be redone.

Use SUBSAMPLING OFF and PROGRESSIVE ON for all JPEG images for the web.

(And never use default Photoshop settings for web images.)

PS: every time you save a file or image in Adobe products, it embeds a hidden fingerprint (beyond EXIF) that identifies your specific install. So not only does it add extra file size, every image you post can be traced on the web. Use jpegtran or jpegoptim to strip it.
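If you want to do the stripping from the command line, here is a minimal sketch (file names are placeholders; the tools are the two named above):

    # Lossless rewrite: drop all metadata, switch to progressive
    # encoding, and optimize the Huffman tables.
    jpegtran -copy none -progressive -optimize input.jpg > output.jpg

    # Alternative: strip all metadata markers in place.
    jpegoptim --strip-all photo.jpg

jpegtran rewrites the compressed data losslessly, so this costs no image quality.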
"other companies have already started or will also start implementing this new Retina technology."<p>That is simply impossible for two reasons:<p>1) Retina display is a trademarked Apple phrase and no other company will ever have retina displays.
2) Retina is not a technology.

I think the word "technology" is being overused these days.

The author was simply talking about high-PPI displays when he said "this new Retina technology", which other companies have already "implemented" in their smartphone displays. Unless one is talking only about Apple products (which is not the case here), the term "retina display" should not be used.
This is incredible. I can't believe it's taken this long for someone to realise this (at least, it doesn't seem like common knowledge to me). Just commenting for the benefit of anyone without a retina display: the differences really are stunning. It's like night and day, and achieving that while still reducing file sizes seems crazy.

So this raises the question: should this become standard practice from now on? If not, why not?

Poor headline, though.
I suspect that in-browser sharpening of resized images (via CSS or HTML) can make up for some of the detail lost at the lower JPEG quality.

In the 60-90 quality range the differences are always minimal, especially for images lacking detail 'coverage' to begin with (like the test set on the site).

Bottom line: I do think the blogger is onto something.

Even if the "less than original size" claim doesn't always pan out due to inefficiencies in his compression process, it makes sense that in-browser sharpening would allow reduced file sizes for images displayed below their native resolution.

PS: Significant savings in JPEG size with barely any perceptible loss of detail can be achieved by anyone with JPEGMini (which, unlike JPEG2000, is 100% compatible with all browsers these days).
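For anyone who hasn't seen the technique in markup form, this is the basic move (file name and dimensions are invented for the example):

    <!-- photo@2x.jpg is 1600x1000 pixels, saved at a low JPEG quality.
         The browser scales it down to 800x500 CSS pixels, which hides
         the compression artifacts and stays sharp on high-DPI screens. -->
    <img src="photo@2x.jpg" width="800" height="500" alt="Example photo">

Averaging several source pixels into each display pixel smooths away much of the blockiness, which is part of why the low quality setting gets away with it.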
I've always been vaguely aware that JPEG gives better results at a constant quality setting as you increase the size of the image.

Firstly, the 8x8 blocks become smaller relative to the image; but beyond that, I think it's simply in the nature of compression algorithms in general, and lossy compression in particular, to produce better results when there's more source material to work with.

However, I didn't expect the improvement to be enough to cut the Gordian knot that Retina displays have forced upon us by using 2x-resolution images across the board.

I would imagine that not all source images respond equally well.

Also, using 2x images for all devices will surely quadruple decoded-image RAM requirements, which might cause performance issues.
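A quick back-of-the-envelope check of that last point, assuming the browser keeps decoded bitmaps uncompressed at 4 bytes per pixel (RGBA):

    // Memory for a decoded bitmap, in megabytes.
    var decodedMB = function (w, h) { return (w * h * 4) / (1024 * 1024); };
    console.log(decodedMB(1000, 1000).toFixed(1)); // "3.8"  -- 1x image
    console.log(decodedMB(2000, 2000).toFixed(1)); // "15.3" -- 2x image, 4x the RAM

The files on the wire may shrink, but decoded memory scales with pixel count regardless of JPEG quality.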
This is fascinating, as I discovered exactly the same thing this week (sadly, after applying loads of javascript to swap in retina images).

I found that a quality-30 JPEG at retina size generally looks better than a quality-80 JPEG at normal size, and is smaller.

Plus you can zoom in on any Mac, not just an iPad (something I do all the time).
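If anyone wants to reproduce the comparison, a sketch using ImageMagick (file names and widths are made up; only the -resize and -quality settings matter):

    # 2x variant: double the display width, aggressive quality setting.
    convert original.png -resize 1600x -quality 30 photo-2x.jpg

    # Conventional variant at display width, for comparison.
    convert original.png -resize 800x -quality 80 photo.jpg

Then display both at 800px wide and compare file sizes and apparent sharpness.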
Why not just figure out the pixelRatio once and then serve images according to that?

See this gist: https://gist.github.com/3848834
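I haven't dug into the gist, but the general idea presumably looks something like this (cookie name and data attribute are invented for the sketch):

    // Record the device pixel ratio once so the server can pick assets.
    var ratio = window.devicePixelRatio || 1;
    document.cookie = 'pixelRatio=' + ratio + '; path=/';

    // Or swap sources client-side for high-DPI screens.
    if (ratio > 1.3) {
      var imgs = document.querySelectorAll('img[data-src-2x]');
      for (var i = 0; i < imgs.length; i++) {
        imgs[i].src = imgs[i].getAttribute('data-src-2x');
      }
    }

One catch: pixelRatio can change (e.g. dragging a window between monitors), so "once" is an approximation.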
Smart. It now seems like it should have been obvious all along.

You want to reduce the size of an image file from X kB down to Y kB. Which method will give better-looking results?

1. Dumb, across-the-board by-two resolution reduction?

2. Smart, perceptually tuned JPEG compression?

We probably should have been using this all along. That we can also benefit from the extra resolution thanks to touch interfaces and high-DPI displays is icing.
While I can appreciate the technique, I haven't adopted the use of @2x images for photos. Most photos tend to hold up pretty well when scaled (soft edges and relatively low contrast). If you're serving up a portfolio, sure, but I find more value in maintaining and serving @2x assets (based on media queries for min-device-pixel-ratio > 1.3) for UI elements or logos not easily reproduced in CSS/SVG.
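For reference, that media-query setup typically looks something like this (class name and file names invented for the example):

    /* Default 1x logo; background-size pins the layout size. */
    .logo {
      background-image: url(logo.png);
      background-size: 100px 40px;
    }

    /* High-DPI screens: 1.3dppx equals a device pixel ratio of 1.3. */
    @media (-webkit-min-device-pixel-ratio: 1.3), (min-resolution: 1.3dppx) {
      .logo { background-image: url(logo@2x.png); }
    }

Because background-size is fixed, swapping in the @2x file changes only the source resolution, not the layout.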
Surely this works because the compression algorithm can find more predictability in an image before its resolution is reduced. You see this effect a lot when you are trying to reach very high compression levels. However, it is difficult to prove which settings are best without actively trying different configurations.
This is so wrong.

1) He didn't sharpen the small images.

2) He only displayed one type of image: very bright, with no shadow detail.

This theory breaks completely if you actually compare apples to apples. Like these two, both 80 KB, generated from a high-quality original:

http://i.imgur.com/E3gaB.jpg

http://i.imgur.com/UrDEx.jpg
Good observation, but too bad that no one will ever see your high-DPI images, because the "retina revolution" isn't a thing. Most people won't ever notice the difference.