Fighting JPEG color banding

136 points, by igoradamenko, almost 3 years ago

14 comments

RicoElectrico, almost 3 years ago

I remember that almost 20 years ago I played with a utility that allowed manual JPEG optimization by painting regions with the desired quality settings. I think it was included on a CD bundled with "Magazyn internetowy WWW" [1]. Does anyone remember such a program?

[1] https://sprzedajemy.pl/www-magazyn-internetowy-2003-nr-04-03-72-warszawa-2-402203-nr53783895 - incidentally, this must be the very issue; see the teaser about image optimization
lynguist, almost 3 years ago

This is OK for icon-sized images, but it hurts me when I read webpages that incorporate photographs or visualizations in low resolution. They keep saying "web resolution" and talk about something like 200x200 images maximum.

Even for the past 20 years, when I read webpages with images, I inspect them closely. Often it's something like a newspaper article with photographs, or Wikipedia. I specifically set my Wikipedia settings to deliver images at the highest possible resolution, and now I can actually enjoy reading it. Specifically, I use the Timeless skin and set the thumbnail size to the maximum.

Sometimes I come across webpages that describe something like historic trains, and all they have are icon-sized photographs of them. It's so sad.
viraptor, almost 3 years ago

Something I'd love to try if I had time: a compressor which derives the best table for the image. I'm imagining a loop of: start with the default, compress, calculate the difference from the original, DCT the error to see which patterns are missed, adjust the table, repeat. Stop at some given error/size-increase ratio. (Yes, I'm trying to get someone else nerd-sniped into doing this.)

Edit: something like this: https://www.imaging.org/site/PDFS/Papers/2003/PICS-0-287/8494.pdf

So among the older known approaches there are the DCTune and DCTex methods, but it seems neither is available for download anywhere.
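A minimal, illustrative sketch of the loop the comment describes (the function names and the refinement heuristic are my own invention, not DCTune): take a block's DCT, measure the per-coefficient quantization error, and shrink the quantization step where the error is worst.

```python
import math

def dct2(block):
    """Naive orthonormal 8x8 2-D DCT-II (the transform JPEG quantizes)."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                for x in range(n) for y in range(n))
    return out

def refine_table(block, table, rounds=4):
    """Toy version of the loop above: quantize the coefficients,
    find the coefficient with the largest quantization error, and
    halve its quantization step. Repeat a few times."""
    coeffs = dct2(block)
    for _ in range(rounds):
        err = [[abs(coeffs[u][v]
                    - round(coeffs[u][v] / table[u][v]) * table[u][v])
                for v in range(8)] for u in range(8)]
        # worst offender gets a finer quantization step
        u, v = max(((u, v) for u in range(8) for v in range(8)),
                   key=lambda p: err[p[0]][p[1]])
        table[u][v] = max(1, table[u][v] // 2)
    return table
```

A real optimizer would weigh the rate cost of smaller steps against the distortion (as the linked paper does); this only shows the shape of the feedback loop.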
hansword, almost 3 years ago

I love this and would have dearly needed it about 5 years ago. Still, it is a very interesting read now.

But given what we have already seen from Nvidia on video compression [0], I think within the next few years we will move everything to machine-learning-'compressed' images (i.e. transmitting a super-low-res seed image plus some additional ASCII, and having an ML model reconstruct and upscale it on the client side).

[0] https://www.dpreview.com/news/5756257699/nvidia-research-develops-a-neural-network-to-replace-traditional-video-compression
lifthrasiir, almost 3 years ago

The original JPEG, retroactively named JPEG 1, entirely lacks loop filters, so the quantization factor of the DC coefficient matters much more than in modern formats. As an example, libjpeg q89 is noticeably worse than q90 because the DC quantization factor changes from 3 to 4 (a smaller factor means less quantization and thus higher quality), which is quite a big jump.
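That q89/q90 jump can be checked against libjpeg's published quality-scaling formula. A sketch (`DC_BASE` is the first entry of the standard luminance quantization table from ITU-T T.81, Annex K):

```python
def ijg_scale(base: int, quality: int) -> int:
    """Scale one base quantization value the way libjpeg's
    jpeg_quality_scaling() does, clamped to the legal 1..255 range."""
    quality = max(1, min(100, quality))
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    return max(1, min(255, (base * scale + 50) // 100))

DC_BASE = 16  # DC entry of the standard luminance table

print(ijg_scale(DC_BASE, 90))  # 3
print(ijg_scale(DC_BASE, 89))  # 4
```

At q90 the scale factor is 20, giving (16·20 + 50) // 100 = 3; at q89 it is 22, giving 4 — a 33% coarser DC step from a single quality notch.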
cabirum, almost 3 years ago

There's also WebP and HEIF and PNG and SVG, and I believe the existing formats already solve the image compression problem. The difference of 18 kB vs 22 kB from hours of micro-optimization is frankly irrelevant given the rate at which networks are getting faster.
yboris, almost 3 years ago

Just use JPEG XL (aka .jxl): https://jpegxl.info/

You can re-save a JXL image a thousand times without deterioration :)
rasz, almost 3 years ago

Does this mean the default quantization tables were badly picked after all? And someone only noticed after 30 years?

It sounds like the employed solution only modifies the first element, from [16,17] to [10,16].
erlndgdt, almost 3 years ago

How is this different from, say, what https://kraken.io is doing? When I upload images there, I get a smaller file size and the image looks no different.
saltminer, almost 3 years ago

> Ok, but where does this table come from when we need to save a file? It would be a big complication if you had to construct and transmit 64 independent numbers as a parameter. Instead, most encoders provide a simple interface to set all 64 values simultaneously. This is the well-known "quality," whose value can range from 0 to 100. So, we just give the encoder the desired quality and it scales some "base" quantization table. The higher the quality, the lower the values in the quantization table.

I never really thought about how that "quality" slider worked (besides making the compression lossier), but it makes perfect sense now! It always amazes me how much I take for granted.

I always treat compression like a black box: "-crf 23" for H.264; PNG and FLAC are nice, but 320 kbps MP3s and 90+ "quality" JPEGs are good compromises; etc. And that's just the stuff I deal with; there's no telling how much lossy compression goes on behind the scenes on my own computers, let alone all the stuff served up over the internet. There's so much lossy compression in the world, from EP speed on VHS tapes, to websites re-encoding uploaded images, to every online video ever. It's crazy to think about.
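The scaling the quoted passage describes can be sketched with the standard luminance base table and libjpeg's quality formula (illustrative; other encoders use different base tables and scaling rules):

```python
# Standard luminance base quantization table (ITU-T T.81, Annex K),
# in row-major order.
BASE_LUMA = [
    16, 11, 10, 16,  24,  40,  51,  61,
    12, 12, 14, 19,  26,  58,  60,  55,
    14, 13, 16, 24,  40,  57,  69,  56,
    14, 17, 22, 29,  51,  87,  80,  62,
    18, 22, 37, 56,  68, 109, 103,  77,
    24, 35, 55, 64,  81, 104, 113,  92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103,  99,
]

def scale_table(base, quality):
    """Derive a full 64-entry quantization table from a 0-100 quality
    setting, libjpeg-style. Quality 50 reproduces the base table."""
    quality = max(1, min(100, quality))
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    return [max(1, min(255, (b * scale + 50) // 100)) for b in base]

# Higher quality -> uniformly smaller (or equal) quantization steps.
q95, q75 = scale_table(BASE_LUMA, 95), scale_table(BASE_LUMA, 75)
assert all(a <= b for a, b in zip(q95, q75))
```

One quality number thus stands in for all 64 values, which is exactly why no encoder needs you to transmit the table by hand.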
Firadeoclus, almost 3 years ago

Though its impact may be limited, this is some nice work!

Now could someone look at how video codecs can produce excellent high-detail scenes and motion at 4K resolution while at the same time making a blocky mess out of soft gradients, especially in dark scenes with slow movement?
LordDragonfang, almost 3 years ago

If you want a similar read on DCT-based encoding, I highly recommend this article, which lays it out in an extremely digestible format: https://sidbala.com/h-264-is-magic/
mcdonje, almost 3 years ago

I ran into this problem exporting images from darktable. The solution I ended up going with was simply exporting as PNG instead of JPEG.
hulitu, almost 3 years ago

Just use PNG. JPEG sucks.