The article doesn't put enough emphasis on the need for gamma correction. If you dither a pure gray area of level 128, it will become approximately 50% white and 50% black. But when you display that back on the screen, because of gamma effects it will look closer to a level of 186!<p>A few years ago there was a need for 15-bpp or 16-bpp color images on phones rather than the 24-bpp images we usually work with, and dithering was a great way of producing them. No idea how much need there is today though.
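To make the gamma point concrete, here is a minimal sketch using the common gamma-2.2 approximation of display gamma (the exact sRGB transfer curve differs slightly; the 186 figure comes from the 2.2 approximation):

```python
GAMMA = 2.2  # common approximation of display gamma

def to_linear(v):
    # encoded value in [0, 1] -> linear light
    return v ** GAMMA

def to_encoded(v):
    # linear light in [0, 1] -> encoded value
    return v ** (1 / GAMMA)

# Encoded gray level 128 is only about 22% linear light:
lin = to_linear(128 / 255)

# A 50/50 black-and-white dither emits 50% linear light, which
# corresponds to an encoded level of ~186, visibly lighter than 128:
print(round(to_encoded(0.5) * 255))  # 186

# Gamma-correct dithering therefore works on the *linear* value, so
# the dot density matches the intended light output:
dot_density = lin  # ~22% white pixels, not 50%
```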
There's a really good technique for getting rid of the wormy or snake-like textures in Floyd-Steinberg: add another term to the threshold, proportional to (some monotonic function of) the distance to the nearest preceding dot. This tends to make the dots very nicely spaced. These ideas were used in Gutenprint, among other places.<p>A bit of description, and ancient but working GPL'ed code here: <a href="http://www.levien.com/artofcode/eventone/" rel="nofollow">http://www.levien.com/artofcode/eventone/</a><p>A paper containing the basic output-dependent feedback idea: <a href="http://levien.com/output_dependent_feedback.pdf" rel="nofollow">http://levien.com/output_dependent_feedback.pdf</a>
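As a rough illustration of the output-dependent feedback idea, here is a hypothetical 1-D simplification (not the actual Eventone code, and `strength` is a made-up tuning knob): the threshold is lowered as the distance since the last output dot grows, pulling dots toward even spacing.

```python
def dither_1d_odf(samples, strength=0.02):
    """1-D error diffusion with a crude output-dependent feedback term.

    The quantization threshold drops in proportion to the number of
    samples since the last 'on' output -- a monotonic function of the
    distance to the nearest preceding dot.
    """
    out = []
    err = 0.0
    since_last = 0
    for s in samples:                     # samples in [0, 1]
        v = s + err                       # add diffused error
        thresh = 0.5 - strength * since_last
        o = 1.0 if v >= thresh else 0.0
        out.append(o)
        err = v - o                       # diffuse all error forward
        since_last = 0 if o else since_last + 1
    return out
```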
Dithering trades sample resolution for sample frequency to convey the same information. For images, the underlying sample frequency usually doesn't change but the <i>apparent</i> spatial sample frequency becomes lower in order to achieve more than one effective bit per (coarser) sample.<p>For one-dimensional signals like audio, the underlying sample frequency is usually increased while decreasing the sample resolution (often to 1 bit per sample). This keeps the effective Nyquist frequency where it needs to be while pushing noise much higher in frequency where it's very easy to remove with an analog filter. Delta-sigma modulation is perhaps the most common method of audio dithering (although the delta-sigma literature rarely uses the D word).<p>The reason images and audio usually use dithering in opposite ways is that images are usually post-processed for lower true sample resolution (and higher effective sample resolution) <i>after</i> sampling, while audio is often sampled <i>initially</i> at much higher than the Nyquist frequency because dithering is a planned part of the audio processing chain. But not always! Those are merely common use cases.
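A first-order delta-sigma modulator can be sketched in a few lines (assuming an oversampled input in [-1, 1]; real converters use higher-order loops):

```python
def sigma_delta_1bit(samples):
    # First-order delta-sigma modulator: the integrator accumulates the
    # difference between the input and the fed-back 1-bit output, so the
    # quantization error is pushed up in frequency (noise shaping),
    # where a simple low-pass filter can remove it.
    out = []
    integrator = 0.0
    feedback = 0.0
    for s in samples:                 # oversampled input in [-1, 1]
        integrator += s - feedback
        bit = 1.0 if integrator >= 0 else -1.0
        out.append(bit)
        feedback = bit
    return out

# Averaging (low-pass filtering) the 1-bit stream recovers the signal:
bits = sigma_delta_1bit([0.5] * 1000)
print(sum(bits) / len(bits))          # ~0.5
```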
Also a paper about halftoning on laser printers:<p><a href="http://users.eecs.northwestern.edu/~pappas/papers/pappas_ist94.pdf" rel="nofollow">http://users.eecs.northwestern.edu/~pappas/papers/pappas_ist...</a><p>I have never used a laser printer that implemented anything like this; the usual halftoning sucks. Once I handcrafted a file with a 600 dpi Floyd-Steinberg image (the native resolution of the printer I had) and it produced much better results, though I didn't bother calibrating the gray levels.
What a coincidence, I just stumbled upon this article a few hours ago while trying, out of curiosity, to find out which error diffusion algorithm Photoshop uses.<p>I've been playing with dithering recently to create braille art[0], and this series of articles[1] by the libcaca developers has been a huge help. It also covers model-based dithering algorithms, which tend to give the best results.<p>[0]: Example <a href="https://pastebin.com/raw/cRt4GL8j" rel="nofollow">https://pastebin.com/raw/cRt4GL8j</a><p>[1]: <a href="http://caca.zoy.org/study/index.html" rel="nofollow">http://caca.zoy.org/study/index.html</a>
Two modern use cases where dithering is more important than ever:<p>- Tone mapping HDR to display colors<p>- Alpha-style transparency in deferred rendering<p>Another case I've heard about but don't know many details of is audio.<p>Dithering is important and powerful any time some precision is discarded.
The article mentions ordered dithering but fails to list void-and-cluster and similar variants. Those parallelize really well (unlike error diffusion), don't produce obvious patterns (unlike plain ordered dithering), and can be run on GPUs. That's quite useful for dithering high-bit-depth video down to 8-bit in real time. Dithering HDR content has the benefit of not introducing banding on SDR displays.
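Ordered dithering is just a per-pixel comparison against a tiled threshold matrix, which is why it parallelizes so well; void-and-cluster only changes how that matrix is generated. A minimal sketch with a classic Bayer matrix (not void-and-cluster) in plain Python:

```python
def bayer(n):
    # Recursively build a 2**n x 2**n Bayer threshold matrix
    # containing each value 0 .. 4**n - 1 exactly once.
    m = [[0, 2], [3, 1]]
    for _ in range(n - 1):
        size = len(m)
        new = [[0] * (2 * size) for _ in range(2 * size)]
        for y in range(size):
            for x in range(size):
                v = 4 * m[y][x]
                new[y][x] = v
                new[y][x + size] = v + 2
                new[y + size][x] = v + 3
                new[y + size][x + size] = v + 1
        m = new
    return m

def ordered_dither(img, m):
    # img: 2-D list of floats in [0, 1]. Each pixel reads only its own
    # value and one matrix entry, so every pixel can be processed in
    # parallel (e.g. in a GPU shader) -- unlike error diffusion.
    size = len(m)
    denom = size * size
    return [[1 if img[y][x] * denom > m[y % size][x % size] else 0
             for x in range(len(img[0]))]
            for y in range(len(img))]
```

Swapping the Bayer matrix for a void-and-cluster (blue-noise) matrix keeps the same per-pixel comparison but removes the regular crosshatch patterns.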
Seeing a dithered image really brings me back to the mid-'90s. Seeing a dithered photograph reminds me of the early web or AOL. You used to remove colors from GIFs to save space on web pages, keeping as few as you could as long as the image was still tolerable.<p>With a 1MB SVGA card, you could pick between 16-bit color at 800x600, or 8-bit (256 colors) at 1024x768. Did you value higher resolution, or not having to palette shift every time you switched apps?
I was showing coworkers an example of the Floyd-Steinberg algorithm today.<p>The following images have the same number of colors, namely black and white.<p><a href="https://i.imgur.com/stQUl5E.gif" rel="nofollow">https://i.imgur.com/stQUl5E.gif</a><p><a href="https://i.imgur.com/mw8IX9N.gif" rel="nofollow">https://i.imgur.com/mw8IX9N.gif</a><p>Source image:<p><a href="https://i.imgur.com/diR72k2.jpg" rel="nofollow">https://i.imgur.com/diR72k2.jpg</a>
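For anyone curious, Floyd-Steinberg fits in a dozen or so lines; a plain-Python sketch for a grayscale image stored as floats in [0, 1]:

```python
def floyd_steinberg(img):
    # img: 2-D list of floats in [0, 1]; quantized in place to 0/1.
    # The quantization error at each pixel is diffused to its not-yet-
    # visited neighbours with the classic 7/16, 3/16, 5/16, 1/16 weights.
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y][x] = new
            err = old - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16      # right
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16  # below-left
                img[y + 1][x] += err * 5 / 16          # below
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16  # below-right
    return img
```

Note that border pixels simply drop the part of the error that would fall outside the image, so flat areas come out very slightly off near the edges.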
<a href="http://uwspace.uwaterloo.ca/bitstream/10012/3867/1/thesis.pdf" rel="nofollow">http://uwspace.uwaterloo.ca/bitstream/10012/3867/1/thesis.pd...</a> is one of the best general papers I've read on the subject.
These images, especially the last one, bring back a tonne of nostalgia about early 90s Apple Macintoshes and my HP Deskjet 310.<p>I wonder what artifacts of the limitations of modern technology will be remembered with nostalgia by those growing up with today's equipment.