Differentiable Dithering

144 points by underanalyzer over 4 years ago

13 comments

dgant over 4 years ago

A great read on dithering: Lucas Pope's development blogs while working on Return of the Obra Dinn.

It's an incredible dive into how he created the game's remarkable and unique look, featuring a wonderful and unexpected mathematical contribution from a forum member. If you're not familiar with the game, peek at a trailer to see what an achievement it was.

https://forums.tigsource.com/index.php?topic=40832.msg1363742#msg1363742
bane over 4 years ago

On dithering: the original PlayStation had built-in support for dithering. On CRT televisions it helped provide a better-looking image, and it's a huge part of the "look" of the system.

https://www.youtube.com/watch?v=bi-Wzl6BwRM&feature=emb_title&ab_channel=ModernVintageGamer
anilgulecha over 4 years ago

I've recently done a few things around dithering, and found this site good for experimenting:

https://ditherit.com/

It's open source: https://github.com/alexharris/ditherit-v2
londons_explore over 4 years ago

> A pipedream would be an entirely differentiable image compression pipeline where all the steps can be fine tuned together to optimize a particular image with respect to any differentiable loss function.

Neural image compression? https://arxiv.org/abs/1908.08988
steerablesafe over 4 years ago

It's a very interesting approach; however, once you have the probability distribution for each pixel, independent random sampling produces a poor dither pattern compared to Floyd-Steinberg or other error diffusion approaches.

I think once you have the target distributions, you could combine the sampling with some error diffusion approach. The idea is to make the sampling of neighboring pixels negatively correlated, so that the colors average out at a shorter length scale.

As a sledgehammer approach, you could put a blur in your loss function and sample from the joint probability distribution of all the pixels (i.e. sample whole images). That would probably make the calculation even more expensive, or possibly infeasible.
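For readers unfamiliar with the baseline this comment compares against, classic Floyd-Steinberg error diffusion can be sketched in a few lines of NumPy. This is an illustrative grayscale version, not code from the article:

```python
import numpy as np

def floyd_steinberg(img, palette):
    """Floyd-Steinberg error diffusion on a grayscale image.

    img: 2-D float array in [0, 1]; palette: 1-D array of allowed levels.
    """
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            # Quantize to the nearest palette level.
            new = palette[np.argmin(np.abs(palette - old))]
            out[y, x] = new
            err = old - new
            # Diffuse the quantization error to not-yet-visited neighbors
            # with the classic 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```

Because each pixel's error is pushed onto its neighbors, nearby pixels end up negatively correlated, which is exactly the property independent per-pixel sampling lacks.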
Const-me over 4 years ago

Last time I did dithering was for PolyJet 3D printers. The problem is substantially different from what's in the article.

The palette is fixed, as the colors are physically different materials. The amount of data is huge: an image is a layer, and the complete model has thousands of layers, because 3D.

I implemented a 3D generalization of ordered dithering (https://en.wikipedia.org/wiki/Ordered_dithering). The algorithm doesn't have any data dependencies across voxels; the result only depends on the source data and the position of the voxel. I did it on the GPU with HLSL shaders, and it takes a few seconds to produce thousands of images.
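For context, 2-D ordered dithering with a Bayer matrix looks roughly like the sketch below (a hypothetical NumPy version, not Const-me's HLSL implementation). The 3D generalization mentioned would replace the tiled 2-D threshold matrix with a 3-D threshold volume; the per-pixel independence is what makes it embarrassingly parallel on a GPU:

```python
import numpy as np

# 4x4 Bayer threshold matrix, normalized to [0, 1).
BAYER_4 = np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) / 16.0

def ordered_dither(img):
    """Threshold each pixel of a [0, 1] grayscale image against a
    tiled Bayer matrix. Each output pixel depends only on its own
    value and position, so there are no data dependencies."""
    h, w = img.shape
    thresh = np.tile(BAYER_4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (img > thresh).astype(np.uint8)
```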
phonebucket over 4 years ago

Fun. I never considered differentiable dithering before.

It would be interesting to see results using a content loss function as defined by Gatys (2015), as opposed to the L2 loss as given. That should hopefully capture more long-distance structure in the image, rather than optimising each pixel independently.
dbaupp over 4 years ago

Very interesting!

This seems somewhat similar to the recently published GIFnets [1]. However, I believe GIFnets trains a reusable network to predict palettes and pixel assignments, while this post focuses on optimising the "weights" (i.e. pixel values) for a single image.

I wonder if the loss functions from GIFnets could be applied to this single-image approach, to potentially solve the banding problem via something a little more "perceptual" than the variance term mentioned.

[1]: "GIFnets: Differentiable GIF Encoding Framework" https://arxiv.org/abs/2006.13434
SimplyUnknown over 4 years ago

Looks cool!

Two questions:

- Is this approach also learning the palette? It is kind of presented as a given here, but it is of course very important for good dithering.

- The loss function might work better on spatially downsampled images. Downsampling mixes the image colors, making the dithered image look more like the original given a good dithering. It also naturally removes the variance that is now penalized in the loss function, as this is blurred away.
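The second suggestion amounts to comparing block averages rather than raw pixels. A minimal sketch of such a loss (hypothetical, not from the article):

```python
import numpy as np

def downsampled_l2(dithered, original, k=4):
    """L2 loss on k-by-k block averages instead of raw pixels.

    Averaging mixes neighbouring colors first, so a well-dithered
    image can score near zero even though individual pixels differ
    from the original.
    """
    h, w = original.shape
    h, w = h - h % k, w - w % k  # crop to a multiple of k
    def pool(img):
        return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return np.mean((pool(dithered) - pool(original)) ** 2)
```

For example, a 0/1 checkerboard pooled over 2x2 blocks matches a flat 50% gray exactly, even though every individual pixel is off by 0.5.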
bufferoverflow over 4 years ago

What are the applications for dithering these days? I understand it was needed when we had 4- or 16- or 256-color limits, but now we have 8-bit/channel displays, and 10-bit is becoming popular.
enriquto over 4 years ago

A straightforward implementation of differentiable dithering consists in applying a large-support band-pass filter to the image (so that it becomes zero-mean), and then thresholding it at 0. Sure, you lose the property that average colors over large regions are conserved, but the image is perfectly recognizable, even with higher contrast than the original.
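A rough illustration of this recipe, with a difference of box blurs standing in for the band-pass filter (the blur radii are arbitrary choices, not from the comment):

```python
import numpy as np

def box_blur(img, r):
    """Naive box blur with radius r (zero padding); enough for a demo."""
    h, w = img.shape
    p = np.pad(img, r)
    out = np.zeros_like(img, dtype=float)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / k ** 2

def bandpass_dither(img, r_small=1, r_large=6):
    """Band-pass the image (difference of a small and a large blur,
    which is roughly zero-mean), then threshold at 0."""
    band = box_blur(img, r_small) - box_blur(img, r_large)
    return (band > 0).astype(np.uint8)
```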
lokl over 4 years ago

Might be better with the CIELAB color space, where "difference" is closer to "perceptual difference."
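For reference, the standard sRGB-to-CIELAB conversion (D65 white point) that such a perceptual loss would build on:

```python
def srgb_to_lab(r, g, b):
    """Convert one sRGB color (components in [0, 1]) to CIELAB (D65)."""
    def lin(c):  # undo the sRGB gamma curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # Linear RGB -> XYZ (sRGB primaries, D65 white)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    def f(t):
        d = 6 / 29
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Euclidean distance in (L*, a*, b*) then approximates perceived color difference far better than distance in raw sRGB.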
tlarkworthy over 4 years ago

It seems like the gains in palette information are wasted on precise placement of pixels for dithering: a net loss, IMHO, except for naive formats like bitmap. Interesting nevertheless, but I guess we could do better by optimizing against the storage format. But then we are at the state of the art.