
Differentiable Dithering

144 points by underanalyzer, over 4 years ago

13 comments

dgant, over 4 years ago
A great read on dithering: Lucas Pope's development blogs while working on Return of the Obra Dinn.

It's an incredible dive into how he created the game's remarkable and unique look, featuring a wonderful and unexpected mathematical contribution from a forum member. If you're not familiar with the game, peek at a trailer to see what an achievement it was.

https://forums.tigsource.com/index.php?topic=40832.msg1363742#msg1363742
bane, over 4 years ago
On dithering: the original PlayStation had built-in support for dithering. On CRT televisions it helped produce better-looking visuals, and it's a huge part of the "look" of the system.

https://www.youtube.com/watch?v=bi-Wzl6BwRM&feature=emb_title&ab_channel=ModernVintageGamer
anilgulecha, over 4 years ago
I've recently done a few things around dithering, and found this site good for experimenting:

https://ditherit.com/

It's open source: https://github.com/alexharris/ditherit-v2
londons_explore, over 4 years ago
> A pipedream would be an entirely differentiable image compression pipeline where all the steps can be fine tuned together to optimize a particular image with respect to any differentiable loss function.

Neural image compression? https://arxiv.org/abs/1908.08988
steerablesafe, over 4 years ago
It's a very interesting approach; however, once you have the probability distribution for each pixel, independent random sampling produces a poor dither pattern compared to Floyd-Steinberg or other error-diffusion approaches.

I think once you have the target distributions, you could combine the sampling with some error-diffusion approach. The idea is to make the sampling of neighboring pixels negatively correlated, so that the colors average out at a shorter length scale.

As a sledgehammer approach, you could put a blur in your loss function and sample from the joint probability distribution of all the pixels (i.e. sample whole images). That would probably make the calculation even more expensive, or possibly infeasible.
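The contrast with independent sampling comes from exactly this negative correlation. A minimal grayscale Floyd-Steinberg sketch (the function name and the two-level palette here are illustrative, not from the post):

```python
import numpy as np

def floyd_steinberg(img, palette):
    """Error diffusion: each pixel's quantization error is pushed onto
    unprocessed neighbours, so nearby choices are negatively correlated
    and the palette colors average out over short distances."""
    img = img.astype(np.float64).copy()   # grayscale, values in [0, 1]
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = palette[np.argmin(np.abs(palette - old))]  # nearest level
            out[y, x] = new
            err = old - new
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out

gradient = np.tile(np.linspace(0.0, 1.0, 64), (16, 1))
dithered = floyd_steinberg(gradient, np.array([0.0, 1.0]))
```

Because the error is conserved as it propagates, local averages of the output track the input closely, which is what independent per-pixel sampling fails to guarantee.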
Const-me, over 4 years ago
The last time I did dithering, it was for PolyJet 3D printers. The problem is substantially different from what's in the article.

The palette is fixed, since the colors are physically different materials. The amount of data is huge: an image is a layer, and a complete model has thousands of layers, because 3D.

I implemented a 3D generalization of ordered dithering (https://en.wikipedia.org/wiki/Ordered_dithering). The algorithm doesn't have any data dependencies across voxels; the result depends only on the source data and the position of the voxel. I did it on the GPU with HLSL shaders, and it takes a few seconds to produce thousands of images.
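A 2D sketch of the ordered dithering being generalized here, assuming the classic recursive Bayer threshold matrix (the names and the toy gradient input are mine; the 3D version would additionally index the threshold pattern by layer):

```python
import numpy as np

def bayer_matrix(n):
    """Recursively build a (2^n x 2^n) Bayer threshold matrix in (0, 1)."""
    m = np.array([[0.0, 2.0], [3.0, 1.0]])
    for _ in range(n - 1):
        m = np.block([[4 * m,     4 * m + 2],
                      [4 * m + 3, 4 * m + 1]])
    return (m + 0.5) / m.size

def ordered_dither(img, n=3):
    """Each output pixel depends only on its own value and position --
    no data dependencies across pixels, so the whole image (or volume)
    can be processed in parallel, e.g. in a GPU shader."""
    t = bayer_matrix(n)
    h, w = img.shape
    thresholds = np.tile(t, (h // t.shape[0] + 1, w // t.shape[1] + 1))[:h, :w]
    return (img > thresholds).astype(np.float64)

gradient = np.tile(np.linspace(0.0, 1.0, 64), (16, 1))
binary = ordered_dither(gradient)
```

The per-pixel independence is the design point: unlike error diffusion, nothing here needs the result of a neighbouring pixel, which is why it maps well to shaders.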
phonebucket, over 4 years ago
Fun. I'd never considered differentiable dithering before.

It would be interesting to see results using a content loss function as defined by Gatys (2015), as opposed to the L2 loss as given. That should capture more long-distance structure in the image, rather than optimizing each pixel independently.
dbaupp, over 4 years ago
Very interesting!

This seems somewhat similar to the recently published GIFnets [1]. However, I believe GIFnets trains a reusable network to predict palettes and pixel assignments, while this post focuses on optimizing the "weights" (i.e. pixel values) for a single image.

I wonder if the loss functions from GIFnets could be applied to this single-image approach, to solve the banding problem via something a little more "perceptual" than the variance term mentioned.

[1]: "GIFnets: Differentiable GIF Encoding Framework", https://arxiv.org/abs/2006.13434
SimplyUnknown, over 4 years ago
Looks cool!

Two questions:

- Is this approach also learning the palette? It is kind of presented as a given here, but it is of course very important for a good dithering.

- The loss function might work better on spatially downsampled images. Downsampling mixes the image colors, so a well-dithered image looks more like the original. It also naturally removes the variance that is currently penalized in the loss function, since that is blurred away.
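The second suggestion can be made concrete. A rough sketch of a blurred/downsampled loss, assuming a simple box blur stands in for the downsampling (all names here are illustrative, not from the post):

```python
import numpy as np

def box_blur(img, k=4):
    """Separable k-by-k box filter, a cheap stand-in for downsampling."""
    kern = np.ones(k) / k
    img = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, img)

def blurred_l2_loss(dithered, original, k=4):
    """L2 distance after blurring both images: a well-mixed dither
    pattern matches the blurred original closely, and the per-pixel
    noise that a variance term would penalize is averaged away."""
    return np.mean((box_blur(dithered, k) - box_blur(original, k)) ** 2)

rng = np.random.default_rng(0)
original = np.full((32, 32), 0.5)
dithered = (rng.random((32, 32)) < original).astype(float)  # ~50% black/white
plain   = np.mean((dithered - original) ** 2)   # large: every pixel is off by 0.5
blurred = blurred_l2_loss(dithered, original)   # small: local averages match
```

A uniform random dither of a flat 50% gray is maximally wrong pixel-by-pixel but nearly perfect after blurring, which is exactly the behaviour the commenter is asking the loss to reward.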
bufferoverflow, over 4 years ago
What are the applications for dithering these days? I understand it was needed when we had 4-, 16-, or 256-color limits. But now we have 8-bit/channel displays, and 10-bit is becoming popular.
enriquto, over 4 years ago
A straightforward implementation of differentiable dithering consists of applying a large-support band-pass filter to the image (so that it becomes zero-mean), and then thresholding it at 0. Sure, you lose the property that average colors over large regions are preserved, but the image is perfectly recognizable, even with higher contrast than the original.
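A toy version of this idea, using a difference of box blurs as the band-pass filter (the filter sizes, names, and step-edge test image are my own choices, not the commenter's):

```python
import numpy as np

def box_blur(img, k):
    """Separable k-by-k box filter (k=1 is the identity)."""
    kern = np.ones(k) / k
    img = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, img)

def bandpass_threshold_dither(img, k_small=1, k_large=8):
    """Difference of two box blurs approximates a band-pass filter; the
    result is roughly zero-mean, so thresholding at 0 yields a binary
    image that keeps edges and texture but, as noted, not the average
    brightness of large flat regions."""
    band = box_blur(img, k_small) - box_blur(img, k_large)
    return (band > 0).astype(np.float64)

step = np.zeros((32, 32))
step[:, 16:] = 1.0                     # a single vertical edge
binary = bandpass_threshold_dither(step)
```

On this input the bright side of the edge survives near the transition (where the band-pass response is positive), while deep inside the flat regions the response is zero, illustrating the loss of large-area averages the commenter concedes.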
lokl, over 4 years ago
Might be better in the CIELAB color space, where "difference" is closer to "perceptual difference."
tlarkworthy, over 4 years ago
It seems like the gains in palette information are wasted on the precise placement of pixels for dithering. A net loss IMHO, except for naive formats like bitmap. Interesting nevertheless, but I'd guess we could do better by optimizing against the storage format. But then we're back at the state of the art.