Hello! I made this. People are talking about not wanting pictures to be initially blurry before they finish loading. I understand that too, and I'm not sure how I feel about it myself (I could go either way).<p>But for what it's worth, I actually made this for another use case: I have a grid of images that I want to be able to zoom really far out. It'd be nice to show something better than the average color when you do this, but it would be too expensive to fetch a lot of really small images all at once. ThumbHash is my way of showing something more accurate than a solid color but without the performance cost of fetching an image. In this scenario you'd only ever see the ThumbHash. You would have to zoom back in to see the full image.
Blurring images or doing any sort of maths on the RGB values without first converting from the source-image gamma curve to "linear light" is wrong. Ideally, any such generated image should also match the colour space of the image it is replacing: sRGB should be used as the placeholder for sRGB, Display P3 for Display P3, etc.<p>Without these corrections, some images will have noticeable brightness or hue shifts. Shown side by side, as on the demo page, this is not easy to see, but when <i>replaced in the same spot</i> it will result in a sudden change. Since the whole point of this format is to replace images temporarily, this should ideally be corrected.<p>As some people have said, developers often make things work for "their machine": their machine on the "fast LAN", set to "en-US", and for <i>their monitor</i> and web browser combination. Most developers use SDR sRGB and are blithely unaware that all iDevices (for example) use HDR Display P3 with different RGB primaries and gamma curves.<p>A hilarious example of this is seeing <i>Microsoft</i> use Macs to design UIs for Windows, which then look too light because taking the same image file across to a PC shifts the brightness curve. Oops.
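The brightness shift described above can be sketched with the standard sRGB transfer function. This is an illustrative example, not code from ThumbHash itself: it shows why averaging (the core of any blur) must happen in linear light rather than directly on gamma-encoded values.

```python
# Sketch of gamma-correct averaging, illustrating the comment's point.
# These are the standard sRGB <-> linear-light transfer functions.

def srgb_to_linear(c: float) -> float:
    """Convert one sRGB channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c: float) -> float:
    """Convert one linear-light channel value in [0, 1] back to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def average_srgb(values: list[float]) -> float:
    """Average sRGB channel values correctly: in linear light."""
    linear = [srgb_to_linear(v) for v in values]
    return linear_to_srgb(sum(linear) / len(linear))

# Averaging black (0.0) and white (1.0) naively in gamma space gives 0.5,
# which displays too dark; averaging in linear light gives roughly 0.735.
print(round(average_srgb([0.0, 1.0]), 3))
```

A blur that skips the round trip through linear light systematically darkens high-contrast regions, which is exactly the sudden change you notice when the placeholder is swapped for the real image.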
I hate these blurry image thumbnails; I'd much prefer some sort of empty placeholder, and just waiting for a better thumbnail (look at YouTube for this, or basically any site). I'd much rather see engineers spending more time making the thumbnails load faster (improving their backend throughput, precaching thumbnails, better compression, etc.). The blurry thumbnails have two issues: 1) they trick the viewer into thinking the image has loaded, especially if there's a flicker before the blurry thumbnail is displayed, so the brain has to double back and look at the new image; 2) they carry a connotation that the content is blocked from viewing.
I open sourced a version of what Evan calls the "webp potato hash" a while back: <a href="https://github.com/transitive-bullshit/lqip-modern">https://github.com/transitive-bullshit/lqip-modern</a><p>I generally prefer using WebP over BlurHash or this version of ThumbHash because it's natively supported and decoded by browsers, as opposed to requiring custom decoding logic, which will generally lock up the main thread.
What I’ve seen Instagram and Slack do is create a really small JPEG and inline it in the API response. They then render it on the page and blur it while the full-size image loads.<p>The placeholder image ends up being about 1 KB vs. the handful of bytes here, but it looks pretty nice.<p>Everything is a trade-off, of course; if you’re looking to keep data size to a minimum, then BlurHash or ThumbHash are the way to go.
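The inline-JPEG approach described above can be sketched in a few lines. This is a hypothetical illustration (the function names and the 12px blur radius are my own assumptions, not anything Instagram or Slack have published): the server base64-encodes the tiny thumbnail into a data URI, and the client blurs it with a CSS filter while the full image loads.

```python
import base64

def to_data_uri(jpeg_bytes: bytes) -> str:
    """Inline small placeholder JPEG bytes as a data URI (e.g. in an API response)."""
    b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    return f"data:image/jpeg;base64,{b64}"

def placeholder_img_tag(jpeg_bytes: bytes) -> str:
    """Render the inlined placeholder blurred via a CSS filter while the
    full-size image loads; the blur radius here is an arbitrary choice."""
    return f'<img src="{to_data_uri(jpeg_bytes)}" style="filter: blur(12px)">'
```

Note that base64 inflates the payload by about a third, so a ~1 KB thumbnail costs closer to ~1.4 KB on the wire, which is part of the trade-off against the ~25-byte hashes.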
An order of magnitude smaller than Facebook's 200 byte goal for preview photos in their graphql responses.<p>see <a href="https://engineering.fb.com/2015/08/06/android/the-technology-behind-preview-photos/" rel="nofollow">https://engineering.fb.com/2015/08/06/android/the-technology...</a>
This is nice, I really like it.<p>It reminds me of exploring the SVG loader using potrace to generate a silhouette outline of the image.<p>Here's a demo of what that's like:<p><a href="https://twitter.com/Martin_Adams/status/918772434370748416?s=20" rel="nofollow">https://twitter.com/Martin_Adams/status/918772434370748416?s...</a>
ThumbHash? It seems more like MicroJPEG, maybe? "Hash" implies some specific things about the inputs and outputs that are definitely not true!<p>Cool idea to extract one piece of the DCTs and emit a tiny low-res image, though!
On the examples given, it definitely looks the best of all of them, and seems to be as small as or smaller than the alternatives.<p>I'm not really sure I understand why all the others are presented in base83, though, while this uses binary/base64. Is it because EvanW is smarter than these people, or were they trying to exploit some characteristic of base83 I don't know about?
Cool tech, but I feel that for all even remotely modern connection types, placeholders like this are obsolete and do nothing but slow down showing the real thing.
I think they should simply use four patches of BC1 (DXT1) texture: <a href="https://en.wikipedia.org/wiki/S3_Texture_Compression" rel="nofollow">https://en.wikipedia.org/wiki/S3_Texture_Compression</a><p>It allows storing a full 8x8 pixel image in 32 bytes (4 bits per RGB pixel).
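For reference, here is a sketch of how a single BC1 block decodes; this is my own illustrative decoder based on the documented S3TC format, not anything from ThumbHash. Each 8-byte block holds two RGB565 endpoint colours plus 32 bits of 2-bit palette indices for a 4x4 tile, so four blocks cover 8x8 pixels in exactly 32 bytes.

```python
import struct

def rgb565_to_rgb(v: int) -> tuple[int, int, int]:
    """Expand a 16-bit RGB565 value to an 8-bit (r, g, b) tuple."""
    r, g, b = (v >> 11) & 0x1F, (v >> 5) & 0x3F, v & 0x1F
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

def decode_bc1_block(block: bytes) -> list[list[tuple[int, int, int]]]:
    """Decode one 8-byte BC1 block into a 4x4 grid of RGB pixels."""
    c0_raw, c1_raw, bits = struct.unpack("<HHI", block)
    c0, c1 = rgb565_to_rgb(c0_raw), rgb565_to_rgb(c1_raw)
    if c0_raw > c1_raw:  # 4-colour mode: two interpolated colours
        palette = [c0, c1,
                   tuple((2 * a + b) // 3 for a, b in zip(c0, c1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(c0, c1))]
    else:                # 3-colour mode: midpoint plus black/transparent
        palette = [c0, c1,
                   tuple((a + b) // 2 for a, b in zip(c0, c1)),
                   (0, 0, 0)]
    # 32 bits = 16 pixels, 2-bit indices, least significant bits first
    return [[palette[(bits >> (2 * (y * 4 + x))) & 0b11] for x in range(4)]
            for y in range(4)]
```

Four such blocks tiled 2x2 give the proposed 8x8 placeholder, though unlike ThumbHash the result is blocky rather than smoothly blurred.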
Very nice, I just saw the Ruby implementation[1]. This looks useful! Right now I'm making 16x16 PNGs and this looks way better. I might attempt making a custom element that renders these.<p>[1] <a href="https://github.com/daibhin/thumbhash">https://github.com/daibhin/thumbhash</a>
At <a href="https://www.mobilityengineeringtech.com/" rel="nofollow">https://www.mobilityengineeringtech.com/</a> the images are inline SVGs before they finish loading. Never seen that anywhere else.
Anyone know why the first comparison image is rotated 90 degrees for both ThumbHash and BlurHash versions? Is this a limitation of the type of encoding or just a mistake? All other comparison images match source rotation.
The results are pretty impressive. I wonder if the general idea can be applied with a bit more data than the roughly 21 bytes in this version. I know it's not a format that lends itself to being configurable. I'd be fine with placeholders that are, say, around 100-200 bytes. Many times that seems enough to let the brain roughly know what the image will contain.
I'm a big fan of anything that can make networked experiences a little smoother. When you're having to deal with less than amazing connections pages full of loading spinners and blank spots get old fast.<p>Also, love that this comes with a reference implementation in Swift. Will definitely keep it in mind for future projects.
For these ultra-small sizes, I think I would go with Potato WebP since you can render it without JS, either with an <img> tag or a CSS background. I think it looks better too.
I don't understand why it is only for &lt;100x100 images. Isn't the blurring useful for larger images? What's the point of inlining small ones?
Love these types of optimizations... BlurHash seems to be giving me more pleasant results than ThumbHash on the few examples I ran through it! ThumbHash seems to over-emphasize/crystallize parts of the image, resulting in a thumbnail that diverges from the source in unexpected ways.<p>Either way, this is awesome, and thanks for sharing.
First of all, I love the idea and I think it's very creative.<p>As for my impression, I don't think the blurry images are impressive enough to justify loading an additional 32 kB per image. I think the UX would be approximately the same with a 1x1 pixel image that's just the average color of the picture, but I can't test that out.
A single file with a few functions: it seemed like a good test to convert it to some other languages with GPT-4 (I tried Python and Ruby). Unfortunately, my access to GPT-4 is limited to the 2k-context version, and the first function is 4,500 tokens (800 minified, but that loses names, comments, and probably quality in the conversion).<p>With some language-independent tests in such a repository, you might be able to semi-automatically convert the code into different languages, then continue with code scanning and optimizations.<p>Anyway: very nice work!