Building a New Image Format

18 points by robolaz over 12 years ago

13 comments

mistercow over 12 years ago
I have three major problems with WebP. Well, I have one major problem: it's a video format, not an image format, and that leads to three major consequences.

First off, the concept of using the intra-frame compressor of a video format to compress images is unsound. Video compression has constraints that don't apply in still-image compression, and you are dragging that baggage around for no reason. These constraints lead to my other two issues.

One of the biggest problems with JPEG is that block artifacts are often visible even at relatively high quality settings. Post-decoding deblocking algorithms have proven utterly inadequate for fixing this.

One simple and effective leap in mitigating block artifacts is to use larger block sizes. Modern computers are obscenely fast compared to when JPEG was introduced, and we could bump the block size up from 8x8 to 16x16 or even 32x32 without breaking a sweat. This reduces the number of pixels that are at block boundaries, and that means fewer block artifacts. WebP uses *smaller* blocks because it's a video format, not an image format.

Another way that you can avoid edge artifacts is to overlap the blocks so that the edge artifacts don't create discontinuities. This is called "time-domain alias cancellation" and it's used in the 1D case by every lossy audio codec you've ever used. It generalizes easily to the 2D case, and it *would* generalize easily to the 3D case, but for reasons that I don't understand, because video compression is not really my field, nobody does video compression by taking the DCT of 3D blocks of video. Instead they use a more ad hoc method of motion compensation, which consists of brute-force searching for where blocks in one frame have moved in the next frame. It's not clear how you'd reconcile that with overlapping blocks in the interframe compressor, so VP8 doesn't use overlapping blocks. And by extension, neither does WebM.

Beating JPEG with a new video format is not all that difficult. We have two decades of research between JPEG's introduction and today, and each of us routinely carries hardware in our pocket that would have been considered a major military asset in 1992. It isn't at all surprising that the key frame compressor of a modern video codec would beat JPEG by 25%, but we can do *so much* better than that. Once we manage to displace JPEG, we're going to be stuck with whatever displaces it for a long time. So we need to make sure we have the best format we can muster before we make that commitment.
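To make the block-size point concrete, here is a minimal TypeScript sketch (not from the comment itself) of a JPEG-style blockwise 2D DCT where the block size is a parameter; doubling the block size roughly halves the total length of block boundaries in an image, so fewer pixels sit next to a potential discontinuity.

    // Naive orthonormal 2D DCT-II over an n x n block (O(n^4), fine for a sketch).
    function dct2d(block: number[][]): number[][] {
      const n = block.length;
      const out: number[][] = Array.from({ length: n }, () => new Array<number>(n).fill(0));
      for (let u = 0; u < n; u++) {
        for (let v = 0; v < n; v++) {
          let sum = 0;
          for (let x = 0; x < n; x++) {
            for (let y = 0; y < n; y++) {
              sum += block[x][y]
                * Math.cos(((2 * x + 1) * u * Math.PI) / (2 * n))
                * Math.cos(((2 * y + 1) * v * Math.PI) / (2 * n));
            }
          }
          const cu = u === 0 ? Math.SQRT1_2 : 1;
          const cv = v === 0 ? Math.SQRT1_2 : 1;
          out[u][v] = (2 / n) * cu * cv * sum;
        }
      }
      return out;
    }

    // Split a grayscale image (row-major) into blockSize x blockSize tiles and
    // transform each one, as a JPEG-style encoder would before quantization.
    // With blockSize = 16 or 32 there are far fewer tile boundaries than at 8.
    function blockTransform(pixels: number[][], blockSize: number): number[][][] {
      const blocks: number[][][] = [];
      for (let by = 0; by + blockSize <= pixels.length; by += blockSize) {
        for (let bx = 0; bx + blockSize <= pixels[0].length; bx += blockSize) {
          const tile = Array.from({ length: blockSize }, (_, y) =>
            pixels[by + y].slice(bx, bx + blockSize));
          blocks.push(dct2d(tile));
        }
      }
      return blocks;
    }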
ZeroGravitas over 12 years ago
I don't see what a new format would bring to this problem, in the end you only want to send one image to each visitor, so you probably want a server-side program that can transform one big image into various smaller-size images. You don't actually want to be shifting all the various sizes around as a bundle.

(And WebP's real killer feature is lossy photo + alpha at the same time, that's where there's a real gap in the market at the moment, in between PNG and JPEG.)
Comment #4814057 not loaded
lucian1900 over 12 years ago
WebP is not a hybrid of GIF or JPEG or PNG. It's a subset of the VP8 video codec, which, if anything, is more similar to H.264.
csense over 12 years ago
What's needed is a downloadable JavaScript renderer which parses the WebP file format and draws the resulting image in an HTML5 Canvas.

Then we don't have to wait for browser makers to implement it in their browsers. You just add a one-line script tag to load webp.js, a script that scans your DOM for img tags with a src that ends with ".webp", and your website has WebP support.

If browsers later support WebP natively, you can just add a check to webp.js so it turns itself off when a WebP-supporting browser is detected.

It's really too bad that Java in the browser never took off; its performance characteristics are far better for writing image decompressors than JS. But JS with a modern HTML5 Canvas running in a modern JS engine on a modern CPU is good enough for the purpose.
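A rough TypeScript sketch of that polyfill flow; the WebP bitstream decoder itself is out of scope here, so it is passed in as a function whose signature is an assumption rather than any real library's API.

    type WebPDecoder = (bytes: ArrayBuffer) => { width: number; height: number; rgba: Uint8ClampedArray };

    // A browser that can encode a canvas to WebP can also decode it, so this
    // doubles as a cheap native-support check that lets the polyfill turn itself off.
    function supportsWebPNatively(): boolean {
      const canvas = document.createElement("canvas");
      canvas.width = canvas.height = 1;
      return canvas.toDataURL("image/webp").startsWith("data:image/webp");
    }

    async function polyfillWebPImages(decode: WebPDecoder): Promise<void> {
      if (supportsWebPNatively()) return;

      // Find every <img> whose src ends in ".webp" and replace it with a canvas.
      const images = document.querySelectorAll<HTMLImageElement>('img[src$=".webp"]');
      for (const img of Array.from(images)) {
        const bytes = await (await fetch(img.src)).arrayBuffer();
        const { width, height, rgba } = decode(bytes);

        const canvas = document.createElement("canvas");
        canvas.width = width;
        canvas.height = height;
        canvas.getContext("2d")!.putImageData(new ImageData(rgba, width, height), 0, 0);

        img.replaceWith(canvas); // or: img.src = canvas.toDataURL("image/png")
      }
    }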
SeppoErviala over 12 years ago
It's sad that Mozilla has not included WebP. I cannot understand why, since it seems to be a wonderful format.

Maybe they are waiting for all the promised features to be delivered. So far I haven't seen working animations or lossy + alpha in Chrome, so they probably aren't ready yet.
Comment #4814262 not loaded
snogglethorpe over 12 years ago
WebP only seems to support 8-bit integer component values, which seems pretty backward for a new format.

Higher bit depths and floating-point components (a la OpenEXR) seem like an important feature for any image format that hopes to become a future standard. 8 bits were maybe sufficient in 1990; not so much today. Even Microsoft eventually got that clue with JPEG XR...

You only get so many chances to define a new standard, so going with something as lackluster as WebP would seem to be a mistake. JPEG XR, despite its MS origins, seems to be vastly superior. [JPEG XR has licensing issues with the reference implementation (it's from MS after all), but that's not a fatal flaw...]
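A tiny sketch of the bit-depth point: quantizing a smooth [0, 1] gradient to n-bit integer channels and measuring the worst-case rounding error, which is what shows up as banding on wide, slow gradients.

    function maxQuantizationError(bits: number, samples = 10_000): number {
      const levels = (1 << bits) - 1;
      let worst = 0;
      for (let i = 0; i < samples; i++) {
        const value = i / (samples - 1);                      // ideal intensity in [0, 1]
        const quantized = Math.round(value * levels) / levels; // nearest representable level
        worst = Math.max(worst, Math.abs(value - quantized));
      }
      return worst;
    }

    console.log(maxQuantizationError(8));   // ~1/510: a step size that can band visibly
    console.log(maxQuantizationError(16));  // ~1/131070: far below what the eye can see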
Comment #4852714 not loaded
lobster_johnson over 12 years ago
This makes very little sense. The article suggests that you bundle many sizes of the same image into the same file; say, a 1024x1024 version as well as a 512x512 version. This means that a browser that only wants the 512x image will also download the 1024x image whether it wants to or not. So why not just pick the 1024x image and resize it on the client side, if everyone is going to download it? This is something that today's chips do very fast. At 2x resolutions you can even skip interpolation and just go for nearest-neighbour scaling.
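A minimal TypeScript sketch of that client-side approach, using a canvas with smoothing disabled so the browser does nearest-neighbour scaling; the image path is made up for illustration.

    function downscale(img: HTMLImageElement, targetWidth: number, targetHeight: number): HTMLCanvasElement {
      const canvas = document.createElement("canvas");
      canvas.width = targetWidth;
      canvas.height = targetHeight;
      const ctx = canvas.getContext("2d")!;
      ctx.imageSmoothingEnabled = false; // nearest-neighbour instead of interpolation
      ctx.drawImage(img, 0, 0, targetWidth, targetHeight);
      return canvas;
    }

    // Usage: download one large image, render it at 512x512 once it loads.
    const hero = new Image();
    hero.onload = () => document.body.appendChild(downscale(hero, 512, 512));
    hero.src = "/images/hero-1024.jpg"; // hypothetical path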
Comment #4814527 not loaded
Comment #4814539 not loaded
vy8vWJlco over 12 years ago
"We need an image file format that is, in essence, a storage locker."

I disagree.

HTTP supports cacheable compression and connection reuse. It wouldn't be any slower to download frames as needed (lazy = good), using the existing transport protocol. All you need is an ASCII/UTF-8 index file for an animation. Getting browsers to support anything new that doesn't come from the W3C and contain the word "semantic" in the spec, on the other hand... that's the hard part.
PommeDeTerre over 12 years ago
This is really one of the worst articles I've seen submitted here lately. I'm truly surprised that it is currently ranked so high.

The presentation of the article is atrocious. The font alone renders extremely poorly in Firefox. It is just plain hard to read. Please just use a common font that will render well basically anywhere.

The inline "$3.99 ThanksGiving Offer" ad links peppered throughout the article are distracting, too. They really take focus away from the article itself (unless, of course, that is exactly what they were intended to do).

Then it has one part that goes, "It's the WebP Image Format (see https://developers.google.com/speed/webp/) as shown here", with an image of the WebP logo that follows. The WebP logo image is a PNG, however! With a lead-in like that, I was sure it was going to be a WebP image demonstrating some of the format's benefits.

The various inline URLs that aren't hyperlinks are very annoying, too. They should obviously have been actual links.

Seeing stuff like "Microsfot" only makes the horrible article look even worse.

While I'm not expecting perfection, nearly everything about this article is sub-par. It's not what I wish to see when I come to HN.

When it comes to information, the Wikipedia article is so much better, yet it's still quite concise, too. For anyone interested, it is at:

http://en.wikipedia.org/wiki/WebP
Comment #4814136 not loaded
Comment #4814109 not loaded
Comment #4814158 not loaded
jaimebuelta over 12 years ago
Given that agreeing on a new graphics standard is surprisingly difficult, wouldn't it be a good idea to try to do a "container" format? It seems that adding new video codecs is (relatively) simple, but adding a new encoding for images is awfully difficult due to browser issues.

I mean, I am not sure (it will open a can of worms with a lot of problems), but I don't know if anyone is trying something in that direction...
jre over 12 years ago
I'm not sure about the responsive image format thing. The problem is: the restrictions of mobile today (both in terms of dpi and in terms of connection speed) are likely to change very quickly. This means we'll need to re-encode images as mobile devices change.

A server-side solution where you store the full-resolution image only and then use a cache for lower resolutions seems much more flexible.
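A short sketch of that server-side approach in TypeScript for Node, using the sharp image library; the directory layout and naming scheme are illustrative only.

    import * as fs from "fs";
    import * as path from "path";
    import sharp from "sharp";

    const ORIGINALS = "/data/images/originals"; // hypothetical layout
    const CACHE = "/data/images/cache";

    // Store only the full-resolution original; materialize and cache smaller
    // renditions the first time a given width is requested.
    async function renditionPath(name: string, width: number): Promise<string> {
      const cached = path.join(CACHE, `${width}-${name}`);
      if (!fs.existsSync(cached)) {
        await sharp(path.join(ORIGINALS, name)).resize(width).toFile(cached);
      }
      return cached; // later requests (or a CDN in front) are served from the cache
    }

    // Usage: const file = await renditionPath("hero.jpg", 512);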
danboarder over 12 years ago
CSS+JS with a bit of PHP on the server can generate/cache appropriate image sizes for each device from larger source files, using current image formats.
mtgx over 12 years ago
This sounds almost like the Opus of images - a format that tries to combine all the others into one.