I'm pretty skeptical of using bzip2 as the entropy coder compared to DEFLATE as used in PNG. bzip2 is really slow. Also, I would hope that an image format designed in 2016 would admit some degree of parallel decoding, but by leaning so heavily on a slow entropy coder they also killed all parallelism opportunities.

Browsers routinely spend as much time in image decoding as they do in layout, so this matters.

If you want a purely entropy-coded format with a trivial, easily memorizable file format, just use PNG with all scanlines set to the None filter, one IDAT chunk, and one IEND chunk. Totally trivial, and probably simpler, because DEFLATE is a relatively simple format.
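To make that concrete, here is a sketch of such a writer in C with zlib. The function name and the choice of 16-bit RGBA (matching farbfeld's pixel layout, big-endian included) are mine, not anything the format mandates; zlib at level 0 emits only stored blocks, so the DEFLATE stream stays trivial:

    #include <arpa/inet.h> /* htonl */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    /* Write one length-prefixed, CRC-terminated PNG chunk. */
    static void chunk(FILE *f, const char tag[4], const uint8_t *d, uint32_t n) {
        uint32_t be = htonl(n);
        fwrite(&be, 4, 1, f);
        fwrite(tag, 1, 4, f);
        fwrite(d, 1, n, f);
        be = htonl(crc32(crc32(0, (const Bytef *)tag, 4), d, n));
        fwrite(&be, 4, 1, f);
    }

    /* pixels: width*height*8 bytes of big-endian RGBA16, row-major. */
    void write_trivial_png(FILE *f, uint32_t w, uint32_t h, const uint8_t *pixels) {
        size_t stride = (size_t)w * 8, rawlen = (stride + 1) * h;
        uint8_t *raw = malloc(rawlen);
        for (uint32_t y = 0; y < h; y++) {             /* prefix each row with */
            raw[y * (stride + 1)] = 0;                 /* filter type 0: None  */
            memcpy(raw + y * (stride + 1) + 1, pixels + y * stride, stride);
        }
        uLongf zlen = compressBound(rawlen);
        uint8_t *z = malloc(zlen);
        compress2(z, &zlen, raw, rawlen, 0);           /* level 0: stored blocks */

        uint8_t ihdr[13] = {0};   /* compression, filter, interlace all zero */
        uint32_t be = htonl(w); memcpy(ihdr, &be, 4);
        be = htonl(h); memcpy(ihdr + 4, &be, 4);
        ihdr[8] = 16;                                  /* bit depth */
        ihdr[9] = 6;                                   /* color type: RGBA */

        fwrite("\x89PNG\r\n\x1a\n", 1, 8, f);
        chunk(f, "IHDR", ihdr, 13);
        chunk(f, "IDAT", z, (uint32_t)zlen);
        chunk(f, "IEND", (const uint8_t *)"", 0);
        free(raw); free(z);
    }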
> The author has yet to see an example of an image which looked any different in the print depending on if it had been processed in a sRGB or Adobe RGB pipeline.

The author clearly has very limited experience of professional printing.

Images aren't just used by photographers. They're used by graphic designers, digital artists, and other people who need a consistent wide-gamut colour space.

sRGB is *not* that colour space.

Referring to a couple of ancient Ken Rockwell blog pages as "proof" is simply uninformed and amateurish.
> Additionally, functionality like alpha channels and 16-Bit color depth can only be achieved via extensions

PPM explicitly supports 16-bit colour depth.

http://netpbm.sourceforge.net/doc/ppm.html says "Each sample is represented in pure binary by either 1 or 2 bytes."

And PAM explicitly supports transparency (which I guess you could consider an "extension", but since it's handled by libnetpbm, I'd quibble).

http://netpbm.sourceforge.net/doc/pam.html says "Each of the visual image formats mentioned above has a variation that contains transparency information."

Also,

> Due to it being a textual format it also lacks the desired compression characteristics.

...isn't true, since P6 is binary (excepting the header, which is trivially small for a normal-sized image).
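To make the 16-bit point concrete: the only thing a reader checks is the maxval field. A minimal P6 header parse (mine; it skips '#' comment handling for brevity):

    #include <stdio.h>

    /* Minimal P6 header parse; a maxval above 255 means two bytes per
       sample, big-endian, per the ppm spec. '#' comments ignored here. */
    int main(void) {
        int w, h, maxval;
        if (scanf("P6 %d %d %d", &w, &h, &maxval) != 3) return 1;
        getchar(); /* the single whitespace byte before the raster */
        printf("%dx%d, %d byte(s) per sample\n", w, h, maxval > 255 ? 2 : 1);
        return 0;
    }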
I think it's nice to have a simple format for storing images for the sake of being able to build a pipeline of filters that are very lightweight and simple to implement.

But I was really distracted by how the author kept going on borderline nonsensical tangents about compression. There's a reason we usually build compression into the file format instead of just zipping a lossless bitmap. It turns out that there's a ton of stuff you can do to exploit redundancy in two dimensions - *if* you can apply a transformation before the compression stage.

And yeah, you can make up some of your losses by using bzip2. But again, there's a reason formats like PNG don't do that: it's slow.
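For anyone who hasn't seen it, PNG's simplest non-trivial filter ("Up") is just a vertical difference, and it alone shows what that pre-compression transformation buys you. A sketch (mine, in-place):

    #include <stddef.h>
    #include <stdint.h>

    /* PNG's "Up" filter: replace each byte with its difference from the
       byte directly above it. Smooth vertical gradients become runs of
       near-zero values, which the entropy coder then compresses well. */
    void filter_up(uint8_t *img, size_t stride, size_t height) {
        for (size_t y = height; y-- > 1; ) {  /* bottom-up, so each row still
                                                 sees its unmodified predecessor */
            uint8_t *row = img + y * stride, *above = row - stride;
            for (size_t x = 0; x < stride; x++)
                row[x] = (uint8_t)(row[x] - above[x]);
        }
    }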
Won't be using this suckless format. Pretty useless. The real image formats took effort and are mature. This is just a more compact, less useful, hacked-out version of netpbm etc. Being easy to parse is no particular virtue. Use libraries with good APIs.
This seems likely to be unsuitable for large files (e.g. 50000x50000 pixels) because the external compression is going to make random access difficult. So extracting or displaying a small part of the image means you have to read from the beginning.
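To be precise about what's lost: in uncompressed farbfeld the offset of any pixel is a closed-form expression, so constant-time seeks are trivial; after bzip2 that property is gone. A sketch of the arithmetic:

    #include <stdint.h>

    /* Byte offset of pixel (x, y) in an *uncompressed* farbfeld stream:
       16-byte header, then 8 bytes (four 16-bit BE components) per pixel,
       row-major. After bzip2, no such formula exists; you decode from
       the start. */
    uint64_t ff_pixel_offset(uint32_t width, uint32_t x, uint32_t y) {
        return 16u + 8u * ((uint64_t)y * width + x);
    }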
Kudos to the author! I really like this format.

Most people in the comments clearly do not understand the words "easy" and "straightforward". I can only recommend they go to the nearest dictionary and look at those words carefully.

Personally, a tool like this is what I always wanted to have during my engineering Master's and machine-learning PhD. A clear, simple, straightforward & easy format so that I could process my images easily, simply, clearly and straightforwardly.

The only small complaint I can make is the use of big-endian values, which require a manual transform into an "actual" integer.
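For what it's worth, that transform is four shifts. A sketch of the whole header read (names mine):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* The entire "manual transform": four shifts per 32-bit value. */
    static uint32_t be32(const uint8_t *p) {
        return (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16
             | (uint32_t)p[2] << 8  | (uint32_t)p[3];
    }

    int main(void) {
        uint8_t hdr[16];
        if (fread(hdr, 1, sizeof hdr, stdin) != sizeof hdr ||
            memcmp(hdr, "farbfeld", 8) != 0)
            return 1;
        printf("%u x %u\n", be32(hdr + 8), be32(hdr + 12));
        return 0;
    }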
I was skeptical about the claims of competitive compression, but in a handful of experiments it looks like farbfeld+bzip2 got slightly better compression than PNG (after running through pngcrush). Decoding took an unfortunate number of milliseconds, though.

(Give it a try! The tools are right there in the git repo, and quite easy to use.)
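For anyone reproducing this, the whole experiment is a pipeline along these lines (assuming the repo's png2ff/ff2png converters are built and on your PATH):

    png2ff < image.png | bzip2 > image.ff.bz2
    bunzip2 < image.ff.bz2 | ff2png > roundtrip.png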
I will try this format next time I write a simple image-processing program. Piping with the converters is way nicer than linking against libpng. PPM is OK, but this format is so dead simple I can commit it to memory. Big-endian seems anachronistic, though. Why?
If you're going to go with 16 bits per component, please at least increase the color gamut. sRGB is fine if you can only dedicate 8 bits per component, but with 16 bits the extra depth is more useful as an increase in the color space (a wider gamut) than as mere additional precision.
The idea is very sensible. Using an image format that's trivial (and fast) to read and write makes it easy to build custom image-processing pipelines. The video game industry used to (and maybe still does) use the Truevision TGA format, in uncompressed true-color mode, to meet this use case. https://en.wikipedia.org/wiki/Truevision_TGA

TGA currently has the advantage over Farbfeld that many image editors and viewers can already read it, which means you can look at the intermediate results in your pipeline without having to convert them to PNG or whatever first. But Farbfeld has the advantage of utter simplicity.
I've got a soft spot for simple, sane formats like this. One obvious but still simple generalization would be to allow n-dimensional image data by encoding the number of dimensions first, then the size of each dimension, then the data. That would allow representing video (very inefficiently), 3D static images (e.g. from MRI scans), and even 3D video (which is 4D).
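A hypothetical reader for that layout (to be clear, this is the generalization proposed above, not anything farbfeld specifies): a 32-bit BE dimension count, then one 32-bit BE extent per dimension, then the samples.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical n-dimensional header: dimension count, then extents.
       Not part of farbfeld; just the generalization sketched above. */
    static uint32_t rd32(FILE *f) {
        uint8_t b[4] = {0};
        fread(b, 1, 4, f);
        return (uint32_t)b[0] << 24 | b[1] << 16 | b[2] << 8 | b[3];
    }

    int main(void) {
        uint32_t ndims = rd32(stdin);
        uint64_t samples = 1;
        for (uint32_t i = 0; i < ndims; i++)
            samples *= rd32(stdin);   /* extent of dimension i */
        /* 8 bytes per element at farbfeld's RGBA16 */
        printf("%u dims, %llu elements, %llu data bytes\n", ndims,
               (unsigned long long)samples, (unsigned long long)(samples * 8));
        return 0;
    }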
*Current image formats have integrated compression, making it complicated to read the image data. One is forced to use complex libraries like libpng, libjpeg, libjpeg-turbo, giflib and others*

...

*Dependencies*

*libpng*

*libjpeg-turbo*

That's a bit of a non sequitur.

Aside from that, the arguments put forth aren't very convincing; the "integrated compression" is what makes compressed formats more efficient, since these are specialised compression algorithms adapted for image data. PNG, for example, uses a differencing filter. I think many others and I have tried ZIP'ing or RAR'ing uncompressed BMPs before and found the compression is not as good as PNG's. And that's not even mentioning the possibility of lossy compression.
So it's a 16-bit RGBA bitmap with no support for custom headers, internal tiling, custom bands, band interleaving, overviews, or compression. Simplicity is great and all, but all those features are actually really useful things to have...
I'm a bit flummoxed. Two things stick out. Firstly, PPM is widely used in these contexts; it's just not shouted about. Secondly, when PPM isn't used, it's because it's neither YCbCr nor YUV; I've found y4m an easy format there. That said, it's not atypical to just use JPEG directly.
It's always mystified me why there are not more standardised, simple things. When I load an image I want width, height and RGBA in a buffer. Complicated extras shouldn't be necessary to load an image.

This is why I use stb_image.c - even though it doesn't cover everything - it has a sane interface instead of a nightmare like libpng or libjpeg.

Most image formats and the 'standard' libraries for using them look like a great reason to never employ anyone who had anything to do with them. This one looks like an engineer, competent at the most basic levels, did the most obvious thing.

Good work.

Given that most modern app package formats do compression anyway, I'm not sure there is any need to care about that. PNGs don't shrink much inside an ipa or apk, but raw data shrinks to about the same size as PNG in my experience.
All beloved Trump:

  280K trump.ff.bz2
  516K trump.ff.gz
  304K trump.ff.xz
  360K trump.png
   32K trump.jpg

Using a more complex landscape image:

  175M coast-hdr.ff
   42M coast-hdr.ff.bz2
   83M coast-hdr.ff.gz
   39M coast-hdr.ff.xz
  6.5M coast-hdr.jpg
   52M coast-hdr.png

Does not look that bad to me compared to PNG.
Having only 2D images means it is a bit limited for me, but I appreciate the point made that too many file formats have 'internal compression'.
Would be nice to be able to verify the header before running it through decompression. I can easily get a hold of 150MP images and they can be a pain to hack on.
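One mitigation: stream-decompress just the first 16 bytes with libbz2, validate, and bail before committing to the full raster. A sketch (function name mine):

    #include <bzlib.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Decompress only the first 16 bytes of an .ff.bz2 stream to check
       the magic and dimensions before a full decode. Returns 1 on a
       plausible farbfeld header. */
    int ff_bz2_peek(FILE *f, uint32_t *w, uint32_t *h) {
        int bzerr;
        BZFILE *bz = BZ2_bzReadOpen(&bzerr, f, 0, 0, NULL, 0);
        if (!bz || bzerr != BZ_OK) return 0;
        uint8_t hdr[16];
        int n = BZ2_bzRead(&bzerr, bz, hdr, sizeof hdr);
        BZ2_bzReadClose(&bzerr, bz);
        if (n != 16 || memcmp(hdr, "farbfeld", 8) != 0) return 0;
        *w = (uint32_t)hdr[8] << 24 | hdr[9] << 16 | hdr[10] << 8 | hdr[11];
        *h = (uint32_t)hdr[12] << 24 | hdr[13] << 16 | hdr[14] << 8 | hdr[15];
        return 1;
    }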