Not that many of the complaints aren't reasonable, but I thought that in general compression/format was orthogonal to parity, which is what I assume is actually wanted for long-term archiving? I always figured the goal should normally be to get back a bit-perfect copy of whatever went in, using something like Parchive at the file level or ZFS at the filesystem level for online storage. I grant that, on the principle of layers and graceful failure modes, it's better if even sub-archives can tolerate some corruption without total failure, and from a long-term, implementation-independence perspective a simpler and better-specified format is preferable. But that still doesn't seem like a substitute for just having enough parity built in to both notice corruption and fully recover from it, up to fairly extreme levels.
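
To make the "notice and recover" point concrete, here's a toy Python sketch of the idea. This is not how Parchive actually works (par2 uses Reed-Solomon recovery blocks, so it can rebuild many damaged blocks, not just one), and all the helper names here are made up for illustration; it just shows the two separate jobs parity data has to do: per-block checksums to detect corruption, plus redundant blocks to repair it.

    import hashlib

    def split_blocks(data: bytes, block_size: int) -> list[bytes]:
        """Split data into fixed-size blocks, zero-padding the last one."""
        blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
        blocks[-1] = blocks[-1].ljust(block_size, b"\x00")
        return blocks

    def xor_parity(blocks: list[bytes]) -> bytes:
        """One XOR parity block over all data blocks (toy stand-in for Reed-Solomon)."""
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                parity[i] ^= b
        return bytes(parity)

    def checksums(blocks: list[bytes]) -> list[str]:
        """Per-block hashes so corruption can be *noticed*, not just repaired."""
        return [hashlib.sha256(b).hexdigest() for b in blocks]

    def recover(blocks: list[bytes], parity: bytes, bad_index: int) -> bytes:
        """Rebuild one corrupted block by XORing the parity with the surviving blocks."""
        rebuilt = bytearray(parity)
        for idx, block in enumerate(blocks):
            if idx != bad_index:
                for i, b in enumerate(block):
                    rebuilt[i] ^= b
        return bytes(rebuilt)

    if __name__ == "__main__":
        original = b"some archive payload that we want back bit-perfect" * 10
        blocks = split_blocks(original, 64)
        sums = checksums(blocks)
        parity = xor_parity(blocks)

        # Simulate one corrupted block on disk.
        blocks[2] = b"\xff" * 64

        # Detection: the stored checksum no longer matches.
        bad = [i for i, b in enumerate(blocks) if hashlib.sha256(b).hexdigest() != sums[i]]
        assert bad == [2]

        # Recovery: rebuild the flagged block from parity + the good blocks.
        blocks[2] = recover(blocks, parity, bad_index=2)
        assert hashlib.sha256(blocks[2]).hexdigest() == sums[2]
        print("detected and repaired block", bad[0])

The real workflow is of course just the par2 tool (create recovery files next to the archive, then verify/repair later); the point of the sketch is that this layer doesn't care whether the thing underneath is a tarball, a zip, or a compressed format nobody can parse anymore, which is why it feels orthogonal to the format debate.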