If you want to extract the actual raw data, dcraw doesn't really advertise the way to do it. Many of the options for extracting the data still apply some processing/conversion to the underlying data.<p>It is a pretty big mess actually. Each manufacturer has different constants that need to be applied to the data, so the raw data from one device is not comparable to another. Convincing dcraw that you actually want the unaltered data is not straightforward.<p>Anyway, if you really do want to see the raw image, there are a few undocumented flags you can use.<p>> dcraw -E -4 -T *.CR2<p>This will give you an unprocessed 16-bit tiff file containing the "raw" data.<p>What is interesting is that some camera sensors capture data slightly beyond the image you are presented with. The camera will crop in and debayer the image for you, as will your image editor.<p>See this thread for the hilarious lengths people go to in order to get unaltered data from their sensors (monochrome-converting a DSLR):<p><a href="https://stargazerslounge.com/topic/166334-debayering-a-dslrs-bayer-matrix/page/94/" rel="nofollow">https://stargazerslounge.com/topic/166334-debayering-a-dslrs...</a>
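To make the "per-manufacturer constants" point concrete: even the most basic normalization of raw sensor counts needs a camera-specific black level and saturation point, which is part of what dcraw applies for you. Here's a minimal sketch with made-up constants (the real values differ per camera model and are not the ones below):

```python
import numpy as np

# Hypothetical per-camera constants; real values differ per model
# and are part of what dcraw normally applies before you see the data.
BLACK_LEVEL = 2048   # sensor pedestal in raw counts (ADU)
WHITE_LEVEL = 15000  # saturation point in raw counts (ADU)

def normalize_raw(raw):
    """Map raw sensor counts to [0, 1] after black-level subtraction."""
    scaled = (raw.astype(np.float64) - BLACK_LEVEL) / (WHITE_LEVEL - BLACK_LEVEL)
    return np.clip(scaled, 0.0, 1.0)

# Fake a tiny "raw" frame: a uniform patch at half of the usable range
raw = np.full((4, 4), BLACK_LEVEL + (WHITE_LEVEL - BLACK_LEVEL) // 2,
              dtype=np.uint16)
print(normalize_raw(raw)[0, 0])  # 0.5
```

Two cameras can record the same scene luminance as very different raw counts, so without these constants the numbers aren't comparable across devices.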
Really cool article. I'm left with just one question:<p>Why are the 16-bit RAW values so limited in their dynamic range? Wouldn't sensor manufacturers want to have their pixels able to return values that range the whole way from 0x0000 to 0xffff?
The author notes that 'But, there is no such standard...[a]ll real-world RAW processing programs have their own ideas of a basic default state to apply to a fresh RAW file on load'.<p>While I can accept that there isn't a universal RAW -> RGB standard, it seems strange to me that 'compute how this photo should appear' is left as an exercise for the reader.<p>Photographers often view their work as a form of art, and artists are very particular about even the smallest details of their work.<p>Why, then, would Nikon, Canon, and especially Leica, not have their own definable standards of how to process RAW photos for their particular cameras?
Something that's fascinating to me is that at 1:1 display size, the output of modern cameras doesn't look that much better than the pictures out of old 0.3Mpixel still cameras of nearly twenty years ago. The dynamic range is better and the colors are more vibrant, and the noise floor is lower for dark scenes, but on the whole it still looks pretty crappy at full resolution. Why is that? Could we fix it by using larger CCD/CMOS sensor pixels and sticking to lower total pixel counts?
Try <a href="https://github.com/anuejn/batic" rel="nofollow">https://github.com/anuejn/batic</a> - a shadertoy-like experiment where you can implement a shader to convert the Bayer pattern.
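For anyone who hasn't written one of these shaders: the simplest possible Bayer conversion is just collapsing each 2x2 mosaic tile into one RGB pixel. A rough Python sketch (assuming an RGGB layout, which not all sensors use):

```python
import numpy as np

def demosaic_nn(mosaic):
    """Crudest possible demosaic of an RGGB Bayer mosaic.

    Each 2x2 tile [[R, G], [G, B]] becomes one RGB pixel, so the
    output is half the resolution in each dimension.
    """
    r = mosaic[0::2, 0::2]
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0  # average both greens
    b = mosaic[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

# Build a 4x4 mosaic for a uniform color (R=0.8, G=0.4, B=0.2)
mosaic = np.zeros((4, 4))
mosaic[0::2, 0::2] = 0.8
mosaic[0::2, 1::2] = 0.4
mosaic[1::2, 0::2] = 0.4
mosaic[1::2, 1::2] = 0.2

print(demosaic_nn(mosaic)[0, 0])  # [0.8 0.4 0.2]
```

Real demosaicing interpolates the missing channels at full resolution instead (bilinear at minimum, usually edge-aware), which is exactly the kind of thing batic lets you experiment with in a shader.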
A raw file? Well, it's really more a lump of metal, until you start putting some teeth on it and you're ready for the tempering process.<p>/s