So that takes us back to the definition of DR. I'm happy to accept that DXO has a purely mathematical interpretation of DR: the ratio between the white point (maximum saturation) and the black point (noise floor). Again, though, I am not sure it is a useful or realistic definition of what dynamic range is. When one thinks about the value of dynamic range in digital photography, the first thing that usually comes to mind is the ability to recover useful detail from deep shadows. I say from the shadows, as I think any photographer who shoots digital knows that it is critical to preserve the highlights; once they are clipped, detail is well and truly gone.

Realistic? Who knows; at least I can tell a whole lot about a camera and the resulting images, and what you can DO with the camera, just by knowing the DR and a few other base performance figures. You would get the same answer from any other competent machine-vision specialist or opto-electronics engineer.
Practical? Since the signal from a camera sensor is linear, you can move around in it as you want; internal contrast will stay constant. This means that I can expose (photometric exposure) maybe one full stop less with a camera with good DR, giving me more "practically usable latitude" in both highlights and shadows. This is not a very difficult PP operation: in the raw converter I set the exposure compensation to +0.5 and make the highlight tone curve a little less harsh at the cutoff knee.
If the DR is good, I can shorten my shutter speeds or get more DoF (stop down) at low ISOs, without losing any image quality compared to a low-DR camera used at a longer shutter speed or with shallower DoF!
More DoF and/or shorter shutter speeds are, in most situations, something very practical, wouldn't you say?
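The "internal contrast stays constant" point can be sketched in a few lines. This is my own illustration (the values and the `push_ev` helper are hypothetical, not anyone's actual raw-converter code): shifting a linear signal by N stops is just a multiplication by 2^N, so the ratio between any two tones is untouched.

```python
# Sketch (my own illustration): a linear sensor signal can be shifted in
# exposure by a simple multiplication, and the internal contrast (the ratio
# between any two tones) is unchanged.
signal = [400.0, 800.0, 3200.0]   # hypothetical linear raw levels (electrons)

def push_ev(values, ev):
    """Shift a linear signal by `ev` stops: one stop = a factor of 2."""
    return [v * 2.0 ** ev for v in values]

pushed = push_ev(signal, +1.0)     # a +1 EV push

ratio_before = signal[1] / signal[0]
ratio_after = pushed[1] / pushed[0]
print(ratio_before, ratio_after)   # identical ratios: contrast is constant
```

This is exactly why exposure compensation in a raw converter feels "free" within limits: in linear space it is only a scaling operation.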
Here is where my argument comes in. There is too much conflation between what is possible with an analog signal on a sensor and what is possible with a digital image in post. I FULLY AGREE that dynamic range in terms of a linear analog signal on a sensor is, for lack of a better word, "fluid". You can adjust exposure up or down and shift the signal anywhere within the dynamic range of the sensor. That's WHY it is called dynamic range. With signal levels measured in electrons, potentially tens of thousands of them per pixel, you effectively have fine-grained, near-infinite control over that signal. If we didn't adjust everything in stops, you could fine-tune an exposure quite precisely in-camera.
I disagree that you have the same kind of unlimited, lossless control over the digital signal represented in an image, RAW or TIFF. For one, you are working with quantized, discrete data. Second, exposure latitude in post is not infinite, even with a RAW. Even with an amazing camera like the D800, noise will eventually pose a problem, since it is "baked" into the digital signal; adjusting exposure in-camera, you don't have to contend with noise at all (for all practical intents and purposes).

Pushing or pulling exposure in post has its limits as well. If you severely underexpose, no matter how clean the results may be, you are going to have limited color fidelity as you continue to boost exposure digitally. A D800 can boost exposure by maybe six stops, but that is in no way an alternative approach to photography: a severely underexposed image lifted by +6 EV will NEVER have the same fine tonality, color fidelity, clarity, and sharpness as an image that was never underexposed in the first place. It is certainly intriguing that you can lift shadows by 2-3 stops without any real problems with noise; that means you gain a lot of detail and some color fidelity in the deep shadows for applications like landscape photography. Push those shadows too much, though, and your amazing 13-stop landscape photo will quickly turn into a muddy mess that looks like one of those poorly tone-mapped HDR images, with stippled gray detail protruding into the lower midtones, which retain a disproportionately greater amount of detail and color fidelity than the lifted shadows. My point is: there are limits to what you can do with digital signal processing that don't exist when processing the signal in its original analog form IN-CAMERA.
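The quantization half of this argument can be made concrete with some assumed numbers (a 14-bit raw file; real cameras add read noise on top of this, so the sketch is if anything optimistic):

```python
# Sketch under assumed numbers: a 14-bit raw file. Underexposing by N stops
# leaves only the bottom 1/2^N of the code values, so a push in post cannot
# invent the missing tonal gradation; it just spreads coarse steps apart.
RAW_BITS = 14
FULL_SCALE = 2 ** RAW_BITS            # 16384 discrete levels

def usable_levels(underexposure_stops):
    """Distinct code values left after underexposing by N stops."""
    return FULL_SCALE // (2 ** underexposure_stops)

print(usable_levels(0))   # 16384 levels: full tonal resolution
print(usable_levels(3))   # 2048 levels: why a 2-3 stop lift still looks fine
print(usable_levels(6))   # 256 levels: 8-bit-like tonality after a +6 EV lift
```

Even before noise enters the picture, a +6 EV lift is working with roughly the tonal resolution of an 8-bit file, which is consistent with the "muddy HDR" look described above.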
Now, I've never complained about DXO's "Screen DR" figure. I believe that tells me the dynamic range I have to work with when doing what you described: fiddling with exposure in-camera. My dispute is with the notion of Print DR, and what it seems to stand for given how DXO labels those results and sells the information to the public. I do not believe you really gain anything beneficial (useful photographic DR that would let you extract MORE DETAIL included) by downscaling a native RAW image to some smaller TIFF. I also maintain that, even if you did scale a 36.3 MP image to an 8 MP image and tried to utilize the supposed 1.2-stop gain in DR from downscaling, you wouldn't have anywhere near the exposure latitude to actually do anything useful with that newfound DR, even if the downscaled image did contain more useful detail than the original RAW with less DR.
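For reference, the roughly 1.2-stop figure for a 36 MP to 8 MP downscale presumably comes from noise averaging. Here is a sketch of the usual normalization arithmetic (my reconstruction of the commonly cited reasoning, not DXO's published code):

```python
import math

# Sketch of the normalization arithmetic usually cited for "Print DR":
# averaging k pixels into one reduces random noise by sqrt(k), which lowers
# the noise floor and raises the measured DR by 0.5 * log2(k) stops.
def print_dr_gain(native_mp, reference_mp=8.0):
    """Stops of DR gained by downsampling native_mp down to reference_mp."""
    k = native_mp / reference_mp          # pixels averaged per output pixel
    return 0.5 * math.log2(k)

print(round(print_dr_gain(36.3), 2))  # roughly 1.09 stops for a D800 sensor
```

Note that this gain is purely a statistical property of the noise measurement; whether it translates into photographically usable latitude is exactly the point in dispute.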
The dispute on record here, if I may define it according to my own views as well as what I've read from other DXO DR naysayers, is this:

Firstly: most cheap laptop (and cheap TN) screens use 6-bit panels with 240 Hz temporal (delta-sigma) dithering to get 8 bits of tone resolution. None of my devices (except maybe my phone) uses fewer than 8 discrete bits, and both my TV and my computer screens are true 8-bit panels extended to 10 bits by temporal dithering.
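The 6-bit-plus-dithering claim can be sketched numerically. The sub-frame model and the 60 Hz content rate below are my assumptions, chosen to match the 240 Hz figure quoted above:

```python
import math

# Sketch of temporal (FRC) dithering arithmetic, under assumed numbers:
# cycling a panel through k sub-frames per displayed frame lets it average
# k intermediate levels, adding log2(k) bits of effective tone resolution.
def effective_bits(panel_bits, panel_hz, content_hz=60):
    subframes = panel_hz // content_hz     # sub-frames available for dithering
    return panel_bits + math.log2(subframes)

print(effective_bits(6, 240))   # 6-bit panel at 240 Hz: 8.0 effective bits
```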
What value does DXO PrintDR (the mathematically derived ratio between white point (maximum saturation, FWC) and black point (electronic noise floor)) have in a real-world context?
From the standpoint of simply moving the black point in a downsampled image, the only thing that occurs is that shadows become darker. One LOSES information during the process of downsampling, so the primary benefit of having additional DR in the hardware no longer applies. In the context of viewing images on a computer screen, primarily done via the web, having a deeper black point might be valuable. Computer screens generally support a much deeper black point than actual prints on paper (particularly prints on high-quality fine-art paper), although none actually supports 14 stops of DR regardless, and the average consumer screen is only 6-bit, so roughly the same DR as a print.
Secondly: this tone resolution is quantized in a gamma-corrected space, usually around gamma 2.0 or higher. If you look at the sRGB curve, the step across the first 14 (of 256) code values is about 1/13 of the code value (the linear toe of sRGB has a slope of 12.92). This means the linear DR of 8-bit sRGB in the ideal application is about 13 x 255 = 1:3315, or roughly 11.7 bits/EV. A well-calibrated HDTV will follow ITU-R BT.709 and present a step of 4.5 in the lower part of the gamma curve, giving a linear DR of 4.5 x (235-16) = 1:985, or slightly less than 10 bits/EV.

8-bit sRGB as a format standard has almost the same DR per pixel as a 1Dx (!), but in a nonlinear tone mapping; that's the difference.
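That arithmetic can be checked in a few lines. The toe slopes come from the sRGB and BT.709 specifications (12.92 and 4.5 respectively); recomputing gives about 11.7 EV for sRGB and about 9.9 EV for BT.709, close to the figures quoted above:

```python
import math

# Checking the gamma-space DR arithmetic: the ratio between the deepest
# distinguishable step and white, using the linear toe slopes of each curve.
srgb_ratio = 12.92 * 255          # sRGB toe slope times full 8-bit range
bt709_ratio = 4.5 * (235 - 16)    # BT.709 uses code values 16-235

print(round(srgb_ratio), round(math.log2(srgb_ratio), 1))    # ~3295, ~11.7 EV
print(round(bt709_ratio), round(math.log2(bt709_ratio), 1))  # ~986, ~9.9 EV
```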
Thanks for the detailed description, although I am not sure your explanation of ITU-R BT.709 is entirely accurate. That system supports bit-appending, which I wouldn't call dithering, to achieve higher bit depths. It also reserves black and white "space" within its numeric range as foot- and headroom for various purposes (only actually used in TVs, as far as I know; computer screens always utilize the full range of bits without headroom). That foot- and headroom reservation lowers the native DR by at least a stop or so: even if you append extra bits, the headroom requirement still exists, so you might only gain back what you had originally lost, and not much more.
The non-linear tone mapping IS the difference. Another way to put that is that the gamma compresses a wider range of information into a smaller space (when indeed mapping from a larger space, which is not necessarily always the case, but is in the case of RAW PP). More information in a space that can only contain less information means we OBSERVE whatever the container renders. If we could observe a 14-stop image on a device capable of fully rendering all of the information contained within that image, without the need to compress it (tone-map it) in any way, it would be much closer to seeing the world the way we see it with our eyes, where the contrast of any given scene is lower without actually appearing dull, drab, gray, and lifeless. My point is that, despite dithering and finely tuned gamma, there is no such device on the market today. We cannot truly observe the full beauty of a 13.2-stop landscape photograph in all of its linear, lively glory without applying some kind of non-linear processing to make the information fit on even the best and most expensive of devices today.