elflord said:
jrista said:
I don't necessarily disagree, however I think you're starting to conflate contexts. I was trying to discuss DR in terms of a digitized image in the context of scaling, which I think narrows the scope and simplifies things a bit. You're now talking about DR in a much larger context, that of the sensor. That moves us out of the realm of the digital and into the realm of the analog, and I fully agree: dynamic range in the analog realm of a sensor is an entirely different beast, and a much more complex discussion. But...we were originally talking about the dynamic range gained by the act of downscaling a high resolution image. In that context, I don't believe we can "interchange" the dynamic range gained by normalizing read noise...which always exists in the lower levels of a digital image...with highlights. You would always be gaining on the shadow end when you normalize and average read noise, but I think we've both demonstrated that the gain is small, even though it can be called a "full stop's worth".
A stop of dynamic range in the shadows is interchangeable with a stop of dynamic range in the highlights, because you can always meter differently (e.g. underexpose by a stop).
You went off on a bit of a tangent suggesting that the extra dynamic range does not substantially increase the number of "levels" of luminance available, and I showed that it actually does (more precisely, that if I reduce noise by a factor of two, I get twice as many luminance levels). I also showed that by downsampling you can trade spatial resolution for both dynamic range and number of luminance levels.
So the analysis where you try to demonstrate that the difference is "small" by using that table is incorrect. It's incorrect because when you add noise, you don't just lose the "bottom stop(s)" on the table (for example, levels 1-15) and keep all the others; you're really losing information content in the low-order bit(s). You're not eating away at the "bottom of the table", you're eating away at the low-order bits.
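For concreteness, here's a rough numeric sketch of that point (the numbers are hypothetical: a 14-bit linear scale with Gaussian read noise of sigma DN):

```python
import numpy as np

# Hypothetical numbers: a 14-bit linear scale with Gaussian read
# noise of sigma DN.
full_scale = 2**14 - 1  # 16383 DN

for sigma in (4.0, 2.0, 1.0):
    # Distinguishable luminance levels ~ full scale / noise floor:
    # halving sigma doubles the count. The noise blurs the low-order
    # bits everywhere; it doesn't just delete the bottom table rows.
    levels = full_scale / sigma
    print(f"sigma = {sigma:.0f} DN -> ~{levels:,.0f} levels (~{np.log2(levels):.1f} stops)")
```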
Perhaps we are on different pages. Once an image is digitized, it's digitized. It has a fixed bit depth. In the case of modern DSLRs, the 14-bit output of a RAW is fixed, and the physical dimensions of that image are also fixed (so downscaling really isn't an option to start with...not if you wish to continue working with the image as a RAW image.) If you do export that 14-bit RAW to, say, TIFF, then you now have a 16-bit image. The number of bits is fixed. It doesn't change. If you scale that TIFF image down, yes, you can mitigate noise. You'll really be mitigating two types: photon shot noise and read noise. When it comes to photon shot noise, the D800 doesn't have any real advantage over any other camera, and the benefit of scaling would be the same for any image. When it comes to read noise, that noise only exists in the black and shadow levels. If you scale an image down, you're only affecting the bottom small percentage of the total tonal range of your TIFF image. You could certainly move the gray point around, but you're not redistributing bits...you're only redistributing the existing tonal levels in the image...so the gain of a few levels in the shadows isn't going to translate into thousands of highlight levels by moving the gray point around after downscaling.
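Here's a minimal sketch of that noise averaging, assuming synthetic data (a flat gray patch plus Gaussian read noise) and a plain 2x2 box downsample:

```python
import numpy as np

# Synthetic data: a flat gray patch (mean 512 DN) with Gaussian read
# noise of sigma = 8 DN.
rng = np.random.default_rng(0)
patch = 512.0 + rng.normal(0.0, 8.0, size=(1024, 1024))

# 2x2 box downsample: each output pixel averages 4 inputs, so the
# noise standard deviation should drop by about sqrt(4) = 2, i.e.
# the shadow floor deepens by about one stop.
down = patch.reshape(512, 2, 512, 2).mean(axis=(1, 3))

print(f"full-res noise:    {patch.std():.2f} DN")  # ~8
print(f"downsampled noise: {down.std():.2f} DN")   # ~4
```

Averaging four pixels cuts the noise standard deviation roughly in half, which is exactly the "full stop's worth" of shadow gain being discussed.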
Again, though, I've been trying to discuss this topic in the context of a digital image on a computer. You keep conflating the issue by bringing in the behavior of the hardware in a camera. I'm not talking about metering and adjusting the exposure value before the exposure. I'm talking about working an image in post after it's been digitized by the ADC and imported off the camera/memory card, as the original debate was whether you can really gain over a stop of DR by the simple act of scaling an image down (an act that occurs well beyond the camera, so discussing how you can use the DR of the hardware to gain shadow or highlight range is out of context.) I believe you can gain a couple stops of DR by downscaling, however since it is in the "lower order bits", or in the darkest tonal levels of an image, the gain is minimal. We're not talking about a huge difference overall, we are talking about a very small difference overall. That difference may well improve the dynamic range of your shadow detail a bit, but it's not like you're gaining more than double the total tonal range you had before (which I had mistakenly thought was the opposing argument.)
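To put a rough number on "the gain is in the lower order bits", here's a back-of-envelope calculation (the noise floors are illustrative: a 14-bit linear scale, the 13.2-stop per-pixel figure discussed below, and an assumed ~1.1-stop downsampling gain):

```python
import numpy as np

# Illustrative noise floors on a 14-bit linear scale: ~13.2 stops of
# per-pixel DR before downsampling, ~1.1 stops more after.
full_scale = 2**14 - 1
floor_before = full_scale / 2**13.2  # ~1.74 DN
floor_after = full_scale / 2**14.3   # ~0.81 DN

# The newly usable tonal range sits between the two floors -- well
# under a hundredth of a percent of the linear scale.
print(f"old floor: {floor_before:.2f} DN, new floor: {floor_after:.2f} DN")
print(f"new range / full scale: {(floor_before - floor_after) / full_scale:.5%}")
```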
elflord said:
I'm still adamant that the D800 sensor is only capable of what it's capable of...which by all indications, including DxO's, is about 13.2 stops.
Yes, that's 13.2 stops per pixel. I'm not sure why this matters so much -- the Canon also drops (by about 0.8 EV) when you go from print to screen, because the two cameras don't differ that much in megapixel count. Depending on whether you use DxO's "screen" or "print" number, the Nikon leads by 2.2 EV or 2.5 EV. I'm not sure why you think those 0.3 EV matter a whole lot -- either way, the Nikon sensor trounces the Canon, so why devote so much effort to trying to prove that the Nikon is "only" 2.2 EV better?
Back to your #95, the D800 user could underexpose by 1.2 stops. If he downsamples to 8 MP, he will be able to recover those shadows and get 14.4 stops of dynamic range. I agree that he can't get 14.4 stops per pixel at full resolution.
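The arithmetic behind that figure, assuming DxO-style normalization to an 8 MP output and ideal pixel averaging (my approximation; it lands at roughly 14.3 stops):

```python
import numpy as np

# Assumed DxO-style normalization to an 8 MP output with ideal pixel
# averaging; the exact resampling filter would change this slightly.
dr_per_pixel = 13.2            # D800 per-pixel ("screen") DR, stops
n_native, n_target = 36.3e6, 8e6

# Averaging k pixels cuts noise by a factor of sqrt(k), which is
# log2(sqrt(k)) stops of extra dynamic range:
gain = np.log2(np.sqrt(n_native / n_target))
print(f"downsampling gain: {gain:.2f} stops")            # ~1.09
print(f"normalized DR: {dr_per_pixel + gain:.1f} stops") # ~14.3
```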
As long as the destination for the image is some fixed size (print or on screen) and not a 100% crop, the "print" benchmark is the one you should care about. So I don't agree, for example, with the notion that medium format, full frame, and crop cameras are equal in terms of dynamic range even if they are equal on a per-pixel basis.
That last statement is your mistake, though. It's also the same mistake DxO makes:
Why the assumption that a "print" is always smaller than native resolution (i.e. never the same as a 100% crop)? The D800 has a native print size of around 17x22 (roughly speaking). If I print at native size, I am not downscaling. That effectively is a 100% crop. There isn't any averaging of pixels going on when I print at native size, and once ink is laid down on paper, at best (assuming I use something like Epson UltraChrome or Canon Lucia ink on a high-luster paper) I might get a dMax of 2-2.3, which is around 6 to 7 stops. The only time DxO's "Print DR" actually results in greater dynamic range is when that 8x12" printable image is viewed on a computer, and even then...you would require a 14-bit display to actually observe all the detail offered by a 14-stop image. Generally speaking, if I buy a camera like the D800, I'm not going to print at just 17x22. I'm going to print huge: 24x36, 30x40, 40x60. Those prints will probably be on canvas (maybe 6 stops), or possibly on a fine art paper (which has a limited dynamic range of around 5 stops.)
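As a sanity check on those print numbers, density converts to stops with a factor of log2(10), since dMax is the log10 of the print's contrast ratio:

```python
import numpy as np

# dMax is log10 of the print's contrast ratio; a stop is a factor
# of two, so stops = dMax * log2(10) ~ dMax * 3.32.
for dmax in (2.0, 2.3):
    print(f"dMax {dmax:.1f} -> {dmax * np.log2(10):.1f} stops")
# dMax 2.0 -> 6.6 stops; dMax 2.3 -> 7.6 stops
```

So a dMax of 2-2.3 corresponds to roughly 6.6-7.6 stops, close to the 6-7 stop range above.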
I argue about this because the entire notion of "Print DR" is assumptive and misleading, and attempts to nail down a specific result in a world (the world of print) that has thousands of potential final output options, viewing distances, inks, color gamuts, lighting scenarios, etc. It's a terrible, very misleading concept. It doesn't belong in the world of objective camera testing, at least not the way DxO does it, where it's a primary measure of sensor IQ.