"And afterward, as we have I hope established, downscaling from the higher resolution is no longer limited to the 16,000 photons per pixel. When we are downscaling, we can work in a deeper numerical space, allowing the equivalent of, say, 64,000 photons per pixel without incident. We can normalize the result of the downscaling to the noise floor of 10 photons."

NO.

"And as a result, should this analysis apply (and please tell me how it doesn't): greater sensor photocell density, while maintaining a relatively constant dynamic range per photocell, will increase the dynamic range deliverable in a given lower-than-native resolution output simply by virtue of increasing the effective headroom by dividing the load onto more photocells."
Ignoring electron gain for the moment, your hardware is still limited to 16,000 photons per well.
Let's say we have an ideal monochrome 32x32 pixel sensor (1,024 pixels), with a maximum saturation of 16,384 electrons per well, an electron gain of 1 (one photon = one electron), and read noise with a standard deviation of ±5 electrons. If you capture 17,384 photons per pixel in the center 16x16 region, those 256 wells are fully saturated, registering 16,384 electrons per well. The excess 1,000 photons per well are wasted. The diode cannot hold any more electrons, so any additional photon strikes are either reflected or converted to heat (which, itself, could end up becoming an electron in a neighboring pixel or elsewhere in the electronics, contributing to noise). Now you digitize with a 14-bit ADC. Assuming a gain of 1, you get 1,024 pixel values, the center 256 of which sit at the maximum value of 16,384 (pure white). The center white pixels are bordered by a simple grayscale gradient falloff, with the outer edge containing pixel values of 0 ±5 (which, after clipping at zero, leaves pixels with values between 0 and 5).
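To make that concrete, here is a quick Python/NumPy sketch of the capture. The well depth, noise sigma, and scene numbers are the ones from the example above; the gradient falloff is omitted for brevity, and the seed, names, and rounding choices are purely illustrative:

```python
import numpy as np

FULL_WELL = 16384   # electrons per well at saturation
NOISE_SIGMA = 5     # read-noise standard deviation, in electrons

rng = np.random.default_rng(0)

# Scene: 17,384 photons per pixel in the center 16x16 region, ~0 at the
# edges (the in-between gradient is left out here).
photons = np.zeros((32, 32))
photons[8:24, 8:24] = 17384

# Photon -> electron conversion at gain 1, plus read noise, then clip:
# the well cannot hold more than FULL_WELL electrons, and the digitized
# value cannot go below 0.
electrons = photons + rng.normal(0, NOISE_SIGMA, photons.shape)
raw = np.clip(np.round(electrons), 0, FULL_WELL).astype(np.int32)

print(raw[16, 16])  # 16384 -- the excess ~1,000 photons are simply lost
print(raw[0, 0])    # in the 0-5 neighborhood -- the noise floor at the edge
```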
Now you want to downsample. Your digitized image has 256 pixels in the center with BAKED IN MAXIMUM VALUES. Let's downscale considerably, to a 3x3 pixel image, so the center 256 pixels average into the single center pixel of the output. That center pixel is...still pure white. The border of pixels around it becomes roughly an even gray (the average of gradient values stepping from 0 ±5 up to 15,360 in 1,024-level increments), with the corner pixels trending darker than the rest, but none actually reaching the ±5 noise floor.
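And the downsample itself, continuing that sketch. A real 32-to-3 resample has fractional pixel footprints; I've rounded the grid into near-equal blocks to keep the arithmetic obvious, which changes nothing about the conclusion:

```python
import numpy as np

# Rebuild the digitized frame from the previous sketch (or reuse `raw`
# from it): an overexposed 16x16 center, near-zero elsewhere.
rng = np.random.default_rng(0)
photons = np.zeros((32, 32))
photons[8:24, 8:24] = 17384
raw = np.clip(np.round(photons + rng.normal(0, 5, photons.shape)), 0, 16384)

# Box-average down to 3x3. 32 doesn't divide evenly by 3, so the block
# edges are rounded to the nearest pixel.
edges = [0, 11, 21, 32]
small = np.array([[raw[edges[i]:edges[i+1], edges[j]:edges[j+1]].mean()
                   for j in range(3)] for i in range(3)])
print(small.round().astype(int))
# Center output pixel: exactly 16384 -- still pure white. The pixels
# around it come out as various grays (corners darkest), and nothing
# falls back to the 0-5 noise floor. No highlight headroom appears.
```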
Two key points here (both are shown in a short runnable sketch after this list):
1. Averaging pixels cannot increase highlight range, so if highlights are blown, they are blown regardless of how big the image is.
- Assuming you have a 3x3 area of nine maximum-saturation pixels, averaging them all into a single output pixel will always result in a pixel of maximum value.
- (16384 + 16384 + 16384 + 16384 + 16384 + 16384 + 16384 + 16384 + 16384) / 9 = 147456/9 = 16384
- The only thing that changes is the physical dimensions of the blown area...which would logically shrink in a downscaled image.
2. Averaging pixels also cannot increase shadow range; at best it maintains it, and on average it will likely reduce shadow DR.
- If you have a 3x3 area of nine pixels ranging in value from 0 to 5, averaging them all into a single output pixel will rarely yield zero unless the majority of the source pixels are zero.
- (0 + 2 + 1 + 5 + 2 + 4 + 0 + 0 + 3) / 9 = 17/9 ≈ 1.89, which rounds to 2
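Here is that promised sketch: just the two averages above, verbatim, in Python:

```python
# Point 1: nine blown pixels average to a blown pixel.
blown = [16384] * 9
print(sum(blown) / 9)           # 16384.0 -- still pure white

# Point 2: a 0-5 spread of shadow values collapses to a flat mid-value.
shadows = [0, 2, 1, 5, 2, 4, 0, 0, 3]
print(round(sum(shadows) / 9))  # 2 -- the per-pixel variation is gone
```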
Mathematically, once you start averaging pixels that hold baked-in representations of an analog signal, you cannot gain dynamic range by downsampling (averaging multiple source pixel values into single output pixels), even with more complex algorithms like bilinear or bicubic filtering (which mostly spread the averaging over a greater area, at an additional cost in detail). When it comes to dynamic range, you are more likely to LOSE DR (as demonstrated in point 2 above, where a source range of 0-5 collapsed to a flat 2), and at best you might keep it the same. You will never be able to recover what you lost by not having a deep enough pixel well to start with.
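The reason fancier kernels don't change this: box and bilinear resampling compute convex combinations (non-negative weights summing to 1), so the output is pinned between the smallest and largest inputs. Bicubic kernels do have small negative lobes and can ring at hard edges, but any overshoot is a ringing artifact, not recovered highlight detail. A tiny illustration, with made-up sample values and weights:

```python
import numpy as np

# Any non-negative weights that sum to 1 (box, bilinear) keep the output
# between min(inputs) and max(inputs). These numbers are invented.
values = np.array([16384.0, 16384.0, 16384.0, 12000.0])  # a blown edge
weights = np.array([0.5, 0.25, 0.125, 0.125])            # w_i >= 0, sum = 1

out = float(np.dot(weights, values))
print(out)                                  # 15836.0 -- never above 16384
print(values.min() <= out <= values.max())  # True, by construction
```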
Relating this to DXO:
1. Screen DR is representative of the post-ADC, baked-in RAW representation of the analog sensor signal (divided by a possible gain factor).
2. Print DR is representative of a downsampled RGB image generated from that RAW data, which is itself only a representation of the analog sensor signal.
If you want to know what your sensor is capable of, the DXO "Screen DR" measurement is the closest you are going to get to a hardware reading. As for Print DR...mathematically, with simple downscaling, you can't gain DR.
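For what it's worth, one common engineering definition of sensor DR (my gloss, not necessarily DXO's exact protocol) pins down the ceiling for the hypothetical sensor above:

```python
import math

# DR in stops = log2(full-well capacity / noise floor). These are the
# hypothetical sensor's numbers, not any real camera's.
full_well = 16384    # electrons at saturation
noise_floor = 5      # electrons (read-noise standard deviation)

print(f"{math.log2(full_well / noise_floor):.2f} stops")  # ~11.68
# And a 14-bit ADC can encode at most 14 stops (log2(2**14) = 14),
# no matter what the downstream math does.
```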
I am not really sure what Print DR represents (I used to think I did, but DXO showing the D800 gaining 1.17 stops of DR simply by downsampling, putting its "Print DR" ABOVE the 14-stop limit the hardware imposes...has me utterly baffled and finding the measurement rather useless as a result). Whatever it is, it is not actually indicative of what the actual camera HARDWARE is capable of. It is far more indicative of what digital algorithms can do with an image once it has been converted from analog to digital and turned into a RAW or RGB file, and that really has nothing to do with the camera, and everything to do with software and your desktop computer.