What it won't do is have a brighter bright or darker dark, and surely that is the measure of DR, not how many divisions that same range is divided into?
No, sensor DR is the difference between the brightest bright it can record (where it clips) and the darkest dark that is not lost in noise.
But surely, if each pixel has the same well capacity, even though the smaller one performs 'better' for its size, the range of light they can both accurately record is the same, therefore the 'true DR' of the sensor* is the same - for instance, the highlights will blow at the same photon counts.
*True DR would be the difference in light levels from the point where a pixel registers only black to the point where it is full, such that one more photon will not register.
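That per-pixel definition is just the ratio of full-well capacity to the noise floor, expressed in stops. A minimal sketch, using made-up numbers (not measurements of any real sensor):

```python
import math

# Hypothetical per-pixel values, purely to illustrate the definition:
full_well = 60000.0   # electrons at clipping (brightest recordable level)
read_noise = 3.0      # electrons; the darkest level not lost in noise

# Per-pixel ("engineering") DR, in stops:
dr_stops = math.log2(full_well / read_noise)
print(f"DR ≈ {dr_stops:.1f} stops")  # log2(20000) ≈ 14.3
```

Note this says nothing about how finely that range is divided - bit depth is a separate question from the ratio itself.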
Here you're not considering that when you downsize, you average pixels, which increases SNR for the area of pixels averaged. And you do know that areas with SNR < 1 can reach SNR = 1 with enough averaging, right? Therefore, darker tones can be pulled up to SNR = 1, and therefore calculated DR can increase.
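The averaging effect is easy to demonstrate. A quick simulation with assumed numbers (per-pixel signal 0.5 in units of the noise sigma, so per-pixel SNR starts below 1):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 0.5          # mean signal, in units of the noise sigma
read_noise = 1.0      # per-pixel noise sigma -> per-pixel SNR = 0.5

pixels = signal + rng.normal(0.0, read_noise, size=1_000_000)

# Per-pixel SNR: mean over standard deviation
snr_pixel = pixels.mean() / pixels.std()

# Average non-overlapping blocks of 16 pixels (a 4x4 downsample):
# noise sigma drops by sqrt(16) = 4, so SNR rises roughly 4x
blocks = pixels.reshape(-1, 16).mean(axis=1)
snr_block = blocks.mean() / blocks.std()

print(f"per-pixel SNR ≈ {snr_pixel:.2f}")     # ~0.5, below the SNR=1 cutoff
print(f"16-px average SNR ≈ {snr_block:.2f}")  # ~2.0, above the cutoff
```

A tone that failed the SNR = 1 cutoff at the pixel level passes it after averaging, which is exactly why the calculated DR goes up when you downsize.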
Normalization is a nice way of comparing different things, but it doesn't reflect true DR recording capacity, and truthfully shouldn't be labeled DR. This is one of the many reasons there is such a difference of opinion between people who love tests and equations, and people who look at the differences in images.
Nonsense. There are those that can do both: love the math, and correlate the science/math to image quality differences. There's a reason for controlled tests - when done right, they reflect real-world differences in actual images.
Those in sensor design know this.
Noise and banding are what truthfully differentiate the current sensors, and that difference is nowhere near this mythical 3.1 stops of "DR". People who regularly use or work files from both know the differences are in the shadows and are closer to two stops: Canon files can be lifted 3 stops in the shadows with very high quality results; Exmor files can be lifted closer to five stops in the shadows, but by the intrinsic nature of gamma curves lose a lot of tonality if you need to do that.
Again, no. Do the proper side by side, and it's not a 'mythical' difference. But you have to know how to do the test right. I.e. don't confuse photon shot noise in an exposure 3 EV under for sensor noise.
The respectable Bill Claff's data, or a higher SNR cutoff, shows a difference of 2.5 EV. So now we're arguing over half a stop?
The point is that there are almost no tones in the 14-bit D810 file that can't be used b/c of read noise. If you can't use them, it's b/c you didn't collect enough light to begin with down there. That's impressive, b/c it means the only way you can really get anything better is to use a bigger sensor. For the same reason that high ISO performance would increase with a larger sensor - collecting more light.
Furthermore, I've said time and again - it's not about 'how many stops you can push'. It's about what particular tones in the 14-bit file you can and can't work with. You cannot simplify it to 'Exmor can be pushed X stops and Canon can be pushed Y stops'. That's just dead wrong, if you're trying to be rigorous or quantitative, anyway.
As was evident in a recent post here with A7R RAW files available, large areas of 5-stop-lifted shadow detail hold almost no tonality, which mitigates the usefulness of the capability. That doesn't mean Canon shouldn't have it; it just means that when we do have it, don't expect to get the same results from a 'normal' exposure and an underexposed image that is then lifted to 'normal'. Tonality does not work like that, and that was demonstrated in another thread here recently too.
Ok, but that has to do with photon shot noise. It's the same reason some people find extremely high ISO shots unacceptable. Because tones are made with too little light. Same with tones down in the depths of the 14-bit file. They're made with too little light.
So basically what you're arguing now is that you want a DR measure with a higher SNR cutoff. That's fine, but just realize what it is you're actually saying.
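To make "what you're actually saying" concrete: raising the SNR cutoff moves the noise floor up and shrinks the reported DR. A sketch with hypothetical pixel numbers, modeling total noise as shot noise plus read noise:

```python
import math

def dr_at_cutoff(full_well, read_noise, cutoff):
    """DR in stops, with the floor defined as the signal S (electrons)
    where SNR = S / sqrt(S + read_noise**2) equals `cutoff`.
    Solving S**2 = cutoff**2 * (S + read_noise**2) for S:"""
    c2 = cutoff ** 2
    s_floor = (c2 + math.sqrt(c2 ** 2 + 4 * c2 * read_noise ** 2)) / 2
    return math.log2(full_well / s_floor)

# Hypothetical pixel: 60k e- full well, 3 e- read noise
print(f"SNR=1 cutoff: {dr_at_cutoff(60000, 3, 1):.1f} stops")
print(f"SNR=4 cutoff: {dr_at_cutoff(60000, 3, 4):.1f} stops")
```

With these assumed numbers, going from an SNR = 1 cutoff to SNR = 4 costs a couple of stops of reported DR - the measure is only meaningful once you state the cutoff.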
Pixel peeping, or taking a crop at 100%, is frequently done in practice, and as you mentioned, then we don't see any increase in DR. So why use a normalized value to compare something that can't be seen in practice?
B/c it's not fair to show this:
... when in reality actual visual comparisons of DR will not place the D800 behind the D600, so the following, normalized
comparison is more accurate:
Again, not sure how we could make it any clearer - you normalize to simulate a comparison at the same viewing size. Downsampling decreases noise, which increases SNR, which means lower tones make it up to your SNR cutoff for DR, which means DR has to increase.
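The arithmetic behind that normalization is simple. Downsampling to a common reference size averages (sensor MP / reference MP) pixels per output pixel, cutting noise by the square root of that ratio, which adds half a stop of DR per doubling of pixel count. A sketch with illustrative numbers only (a 13-stop per-pixel DR is an assumption, not a measured value; the 8 MP reference matches the DxO-style 'Print' convention):

```python
import math

def normalized_dr(per_pixel_dr_stops, sensor_mp, reference_mp=8.0):
    """Per-pixel DR plus the gain from downsampling to reference_mp:
    averaging (sensor_mp / reference_mp) pixels cuts noise by the
    square root of that ratio -> DR gains 0.5 * log2(ratio) stops."""
    gain = 0.5 * math.log2(sensor_mp / reference_mp)
    return per_pixel_dr_stops + gain

# Illustrative numbers only (not measured values):
print(f"{normalized_dr(13.0, 36.3):.2f} stops")  # 36 MP sensor gains ~1.09 stops
print(f"{normalized_dr(13.0, 24.3):.2f} stops")  # 24 MP sensor gains ~0.80 stops
```

So at equal per-pixel DR, the higher-MP sensor scores higher once normalized - which is why comparing at 100% crop and comparing at equal print size give different rankings.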
No one's arguing anything about the absolute number and whether or not it reflects exactly the DR someone may actually see. But you have to normalize for comparisons.