Tool - D810 vs. 5D Mk3

privatebydesign said:
What it won't do is have a brighter bright or darker dark, and surely that is the measure of DR, not how many divisions that same range is divided into?

Actually, it may be an oversimplification, but yes: downsampling will give you darker dark tones, as the noise (which lightens them) is averaged out.
Therefore, greater effective DR, when measured as the ratio of light to dark at some SNR limit.
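To make that concrete, here's a quick sketch (my own illustrative numbers, not from any real camera): averaging N noisy pixels cuts random noise by roughly sqrt(N), which lowers the noise floor and so extends the range measurable at a fixed SNR cutoff.

```python
# Illustrative sketch: 2x2 pixel averaging halves random noise,
# lifting a dark tone from below SNR = 1 to roughly SNR = 1.
import math
import random

random.seed(0)

read_noise = 4.0   # hypothetical per-pixel noise, in DN
signal = 2.0       # a dark tone sitting below the per-pixel noise floor
n_pixels = 10000

# Per-pixel samples: signal plus Gaussian read noise.
samples = [signal + random.gauss(0, read_noise) for _ in range(n_pixels)]

# Downsample by averaging 4 pixels at a time (2x2 binning).
binned = [sum(samples[i:i + 4]) / 4 for i in range(0, n_pixels, 4)]

def noise(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

print(f"per-pixel SNR: {signal / noise(samples):.2f}")  # ~0.5, lost in noise
print(f"binned SNR:    {signal / noise(binned):.2f}")   # ~1.0, at the cutoff
# Halving the noise floor adds log2(2) = 1 stop of measurable DR.
```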
 
Aglet said:
privatebydesign said:
What it won't do is have a brighter bright or darker dark, and surely that is the measure of DR, not how many divisions that same range is divided into?

Actually, it may be an oversimplification, but yes: downsampling will give you darker dark tones, as the noise (which lightens them) is averaged out.
Therefore, greater effective DR, when measured as the ratio of light to dark at some SNR limit.

Daniel already covered that point, and we all agree, some dark tones are liberated by noise mitigation, but that doesn't alter the fact that black and white still have the same luminance values.

Going back to my initial question: if a single pixel has a well capacity giving 14 stops of DR recording capacity, how can downsampling that give me a brighter or darker luminance? If 0 equals black and 16,383 equals white, where is the extra capacity? What is being said is that a 'noisy' sensor can't record detail in the 0-1,000 range (for example), while by comparison a 'clean' sensor can record greys in the 500-1,000 range, 'so it has more DR'. I say not. I say the total range from black to white is still the same: the clean sensor has more tonality between black and white, but it doesn't have more luminosity range between black and white.

Maybe that is where the difference is: I, and everybody else ever, have equated photographic DR with the range of luminosity values, while the sensor geeks insist on referring to it as levels of tonality.
 
dtaylor said:
In all examples to date the actual total DR difference is very small. Noise is very different which of course affects latitude and what is acceptable when exercising said latitude on the shadow side.

Very small?

Yes. Canon sensors are not blocking up a lot sooner than an Exmor sensor (though they do block up a little sooner). But the noise makes detail in the lowest tones unacceptable, when pushed higher on the scale, for most photographic purposes.

So you'd prefer words like 'little' or 'very small' compared to quantitative numbers like 2.5 EV difference, etc.

Ok, now that was totally worth having a conversation about. I'm glad I now know that when one says that you can pull tones that received 10x less exposure (an 'order of magnitude' to 'scientists', but who understands them anyway), you consider that 'small' or 'little'. That was so worth arguing over.

dtaylor said:
Yes, let's refer to Ansel Adams to talk about sensor DR. Because sensors totally existed back then.

They did. They were called "film."

... which required a slightly different method to analyze DR back then than the best method to measure DR of sensors now. You measured film densities, when and where film could no longer distinguish tones. And that's exactly what SNR analyses do now, even more rigorously. But the techniques are subtly different. As they should be, separated decades apart.

What's your point?

dtaylor said:
But the reality of it is that you measure dynamic range of sensors in a different way.

The definition and model of photographic dynamic range does not change based on capture medium.

I agree with your statement here. I also agree with my previous statement you quoted. They're not mutually exclusive. You're still not getting it.

I give up; this is like knocking one's head against the wall. I'm not continuing a conversation with someone who finds it fun to knock science, even though science was used to make his camera, and evaluate it as well.

There are those here actually making valid points and asking valid questions; all you do is derail and misinform.
 
privatebydesign said:
Aglet said:
privatebydesign said:
What it won't do is have a brighter bright or darker dark, and surely that is the measure of DR, not how many divisions that same range is divided into?

Actually, it may be an oversimplification, but yes: downsampling will give you darker dark tones, as the noise (which lightens them) is averaged out.
Therefore, greater effective DR, when measured as the ratio of light to dark at some SNR limit.

Daniel already covered that point, and we all agree, some dark tones are liberated by noise mitigation, but that doesn't alter the fact that black and white still have the same luminance values.

Going back to my initial question: if a single pixel has a well capacity giving 14 stops of DR recording capacity, how can downsampling that give me a brighter or darker luminance? If 0 equals black and 16,383 equals white, where is the extra capacity? What is being said is that a 'noisy' sensor can't record detail in the 0-1,000 range (for example), while by comparison a 'clean' sensor can record greys in the 500-1,000 range, 'so it has more DR'. I say not. I say the total range from black to white is still the same: the clean sensor has more tonality between black and white, but it doesn't have more luminosity range between black and white.

Maybe that is where the difference is: I, and everybody else ever, have equated photographic DR with the range of luminosity values, while the sensor geeks insist on referring to it as levels of tonality.

I've already explained it, but it probably got lost in all the noise.

You've got it, pvd: "some dark tones are liberated by noise mitigation" --> exactly!

But then you said "but that doesn't alter the fact that black and white still have the same luminance values." That's where things are falling apart.

What is 'black'? There is no definition of 'black' other than 'signal at the noise floor'. Well, if you've just brought signals above the noise floor by 'liberating them with noise mitigation', then they've become usable tones. To see them, though, you need your output device to bring them up to a usable level. Today's monitors and prints don't generally do that, which is why we push shadows --> essentially tone-mapping.

And I don't know how much clearer I could make it than when I showed the D800 vs. D600 Screen vs. Print DR curves. No sensible person testing the two cameras would give the D600 more DR, which is what the 'screen' (pixel-level) analysis indicates. The normalized analysis indicates they're pretty much the same - which is exactly what you'll find in the real world if you try to shoot high-DR landscapes or push shadows from equally exposed files.

Please tell me it makes sense now?
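For anyone who prefers arithmetic to curves: here's a sketch of what that 'Print' normalization does, assuming the usual model where resampling to a common 8 MP output scales noise by the square root of the megapixel ratio. The EV figures below are made up purely for illustration.

```python
# Sketch of 'Print' (normalized) DR from pixel-level ('Screen') DR.
# Assumed model: downsampling N:1 averages N pixels, cutting noise by
# sqrt(N), i.e. adding 0.5 * log2(N) stops to the pixel-level figure.
import math

def print_dr(screen_dr_stops, megapixels, target_mp=8.0):
    return screen_dr_stops + 0.5 * math.log2(megapixels / target_mp)

# Hypothetical numbers: a high-MP body with slightly worse pixel-level DR
# can match a lower-MP body once both are normalized to the same output.
print(f"36 MP, 13.0 EV screen -> {print_dr(13.0, 36):.2f} EV print")  # ~14.1
print(f"24 MP, 13.2 EV screen -> {print_dr(13.2, 24):.2f} EV print")  # ~14.0
```

This is why a pixel-level comparison can rank two sensors one way while a normalized comparison ranks them nearly even.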
 
Actually, I think I can see PBD's point of view on this - you can't average a bunch of zeros and get a lower zero.
But you can average a bunch of slightly-above-zero shades plus noise, which has the effect of increasing effective DR, because now there's more usable tonality.

I think the difference in the arguments is the threshold chosen for 'black': numerical 0 = black vs. SNR = 1 = black.

EDIT: there has to be some intrusive amount of noise in order for the downsampling to actually effect an improvement. If there's insufficient noise, then the noise floor is ~0 and you can't create more total tonal range from that data (though you might be able to smooth it to create more discernible shades/tones).
The DxOMark-calculated downsampled effective DR being greater than the possible output range is just a mathematical creation, useful only as a figure of merit. But that also does not negate the value of a lower read-noise sensor, as it truly can deliver more accurate tones and smoother shading in the deep shadow areas.
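That EDIT is essentially the dither argument, and it's easy to simulate (illustrative values only, my own sketch): a sub-LSB signal quantized with no noise averages to exactly zero, while the same signal plus enough noise survives averaging.

```python
# Sketch: noise acting as dither. A tone below 1 DN (the quantization
# step) is unrecoverable by averaging if there's no noise, but
# recoverable if noise straddles the quantization threshold.
import random

random.seed(1)

signal = 0.4      # a tone below one quantization step
n = 100000

clean = [round(signal) for _ in range(n)]                      # no noise
dithered = [round(signal + random.gauss(0, 0.5)) for _ in range(n)]

print(sum(clean) / n)               # 0.0 -- averaging zeros stays zero
print(f"{sum(dithered) / n:.2f}")   # ~0.40 -- noise let the tone through
```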
 
Aglet said:
Actually, I think I can see PBD's point of view on this - you can't average a bunch of zeros and get a lower zero.
But you can average a bunch of slightly-above-zero shades plus noise, which has the effect of increasing effective DR, because now there's more usable tonality.

I think the difference in the arguments is the threshold chosen for 'black': numerical 0 = black vs. SNR = 1 = black.

Yes, PBD's question is very valid, and interesting, and I can totally see his confusion. The problem comes in the actual numbers. I'm not absolutely certain I believe the absolute DxO numbers - as in, what's actually usable by a photographer. SNR = 1 data is not usable by a photographer.

But I couldn't have said it better, Aglet - exactly: it's that other (e.g. single-digit) tones become more usable upon downsampling.
 
Aglet said:
Actually, I think I can see PBD's point of view on this - you can't average a bunch of zeros and get a lower zero.
But you can average a bunch of slightly-above-zero shades plus noise, which has the effect of increasing effective DR, because now there's more usable tonality.

I think the difference in the arguments is the threshold chosen for 'black': numerical 0 = black vs. SNR = 1 = black.

Yes, you guys call 'usable tonality' 'sensor DR'. I have never understood that to be a way of stating 'photographic DR'; I only know and understand the difference in recordable luminosity values.

Again, I think this is an area where the technologists have confounded and annoyed the photographers. When I, and millions of others, think of photographic DR we are thinking about the difference in scene luminosity we can actually record, not the point at which the dark tones become noisy. Replicating that capability on devices with a much smaller luminosity range is not and never has been the question.

So who has some RAW step wedge files to upload?
 
Aglet said:
Actually, I think I can see PBD's point of view on this - you can't average a bunch of zeros and get a lower zero.
But you can average a bunch of slightly-above-zero shades plus noise, which has the effect of increasing effective DR, because now there's more usable tonality.

I think the difference in the arguments is the threshold chosen for 'black': numerical 0 = black vs. SNR = 1 = black.

Yes, 'black' is defined by your cutoff. For a more reasonable SNR threshold of 2 or 3 (or corrected for CoC, as Bill Claff does), 'black' is actually a significantly higher number than 0 or 1. And that's where it's conceptually easier to see what downsampling can do.
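A toy sensor model makes the cutoff's effect visible (assumed numbers, purely for illustration: 60,000 e- full well, 5 e- read noise, shot noise of sqrt(S)):

```python
# Sketch: the measured DR depends on where you put the SNR cutoff.
import math

FULL_WELL = 60000.0   # hypothetical full-well capacity, electrons
READ_NOISE = 5.0      # hypothetical read noise, electrons

def snr(signal_e):
    # Shot noise sqrt(S) added in quadrature with read noise.
    return signal_e / math.sqrt(signal_e + READ_NOISE ** 2)

def dr_stops(snr_cutoff):
    # Binary search for the smallest signal reaching the cutoff SNR.
    lo, hi = 1e-3, FULL_WELL
    for _ in range(60):
        mid = (lo + hi) / 2
        if snr(mid) < snr_cutoff:
            lo = mid
        else:
            hi = mid
    return math.log2(FULL_WELL / hi)

for cutoff in (1, 2, 4):
    # Roughly 13.4 / 12.3 / 11.0 stops with these numbers: raising the
    # cutoff raises the 'black' point and shrinks the quoted DR.
    print(f"SNR >= {cutoff}: DR = {dr_stops(cutoff):.1f} stops")
```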

So maybe the breakdown in understanding occurs b/c DxO keeps quoting numbers in and around the bit-depth of the ADC.

The principle of downsampling helping tones become usable is not hard to see. Check out the pixel-level noise of the D810 vs the A7s in DPR's studio scene here at ISO 12.8k:

http://www.dpreview.com/reviews/image-comparison/fullscreen?attr18=lowlight&attr13_0=nikon_d810&attr13_1=sony_a7s&attr13_2=sony_a7r&attr13_3=sony_a7s&attr15_0=raw&attr15_1=raw&attr15_2=raw&attr15_3=raw&attr16_0=12800&attr16_1=12800&attr16_2=12800&attr16_3=12800&normalization=full&widget=1&x=-0.33073366646429747&y=0.23770150083379657

Not very usable, right?

Now look at it normalized:

http://www.dpreview.com/reviews/image-comparison/fullscreen?attr18=lowlight&attr13_0=nikon_d810&attr13_1=sony_a7s&attr13_2=sony_a7r&attr13_3=sony_a7s&attr15_0=raw&attr15_1=raw&attr15_2=raw&attr15_3=raw&attr16_0=12800&attr16_1=12800&attr16_2=12800&attr16_3=12800&normalization=full&widget=1&x=-0.33073366646429747&y=0.23770150083379657

Look at the stripes in the black jacket: suddenly you can see them, b/c they're not swamped in noise. Suddenly they become more usable. Those were dark tones that weren't very usable at 36 MP, but are now usable at 8 MP. B/c after pixel averaging, the SNR of those tones went up, above the arbitrary threshold I selected when I said that jacket was 'not usable' at the pixel level.

Here it is again as screenshots:

Pixel-level (note how you can't even make out the vertical stripes in the black jacket on the rightmost Beatle, b/c they're lost in noise):
D810_vs_A7S-PixelLevel.png


Normalized (you can now make out the stripes, just like you can on the A7S):
D810_vs_A7S-Normalized.png
 
privatebydesign said:
Yes, you guys call 'usable tonality' 'sensor DR'. I have never understood that to be a way of stating 'photographic DR'; I only know and understand the difference in recordable luminosity values.

Again, I think this is an area where the technologists have confounded and annoyed the photographers. When I, and millions of others, think of photographic DR we are thinking about the difference in scene luminosity we can actually record, not the point at which the dark tones become noisy. Replicating that capability on devices with a much smaller luminosity range is not and never has been the question.

So who has some RAW step wedge files to upload?

But 'the difference in scene luminosity we can actually record' is exactly what DxO is measuring. Because their DR is defined as the brightest bright vs. the darkest dark that is not lost in noise. 'Not lost in noise' is where SNR = 1, according to DxO. This is known as 'engineering dynamic range'.

Would you like DxO's lower cutoff to be higher, since you can't use SNR = 1 (where tones are completely lost to noise)?

SNR = 1 is used as the lower cutoff b/c different folks could argue till the cows come home what SNR is usable.

If that's what bugs you, then use Bill Claff's excellent analyses, where he defines a 'photographic DR' using a higher SNR cutoff:

http://cl.ly/Y2gu/D810_vs_A7S-PixelLevel.png

The differences between cameras are still fairly similar to DxO's findings, but the absolute numbers are different. Higher SNR cutoffs on the low end tend to shrink the differences between cameras of the same sensor size. After a certain point, a higher SNR cutoff won't even distinguish between cameras of similar sensor sizes, b/c the lower SNR cutoff will be dominated by the effects of shot noise (which'll be similar between cameras of similar sensor sizes), so that's not helpful either.

But Bill Claff's results vs. DxO's normalized results differ by just half a stop (2.5 EV vs. 3 EV) for the D810 vs. the 5D3, for example. Not exactly earth-shattering.

Is that what this entire debate, and all this arguing, is actually about in the end?
 