dtaylor said: You are confusing signal (tone variations across 2D space) with dynamic range (the brightest and darkest tones that can be recorded). So is DxO.
Down sampling lets you confidently say that yes, in this tiny region of 2D space we really did detect a tone variation and not just noise fluctuations. It does not mean you recorded a lower min tone.
No, again: the definition of engineering DR is the range of tones between clipping and the point where the signal is swamped by noise (SNR = 1).
Downsampling increases SNR, which makes darker tones more usable.
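To make that concrete, here is a minimal sketch (Python/NumPy) of how an engineering-DR figure falls out of an SNR curve, and how averaging pixels moves the SNR = 1 floor. The full-well and read-noise numbers are invented illustration values under a simple shot-plus-read-noise model, not measurements from any real camera.

```python
import numpy as np

# Toy sensor model: signal in electrons, shot noise = sqrt(signal),
# plus a fixed read noise, combined in quadrature.
# full_well and read_noise are made-up illustration values.
full_well = 60000.0   # clipping level, electrons (assumed)
read_noise = 8.0      # read noise, electrons RMS (assumed)

signal = np.logspace(0, np.log10(full_well), 2000)   # mean signal levels
noise = np.sqrt(signal + read_noise**2)              # shot + read noise
snr = signal / noise

# Engineering DR: from clipping down to the signal where SNR = 1.
floor = signal[np.argmin(np.abs(snr - 1.0))]
print(f"SNR=1 floor ~ {floor:.1f} e-, engineering DR ~ "
      f"{np.log2(full_well / floor):.1f} stops")

# Downsampling: averaging N pixels divides the noise by sqrt(N),
# which pushes the SNR=1 floor lower and raises the measured DR.
N = 4  # e.g. 2x2 binning
snr_binned = signal / (noise / np.sqrt(N))
floor_binned = signal[np.argmin(np.abs(snr_binned - 1.0))]
print(f"after averaging {N} pixels: floor ~ {floor_binned:.1f} e-, DR ~ "
      f"{np.log2(full_well / floor_binned):.1f} stops")
```

With these assumed numbers the SNR = 1 floor sits around 8.5 e-, and 2x2 averaging roughly halves it, adding about a stop of measured DR without touching the clipping point.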
Whether or not the sensor accurately recorded the tone - that's a measure of sensor linearity, which you can also assess in SNR analyses. I do, and so does DxO, actually.
If there's enough noise at the lower end, it raises your average signal and the sensor's response deviates from linearity. This happens pretty early (at the low end) for Canon. It's another way to get an idea of DR, but I haven't found it an acceptable standard for DR measurement yet (i.e. using 'where does it deviate from linearity?' as the lower cutoff, as opposed to SNR = 1 as the lower cutoff), as sketched below.
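Purely as an illustration of that 'deviation from linearity' cutoff, here is one way it could be checked: fit the measured response against known exposure over the clean midrange patches, then find the darkest patch still within some tolerance of the fit. The step-wedge values below are invented, and the 5% tolerance is an arbitrary choice for the sketch, not a standard.

```python
import numpy as np

# Hypothetical step-wedge data: known relative exposure per patch vs.
# measured mean signal (ADU). All values are invented for illustration;
# the darkest patches read high because noise inflates their mean.
exposure = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256, 512], dtype=float)
measured = np.array([4.5, 5.2, 6.8, 10.3, 18.0, 34.1, 66.0,
                     130.0, 258.0, 514.0])

# Fit gain and offset on the clean upper patches only.
gain, offset = np.polyfit(exposure[4:], measured[4:], 1)
predicted = gain * exposure + offset

# Candidate lower cutoff: darkest patch still within 5% of the linear fit.
tolerance = 0.05  # arbitrary choice for the sketch
within = np.abs(measured - predicted) / predicted <= tolerance
lowest_linear = exposure[within].min()
print(f"fit: gain {gain:.3f} ADU per unit exposure, offset {offset:.2f} ADU")
print(f"response stays linear down to relative exposure {lowest_linear:g}")
```

With these invented numbers the three darkest patches fall off the line, so the linearity cutoff lands above where an SNR = 1 floor would; which cutoff to use as the lower bound is exactly the standardisation question above.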
dtaylor said: In the transmission step wedge example I always throw out the signal...the squares in the wedge...is so large to begin with that only extreme noise could obscure it. Therefore you get a true idea of the range of tones that can be recorded.
Wait, what? How do you do an SNR analysis - which is the proper way to measure DR quantitatively - if you throw away the signal??
What are you doing?