Calculating Dynamic Range

Status
Not open for further replies.
neuroanatomist said:
V8Beast said:
Great info! Have you used the step wedges on your 7D and 5DII, and if so, how do their measured DR compare to their calculated DR?

I have. Both are less than the calculated DR, in my step wedge tests coming in at about 9 stops at ISO 100 (no difference between them), and (as expected) decreasing with increasing ISO.

Thanks for the info. I'm curious what you'll find with 1Dx, DR wise, once Canon decides to ship them.
 
epsiloneri said:
neuroanatomist said:
The alternate method you suggest I'd call measuring DR, as opposed to calculating it. IMO, measuring is the better way to test DR - that gives you an actual test of the usable DR.

About that wedge test... doesn't the measured DR, on the dark side, depend on the size of the wedge? If the criterion is the ability to distinguish two adjacent patches, and that ability is limited by noise, then I would think a border between patches 4 times longer (so ~4x the pixels along it) would give a factor of ~2 more sensitivity (and thus 1 stop more DR; the eye is very good at seeing structure in noise). If you have a wedge, it should be easy to test: just put the wedge 4 times further away than you normally would and see whether the DR comes out ~1 stop lower.

In the extreme, if each patch were only one pixel big, it's clear the measured DR would be much less due to noise. The other extreme would be two adjacent patches sharing a straight border along the full diagonal of the sensor; that would surely give a much higher "DR".
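A quick numerical sketch of that scaling argument (the signal and noise levels are illustrative, not from any real sensor): averaging n noisy pixels shrinks the noise on the estimated level by sqrt(n), so sampling 4x the pixels buys a factor of ~2 in sensitivity, i.e. about one stop.

```python
import numpy as np

rng = np.random.default_rng(0)

true_signal = 4.0   # mean patch level near the DR floor, arbitrary units (assumed)
noise_sigma = 4.0   # per-pixel noise, same units (assumed)

sems = {}
for side in (8, 16):  # a 16x16 patch has 4x the pixels of an 8x8 patch
    n_pixels = side * side
    patch = rng.normal(true_signal, noise_sigma, size=(side, side))
    # Averaging n pixels shrinks the noise on the mean by sqrt(n):
    sems[side] = noise_sigma / np.sqrt(n_pixels)
    print(f"{side}x{side}: mean {patch.mean():.2f}, noise on mean {sems[side]:.2f}")

# 4x the pixels -> 2x the sensitivity -> ~1 stop more apparent DR
```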

That would make sense for distinguishing the patches visually (which, admittedly, is what I discussed in the example above). But the bigger problem there is that unless you had a monitor capable of displaying the full bit depth of a RAW file, the visual comparison would be limited by the 8 bits of a typical display; you might as well assess the DR of a JPEG. So what I actually do is measure the luminance of the patches on the step wedge in the RAW file, which means the borders between patches aren't relevant; you just need patches large enough to overcome sampling error. As you get down to the darker patches, the signal merges with the noise, and that's the bottom end of the DR.
 
neuroanatomist said:
That would make sense for distinguishing the patches visually (which, admittedly, I discussed in the example above). But, the bigger problem in that instance would be that unless you had a monitor capable of displaying the full bit depth of a RAW file, the visual comparison would be limited by the 8-bits of a typical display - you might as well assess the DR of a jpg file.

Yes, of course. I assumed you just scaled up the intensity of the image to put those regions within the screen's range when looking at them. But you do something more interesting:

neuroanatomist said:
So, what I actually do is measure the luminance of the patches on the step wedge in the RAW file, meaning the borders between patches aren't relevant, you just need patches large enough to overcome sampling error. As you get down to the darker patches, the signal merges with the noise, and that's the bottom end of the DR.

So the difference between what you do and what the "calculating DR" people do is that you actually have a signal that contributes to the noise when finding the DR floor, and by using a wedge you find what that signal is.

To be clear: your criterion is that the signal S equals the noise N. But N is an increasing function of S (something like N = sqrt(R^2 + k*S), where R is the background noise (mostly read-out noise) and k is the gain in electrons/ADU). Therefore N(S=0) < N(S=N), and that is the reason the two methods differ. Does this sound reasonable?
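Putting numbers to that (the values for R, k, and the full-scale level are illustrative, not measured): the criterion S = N(S) means S^2 = R^2 + k*S, whose positive root is the measured signal floor. Comparing it against the dark-frame floor N(S=0) = R shows why the wedge-measured DR comes out lower than the calculated DR.

```python
import math

# Illustrative numbers, not measurements from any particular body:
R = 4.0               # background (read) noise, ADU
k = 2.0               # gain, electrons/ADU -> noise model N = sqrt(R**2 + k*S)
full_scale = 2**14 - 1

# "Calculated" DR uses the dark-frame noise floor N(S=0) = R:
dr_calculated = math.log2(full_scale / R)

# The wedge criterion S = N(S) gives S**2 = R**2 + k*S; take the positive root:
s_floor = (k + math.sqrt(k**2 + 4 * R**2)) / 2
dr_measured = math.log2(full_scale / s_floor)

print(f"calculated DR: {dr_calculated:.2f} stops")
print(f"measured DR:   {dr_measured:.2f} stops")  # lower, since N(0) < N(S=N)
```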
 