Calculating Dynamic Range

Status
Not open for further replies.
Rather than hijack any of the existing threads on DR, I thought I'd start a new one.

I'm still struggling a little with the concept by which a few people (LetTheRightLensIn and maybe a couple others) are computing Dynamic Range.

Maybe my understanding of DR itself is faulty.

Is not DR the total range between brightest bright detail and darkest dark detail that a camera is capable of recording in a single exposure?

If so, how does having two separate exposures, one completely underexposed (body cap on stopped down fast shutter) and one completely overexposed (bright, slow shutter) aid in computing maximum potential DR?

Wouldn't you instead need a to meter at the median brightness point in a scene containing a fairly slow transition to complete dark and a fairly slow transition to complete bright (i.e. overexposed white and underexposed black existing together in a single exposure with smooth gradients towards the center) in order to determine useful DR?

I have a technical background, but I'm looking more for conceptual understanding than anything.

Thanks!
 
The approach of taking a blown out frame and a completely dark frame is valid for calculating DR, but there are inherent assumptions in that calculation (equivalent response between the frames, etc.). The calculated DR is almost always going to be greater than the 'usable DR' (i.e. what you can actually see in an image), since the bottom end of the calculated range is set by the noise floor, and regions only slightly brighter than that might not separate from the noise.

The alternate method you suggest I'd call measuring DR, as opposed to calculating it. IMO, measuring is the better way to test DR - that gives you an actual test of the usable DR.

Practically, it's pretty easy - you just need a step wedge (like the Stouffer T2115 or T4110, they cost about $10) and a bright, homogeneous backlight. For example, below is a crop from the setup I use. The T2115 is on the left, on the right is a chrome-on-glass USAF 1951-type resolution target (which I use for microscope imaging resolution testing). You set the exposure so the transparent top part of the wedge is just at the clipping point, and see how many stops you can distinguish down to the other end (each step on the scale is 0.5 stops for the T2115).
 

Attachments

  • StepWedge.jpg

Tijn

Guest
As I understand it, the RAW files have equal minimum and maximum (normalized) values of 'photon count' for each ISO setting. For several photos taken at the same ISO with the same camera, all overexposed, the overexposed "value" in those areas will be equal. For underexposed photos there will be more of a noise averaging process required, so it may fluctuate a bit, but only within a very small margin. Therefore comparing an underexposed photo and an overexposed photo at the same ISO value will give you a rough practical maximum of possible DR. It could in practice turn out to be even less, but it can't be much more if perfectly overexposed and underexposed areas yield that maximum DR result.
 
neuroanatomist said:
The approach of taking a blown out frame and a completely dark frame is valid for calculating DR, but there are inherent assumptions in that calculation (equivalent response between the frames, etc.).

Shouldn't the inherent assumption of dark frame and white frame calculations be that everything between those two extremes can be recorded by the sensor at one time?

Correct me if I'm wrong again (I probably am): it's "black" when all pixels record 0 and "white" when all pixels record 255; in other words, there are a total of 2^8 brightness values per pixel that can be recorded.

But we aren't concerned with that, we're concerned with how dark it has to be to black out pixels and how bright it has to be to white out pixels... in one scene.

I can go stand my wife up to the wall next to a window on a bright day. If I expose for her, the window will be overexposed white. If I expose for the outside, she'll be underexposed black. If I expose for the median, there will be a little of each (histogram shaped like a U, I guess), and detailed brightness range between those extremes is the total DR my camera can record. Is that what you mean by usable DR?

Put another way, do these methods suggest anything about where highlight and shadow clipping begins in a high DR scene exposed for the middle tones?
 
Another assumption, yes, but it's a pretty safe one - it's very likely that for a given ISO setting, the noise recorded with no light input will be the same from shot to shot (at a controlled temperature), and also very likely that for a given ISO setting, the number of photons required to fill a well will not vary significantly from shot to shot. So, combining data from sequential shots is reasonable.

Your numbers are wrong (or rather, they are correct for an 8-bit capture, but not for the 14-bit ADC that's commonly used now, where the upper bound is 2^14 − 1 instead of 2^8 − 1, i.e. 16383 vs. 255 for white).
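The 8-bit vs. 14-bit upper bounds come straight from the bit depth; a quick sanity check:

```python
# Maximum value an n-bit ADC can record is 2**n - 1.
for bits in (8, 14):
    print(f"{bits}-bit: {2**bits - 1}")  # 8-bit: 255, 14-bit: 16383
```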

As for how dark to get black or how bright to get white in one scene, that's going to depend on the ISO selected for that particular shot. The calculation determines the maximum possible DR for a given ISO setting (and DR decreases with increasing ISO).

In your example, if you exposed at median value, a camera with a narrow DR would give you blown highlights and blocked up shadows, while a camera with a wide DR might not clip on either end, if the DR of the camera exceeds that of the scene.

Yes, that's pretty much what I mean by 'usable' DR. It's relative to the exposure, but wider means you can capture brighter highlights and deeper shadows in the same scene, and narrower DR means you have to choose which way to bias your exposure so you sacrifice either shadow detail or highlight detail.

An alternate way to use the step wedge, and how DPR does it IIRC, is to expose so that the middle of the wedge is exposed to 18% gray, then see how far up the wedge from there until clipping occurs and how far down the wedge from there until the difference between the steps is lost in the noise.
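As a rough illustration of counting wedge steps against a noise floor (a sketch, not the actual DPR procedure; the 14-bit clip point and the 32 DN noise floor are assumed, illustrative values):

```python
import numpy as np

clip = 2**14 - 1          # 14-bit clip point
noise_floor = 32.0        # assumed read-noise level in DN (illustrative)

# T2115-style wedge: 21 steps, 0.5 stops apart, exposed so the
# clearest step sits just at the clipping point.
step_means = clip / 2.0 ** (0.5 * np.arange(21))

# Count how many steps still rise above the noise floor.
visible = step_means > noise_floor
print(0.5 * visible.sum(), "stops distinguishable")
```

With these made-up numbers the count lands near the ~9 stops mentioned later in the thread, but the result depends entirely on the noise floor you assume.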
 
neuroanatomist said:
Another assumption, yes, but it's a pretty safe one - it's very likely that for a given ISO setting, the noise recorded with no light input will be the same from shot to shot

What does noise have to do with the equation?

In the film world, if I take a thin negative and a thick negative, can I indeed draw any conclusions about the total range capable of being recorded on a single frame?

I guess what I'm missing is how one takes a completely dark frame and a completely white frame and figures out what happens in between.

Is there actually a brightness gradient in those frames (i.e. not all 0 for the unexposed and 2^14 − 1 for the overexposed frame) that is used to generate a curve for interpolation?

I totally get using the physical gauge (wedge block). To me that's intuitive.
 
Noise is fundamental in determining the bottom end of the DR. The idea is that DR is the range between the highest measurable brightness and the lowest measurable darkness. The highest measurable brightness is easy - you just fill all the wells; they have a capacity in terms of the number of electrons they hold (say, 30,000 e-, for example). But at the low end, noise is the determining factor. Are 3 photons enough to generate a signal? 10? 30? It depends on the noise, since electronic noise is inherent in any electronic system. So, you measure the noise with no light input - anything above what you measure as noise must be signal, so that becomes the lowest signal you can detect. In other words, DR is the maximum signal-to-noise ratio, and you take the base-2 log to convert that ratio to stops.
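The step from signal-to-noise ratio to stops is just a base-2 log. A minimal sketch, using the 30,000 e- full-well figure from above and an assumed, illustrative read noise of 8 e-:

```python
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    """DR in stops = log2(max recordable signal / noise floor)."""
    return math.log2(full_well_e / read_noise_e)

# 30,000 e- full well (from the example above); 8 e- read noise is
# an assumed figure, not a measured one.
print(round(engineering_dr_stops(30000, 8), 1))  # ~11.9 stops
```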
 

dtaylor

Canon 5Ds
3kramd5 said:
In the film world, if I take a thin negative and a thick negative, can I indeed draw any conclusions about the total range capable of being recorded on a single frame?

Nope. It's pretty simple. To test DR you shoot a transmission step wedge and count the steps. The "calculated" claims based on black and white pixels or frames are generally wrong because the test and formula are wrong to begin with.

I do enjoy watching the arguments that erupt because somebody over analyzed a black pixel and therefore concluded that the 5D3 has the same DR as the Apple QuickTake, or that Nikon cameras have 250 more stops of DR than Canon cameras. Pretty funny stuff.
 
neuroanatomist said:
Noise is fundamental in determining the bottom end of the DR. The idea is that DR is the range between the highest measurable brightness and the lowest measurable darkness. The highest measurable brightness is easy - you just fill all the wells; they have a capacity in terms of the number of electrons they hold (say, 30,000 e-, for example). But at the low end, noise is the determining factor. Are 3 photons enough to generate a signal? 10? 30?

Awesome. It's clicking now. Thanks.
 

YellowJersey

Guest
Personally, I think it's premature to say anything definitively about the 5D3/D800 in terms of DR until the cameras are actually released and guys like DxO have had a chance to do a proper test. I'm ignoring the DR debate because it's largely speculative, and I don't think the few samples we've been given are enough to really come to any firm conclusions. If this were something to make you consider switching to Nikon, it would be prudent to wait until the cameras are released and we have more conclusive information. It's a bit premature to bail to Nikon at this point.
 
neuroanatomist said:
The approach of taking a blown out frame and a completely dark frame is valid for calculating DR, but there are inherent assumptions in that calculation (equivalent response between the frames, etc.). The calculated DR is almost always going to be greater than the 'usable DR' (i.e. what you can actually see in an image), since the bottom end of the calculated range is set by the noise floor, and regions only slightly brighter than that might not separate from the noise.

The alternate method you suggest I'd call measuring DR, as opposed to calculating it. IMO, measuring is the better way to test DR - that gives you an actual test of the usable DR.

Practically, it's pretty easy - you just need a step wedge (like the Stouffer T2115 or T4110, they cost about $10) and a bright, homogeneous backlight. For example, below is a crop from the setup I use. The T2115 is on the left, on the right is a chrome-on-glass USAF 1951-type resolution target (which I use for microscope imaging resolution testing). You set the exposure so the transparent top part of the wedge is just at the clipping point, and see how many stops you can distinguish down to the other end (each step on the scale is 0.5 stops for the T2115).

Great info! Have you used the step wedges on your 7D and 5DII, and if so, how do their measured DR compare to their calculated DR?
 
V8Beast said:
Great info! Have you used the step wedges on your 7D and 5DII, and if so, how do their measured DR compare to their calculated DR?

I have. Both are less than the calculated DR, in my step wedge tests coming in at about 9 stops at ISO 100 (no difference between them), and (as expected) decreasing with increasing ISO.
 
3kramd5 said:
neuroanatomist said:
The approach of taking a blown out frame and a completely dark frame is valid for calculating DR, but there are inherent assumptions in that calculation (equivalent response between the frames, etc.).

Shouldn't the inherent assumption of dark frame and white frame calculations be that everything between those two extremes can be recorded by the sensor at one time?

Correct me if I'm wrong again (I probably am): it's "black" when all pixels record 0 and "white" when all pixels record 255; in other words, there are a total of 2^8 brightness values per pixel that can be recorded.

But we aren't concerned with that, we're concerned with how dark it has to be to black out pixels and how bright it has to be to white out pixels... in one scene.

I can go stand my wife up to the wall next to a window on a bright day. If I expose for her, the window will be overexposed white. If I expose for the outside, she'll be underexposed black. If I expose for the median, there will be a little of each (histogram shaped like a U, I guess), and detailed brightness range between those extremes is the total DR my camera can record. Is that what you mean by usable DR?

Put another way, do these methods suggest anything about where highlight and shadow clipping begins in a high DR scene exposed for the middle tones?

With a solid black frame you have a large area over which to average the read noise (although in the end that probably doesn't matter too much) and a large area to see what banding looks like. Most importantly, you know you are really getting a true black area in the frame, and it's super easy to take. There is no need to do something tricky like blowing out part of the scene 100% while keeping another part pitch black. There is zero difference between using two frames vs. one, and two frames make it much easier, with less chance for error.

This measures "engineering DR," which is higher than most people would consider usable. But what each person considers usable varies, which makes usable DR worse for comparison. Since all we really care about here is camera vs. camera rather than any absolute value, and since a camera with more engineering DR will also have more usable DR (whatever your standards are), it seems the easiest, fairest way. Plus you can easily compare data taken by different people with different cameras, with no need to know what their standard was or to try to replicate it. And you can always convert to your own notion of usable: test for yourself what shadow depths you are willing to live with, then apply that correction factor to any reported values if you need an absolute usable figure for some reason.

(Strictly, that conversion isn't quite right, since usable DR can vary in one particular way: the ugliness of the noise, banding in particular. To check that, you can capture black frames with the range centered closely around the black point and compare: if one camera has much less banding than another but both have the same engineering DR, the one with less banding may have more usable DR. Some types of banding can be filtered out to some extent post-capture; there is a chance the 5D3's banding may be more amenable to this than the 5D2's.)
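The engineering-DR calculation from a black frame can be sketched like this (a simulation with illustrative numbers: a 14-bit raw, a 512 DN black level, and 4 DN of read noise are all assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
black_level, read_noise, clip = 512.0, 4.0, 2**14 - 1

# Simulated body-cap dark frame: pure read noise around the black level.
dark = rng.normal(black_level, read_noise, size=(1000, 1000))

# A large uniform area gives a very stable noise estimate.
sigma = dark.std()
dr_stops = np.log2((clip - black_level) / sigma)
print(float(dr_stops))  # engineering DR in stops
```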

With your examples if you wanted the highlights then expose as to save as much of them as you want and the rest falls where it does, with a high DR camera you will still have clear details in the darker parts of the image that will look good, with a low DR camera they will be a mess of noise and/or banding. For many shots most current cameras already have plenty enough DR however it's hard to find many scenes where some don't. Whether that matters depends upon your tolerances and how/what you shoot. May matter a lot to some, a lot but only rarely to others and virtually never to some who don't care about shooting stuff they haven't been able to before or don't care about regardless they just like shooting modest DR scenes only.

Anyway, I'm sick of DR :D I'm bowing out.
 
3kramd5 said:
neuroanatomist said:
Another assumption, yes, but it's a pretty safe one - it's very likely that for a given ISO setting, the noise recorded with no light input will be the same from shot to shot

What does noise have to do with the equation?

In the film world, if I take a thin negative and a thick negative, can I indeed draw any conclusions about the total range capable of being recorded on a single frame?

I guess what I'm missing is how one takes a completely dark frame and a completely white frame and figures out what happens in between.

Is there actually a brightness gradient in those frames (i.e. not all 0 for the unexposed and 2^14 − 1 for the overexposed frame) that is used to generate a curve for interpolation?

I totally get using the physical gauge (wedge block). To me that's intuitive.

Digital sensors have linear capture.
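Because the capture is linear, there's no response curve to interpolate: two points (black level and clip) fix everything in between. A tiny illustration, with purely made-up gain and black-level numbers:

```python
# Linear sensor response: raw = gain * electrons + black_level.
# gain and black_level below are illustrative, not from any real camera.
gain, black_level = 0.5, 512

def raw_value(electrons):
    return gain * electrons + black_level

# Halving the light always halves the signal above black:
print(raw_value(30000), raw_value(15000), raw_value(7500))
```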
 
dtaylor said:
3kramd5 said:
In the film world, If I take a thin negative and a thick negative, can I indeed draw any conclusions about the total range capable of being recoded on a single frame?

The "calculated" claims based on black and white pixels or frames are generally wrong because the test and formula are wrong to begin with.

Sorry to say but that is not correct.
 
neuroanatomist said:
Noise is fundamental in determining the bottom end of the DR. The idea is that DR is the range between the highest measurable brightness and the lowest measurable darkness. The highest measurable brightness is easy - you just fill all the wells; they have a capacity in terms of the number of electrons they hold (say, 30,000 e-, for example). But at the low end, noise is the determining factor. Are 3 photons enough to generate a signal? 10? 30? It depends on the noise, since electronic noise is inherent in any electronic system. So, you measure the noise with no light input - anything above what you measure as noise must be signal, so that becomes the lowest signal you can detect. In other words, DR is the maximum signal-to-noise ratio, and you take the base-2 log to convert that ratio to stops.

Yes, this says it all, clearly.
 
neuroanatomist said:
The alternate method you suggest I'd call measuring DR, as opposed to calculating it. IMO, measuring is the better way to test DR - that gives you an actual test of the usable DR.

About that wedge test... doesn't the measured DR, on the dark side, depend on the size of the wedge? If the criterion is the ability to distinguish two adjacent patches, and that ability is limited by noise, then I would think a 4-times-larger intersection would give a factor of ~2 more sensitivity (and thus 1 stop more DR; the eye is great at seeing structure among noise). If you have a wedge, it should be easy to test: just put the wedge 4 times further away than you normally would and see if the DR comes out ~1 stop lower.

In the extreme, if each patch was only one pixel big, then it's clear the measured DR would be much less due to noise. The other extreme would be if two adjacent patches shared the full diagonal of the sensor (for a straight line). That would surely give a much higher "DR".

Could this be responsible for the difference between "calculated" DR and "measured" DR? From what I gather, the calculated DR is often given as highest signal / 1 sigma (per pixel?), but 1-sigma noise is really not nearly enough to distinguish a feature reliably; normally one needs at least 3 sigma, preferably 5 sigma. 5 sigma would correspond to an intersection of at least 25 pixels.

As long as the wedge is imaged to the same size on an image sensor it should give relative results (between different sensors) that are the same as the "calculated" relative ones (with pixel size taken into account). If not, I need someone to explain to me why not.
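The 25-pixel figure follows from noise averaging: the standard error of the mean of N independent samples falls as sqrt(N), so averaging 25 pixels cuts the noise by a factor of 5. A quick simulation (the sigma value is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 10.0  # illustrative single-pixel noise

samples = rng.normal(0.0, sigma, size=(200000, 25))
single = samples[:, 0].std()           # ~sigma
averaged = samples.mean(axis=1).std()  # ~sigma / sqrt(25)

print(round(single / averaged))        # ratio ~5
```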
 