Too much dynamic range?

In the old days, you did have a problem.

Consider a sensor with a max well capacity of 13Ke- and read-out noise of 13e-. DR is 60dB, or about 10 stops.
Pair that with a 12-bit ADC. Should be enough, right? Well, yes and no. You have 4096 gradations and you have to count up to 13,000 electrons, so 892 and 895 will be the same to you. No big deal, since read-out noise means you can't really distinguish between 892 and 905 anyway. But if you can't reduce that read-out noise, there's still a small benefit in going for a 14-bit ADC: you're getting better information about the image, and you'll be in a better position to average that noise out. Small, I know, but it's an improvement. If the 892 comes from a very unlucky 891 and the 895 from a very unlucky 896, you're in better shape if you can say there's a 3e- difference between them (when the real-world difference is 5e-) than if all you can say is that they look the same to you.

OTOH, if you stick to a 10-bit ADC, then you clearly have a problem: the ADC quantization step gets added on top of your read-out noise. 892 and 904 electrons are the same to your ADC, but that 904 can come from a very unlucky 910 and that 892 from a very unlucky 886, and if 886 and 910 can look the same to you, then you're in bad shape.
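
If you want to play with these numbers yourself, here's a quick back-of-the-envelope sketch in Python (just my toy model of the 13Ke-/13e- example above; real cameras apply gain before the ADC, so take it as illustration only):

```python
import math

full_well = 13_000   # e-, max well capacity from the example above
read_noise = 13      # e-, read-out noise

# Dynamic range of the sensor itself
print(f"DR = {20 * math.log10(full_well / read_noise):.0f} dB "
      f"= {math.log2(full_well / read_noise):.1f} stops")

# Quantization step (electrons per ADC code) for different bit depths,
# assuming the ADC range is mapped linearly onto the full well
for bits in (10, 12, 14):
    step = full_well / 2 ** bits
    print(f"{bits}-bit ADC: ~{step:.1f} e- per step (read noise is {read_noise} e-)")
```

At 10 bits the step is ~12.7e-, about the size of the read noise itself; at 12 bits it's ~3.2e-, and at 14 bits ~0.8e-, which is why the read noise stays the limiting factor.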
 
NormanBates said:
True, but the geek inside me still enjoys these theoretical discussions.

From my point of view, as long as your ADC has significantly more gradations than the DR of the camera (e.g. "16 bits" for "13 stops at pixel level"), this is a non-issue: you have an ADC that has enough gradations to actually capture the read-out noise of your image, so that is your limiting factor.

Say you have a sensor with a full well capacity of 20,000e- and read-out noise of 2e-. Your DR is 20*log10(20000/2) = 80dB, or about 13.3 stops. I guess the D800 sensor is pretty similar to that.
Pair that with a 16-bit ADC, and you have absolutely no "lack of gradation" issues whatsoever: you have to count electrons, the most you'll find is 20K, and you have 65K gradations at your disposal. Even with a 14-bit ADC, you wouldn't have terrible issues: 20Ke- to count (max), 16K gradations to use; the 2e- read-out noise is still your bigger problem.

D800: 2.7e- read noise, FWC 44972e- = 14.0 stops
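
For anyone who wants to check those figures, the conversion is just log2(FWC / read noise); a quick sketch in Python, using the numbers quoted above:

```python
from math import log2

def dr_stops(full_well_e, read_noise_e):
    # Engineering dynamic range: brightest recordable signal over the noise floor, in stops
    return log2(full_well_e / read_noise_e)

print(f"{dr_stops(20_000, 2.0):.1f} stops")   # the 20Ke- / 2e- example above -> ~13.3
print(f"{dr_stops(44_972, 2.7):.1f} stops")   # quoted D800 figures           -> ~14.0
```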
 
jukka said:
NormanBates said:
True, but the geek inside me still enjoys these theoretical discussions.

From my point of view, as long as your ADC has significantly more gradations than the DR of the camera (e.g. "16 bits" for "13 stops at pixel level"), this is a non-issue: you have an ADC that has enough gradations to actually capture the read-out noise of your image, so that is your limiting factor.

Say you have a sensor with a full well capacity of 20,000e- and read-out noise of 2e-. Your DR is 20*log10(20000/2) = 80dB, or about 13.3 stops. I guess the D800 sensor is pretty similar to that.
Pair that with a 16-bit ADC, and you have absolutely no "lack of gradation" issues whatsoever: you have to count electrons, the most you'll find is 20K, and you have 65K gradations at your disposal. Even with a 14-bit ADC, you wouldn't have terrible issues: 20Ke- to count (max), 16K gradations to use; the 2e- read-out noise is still your bigger problem.

D800: 2.7e- read noise, FWC 44972e- = 14.0 stops

Nice. And the ADC is 14-bit, right?

So it has 16K values to count up to 45K electrons, and read noise is close to 3 electrons. Not ideal (16-bit ADC would be better), but not bad at all.
What's sure is that you can't say "I wish it had higher read noise or lower FWC, so I could make better use of my 14-bit ADC".
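
Put numbers on it and you can see why: with a plain linear mapping of the 14-bit range onto the full well (an assumption on my part, but close enough for this argument), the quantization step lands right around the read noise, so the ADC isn't the thing holding you back. A rough sketch:

```python
from math import log2

full_well = 44_972    # e-, quoted FWC
read_noise = 2.7      # e-, quoted read noise
adc_bits = 14

gain = full_well / (2 ** adc_bits - 1)   # e- per ADC count, roughly 2.7 e-/DN

print(f"quantization step ~{gain:.2f} e-/DN, read noise {read_noise} e-")
print(f"sensor DR ~{log2(full_well / read_noise):.1f} stops vs a {adc_bits}-bit ADC")
```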
 
nightbreath said:
An interesting thought came to me before I went to bed. Below you'll find an assumption that popped into my head, so please don't take it too seriously.

So... let's assume there are two cameras with similar color-tone reproduction, but different abilities to capture lightness levels. For example:
- the sensor of camera A has 12 stops of DR and can distinguish 16 billion tones
- the sensor of camera B has 10 stops of DR and can distinguish 16 billion tones

Take a flat (i.e. low-DR) scene, say 8 stops of DR, and capture it with both sensors. Then both images are edited in post to bring back the missing contrast. So we need to add:
- 4 stops for the 12-stop camera
- 2 stops for the 10-stop camera

So my point is: with the lower-DR camera we'll have a lower tone delta (the difference between the original tone in the scene and the tone reproduced by the sensor) when processing the low-DR shot, because fewer modifications have to be made to the file to achieve the required result.

What do you guys think about that?

Yeah, it doesn't work like that at all. The total recordable range isn't determined by the camera, but by the data format.

Both Canon CR2 and Nikon NEF files have 14-bit depth, or 14 stops.

When you measure a CAMERA'S dynamic range, it has nothing to do with how much data it can record from maximum to minimum; that is going to be 14 stops either way. It's about taking those 14 stops you start with and subtracting the NOISE floor. So you take your original 14 stops, subtract however many stops are going to be noise, say 4.5, and you get a 9.5-stop camera.

Having more dynamic range is never bad, because it means there is less noise from the get-go. The tone delta is always identical.
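
To put rough numbers on that (a toy sketch in Python, working in raw data numbers; the 4.5-stop noise floor is just an illustrative figure, not a measurement):

```python
from math import log2

adc_bits = 14
full_scale = 2 ** adc_bits - 1        # 16383, the largest raw value

# Suppose the bottom ~4.5 stops of the raw scale are swamped by noise
noise_floor = 2 ** 4.5                # ~23 raw counts (made-up number)

usable_stops = log2(full_scale / noise_floor)
print(f"{adc_bits}-bit file, ~4.5 stops of noise -> ~{usable_stops:.1f} usable stops")
```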
 
Radiating said:
nightbreath said:
An interesting thought came to me before I went to bed. Below you'll find an assumption that popped into my head, so please don't take it too seriously.

So... Let's assume there are two cameras with similar color tones reproduction abilities, but with different possible lightness level capturing ability. For example:
- sensor of camera A has 12 stops of DR, 16 billion tones it can distinguish
- sensor of camera B has 10 stops of DR, 16 billion tones it can distinguish

Take a flat (i.e. low-DR) scene, say 8 stops of DR, and capture it with both sensors. Then both images are edited in post to bring back the missing contrast. So we need to add:
- 4 stops for the 12-stop camera
- 2 stops for the 10-stop camera

So my point is: with the lower-DR camera we'll have a lower tone delta (the difference between the original tone in the scene and the tone reproduced by the sensor) when processing the low-DR shot, because fewer modifications have to be made to the file to achieve the required result.

What do you guys think about that?

Yeah, it doesn't work like that at all. The total recordable range isn't determined by the camera, but by the data format.

Both Canon CR2 and Nikon NEF files have 14-bit depth, or 14 stops.

When you measure a CAMERA'S dynamic range, it has nothing to do with how much data it can record from maximum to minimum; that is going to be 14 stops either way. It's about taking those 14 stops you start with and subtracting the NOISE floor. So you take your original 14 stops, subtract however many stops are going to be noise, say 4.5, and you get a 9.5-stop camera.

Having more dynamic range is never bad, because it means there is less noise from the get-go. The tone delta is always identical.

dynamic range is not how many shades you have in your color space
it relates to real-world things: how much brighter one object can be than another while the camera still captures them both correctly at the same time
 
NormanBates said:
dynamic range is not how many shades you have in your color space
it relates to real-world things: how much brighter one object can be than another while the camera still captures them both correctly at the same time
And that's why I ask :) Why would I need DR for portraits? That's what I mainly do with my cameras as a wedding photographer, so the number of shades the sensor produces is more important to me than DR ;)
 
Nathaniel Weir said:
Don't worry about it... just start taking pictures and stop blabbing on about sensor designs, when it has little impact on your photography. As the great Ken Rockwell states, "You need to learn to see and compose. The more time you waste worrying about your equipment the less time you'll have to put into creating great images. Worry about your images, not your equipment."
And...
"Your equipment DOES NOT affect the quality of your image. The less time and effort you spend worrying about your equipment the more time and effort you can spend creating great images. The right equipment just makes it easier, faster or more convenient for you to get the results you need."

You have GOT to be kidding me!
 
nightbreath said:
NormanBates said:
dynamic range is not how many shades you have in your color space
it relates to real-world things: how much brighter one object can be than another while the camera still captures them both correctly at the same time
And that's why I ask :) Why would I need DR for portraits? That's what I mainly do with my cameras as a wedding photographer, so the number of shades the sensor produces is more important to me than DR ;)

There are some usage models for which DR is important, and many for which it doesn't matter at all. If the scene in front of you doesn't require more than 8 stops of DR, you're fine with a camera that can capture that; there's no point in going for one that is the same in every respect but records 14 stops of DR.

If your portraits are in a studio, with a standard backdrop, your DR needs will probably be pretty modest. If your portraits happen in other, less-controlled locations, you may have very high DR needs (e.g. if you want to take a portrait of someone in their bedroom and there's a window in an interesting area). Wedding photographers take lots of portraits and don't have a lot of control over their shooting scenarios, so they usually need a lot of DR (for this reason, a friend of mine was still using his Fuji S3 Pro as his backup body up until the D800 came out: a 12 Mpix camera from 2005... with 13.5 stops of DR as measured by DxOMark).

In any case, ADC precision will only be a problem if the manufacturer screws up the sensor-ADC matching. No current camera has that issue AFAIK.
 
NormanBates said:
Radiating said:
nightbreath said:
An interesting thought came to me before I went to bed. Below you'll find an assumption that popped into my head, so please don't take it too seriously.

So... let's assume there are two cameras with similar color-tone reproduction, but different abilities to capture lightness levels. For example:
- the sensor of camera A has 12 stops of DR and can distinguish 16 billion tones
- the sensor of camera B has 10 stops of DR and can distinguish 16 billion tones

Take a flat (i.e. low-DR) scene, say 8 stops of DR, and capture it with both sensors. Then both images are edited in post to bring back the missing contrast. So we need to add:
- 4 stops for the 12-stop camera
- 2 stops for the 10-stop camera

So my point is: with the lower-DR camera we'll have a lower tone delta (the difference between the original tone in the scene and the tone reproduced by the sensor) when processing the low-DR shot, because fewer modifications have to be made to the file to achieve the required result.

What do you guys think about that?

Yeah, it doesn't work like that at all. The total recordable range isn't determined by the camera, but by the data format.

Both Canon CR2 and Nikon NEF files have 14-bit depth, or 14 stops.

When you measure a CAMERA'S dynamic range, it has nothing to do with how much data it can record from maximum to minimum; that is going to be 14 stops either way. It's about taking those 14 stops you start with and subtracting the NOISE floor. So you take your original 14 stops, subtract however many stops are going to be noise, say 4.5, and you get a 9.5-stop camera.

Having more dynamic range is never bad, because it means there is less noise from the get-go. The tone delta is always identical.

dynamic range is not how many shades you have in your color space
it relates to real-world things: how much brighter one object can be than another while the camera still captures them both correctly at the same time

Face palm. No. No. No.

Raw images record intensity at each photosite as bits. The simplest version would be a 1-bit photosite that registers either full of photons or empty.

So with a simple 2 bit system we can have:

00 = 0-100 photons in a pixel
01 = 100-200 photons in a pixel
10 = 200-400 photons in a pixel
11 = 400-infinity photons in a pixel

Then for different ISO settings we multiply or divide the photon counts to produce different exposures.

This gives us 2 stops of dynamic range, from 100 photons to 400 (or multiples of that). A stop is a doubling of light, so two stops is 2x2 = 4x.

This is how cameras work. A camera's dynamic range rating is essentially the theoretical dynamic range minus however many stops in the shadows are unreadable information. So in our 2-stop example, if the range from 0 to 200 photons were too noisy to determine what is supposed to be there, then our theoretical camera has 1 stop of DR. You can think of noise as a random number generator that's added to the photon count. So our count of 0-400+ would have a number from 0 to 100 randomly added or subtracted from it. This is the noise you see when you push fill light to max. Anyway, if a random number from 0 to 100 is added or subtracted, it is mathematically impossible to determine how many photons were in our pixel in the 1st stop. Literally all you'd see is something resembling TV static if you tried to make a picture from it.

So cameras with more dynamic range have the static come in at a lower stop.

<---- is an engineer.
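
If you want to see that "random number generator" effect for yourself, here's a tiny simulation in Python (purely a toy: uniform noise and made-up photon counts, nothing like a real sensor):

```python
import random

def readout(true_photons, noise=100, trials=5):
    # Toy model from above: a random offset up to +/- the noise level is
    # added to the true photon count before anything gets digitized
    return [max(0, true_photons + random.randint(-noise, noise)) for _ in range(trials)]

# Two patches one stop apart down in the noise: the readings overlap completely,
# so the difference is unrecoverable (this is the "TV static" in the deep shadows)
print(readout(50), readout(100))

# Two patches one stop apart well above the noise: still clearly separable,
# because the same 100-photon noise is now small compared to the signal
print(readout(3200), readout(6400))
```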
 
We are saying the same thing; I guess I didn't make myself sufficiently clear.

What I mean is that if you add 1 bit to your ADC and instead of having

00 = 0-100 photons in a pixel
01 = 100-200 photons in a pixel
10 = 200-300 photons in a pixel
11 = 300-infinity photons in a pixel

You now have:

000 = 0-42 photons in a pixel
001 = 42-84 photons in a pixel
010 = 84-126 photons in a pixel
011 = 126-168 photons in a pixel
100 = 168-210 photons in a pixel
101 = 210-252 photons in a pixel
110 = 252-294 photons in a pixel
111 = 294-infinity photons in a pixel

...and you have the same read-out noise, you still have the same DR, because neither your full-well capacity nor your read-out noise have changed.

<----- is writing the VHDL code for the FPGA of a motion picture camera
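
Same thing in a couple of lines of Python (toy numbers from the tables above: full well ~300 photons, read noise 100 photons, both unchanged when the extra bit is added):

```python
from math import log2

full_well = 300     # photons, as in the toy tables above
read_noise = 100    # photons, unchanged in both cases

for bits in (2, 3):
    step = full_well / (2 ** bits - 1)
    print(f"{bits}-bit ADC: step ~{step:.0f} photons, "
          f"DR = log2({full_well}/{read_noise}) = {log2(full_well / read_noise):.1f} stops either way")
```

The step shrinks from ~100 to ~43 photons, but the ratio of full well to read noise, and therefore the DR, doesn't move.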
 
NormanBates said:
There are some usage models for which DR is important, and many for which it doesn't matter at all. If the scene in front of you doesn't require more than 8 stops of DR, you're fine with a camera that can capture that; there's no point in going for one that is the same in every respect but records 14 stops of DR.

If your portraits are in a studio, with a standard backdrop, your DR needs will probably be pretty modest. If your portraits happen in other, less-controlled locations, you may have very high DR needs (e.g. if you want to take a portrait of someone in their bedroom and there's a window in an interesting area). Wedding photographers take lots of portraits and don't have a lot of control over their shooting scenarios, so they usually need a lot of DR (for this reason, a friend of mine was still using his Fuji S3 Pro as his backup body up until the D800 came out: a 12 Mpix camera from 2005... with 13.5 stops of DR as measured by DxOMark).

In any case, ADC precision will only be a problem if the manufacturer screws up the sensor-ADC matching. No current camera has that issue AFAIK.
I have only rarely felt a lack of DR in my camera, and that was before I really learned how to get the pictures I won't delete later. There are several techniques to get the shot you need the way you want to see it.

Is there a website where I can look at your friend's photos, to see why someone chooses high-DR cameras?
 
You need not look at any photos. With 14 stops of DR you have more exposure options, and with the D800, for example, you get no banding or pattern noise in the lower levels. You can develop one raw file for the highlights and another for the shadows and blend them together. With a Canon you must take two or more exposures, with the camera on a tripod and no moving objects; or you can develop one raw file for the highlights and lift the shadow areas, and still get a contrasty image without pattern noise or banding.
 
I hope that someday I can get a shot with this much dynamic range in a single exposure. I bracketed 7 shots at 3 EV spacing. That is an 18 EV spread (6 intervals of 3 EV), and on top of that each shot has its own EV range (minus clipping), so the DR spans almost from pure black to pure white. My eye saw these scenes like this, but with a single exposure (even using ND filters) I could never get these shots without increasing the DR of the camera via multiple exposures. Sorry, not trying to go off topic, just thought it was relevant to the subject.


End of the Road by @!ex, on Flickr


Everything Peels... by @!ex, on Flickr
 
NormanBates said:
We are saying the same, I guess I didn't make myself sufficiently clear.

What I mean is that if you add 1 bit to your ADC and instead of having

00 = 0-100 photons in a pixel
01 = 100-200 photons in a pixel
10 = 200-300 photons in a pixel
11 = 300-infinity photons in a pixel

You now have:

000 = 0-42 photons in a pixel
001 = 42-84 photons in a pixel
010 = 84-126 photons in a pixel
011 = 126-168 photons in a pixel
100 = 168-210 photons in a pixel
101 = 210-252 photons in a pixel
110 = 252-294 photons in a pixel
111 = 294-infinity photons in a pixel

...and you have the same read-out noise, you still have the same DR, because neither your full-well capacity nor your read-out noise have changed.

<----- is writing the VHDL code for the FPGA of a motion picture camera

Right on! This is exactly what I'm trying to say, and you explained it much more clearly. DR is not the same as the number of gradations and also not the same as the bit depth (which actually just counts the number of "possible" gradations, whether or not the camera actually is capable of resolving all those gradations).

If the number of gradations accurately recorded within a 10 stop dynamic range is the same as the number of gradations accurately recorded within a 14 stop dynamic range, then the 10-stop camera has more precision and better image quality _within that 10-stop interval of light intensity_ versus the 14-stop camera. But outside that 10-stop range, the 10-stop camera has zero image quality, and so the 14-stop camera wins hands-down.

DR is not something to get angry about, just a trade-off between obtaining either greater differentiation between subtle shades of colors (like slide film with lower DR) or greater exposure latitude (like negative film with higher DR).
 
helpful said:
If the number of gradations accurately recorded within a 10 stop dynamic range is the same as the number of gradations accurately recorded within a 14 stop dynamic range, then the 10-stop camera has more precision and better image quality _within that 10-stop interval of light intensity_ versus the 14-stop camera. But outside that 10-stop range, the 10-stop camera has zero image quality, and so the 14-stop camera wins hands-down.
And that's why I started this topic. To my understanding, my near-12-stop-DR camera is perfect for my work, and I would think twice before getting the next camera Canon releases if it has more DR but the same gradation-resolving power.
 
nightbreath said:
Is there a website where I can look at your friend's photos, to see why someone chooses high-DR cameras?

He's a wedding photographer; I don't think he publishes his pictures online, he gives them to his customers.
But the usual scenario he was referring to was: very sunny day, bride in shiny white, groom in a matte black suit with subtle stripes; anything except his Fuji (or, now, the D800) will result in said suit looking like a black blotch, and there's nothing he can do about it.


Now, back to the technical discussion...


Let me add a twist: the ADC works linearly, but what you see is log.

So, if you have a 14-bit ADC (16384 values, 0 to 16383) and can record 14 stops of DR, here is how those values will be distributed:

14th stop: 8192 to 16383
13th stop: 4096 to 8191
12th stop: 2048 to 4095
11th stop: 1024 to 2047
10th stop: 512 to 1023
9th stop: 256 to 511
8th stop: 128 to 255
7th stop: 64 to 127
6th stop: 32 to 63
5th stop: 16 to 31
4th stop: 8 to 15
3rd stop: 4 to 7
2nd stop: 2 to 3
1st stop: 0 to 1

So you may actually have very serious issues in the shadows... which I see in the Canons, but not in the D800!

* if you're going to have issues with "too much DR, not enough gradation", they'll be in the very deep shadows, which you wouldn't see anyway if you were shooting with a camera with the same ADC but less DR; your skin tones are unlikely to land anywhere below the 5th stop from the top, so for them you have way more values than you need (anything above 50 gradations per stop is usually smooth even after heavy grading)

* how come I don't see this in samples from the D800?
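
Here's the same table generated in a few lines of Python, just to show where it comes from (linear 14-bit ADC, one stop = a halving of the code range; folding value 0 into the first stop is my own convention):

```python
adc_bits = 14

for stop in range(adc_bits, 0, -1):          # 14th stop (brightest) down to 1st stop
    low = 2 ** (stop - 1) if stop > 1 else 0 # fold code 0 into the bottom stop
    high = 2 ** stop - 1
    print(f"stop {stop:2d}: values {low:5d} to {high:5d}  ({high - low + 1} gradations)")
```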
 
NormanBates said:
nightbreath said:
Is there a website where I can look at your friend's photos, to see why someone chooses high-DR cameras?

He's a wedding photographer; I don't think he publishes his pictures online, he gives them to his customers.
But the usual scenario he was referring to was: very sunny day, bride in shiny white, groom in a matte black suit with subtle stripes; anything except his Fuji (or, now, the D800) will result in said suit looking like a black blotch, and there's nothing he can do about it.
Something like the one I've attached? Shot in the middle of the day. That's another reason why I started this discussion: I don't understand why everyone is so excited about DR possibilities when so much depends on technique.

P.S. It's not one of the best shots from that day; I just picked one with harsh shadows.


NormanBates said:
* if you're going to have issues with "too much DR, not enough gradation", they'll be in the very deep shadows, which you wouldn't see anyway if you were shooting with a camera with the same ADC but less DR; your skin tones are unlikely to land anywhere below the 5th stop from the top, so for them you have way more values than you need (anything above 50 gradations per stop is usually smooth even after heavy grading)
As far as I understand, each camera applies its own tone curve to the image, or am I wrong?

Initially I wanted to be brand-agnostic: instead of discussing specific sensors, I want to identify what really matters for my needs (and maybe many others'). I'm not able to tell what it is right now, so everyone's input is appreciated :)
 

Attachments: 219.jpg
@!ex said:
I hope that someday I can get a shot with this much dynamic range in a single exposure. I bracketed 7 shots at 3 EV spacing. That is an 18 EV spread (6 intervals of 3 EV), and on top of that each shot has its own EV range (minus clipping), so the DR spans almost from pure black to pure white. My eye saw these scenes like this, but with a single exposure (even using ND filters) I could never get these shots without increasing the DR of the camera via multiple exposures. Sorry, not trying to go off topic, just thought it was relevant to the subject.

Shot #1
Shot #2
No offense, but these scenes look uninspiring to me. And I believe it's not about how you or I see it; it's everyone's attitude towards DR that makes HDR overused by lots of photographers around the world.

I believe that HDR imaging has its own niche, but it should be used only when the result doesn't give away whether it's HDR or not. So better scenes are what really matter to me (rather than increased DR):

gvKkxHAvYG0.jpg
 