16 bit color anyone?

I'd love to have 16-bit color, if the camera were actually able to resolve color to that level. But considering the Bayer interpolation our sensors use to take color from the neighboring photosites (before the RAW image is recorded), I can't see the camera actually reading color to that depth.
 
jrista said:
Yoshiyuki Blade said:
Wow, does that mean less than 8 bits per channel (with Nikon/Sony at 8 bits)? That's shockingly low! We could sit comfortably with our average 8-bit monitors if that were the case.
Well, it's not quite that simple. Remember, Bayer sensors have a single color channel per pixel, which are blended in post-processing into standard RGB image pixels. DxO measures an averaged luminance from Bayer quartets...RGBG pixels, passed through a specific mathematical formula, to determine bit depth. Since blue and red pixels tend to be less sensitive than green pixels, and there are twice as many green pixels as either red or blue, it's impossible to achieve a full 24 bits of color depth. The full range of colors that 24-bit RGB can reproduce is rarely present in any single image at a given time...so a loss of 2-3 bits is not a huge deal. As I mentioned, the difference between 21 bits and 23 bits or so is not all that huge, and largely imperceptible to the human eye in any visual comparison. A little saturation boost in post can correct any loss in gamut.
I'm sorry to say, but your understanding of basically all concepts mentioned above is wrong.
- Bayer sensors have reduced spatial color resolution, not color depth.
- The bit depth determines the precision of the color information coding; how many distinct colors occur in an image is completely irrelevant.
- The signal-to-noise ratio of a sensor sets a limit on what a reasonable bit depth is. To justify 16 bits per channel, you'd have to have an SNR of at least about 96 dB, or 84 dB for 14 bits (see the sketch after this list). If that is not the case, your least significant bits will statistically carry only noise, i.e. wasted storage space.
- In post-processing, higher bit depths make sense to minimize the color banding that comes from rounding errors inherent in integer math.
- Bit depth does not influence saturation or gamut. However, the bigger your color space, the greater the need for higher bit depth to avoid banding artifacts.
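
Those dB figures come from the rough rule of about 6 dB of SNR per bit (the textbook form for an ideal quantizer is 6.02N + 1.76 dB); a minimal sketch of the arithmetic:

```python
def snr_db_needed(bits, ideal_quantizer=False):
    """Approximate SNR (dB) needed to make `bits` of precision meaningful.

    Uses the ~6.02 dB-per-bit rule; the optional 1.76 dB term is the
    offset for an ideal quantizer driven by a full-scale sine wave.
    """
    return 6.02 * bits + (1.76 if ideal_quantizer else 0.0)

for b in (12, 14, 16):
    print(f"{b} bits per channel needs roughly {snr_db_needed(b):.0f} dB of SNR")
# 14 bits -> ~84 dB, 16 bits -> ~96 dB, matching the figures above.
```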

As a side note, the human visual system also has reduced spatial color resolution (and sensitivity) compared to its luma resolution, a fact that is exploited by the chroma subsampling (4:2:2, 4:2:0) employed by many compression schemes.
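
To illustrate that subsampling, here is a minimal NumPy sketch of 4:2:0 (simple 2x2 block averaging of the chroma planes; real codecs use proper filtering, so treat this only as an illustration):

```python
import numpy as np

def subsample_420(y, cb, cr):
    """Keep luma at full resolution; average each 2x2 block of the chroma planes."""
    h, w = cb.shape
    cb_sub = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr_sub = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, cb_sub, cr_sub

y, cb, cr = (np.random.rand(8, 8) for _ in range(3))
_, cb_s, cr_s = subsample_420(y, cb, cr)
print(y.shape, cb_s.shape, cr_s.shape)  # (8, 8) (4, 4) (4, 4) -> chroma kept at 1/4 the samples
```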
 
The rule of thumb "If you can see it in the final output it matters; if you can't, it doesn't" applies. Often technical differences don't translate proportionately, or at all, into changes in how the output is perceived. That's partly because human perception of detail and color is the weakest link in the chain.

Back in the days when capturing in 8-bit JPG was the only option, there was a noticeable problem with banding in narrow tonal variations such as skies, due to the intermediate analog values of blue being pushed into one 0-255 RGB value or another. The problem was most evident in areas like sky where only one or two channels made up the color. RAW capture and 16-bit editing more or less eliminated that problem even before the bit resolution of the cameras at capture increased.
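
To put numbers to that, a small sketch (an assumed smooth, sky-like ramp, not real camera data) showing how few distinct levels an 8-bit encoding leaves in a narrow tonal range compared with 16-bit:

```python
import numpy as np

# A smooth, narrow tonal ramp, e.g. a patch of sky covering a small brightness range.
analog = np.linspace(0.55, 0.60, 1_000_000)   # normalized 0..1 "analog" values

as_8bit = np.round(analog * 255).astype(np.uint8)
as_16bit = np.round(analog * 65535).astype(np.uint16)

print("distinct 8-bit levels: ", np.unique(as_8bit).size)    # ~14 -> visible steps/banding
print("distinct 16-bit levels:", np.unique(as_16bit).size)   # ~3300 -> effectively smooth
```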

I waited for bodies to reach 8MP before jumping to a DSLR system. I don't make many prints larger than 8.5 x 11 so that was enough resolution. The bits? They were what they were at the time.

When I upgraded from my 20D to a 50D there was a marked improvement in IQ, both in resolution and in the smoothness of gradients I could see in the output using the same capture and PP workflow. Exactly why it occurred is less important to me than the fact that I can see the difference, making the cost to upgrade the hardware worthwhile. The pixel openings are physically smaller, packing more MP into the same sensor size, but Canon cleverly changed the shape of the sensor wells so their volume and the sensor DR stayed similar. The DR of the 50D is slightly less, but not enough to notice. So overall, however Canon managed to pull it off, it works for me and improved the image quality I see on screen and in prints versus what I had before, which in turn looked better than the results from my previous cameras.

If I upgraded from the 50D I might see some minor incremental improvement in the type of things I shoot at the sizes I typically view and print them, but not much. If I shot different content, like wall-size prints of scenics, then my criteria and needs would be different. So barring dropping the 50D and busting it, I don't see the need for a new camera in my immediate future except for new features such as the planned switch to wireless-controlled flash. What would motivate me to buy is a Canon body which could handle the contrast of a sunny, cross-lit scene like B&W film can. But I'm not holding my breath on that one...
 
Wish lists apart, we could look at the realities of sensor design...

I happen to have an 8-megapixel camera that I use for astrophotography. The pixels are 5.4 microns square and the well depth is, IIRC, 25,000 electrons. The camera readout noise is around 9.3 electrons rms and it's digitised to 16 bits. Keeping the noise down means that the camera can only produce one frame every eight seconds, and it must be cooled to -25C. The camera has a quantum efficiency of about 50% over the visible part of the spectrum - this is very much better than CMOS devices.

So here's the rub - the resolution is sufficient for DSLR users (scaled up to 35mm terms it would be around 30 megapixels), but each pixel can only deliver a dynamic range of 25000/9.3 = 2700. This is equivalent to 11.4 bits. THAT'S HOW MUCH SIGNAL YOU CAN HAVE... you can sample it with a 16-bit, 18-bit or 24-bit A/D; everything beyond 11.4 bits will be noise.
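
A quick check of that arithmetic:

```python
import math

full_well_e = 25_000    # electrons
read_noise_e = 9.3      # electrons rms

dynamic_range = full_well_e / read_noise_e
print(f"DR = {dynamic_range:.0f}:1, i.e. {math.log2(dynamic_range):.1f} bits")
# -> roughly 2700:1, about 11.4 bits; any A/D bits beyond that only digitise noise.
```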

It's reasonably true that you can improve the signal to noise ratio by increasing the pixel area. Let's say you want a real 14 bits of signal, then the area would have to increase by 2^(14-11.4) = 6x. This would make the pixels around 13.3 microns on a side.... the problem is that you would only get 4.9 megapixels on a full frame camera. If you wanted 16 bits of signal you would get around 1.2 megapixels.
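
The same arithmetic reproduces those pixel sizes, assuming (as the post does) that full-well capacity, and hence dynamic range, scales with pixel area while read noise stays fixed, and taking a 36 x 24 mm full frame:

```python
import math

pixel_um = 5.4                      # current pixel pitch, microns
current_bits = 11.4                 # dynamic range from the figures above
frame_area_um2 = 36_000 * 24_000    # full-frame sensor area in square microns

for target_bits in (14, 16):
    area_factor = 2 ** (target_bits - current_bits)   # ~6x for 14 bits, ~24x for 16 bits
    new_pitch = pixel_um * math.sqrt(area_factor)     # ~13.3 um and ~26.6 um on a side
    megapixels = frame_area_um2 / new_pitch**2 / 1e6
    print(f"{target_bits} bits: {new_pitch:.1f} um pixels -> {megapixels:.1f} MP full frame")
# -> about 4.9 MP for 14 bits and about 1.2 MP for 16 bits.
```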

Alternatively, you can suppress noise by cooling the camera further - there's a rule of thumb that cooling a device by 5 Celsius roughly halves the noise. This would work to an extent (but you could not go beyond about 14 bits, because that's the well depth). So be prepared to carry around a generator to power a cooling system that can bring the camera down to -35C.
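
Taking that rule of thumb at face value (it only applies to the thermally generated part of the noise, so this is just an estimate), the benefit of extra cooling is easy to put a number on:

```python
# Rule of thumb from the post: every 5 C of extra cooling roughly halves the noise.
def noise_factor(extra_cooling_c):
    return 0.5 ** (extra_cooling_c / 5.0)

print(noise_factor(10))   # going from -25C to -35C: noise drops to ~0.25x, i.e. ~2 extra bits
```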

Of course, there's another gotcha... and this one is completely unavoidable.

Because each pixel can only store 25,000 electrons, it is only sensitive to the first 50,000 or so photons that arrive. This makes it susceptible to photon noise too, and it turns out that this dominates everything. In practice, the camera - which has a read-noise-limited dynamic range of 11.4 bits - cannot achieve a signal-to-noise ratio higher than about 223, or slightly less than eight stops! I don't find this to be a problem - I did some tests on myself using a 10-bit monitor and I can't reliably perceive more than 7 stops either. If you have a calibrated monitor, you might want to try this yourself.
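
That ceiling follows from Poisson (shot-noise) statistics, where the SNR is the square root of the photon count; a quick check:

```python
import math

photons_at_full_well = 50_000    # 25,000 e- at roughly 50% quantum efficiency

snr = math.sqrt(photons_at_full_well)   # Poisson shot noise: sigma = sqrt(N)
print(f"max SNR ~ {snr:.0f}:1, i.e. {math.log2(snr):.1f} stops")
# -> about 224:1, a bit under 8 stops, no matter how many A/D bits are used.
```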

To achieve 14 bits of real data, you really need to record 28 bits' worth of photons. Scaling the pixel area up by 6 bits (that's 64x) means that the pixels are now 43.2 microns on a side, and your FF camera suddenly has only 460,000 pixels. Squeeze it to 15 bits and you need to limit the resolution to about 115,000 pixels; 16 bits would force a spatial resolution of roughly 26,000 pixels.
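
The progression in that last sentence follows from the square-root relationship: each additional bit (doubling) of shot-noise-limited SNR needs four times the photons, hence four times the pixel area and a quarter of the pixels. A sketch, taking the post's 460,000-pixel figure for 14 bits as the starting point:

```python
pixels = 460_000    # the post's full-frame pixel count for 14 bits of shot-noise-limited SNR

for bits in (14, 15, 16):
    print(f"{bits} bits: ~{pixels:,.0f} pixels")
    pixels /= 4     # SNR = sqrt(photons): +1 bit of SNR needs 4x the photons/area
# -> 460,000, then ~115,000, then ~29,000 (close to the post's figures).
```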

And that is why we can't have a 16 bit camera. We can't even have a true 14 bit camera that's useful. Ever. It's sad, but Nature - in particular physics - is a harsh mistress.

If you want a larger dynamic range, try bracketing.
If you don't want to try bracketing, try a GND filter.
If you don't want to try bracketing or a GND, you're SOL. Sorry.
 
noisejammer said:
The pixels are 5.4 microns square and the well depth is, IIRC, 25,000 electrons. The camera readout noise is around 9.3 electrons rms and it's digitised to 16 bits.

What would happen if they succeeded in pushing the well depth beyond 25k electrons for a 5.4-micron-square pixel?
 
Rav said:
I'm sorry to say, but your understanding of basically all concepts mentioned above is wrong.

+1
this
 
noisejammer said:
It's reasonably true that you can improve the signal to noise ratio by increasing the pixel area. Let's say you want a real 14 bits of signal, then the area would have to increase by 2^(14-11.4) = 6x. This would make the pixels around 13.3 microns on a side.... the problem is that you would only get 4.9 megapixels on a full frame camera. If you wanted 16 bits of signal you would get around 1.2 megapixels.

I thought we already had some that deliver virtually 14 bits (13.8-13.9), and that even the 1D4 sensor itself (before the bad read electronics chop off a couple of stops or more) can grab 14+ bits if we are talking about normalizing to an 8MP equivalent....
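
For what it's worth, the "normalized to 8MP" idea is about the SNR gain from downsampling: averaging pixels reduces noise by the square root of the scaling factor, which works out to roughly half a stop of per-pixel DR for each halving of the pixel count. A generic sketch (the 16 MP figure below is just an example input, not a measured value for any particular body):

```python
import math

def dr_gain_stops(native_mp, normalized_mp=8.0):
    """Extra stops of per-pixel DR gained by downsampling to `normalized_mp`."""
    return 0.5 * math.log2(native_mp / normalized_mp)

print(f"16 MP downsampled to 8 MP gains about {dr_gain_stops(16):.2f} stops")   # ~0.5 stop
```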
 
Where I find the extra bit most helpful is in the post processing stage.

Most outdoor scenes exceed the range of the sensor and some very dark shadow detail is lost, but the brain of the viewer is easily fooled into thinking the range is fuller if the midtones and 3/4 tones are lightened beyond what the SOOC file shows by various adjustments at the RAW or CS5 stages.

The simplest way to see this is to open an outdoor shot exposed for highlight detail in Levels and move the middle slider left towards the shadows. A relatively minor tweak will produce a significant increase in the amount of detail the photo appears to have. The DR isn't expanded, but the brain, looking at a photo with 0,0,0 black and 255,255,255 white areas to anchor its perception of the overall tonal range as being "normal", will equate any lighter shadow tones with detail even when there isn't any there. Surrounding an overall dark photo with a 0,0,0 mat has the same effect perceptually — 10,10,10 level tones will be assumed to have detail by comparison, at least in smaller, insignificant areas of the photo. Photography is, after all, just an optical illusion, tricking the brain into matching 2D contrast patterns to memories of 3D objects previously seen in person.

Back in the days of 8-bit workflow a Levels manipulation like that would often cause banding in the sky and other smooth gradients as a result of the original values being shifted — evidenced by gaps in the histogram. But with thousands of data points per color in a 16-bit workflow vs. just 256, that same degree of manipulation doesn't result in any visible defects in the image.
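
A rough numeric sketch of that difference (a simple power-curve midtone lift stands in for the Levels middle slider, which is not Photoshop's exact math):

```python
import numpy as np

ramp = np.linspace(0.0, 1.0, 100_000)    # a smooth gradient, e.g. a sky
lift = lambda x: x ** 0.7                # midtone slider pushed toward the shadows

# 8-bit workflow: quantize to 256 levels first, then apply the adjustment
v8 = np.round(lift(np.round(ramp * 255) / 255) * 255).astype(np.uint8)
# 16-bit workflow: same adjustment done at 16-bit precision, then viewed at 8 bits
v16 = np.round(lift(np.round(ramp * 65535) / 65535) * 255).astype(np.uint8)

def largest_gap(img):
    """Biggest jump between neighbouring output levels actually used (histogram gaps)."""
    return int(np.diff(np.unique(img)).max())

print("8-bit path:  largest histogram gap =", largest_gap(v8))    # several levels -> bands
print("16-bit path: largest histogram gap =", largest_gap(v16))   # 1 -> smooth
```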

Forcing the mid- and 3/4-tones lighter will amplify any inherent noise in the shadows, but I address that problem in editing by applying NR to a duplicate layer, then using a mask to blend in the reduced-noise layer in the shadows where the noise is noticeable. Since there isn't much important detail there to begin with in terms of story content, the loss of detail due to the NR doesn't have a net impact on the image — the more important information in the mid-tones and highlights remains the same; the viewer is just tricked into thinking a full tonal range was recorded.
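
For illustration only, a minimal NumPy/SciPy sketch of that masked-NR idea (a Gaussian blur stands in for real noise reduction, and the shadow cutoff is an arbitrary assumption; the workflow described above uses Photoshop layers and masks rather than code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shadow_masked_nr(img, shadow_cutoff=0.25, strength=2.0):
    """Blend a denoised copy back into the image only where it is dark.

    img: float array in [0, 1], single channel for simplicity.
    """
    denoised = gaussian_filter(img, sigma=strength)          # stand-in for real NR
    # Luminosity mask: 1 in the deepest shadows, fading to 0 by `shadow_cutoff`
    mask = np.clip(1.0 - img / shadow_cutoff, 0.0, 1.0)
    return mask * denoised + (1.0 - mask) * img

noisy = np.clip(0.2 * np.random.rand(64, 64) + np.random.normal(0, 0.02, (64, 64)), 0, 1)
print(shadow_masked_nr(noisy).shape)    # (64, 64)
```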
 