Why not 16 bit?

Status
Not open for further replies.

LifeAfter

Why not 16 bit?

Shouldn't it contribute to a better DR?
There were some cameras that had 16-bit RAW files (so it is possible).

While digital SLRs are replacing the gelatin 35 mm SLRs,
which had astounding dynamic range (especially black & white film),
they are more or less matching the resolution
and surpassing the noise/grain (ISO).

The only thing they develop sooo slowly is the DYNAMIC RANGE.
If it continues at this rhythm, we are too far away from matching film.

I don't think there is anyone who wouldn't want this,
and I think that at the stage of digital photography we are at now,
DR should be the development priority instead of resolution and ISO.

Thank you for sharing a word with me on whether you agree
with my opinion or not.
I feel that we need a revolutionary discovery to match the DR of film.
 
The main constraint is the relatively low speed of currently available 16-bit A/D converters. Of course, cost is also an issue.
Given current sensor densities and the consequent data rates, multiple parallel A/D converter pipelines are necessary for fast operation (e.g. the EOS-1D X), increasing system cost and complexity; thus 16 bits seems to be a viable option only for low-speed, low-ISO digital medium format backs (e.g. the Phase One IQ180).
Going by DxOMark results, their dynamic range at low ISO is remarkable, but it would not fit the typical use scenario of a current APS-C or full-frame digital camera.
Moreover, the effective performance of current 16-bit A/D converters does not seem to be much better than that of top-of-the-line 14-bit ones.
As a final consideration, to get the best out of a 16-bit converter a relatively large photosite is probably necessary: a condition that may fit digital medium format backs, but not current high-density DSLR sensors.
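To put rough numbers on the data-rate point above, here is a quick back-of-the-envelope sketch; the resolution, frame rate and per-converter throughput below are illustrative assumptions, not manufacturer specifications.

Code:
# Back-of-the-envelope: how many parallel A/D converter channels a fast body
# would need. All figures are illustrative assumptions, not real specs.
MEGAPIXELS = 18e6        # assumed sensor resolution (pixels)
FRAME_RATE = 12          # assumed continuous shooting speed (frames/s)
BITS_PER_SAMPLE = 16     # hypothetical 16-bit conversion
ADC_RATE = 40e6          # assumed throughput of one converter channel (samples/s)

pixel_rate = MEGAPIXELS * FRAME_RATE                  # samples per second
data_rate_mb = pixel_rate * BITS_PER_SAMPLE / 8 / 1e6
channels = pixel_rate / ADC_RATE

print(f"Pixel rate: {pixel_rate / 1e6:.0f} Msamples/s")
print(f"Raw conversion output: {data_rate_mb:.0f} MB/s")
print(f"Parallel ADC channels needed: {channels:.1f}")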
We discussed this issue in an old thread, http://www.canonrumors.com/forum/index.php/topic,1902.0.html, with reference to the ongoing increase in sensor megapixels.
I still use, and enjoy, a lot of B&W and color film, but honestly the dynamic range/SNR combination of current digital sensors wins hands down, delivering a much cleaner and more usable image.
 
Upvote 0
I for one would REALLY like to know the DR of the 1D X. They've never claimed it was improved before, and it has gone unnoticed by most, but this time they list it as a main new feature, which would be stupid if it's only 1/3 of a stop. The Mk IV is, what, 11.9? But with some trickery in Lightroom you can go well beyond that. So do we get 13.5 stops, at least up to ISO 800 or 1600? I certainly hope so...
 
Upvote 0
Maybe the best we can hope for in the near future is an integrated software solution: hit the shutter release, and the camera takes three bracketed shots and combines them automatically. Should be doable.
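As a rough illustration of what such an in-camera merge could do, here is a minimal exposure-fusion sketch (a sketch only, assuming linear RAW-like frames, known exposure ratios and NumPy; real HDR merging is considerably more sophisticated).

Code:
# Minimal sketch: merge bracketed exposures into one higher-DR linear image.
import numpy as np

def merge_bracket(frames, exposures, clip_level=0.95):
    """frames: same-size linear images scaled to [0, 1];
    exposures: relative exposure of each frame (e.g. 0.25, 1.0, 4.0)."""
    acc = np.zeros_like(frames[0], dtype=float)
    weight = np.zeros_like(frames[0], dtype=float)
    for img, exp in zip(frames, exposures):
        valid = img < clip_level                 # ignore blown-out pixels
        acc += np.where(valid, img / exp, 0.0)   # normalise to a common exposure
        weight += valid
    return acc / np.maximum(weight, 1)           # average of usable measurements

# Usage with three synthetic frames of a bright, flat patch:
scene = 2.0   # true linear brightness, beyond the range of a single exposure
frames = [np.clip(np.full((2, 2), scene * e), 0, 1) for e in (0.25, 1.0, 4.0)]
print(merge_bracket(frames, [0.25, 1.0, 4.0]))   # recovers ~2.0 from the short frame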
 
Upvote 0
Medium format backs generally have 16 bits, but they are noisy enough not to need it; 14 bits would be enough. Still, it is nice to have "16 bit" on the data sheet to set them apart from the smaller formats, even though it has no real meaning in practice.

In the smaller formats, speed and file size are also important, so 16 bits will only come when noise levels in the sensor drop further and it starts to provide a real benefit. I don't think we are there yet.
 
Upvote 0
I read somewhere that even those cameras with 16-bit A/D converters actually only measured in at around 13-14 bits on DxOMark: 1Ds Mark III territory.

While it's nice to have the best images possible, nearly 99.9999% of all photographic images are viewed at A4 size or smaller (think magazines and iPads).

The human eye can't discern that level of detail in an image of that size.
 
Upvote 0
Viggo said:
I for one would REALLY like to know the DR of the 1D X. They've never claimed it was improved before, and it has gone unnoticed by most, but this time they list it as a main new feature, which would be stupid if it's only 1/3 of a stop. The Mk IV is, what, 11.9? But with some trickery in Lightroom you can go well beyond that. So do we get 13.5 stops, at least up to ISO 800 or 1600? I certainly hope so...

I too am curious about the 1D X, as I'm sure a lot of others are. Hopefully we will see some full-resolution, high-ISO shots soon, as well as detailed info on DR, etc.
 
Upvote 0
LifeAfter said:
Why not 16 bit?

Shouldn't it contribute to a better DR?
There were some cameras that had 16-bit RAW files (so it is possible).

While digital SLRs are replacing the gelatin 35 mm SLRs,
which had astounding dynamic range (especially black & white film),
they are more or less matching the resolution
and surpassing the noise/grain (ISO).

The only thing they develop sooo slowly is the DYNAMIC RANGE.
If it continues at this rhythm, we are too far away from matching film.

I don't think there is anyone who wouldn't want this,
and I think that at the stage of digital photography we are at now,
DR should be the development priority instead of resolution and ISO.

Thank you for sharing a word with me on whether you agree
with my opinion or not.
I feel that we need a revolutionary discovery to match the DR of film.


I'm totally with you. The problem seems to be that it wouldn't really matter, since the output (screens or prints) wouldn't match it anyway. If we ever want to see the same kind of depth we saw with film, then we need better output technologies first. It's my opinion that cameras have reached a point of diminishing returns as long as nothing new happens on that side of the equation. I have no numbers or detailed technical knowledge to back this up, other than looking at my prints from pre-digital printing days and comparing them to anything I've gotten back from any lab since the late 90s or so.

I compare it to the audio world. We have reached a point where digital audio processing is really, really good. So we're back to the point where it really matters (a) how good the analog input is (if there is any) and (b) how good the analog output device is in the end. You can have the latest, greatest guitar amp and microphone, or the most nifty digital amp simulation, and run it all at a 96 kHz sample rate; once it hits your cheap little iPod headphones as poor MP3 files, it doesn't make much of a difference anymore.

The same will be true of your 16-bit files and your L glass and your 48 MP 5D Mark VI once your photos come out of that Walmart inkjet (or its more expensive equivalents from a slightly better lab).
 
Upvote 0
Thanks for sharing the same opinion with me, 7enderbender.
That the output (screens or prints) wouldn't match it anyway is also right,
but we don't need better DR for the output directly;
we NEED IT for editing the image, for getting more detail where we need it.

Imagine the miracle we could get from 35 mm film:
when we developed it and saw some exposures already almost completely transparent,
we could still get the whole image with every detail in it,

or when the sky was too bright, we could still
get the clouds back by developing (partially) only the sky for 2 or 3 seconds,
without losing anything or adding extra grain (noise?).

We could almost get a high-dynamic-range photo from a single 35 mm frame,
without having to shoot 3 or more frames with a tripod and all that stuff.

Maybe the new generation doesn't miss this,
but that's just because they never did it; they don't know it could be possible.

Sorry for sounding maybe a bit nostalgic,
but I really don't care about the darkroom, 35 mm film and everything else.

All I care about is that it would be nice to have DSLRs as they are,
without losing this very IMPORTANT thing: DR.
 
Upvote 0
Let me dispel a myth here: more bits will not make any difference to dynamic range at all. You still have the same old 0-255 in each channel, but the number of steps between the two increases, giving better colour and shades. Dynamic range remains the difference between clipping and the noise floor.

Nikon cameras allow switching between 14 and 12 bits (contentious statement coming!) to let them achieve the same frame rates as comparable Canon cameras, but I don't think anyone can really tell much difference between 12 and 14 bits in real-world situations.
 
Upvote 0
Also worth noting, if it hasn't been mentioned: quite a few filters in Photoshop don't work in 16-bit mode (although I wouldn't consider them essential), and even fewer tools work in 32-bit. GIMP doesn't have 16-bit support at all yet, although it's in the pipeline. I'd say it's not strictly necessary to be working in 16 bits or higher: if you have a specific use for it, you don't need to be in this conversation, lol, and if you don't know what specific use you would have for it, then it probably isn't the most immediate thing that will help you or your photography. By default, Photoshop imports RAW files as 8-bit, and I've never seen a specific instance where I thought "wow, I would have been so much better off in 16-bit" as far as the end result is concerned.
 
Upvote 0
Flake said:
Let me dispel a myth here: more bits will not make any difference to dynamic range at all. You still have the same old 0-255 in each channel, but the number of steps between the two increases, giving better colour and shades. Dynamic range remains the difference between clipping and the noise floor.
Dynamic range can be limited by the number of bits used to record the data.

If only 8 bits are used to record a value, then the dynamic range is limited to roughly 8 stops. The value 0 denotes a reading that is below the dynamic range. The value 255 denotes a reading that is above the dynamic range. Neither of these values is "inside" the dynamic range. The values that fall inside the dynamic range are 1-254. Because values indicate a linear increase in brightness, this range of values allows the brightest value to be only 254x brighter than the darkest value. Thus the dynamic range of linear 8-bit data is only log2(254) ≈ 7.989 stops.

Even if the sensor was originally able to record a wider dynamic range, any data from that larger range is discarded along with the bits that would have recorded it. Some of that loss can be avoided by using a non-linear value system: with gamma correction applied, Photoshop bends the curve to get on the order of 14 stops of dynamic range out of the 8-bit space.

When cameras record data in raw format, they do not record three channels for each pixel. Each pixel only records one channel. When the data is converted from raw format into a conventional format, the additional channel data is generated based on the color filter array used in the camera (see http://en.wikipedia.org/wiki/Color_filter_array).

While the number of bits used to record the raw data can limit the dynamic range, it cannot expand it beyond the sensor's capability. If noise from the sensor is randomly setting the lower bits, there is no point in recording them. Recording 16 bits of data where the last 2 or 4 bits are pure noise is wasteful. RAW files are compressed using lossless compression, and random data (noise) cannot be compressed losslessly, so adding more bits of noise to the files would increase the file size substantially while doing nothing to improve image quality. Therefore, manufacturers balance the number of bits used to record the data against the actual dynamic range of the sensor. You can expect manufacturers to move to 16-bit A/D conversion when their sensors have the dynamic range to use that data.
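A quick sketch of the two numerical claims above: the roughly 7.989-stop ceiling of linear 8-bit data, and the fact that random noise does not compress losslessly (plain Python, illustrative only).

Code:
import math, os, zlib

# 1) Linear 8-bit data: usable codes are 1..254, so the brightest usable value
#    is 254x the darkest, i.e. log2(254) stops.
print(f"Dynamic range of linear 8-bit data: {math.log2(254):.3f} stops")

# 2) Lossless compression: smooth image-like data shrinks, random noise does not.
smooth = bytes(i % 256 for i in range(65536))   # a repeating gradient
noise = os.urandom(65536)                       # incompressible random bytes
print(f"Gradient: {len(zlib.compress(smooth))} bytes after compression")
print(f"Noise:    {len(zlib.compress(noise))} bytes after compression")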
 
Upvote 0
chriswatters said:
When cameras record data in raw format, they do not record three channels for each pixel. Each pixel only records one channel. When the data is converted from raw format into a conventional format, the additional channel data is generated based on the color filter array used in the camera (see http://en.wikipedia.org/wiki/Color_filter_array).
If you're going to be this picky, you ought to admit that at this stage they aren't called pixels but photosites; the source you link, which hews to the standard convention, contradicts you on this. Also, it seems a bit strange that you would only provide a citation for one of the least controversial parts of your statement.

For example:

Because values indicate a linear increase in brightness, this range of values allows the brightest value to be only 254x brighter than the darkest value. Thus the dynamic range of linear 8-bit data is only log2(254) ≈ 7.989 stops.
While thought-provoking and news to me, this also strikes me as misleading (you do not provide a citation here, which would be very helpful).

Even if the values 0 and 255 denote brightness levels below or above the "measured" dynamic range, and do not plot along with the other values in a linear fashion, it misses the point to argue that they are not at least usable parts of the dynamic range of an image; they do record something of the dynamic range of the scene. Even if full black and full brightness are not "accurate," they provide points of contrast for the final image, so the usable (if not the measured) DR is slightly more than you suggest. It's a bit like asking the color of black or white, and then arguing from the answer that, because they aren't technically colors, they cannot be mixed on a palette. If there is some photographer who seriously believes that the 0 and 255 values are unreliable (or, even worse, useless), they have not spent enough time taking actual photographs.

I also question the assumption that DR always scales in a linear fashion. Silicon is peculiar, and as evidence of this, the DxO data shows that DR (as it is represented in RAW files, at least) falls off in a non-linear fashion as ISO is raised. In fact, there doesn't appear to be any set relationship at all.

chriswatters said:
Even if the sensor was originally able to record a wider dynamic range, any data from that larger range is discarded along with the bits that would have recorded it. Some of that loss can be avoided by using a non-linear value system: with gamma correction applied, Photoshop bends the curve to get on the order of 14 stops of dynamic range out of the 8-bit space.
You act as if Photoshop has done something scurrilous. No, my friend: the fact that "Photoshop" (in truth, every type of image-processing software, including the camera manufacturer's RAW converter, those of third-party makers, and the processing done in-camera) can get "14 stops" means that there are, in fact, 14 stops compressed into that 8-bit space (whether in a linear or non-linear manner hardly matters; you run into similar issues in color when switching your camera from sRGB to Adobe RGB). The camera, and the camera's RAW converter and any third-party RAW converters, have to set or assume a floor, a ceiling, and steps (or a "curve," though I suspect that in many cases there is no actual function but instead something analogous to a lookup table for quick transcription and interpretation of brightness levels). Just because the steps are coarser or finer does not mean that the top inside value cannot represent a shade nearly 14 stops (or 9, or 31, or 1000) brighter than the lowest. The 8-bit value is, in truth, arbitrary.
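To illustrate the point that an 8-bit code can span an arbitrarily wide range of scene brightness once a curve (or lookup table) is applied, here is a small sketch; the 14-stop range and the 1/2.2 exponent are assumptions chosen for illustration.

Code:
# Encode a wide linear scene range into 8-bit codes with a power-law curve,
# then read back how many stops the usable codes span.
import math

STOPS = 14          # assumed scene range to preserve
GAMMA = 1 / 2.2     # assumed encoding exponent

def encode(linear):          # linear scene value in [0, 1] -> 8-bit code
    return round(255 * linear ** GAMMA)

def decode(code):            # 8-bit code -> linear scene value
    return (code / 255) ** (1 / GAMMA)

darkest = 2.0 ** -STOPS                              # deepest shadow to keep
print("code for darkest shadow:", encode(darkest))   # still a non-zero code
print("code for full brightness:", encode(1.0))      # 255
ratio = decode(255) / decode(encode(darkest))
print(f"encoded range spans about {math.log2(ratio):.1f} stops")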
 
Upvote 0
Flake said:
Let me dispel a myth here: more bits will not make any difference to dynamic range at all. You still have the same old 0-255 in each channel, but the number of steps between the two increases, giving better colour and shades. Dynamic range remains the difference between clipping and the noise floor.

Where is it that you have the same old 0-255 in each channel?

JPEG is limited to 8 bits per channel, but TIFF is not. If 16 bits per channel becomes sufficiently popular, the JPEG standard will be extended to support it.

Monitors, graphics cards, and OSes aren't there yet, but they are on the way.
 
Upvote 0
Try multiplying 255 three times (3 channels) and you'll realise that it is a 16-bit format. If 16 bits per channel becomes popular, you won't be processing it on a PC or a Mac!

A PC can handle 32-bit colour, the Mac 48-bit colour, but the human eye cannot detect the difference, so there's little point.
 
Upvote 0
Flake said:
Try multiplying 255 three times (3 channels) and you'll realise that it is a 16-bit format. If 16 bits per channel becomes popular, you won't be processing it on a PC or a Mac!
24-bit color is actually 8 bits per channel. 48-bit color is 16 bits per channel.

32-bit color (I had to look this up, but it confirmed what I thought) is just 8 bits per channel across three channels, plus 8 bits for an alpha channel (transparency, which is vitally important for "modern" OSes and graphics applications, going back to earlier releases of Windows and the first 3D games).

but the human eye cannot detect the difference, so there's little point.
Almost always, this argument fails (have you ever heard "the human eye cannot distinguish scene changes above 60 Hz," for example?). In this case, it's debatable at best: some people can see more colors than others. It might be worthwhile to ask whether the trade-offs in processing power and storage space are "worth it," but considering how much processing power is available these days, this argument holds far less water than it once did.
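For reference, the bit-depth arithmetic behind those format names (a quick sketch of the standard channel layouts described above):

Code:
# Bits per pixel for the colour formats mentioned above.
formats = {
    "24-bit color": {"channels": 3, "bits_per_channel": 8,  "alpha_bits": 0},
    "32-bit color": {"channels": 3, "bits_per_channel": 8,  "alpha_bits": 8},
    "48-bit color": {"channels": 3, "bits_per_channel": 16, "alpha_bits": 0},
}
for name, f in formats.items():
    total = f["channels"] * f["bits_per_channel"] + f["alpha_bits"]
    levels = 2 ** f["bits_per_channel"]
    print(f"{name}: {total} bits/pixel, {levels} levels per channel")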
 
Upvote 0
So why not make a new system, say -255 to 510 per channel, instead of the current 0-255? It would increase the DR, wouldn't it?

Maybe my idea is stupid, not logical,
but one thing I know for sure: there's always room for new discoveries,
and the current DR is far from being perfect (film-like).
 
Upvote 0
LifeAfter said:
Why not 16 bit? Shouldn't it contribute to a better DR?

Only if the signal resolution of the encoding were the limiting part. The DR is the ratio between the strongest and the weakest signal you can record. The maximum dynamic range you can hope for is set by the electron-well depth, which depends on the sensor but is rarely above the 2^16 = 65,536 electrons implied by 16 bits. In practice, the weakest signal you can detect is limited by readout noise of a few electrons (again depending on the sensor). It also depends a bit on how well the weakest signal needs to be recorded in order to count as "detected": if you require, say, 3-sigma confidence in the signal, that in itself means you need at least 9 electrons to call a signal detected (below that you cannot distinguish it from noise). That would give your 16-bit electron well a DR of about 13 stops, so 14 bits would really be sufficient. The only way out would be to make the electron wells bigger. Since well depth normally scales with pixel area, multiplying the linear pixel dimensions by three would be enough to bring the theoretically required digitised DR up to 16 bits.

Using 2000 electrons/µm^2 as an effective upper limit for the electron storage capacity per unit area (to limit blooming effects), an electron well depth of about 589,000 corresponds to a pixel pitch of 17.2 µm, which in turn corresponds to a 3 Mpix full-frame camera.

So, in the interest of getting the best DR out of our photos, should we go for the biggest pixels and the lowest resolution? Only if you're interested in the DR per pixel, which normally isn't the case; normal photographers are more interested in the DR per sensor area. It will be a trade-off between readout noise and resolution.
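Here is a minimal sketch of the arithmetic above; the 9-electron detection floor, the 2000 e-/µm² storage density and the 36 x 24 mm frame are the figures quoted in the post, and everything else follows from them.

Code:
import math

FULL_WELL = 2 ** 16    # 65,536 e-: well depth implied by 16-bit encoding
MIN_SIGNAL = 9         # e-: the ~3-sigma detection floor quoted above
print(f"DR of a 65,536 e- well: {math.log2(FULL_WELL / MIN_SIGNAL):.1f} stops")

# Well depth needed for a true 16-stop range above that same floor:
well_16_stops = MIN_SIGNAL * 2 ** 16     # ~589,000 e-
DENSITY = 2000                           # e- per um^2 storage capacity
area = well_16_stops / DENSITY           # um^2 per photosite
pitch = math.sqrt(area)                  # um
ff_area = 36_000 * 24_000                # full-frame area in um^2
print(f"Required pixel pitch: {pitch:.1f} um")
print(f"Full-frame resolution at that pitch: {ff_area / area / 1e6:.1f} Mpix")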

If you're interested in digging deeper into the topic, I recommend having a look at clarkvision.com.

LifeAfter said:
The only thing they develop sooo slowly is the DYNAMIC RANGE.

I totally agree, but the best the manufacturers can do is not to increase the number of encoding bits but to limit sources of noise other than shot noise. E.g., for the 5D2 sensor the DR is severely limited by pattern noise; if Canon could reduce that for the 5D3, the DR would improve significantly.

Edwin Herdman said:
I also question the assumption that DR always scales in a linear fashion.

CMOS sensors, like CCDs, are normally linear devices. Non-linearity (normally on the order of a few percent) can be important if you are interested in accurate photometry, but for S/N calculations it is irrelevant. See, e.g., Hain et al. (2007), "Comparison of CCD, CMOS and intensified cameras", for CMOS linearity measurements.
 
Upvote 0
Why not 16 bit (you must mean 16-bit data capture)? Yes, there is a considerable cost difference: you have to make everything 16 bit, the sensor readout, the DIGIC chip, etc. Look at modems: they originally used 7 data bits (commonly 1 start bit, 7 data bits, one or no parity bit and 2 stop bits), and the changes were all brought about by making more of them, which made them cheaper (there was a time when every computer came with a 56K modem included). We still have a technology and cost issue manufacturing 16-bit parts compared to 14-bit.

And, uh, NO, digital does not come close to matching film for resolution (a 1,000-plus times difference), but digital is not enlarged, cropped, etc. the way film was, and it is generally reproduced at around 300 dpi. (My printer claims a 9600 dpi maximum resolution, which no digital camera known to man comes anywhere close to filling; the 1D X's maximum resolution is 5184 x 3456. Imagine the size of a print at 9600 dpi, and then understand that film still has many times greater resolution.)

NO, digital actually has much more noise/grain; you just don't enlarge it the way you did film, and because it's electronic, some of the noise can be filtered out and the grain smoothed (despeckling, etc.).

Yes, you have the germ of an idea in that 16 bits can convey more shades than 14, but that does not necessarily translate into more dynamic range; that has to do with the sensitivity of the sensor and the amplification (and the control of that amplification).

One other thing you need to understand: digital sensors have a color filter array (under the microlenses) in front of the photosites. Each individual pixel is filtered red, green, or blue and gets the other two colors by averaging neighboring pixels with those filters, such that NO color in a digital image is measured directly at a single pixel. (In film, the chemistry and the age of the film can shift the colors, but that effect is usually even across the whole image.)
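As a toy illustration of that neighbour-averaging idea, here is a minimal bilinear demosaicing sketch (assuming an RGGB layout and NumPy; real demosaicing algorithms are far more sophisticated and edge-aware).

Code:
# Minimal bilinear demosaic: each photosite keeps its measured colour and the
# two missing colours are filled in from a 3x3 neighbour average.
import numpy as np

def bilinear_demosaic(mosaic):
    """mosaic: 2-D array of raw values under an (assumed) RGGB filter array."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def interpolate(mask):
        vals = np.where(mask, mosaic, 0.0)        # measured samples only
        cnt = mask.astype(float)
        pad_v, pad_c = np.pad(vals, 1), np.pad(cnt, 1)
        # 3x3 box sums via shifted slices (no SciPy dependency).
        sum_v = sum(pad_v[i:i+h, j:j+w] for i in range(3) for j in range(3))
        sum_c = sum(pad_c[i:i+h, j:j+w] for i in range(3) for j in range(3))
        avg = sum_v / np.maximum(sum_c, 1)
        return np.where(mask, mosaic, avg)        # keep measurements, fill gaps

    return np.dstack([interpolate(m) for m in (r_mask, g_mask, b_mask)])

# Tiny example: a flat grey scene recorded through the filter array.
raw = np.full((4, 4), 100.0)
print(bilinear_demosaic(raw)[1, 1])   # -> approximately [100. 100. 100.]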

So, why not 16 bit? Not enough gain for the cost.
 
Upvote 0