
Author Topic: Why not 16bit?  (Read 13342 times)

Flake

  • Guest
Re: Why not 16bit?
« Reply #15 on: December 17, 2011, 03:17:19 PM »
try multiplying 255 three times (3 channels) and you'll realise that it is a 16 bit format. If 16 bit per channel becomes popular you won't be processing it on a PC or a Mac!

A PC can handle 32-bit colour, the Mac 48-bit colour, but the human eye cannot detect the difference, so there's little point.


Edwin Herdman

  • EOS 7D Mark II
  • *****
  • Posts: 541
Re: Why not 16bit?
« Reply #16 on: December 17, 2011, 10:18:27 PM »
try multiplying 255 three times (3 channels) and you'll realise that it is a 16 bit format. If 16 bit per channel becomes popular you won't be processing it on a PC or a Mac!
24-bit color is actually 8 bits per channel.  48-bit color is 16 bits per channel.

32-bit color (I had to look this up, but it confirmed what I thought) is just 8 bits per channel, across three channels, plus 8 bits for an alpha channel (transparency, which is vitally important for "modern" OSes and graphics applications going back to earlier releases of Windows and the first 3D games).
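
For anyone who wants to see the arithmetic, here's a rough Python sketch of how 24-bit and 32-bit values split into 8-bit channels (the function names and channel order are just for illustration, not any particular API):

Code: (Python)
# Pack 8-bit R, G, B and alpha values into one 32-bit word.
# 24-bit colour is the same thing minus the alpha byte.
def pack_rgba(r, g, b, a=255):
    for v in (r, g, b, a):
        assert 0 <= v <= 255, "each channel is 8 bits (0-255)"
    return (a << 24) | (r << 16) | (g << 8) | b

def unpack_rgba(value):
    return ((value >> 16) & 0xFF,   # red
            (value >> 8) & 0xFF,    # green
            value & 0xFF,           # blue
            (value >> 24) & 0xFF)   # alpha

print(hex(pack_rgba(255, 128, 0)))   # 0xffff8000
print(unpack_rgba(0xFFFF8000))       # (255, 128, 0, 255)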

Quote
but the human eye cannot detect the difference so there's little point.
Almost always this argument fails (have you ever heard "the human eye cannot distinguish scene changes above 60Hz," for example?).  In this case, it's debatable at best - some people can see more colors than others.  It might be worthwhile to ask whether the tradeoffs in processing power and storage space are "worth it," but considering how much processing power is available these days, this argument holds far less water than it once did.

LifeAfter

  • EOS Rebel SL2
  • ***
  • Posts: 90
  • Photo is only 1 media to express among the others
Re: Why not 16bit?
« Reply #17 on: December 18, 2011, 08:16:07 AM »
So why not make a new system, like -255 to 510 per channel instead of the current 0-255?
It would increase the DR, wouldn't it?

Maybe my idea is stupid, not logical,
but one thing I know for sure: there's always room for new discoveries,
and the current DR is far from being perfect (film-like)
5D MK III / EF 16-35mm f2.8 L USM II / EF 24-70 f2.8 L II /EF 70-200mm f2.8 L / Extender 1.4 / EF 35mm f2 IS / EF 50mm f1.4 USM / Sigma 85mm f1.4
Fujifilm X100S / Film: EOS 1000Fn, CANON AT1, MAMIYA DSX 1000

epsiloneri

  • EOS 6D Mark II
  • *****
  • Posts: 416
Re: Why not 16bit?
« Reply #18 on: December 18, 2011, 08:42:18 AM »
Why not 16 bit? Shouldn't it contribute to a better DR?

Only if the signal resolution of the encoding were the limiting part. The DR is the ratio between the strongest and the weakest signal you can record. The maximum dynamic range you can hope for is set by the electron-well depth, which depends on the sensor but rarely exceeds the 2^16 = 65536 electrons implied by 16 bits. In practice, the weakest signal you can detect is limited by read-out noise at a few electrons (again depending on the sensor).

It also depends a bit on how well the weakest signal needs to be recorded in order to count as "detected": if you e.g. require 3-sigma confidence in the signal, that in itself means you need at least 9 electrons to call a signal detected (below that you cannot distinguish it from noise). That would give your 16-bit electron well a DR of about 13 stops, so 14 bits would really be sufficient. The only way out would be to make the electron wells bigger. Since the well depth normally scales with pixel area, multiplying the linear pixel dimensions by three would be enough to bring the theoretical digitised DR requirement up to 16 bits.

Using 2000 electrons/µm^2 as an effective upper limit for the electron storage capacity per area (to limit blooming effects), an electron well depth of 589000 corresponds to a pixel pitch of 17.2 µm, which in turn corresponds to a 3 Mpix FF camera.
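
If it helps, here is a quick back-of-the-envelope check of those numbers in Python, assuming the 3-sigma / 9-electron detection floor and the 2000 electrons/µm^2 figure from above (illustrative only):

Code: (Python)
import math

# Dynamic range in stops, taking the weakest "detected" signal to be
# 9 electrons (3-sigma confidence against shot noise).
def dr_stops(full_well, floor_electrons=9):
    return math.log2(full_well / floor_electrons)

print(round(dr_stops(65536), 1))    # ~12.8 stops for a 2^16 electron well
print(round(dr_stops(589000), 1))   # ~16.0 stops for a 589,000 electron well

# Pixel pitch needed for that well at ~2000 electrons per square micron,
# and the resulting pixel count on a 36 x 24 mm full-frame sensor.
pitch_um = math.sqrt(589000 / 2000)
mpix = (36000 * 24000) / (pitch_um ** 2) / 1e6
print(round(pitch_um, 1), "um pitch ->", round(mpix, 1), "Mpix full frame")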

So, in the interest of getting the best DR out of our photos, should we go for the biggest pixels and lowest resolutions? Only if you're interested in the DR per pixel, which normally isn't the case; most photographers are more interested in the DR per sensor area. It's a trade-off between readout noise and resolution.

If you're interested in digging deeper into the topic, I would recommend having a look at clarkvision.com.

The only thing they develop sooo slowly is the DYNAMIC RANGE.

I totally agree, but the best the manufacturers can do is not to increase the number of encoding bits but to limit sources of noise other than shot-noise. E.g., for the 5D2 sensor, the DR is severely limited by pattern noise. If Canon could limit that for the 5D3, the DR would improve significantly.

I also question the assumption that DR always scales in a linear fashion.

CMOS sensors, like CCDs, are normally linear devices. Non-linearity (normally on the order of a few %) can be important if you are interested in accurate photometry, but for S/N calculations they are irrelevant. See, e.g., Hain et al. (2007), Comparison of CCD, CMOS and intensified cameras for CMOS linearity measurements.

archangelrichard

  • Guest
Re: Why not 16bit?
« Reply #19 on: December 18, 2011, 01:20:04 PM »
Why not 16 bit (data capture, you must mean)? Yes, there is a considerable cost difference: you have to make everything 16 bit, the sensors, the DIGIC chip, etc. Look at modems: they originally used 7 data bits (common was 1 start bit, 7 data bits, one or no parity bit, and 2 stop bits), and the changes were all brought about by making more of them, which made them cheaper (there was a time when all computers came with a 56K modem included). We still have a technology issue with manufacturing 16-bit parts compared to 14-bit.

And, uh, NO, digital does not come close to matching film for resolution (a 1,000-plus times difference), but digital is not enlarged, cropped, etc. the way film was, and it is reproduced at 300 dpi (generally; my printer claims a 9600 dpi max resolution, which no digital camera known to man has yet come anywhere close to matching). The 1D X max resolution is 5184 x 3456 - imagine the size of the print at 9600 dpi, and then understand that film is still many times greater in resolution.
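
To put numbers on that comparison, here's a small sketch using the figures quoted above (300 dpi reproduction vs. the printer's claimed 9600 dpi, and the 1D X's 5184 x 3456 pixels):

Code: (Python)
# Physical print size of a 5184 x 3456 pixel image, mapping one pixel per dot.
def print_size_inches(width_px, height_px, dpi):
    return width_px / dpi, height_px / dpi

for dpi in (300, 9600):
    w, h = print_size_inches(5184, 3456, dpi)
    print(f"{dpi:>5} dpi -> {w:.2f} x {h:.2f} inches")

# At 300 dpi the file makes roughly a 17 x 12 inch print; at one pixel per
# dot of a 9600 dpi printer it would cover only about 0.5 x 0.4 inches.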

NO, digital actually has much more noise / grain; you just do not enlarge it the way you did film, and being electronic, some of the noise can be filtered and the grain smoothed (despeckle, etc.)

Yes, you have the germ of an idea in that 16 bits can convey more shades than 14, but that does not necessarily translate into more dynamic range; that has to do with the sensitivity of the sensor and the amplification / control of amplification

One other thing you need to understand: digital sensors have microlenses in front of them that are color-filtered - each individual pixel is filtered red, green, or blue and gets the other two colors by averaging neighboring pixels with those filters, such that NO colors in a digital image are the true colors of nature. (In film the chemistry and the age of the film can offset the colors, but usually this effect is even throughout the image.)
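
Here is a toy sketch of that neighbor-averaging idea for an RGGB Bayer pattern; real demosaicing algorithms are far more sophisticated, this only illustrates that two of the three colors at each photosite are estimated rather than measured:

Code: (Python)
# Toy Bayer (RGGB) illustration: each photosite records one colour,
# the other two are interpolated from neighbouring photosites.
def bayer_colour(row, col):
    # RGGB tiling: even rows are R G R G ..., odd rows are G B G B ...
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def green_at(mosaic, row, col):
    """Estimate green at a red or blue photosite by averaging the
    directly adjacent (up/down/left/right) green neighbours."""
    if bayer_colour(row, col) == "G":
        return mosaic[row][col]
    neighbours = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    values = [mosaic[r][c] for r, c in neighbours
              if 0 <= r < len(mosaic) and 0 <= c < len(mosaic[0])]
    return sum(values) / len(values)

mosaic = [[10, 20, 10, 20],
          [20, 30, 20, 30],
          [10, 20, 10, 20]]
print(green_at(mosaic, 1, 1))   # blue site at (1, 1) -> 20.0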

so, why not 16 bit? Not enough gain for the cost

Edwin Herdman

  • EOS 7D Mark II
  • *****
  • Posts: 541
Re: Why not 16bit?
« Reply #20 on: December 19, 2011, 02:10:44 AM »
I also question the assumption that DR always scales in a linear fashion.

CMOS sensors, like CCDs, are normally linear devices. Non-linearity (normally on the order of a few %) can be important if you are interested in accurate photometry, but for S/N calculations they are irrelevant. See, e.g., Hain et al. (2007), Comparison of CCD, CMOS and intensified cameras for CMOS linearity measurements.
To confirm - I wasn't talking about S/N, but the scaling of DR along with ISO.  Is that what you are referring to?

Again, to link http://sensorgen.info/ - click on any two cameras, and compare the graphs of read noise and DR.  For one camera, the graphs do not correspond - and across any two cameras, the graphs are usually quite different.

I will point out that I do not disagree that CMOS and CCD devices are essentially linear; I think what is going on is that gain is not applied evenly at various ISO settings in order to target specific areas of importance to the designers - in the "Hi+" ISO settings, for example, it appears obvious that the goal by that point is to have as clean an image as possible, even allowing DR to take a disproportionate hit.

epsiloneri

  • EOS 6D Mark II
  • *****
  • Posts: 416
Re: Why not 16bit?
« Reply #21 on: December 28, 2011, 01:33:02 PM »
To confirm - I wasn't talking about S/N, but the scaling of DR along with ISO.  Is that what you are referring to?

Ah - sorry, no, I misunderstood you. I was talking about the linear response of the detector with exposure (i.e., twice the exposure giving twice the number of photo-electrons). I assume you are referring to the break in the DR curve close to the point of unity gain (where one registered photo-electron corresponds to one digital number, DN). For the 5D2, e.g., unity gain is reached at ~ISO 1600 (see e.g. this graph from sensorgen). Going beyond unity gain results in numerical oversampling of the number of electrons with very little benefit, while at the same time limiting the numerical DR.

Basically, beyond unity gain the DR decreases linearly with ISO, because the maximum number of electrons that can be recorded is limited by the numerical encoding. Below unity gain, that limit on the maximum number of electrons you can encode is countered by the electrons being better sampled, which reduces the numerical noise.

That's why there's no IQ point in going beyond ISO 3200 on the 5D2 (for the 7D, the maximum useful ISO is ~2000). Check fig. 6 from clarkvision. That's also why any ISO beyond 6400 on the 1D X (with a pixel pitch of 6.9 µm) is a gimmick.
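
As a rough illustration of that one-stop-per-stop fall-off beyond unity gain, here's a toy model in Python; the full well, read noise and unity-gain ISO are placeholder numbers, not measured 5D2 values:

Code: (Python)
import math

# Illustrative model only: below the unity-gain ISO the DR is set by full well
# vs. read noise (held constant here for simplicity); beyond unity gain the
# maximum recordable signal halves with every doubling of ISO, so the DR
# falls by about one stop per stop of ISO.
FULL_WELL_E = 65000
READ_NOISE_E = 3.0
UNITY_ISO = 1600

def dr_stops(iso):
    base_dr = math.log2(FULL_WELL_E / READ_NOISE_E)
    stops_lost = max(0.0, math.log2(iso / UNITY_ISO))
    return base_dr - stops_lost

for iso in (400, 1600, 3200, 6400, 12800):
    print(f"ISO {iso:>5}: ~{dr_stops(iso):.1f} stops")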
