
Author Topic: More Big Megapixel Talk [CR1]  (Read 60210 times)

caruser

  • Rebel T5i
  • ****
  • Posts: 125
    • View Profile
Re: More Big Megapixel Talk [CR1]
« Reply #180 on: October 04, 2012, 03:37:53 AM »
Why is 14 a hard limit? I understand that it's impossible to represent more than 2^14 different intensities, but that's not what dynamic range is: DR = log2(saturation point) - log2(black point). Why can't this be greater than the number of bits in the ADC?

It could with a nonlinear ADC, except that almost all IC-based ADCs are linear.  So, while the analog DR is the delta between the full well capacity and the noise floor (in e-), a 14-bit ADC maps signal at the noise floor to 0 and signal at full well capacity to 16,383, binning intermediate e- values incrementally, subject to quantization error.
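
To put numbers on that mapping, here's a minimal Python sketch; the full well capacity and noise floor values are made up for illustration, not any real sensor's:

Code:
# Minimal sketch of a linear 14-bit ADC, with made-up numbers.
FULL_WELL_E = 60000.0    # hypothetical full well capacity (e-)
NOISE_FLOOR_E = 25.0     # hypothetical noise floor (e-)
LEVELS = 2**14           # 14-bit ADC -> 16384 output codes

def adc(electrons):
    """Map a signal in e- linearly onto 0..16383, clipping at both ends."""
    span = FULL_WELL_E - NOISE_FLOOR_E
    code = round((electrons - NOISE_FLOOR_E) / span * (LEVELS - 1))
    return max(0, min(LEVELS - 1, code))

print(adc(NOISE_FLOOR_E))  # 0      (noise floor maps to 0)
print(adc(FULL_WELL_E))    # 16383  (full well maps to the top code)
print(adc(30000))          # 8188   (intermediate, subject to quantization error)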

You have the (possibly nonlinear) response of the ADC. What about the response of the sensor itself to light? Must this always be exactly linear?

Also, if I pool four adjacent signals into one supersize pixel, how many bits do I have in my new "superpixel"? Do I not have 56 bits?

That would be too easy. No, when merging 4 pixels you gain (at best) two more bits, because 4 is 2^2 (and the exponent is the one we're interested in). Think about it: you have four values from 0 to X, so the combination gives you a value from 0 to 4X, which is two additional bits, not the X^4 or whatever you'd need to get to 56 bits. Said differently, you can't multiply the bits by 4 when you multiply the pixels by 4, because the pixels are on a linear scale and the bits on a logarithmic one.
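
To make that arithmetic concrete (a toy calculation, nothing camera-specific):

Code:
# Summing four 14-bit values needs only 2 extra bits, not 4x the bits.
X = 2**14 - 1                  # max value of one 14-bit pixel: 16383
total_max = 4 * X              # max value of the 4-pixel sum: 65532
print(total_max.bit_length())  # 16 -> 14 + log2(4) bits, a long way from 56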


generalstuff

  • Guest
Re: More Big Megapixel Talk [CR1]
« Reply #181 on: October 04, 2012, 03:53:03 AM »
Guys, no matter what facts you have or what you say it's not going to help much, lol.
I know, I know.  But...the windmill is right there, just sticking straight up out of the field and begging to be tilted at and charged...


Really, the D3x has a 12-bit ADC?  Quick, you'd better call Nobuyoshi Gokyu (President & CEO of Nikon, Inc., but I'm sure you know that) and tell him that the features page for the D3x, which states, "Fast 14-bit A/D conversion incorporated onto the image sensor," is wrong and needs to be corrected immediately, based on your thorough understanding of the read out from Nikon sensors.

So, if we can agree that Nikon is correct about their own D3x specifications, and that you, despite your extensive understanding and studying, are wrong about their D3x specifications, let's just say that the D3x has a 14-bit ADC and move on...

In that case, DxO's measurements show a very decent 12.84 EV of DR for the D3x.  Their 'Landscape Score', however, is an artificially inflated 13.65 EV - still technically possible (unlike the D800), but again, artificially inflated as a direct result of a flawed method of data analysis.

Study the subject, indeed.  Oh, puuullllleeeeeeeze.  It's almost as if I didn't have a day job that includes both analyzing quantitative image analysis data and managing a large group of scientists who do the same...  ::)

Sorry, but you are wrong. :)

The D3x sensor is the same as in the Sony A900. There has been much discussion about whether it is a 12- or 14-bit column ADC and whether Nikon changed it. For the Sony sensor, it has been determined to be a 12-bit column ADC.

Whichever it is in the Nikon bodies: in the D3x, D90, D7000, and D800, two readouts of the data take place and are merged, so the number of dynamic-range stops is higher than from a single readout through a common ADC.

With Canon's sensor layout it's different. A camera based on the 18 MP APS-C pixel with a very good and very expensive external ADC (probably 16-bit) could give them more megapixels and better DR, if they can deal with the banding. One sensor expert maintains that Canon could deal with banding by leaving larger areas of masked black pixels, which would allow the required line-by-line and column-by-column gain corrections.

So do not blame DxO's measurements as incorrect when Canon shows poor numbers for DR and other measurable parameters.

« Last Edit: October 04, 2012, 04:06:10 AM by generalstuff »

neuroanatomist

  • CR GEEK
  • ********
  • Posts: 14045
    • View Profile
Re: More Big Megapixel Talk [CR1]
« Reply #182 on: October 04, 2012, 05:52:19 AM »
So do not blame DxO's measurements as incorrect when Canon shows poor numbers for DR and other measurable parameters.

I'm not. Their measurements are fine. It's their analysis of those measurements, specifically the normalization method that pushes values beyond the possible measured range, that is flawed (and their factor weighting for the overall score is a black box of 'weighted' combination where the weighting is undisclosed, and thus may not even be consistent).

Nor am I saying that Canon's sensors have as good a DR as Nikon's, at least at low ISO (they don't, because their noise floor is too high, although that matters less as gain is applied).

But...it really doesn't matter, so I'll not continue to respond to your obviously specious statements, because:

The D3x sensor is the same as in the Sony A900. There has been much discussion about whether it is a 12- or 14-bit column ADC and whether Nikon changed it. For the Sony sensor, it has been determined to be a 12-bit column ADC.

Ok, so Nikon says it's 14-bit, but you say Nikon is lying and it's actually 12-bit....

Guys, no matter what facts you have or what you say it's not going to help much, lol.

I concede the point.  ;)
« Last Edit: October 04, 2012, 06:09:28 AM by neuroanatomist »
EOS 1D X, EOS M, and lots of lenses
______________________________
Flickr | TDP Profile/Gear List

elflord

  • 5D Mark III
  • ******
  • Posts: 705
    • View Profile
Re: More Big Megapixel Talk [CR1]
« Reply #183 on: October 04, 2012, 06:06:07 AM »


You have the (possibly nonlinear) response of the ADC. What about the response of the sensor itself to light? Must this always be exactly linear?

Also, if I pool four adjacent signals into one supersize pixel, how many bits do I have in my new "superpixel"? Do I not have 56 bits?

That would be too easy. No, when merging 4 pixels you gain (at best) two more bits, because 4 is 2^2 (and the exponent is the one we're interested in). Think about it: you have four values from 0 to X, so the combination gives you a value from 0 to 4X, which is two additional bits, not the X^4 or whatever you'd need to get to 56 bits. Said differently, you can't multiply the bits by 4 when you multiply the pixels by 4, because the pixels are on a linear scale and the bits on a logarithmic one.

Sorry, yes, that's what I meant (and what I hammered away at earlier). You get log2(number of pixels merged).

It seems the conclusion though is that no, the number of bits in the ADC really is not a hard limit.

neuroanatomist

  • CR GEEK
  • ********
  • Posts: 14045
    • View Profile
Re: More Big Megapixel Talk [CR1]
« Reply #184 on: October 04, 2012, 06:20:30 AM »
It seems the conclusion though is that no, the number of bits in the ADC really is not a hard limit.

Agreed, at least that it's not a hard theoretical limit.  I believe it is a hard practical limit for the sensors under discussion, though.  Could they be nonlinear?  Unlikely - real data from previous sensors indicates linearity (e.g. Roger Clark's data), and as I stated, almost all IC-based ADCs are linear, except the very earliest ones.  As for binning, yes, you can gain DR, as well as sensitivity.  But you lose resolution - and since that's linear, not logarithmic, you trade a lot of resolution for that DR gain (having said that, many fluorescence imaging systems, where the signal is faint, do make just such a trade-off, although the goal there is usually higher sensitivity, not the higher DR that comes with it).  If you interpolate back to full resolution, you get a softer image - and the D800 images are plenty sharp, so I don't think binning is going on here, either (for luminance; obviously color interpolation is a different story).
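
The lopsidedness of that trade is easy to tabulate. A small sketch, assuming read-noise-limited shadows where averaging n pixels cuts noise by sqrt(n), i.e. gains 0.5*log2(n) stops:

Code:
import math

# Resolution falls linearly with the binning factor n,
# while the noise-limited DR gain grows only as 0.5*log2(n).
for n in (2, 4, 9, 16):
    dr_gain = 0.5 * math.log2(n)
    print(f"bin {n:2d} pixels: 1/{n} the resolution, +{dr_gain:.2f} stops DR")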
EOS 1D X, EOS M, and lots of lenses
______________________________
Flickr | TDP Profile/Gear List

elflord

  • 5D Mark III
  • ******
  • Posts: 705
    • View Profile
Re: More Big Megapixel Talk [CR1]
« Reply #185 on: October 04, 2012, 07:24:42 AM »
It seems the conclusion though is that no, the number of bits in the ADC really is not a hard limit.

Agreed, at least that it's not a hard theoretical limit.  I believe it is a hard practical limit for the sensors under discussion, though.  Could they be nonlinear?  Unlikely - real data from previous sensors indicates linearity (e.g. Roger Clark's data),

This seems a bit counterintuitive -- because generally, physical saturation effects manifest as asymptotic bounds, not straight lines that run into a ceiling. So I'm a bit sceptical here. Could you point me to some reference that demonstrates this linearity?

Quote
As for binning, yes, you can gain DR, as well as sensitivity.  But you lose resolution - and since that's linear, not logarithmic, you trade a lot of resolution for that DR gain
...
If you interpolate back to full res, you get a softer image - and the D800 images are plenty sharp, so I don't think binning is going on here, either
But I think that is essentially what is going on, because DxOMark's print score is based on a normalization to 8 megapixels. I'd think that would buy you a gain of a stop or so (something like log2(sqrt(36 MP / 8 MP))).
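
Evaluating that formula (this is just the arithmetic, with DxO's 8 MP reference taken as given):

Code:
import math

# Downsampling 36 MP to 8 MP scales per-pixel noise by sqrt(8/36),
# which is worth about one stop of measured DR.
gain_stops = math.log2(math.sqrt(36e6 / 8e6))
print(f"{gain_stops:.2f} stops")  # ~1.08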

As long as you're working with the premise that the viewing size of the image does not depend on the number of megapixels on the camera, the print number is the one you want. The "screen" number is nice to have as well; it's one of many examples of DxO being thorough about documenting the intermediate steps of their process.

marinien

  • Canon AE-1
  • ***
  • Posts: 78
    • View Profile
Re: More Big Megapixel Talk [CR1]
« Reply #186 on: October 04, 2012, 07:52:24 AM »
But...it really doesn't matter, so I'll not continue to respond to your obviously specious statements, because:

The D3x sensor is the same as in the Sony A900. There has been much discussion about whether it is a 12- or 14-bit column ADC and whether Nikon changed it. For the Sony sensor, it has been determined to be a 12-bit column ADC.

Ok, so Nikon says it's 14-bit, but you say Nikon is lying and it's actually 12-bit....


@generalstuff: you should do more research before jumping to conclusions. That the Sony A900 has a 12-bit ADC does not mean the sensor is limited to 12 bits! It means only that Sony chose to include a 12-bit ADC in the A900.
BTW, the Nikon D3X has a max of 5 fps in 12-bit mode (Nikon specs) and about 2 fps in 14-bit mode (estimation).
7D | EF-S 17-55 | EF 100mm f/2.8 L IS Macro | 580EX II | Benro C3780T + Markins M20


straub

  • Guest
Re: More Big Megapixel Talk [CR1]
« Reply #187 on: October 04, 2012, 08:05:55 AM »
But I think that is essentially what is going on, because DxOMark's print score is based on a normalization to 8 megapixels. I'd think that would buy you a gain of a stop or so (something like log2(sqrt(36 MP / 8 MP))).

That's the theoretical part, which would work in an ideal situation with ideal noise characteristics and real numbers instead of quantized integers.

AFAIK, Nikon RAWs currently clamp any digitized negative values to zero (compare that to Canon having a bias value of 2048 in the data). This roughly halves the stdev of the dark frame image captures in Nikon's case, in the end inflating the "measured DR" by roughly one stop.

Another result of this is that for values of low magnitude, oversampling the individual pixel values in SW does not result in the expected behavior of noise converging towards zero. Since the noise-converging-to-zero is a key assumption in the whole "increase-DR-by-binning" scenario, it's quite trivial to notice that the theory doesn't hold water in this case.
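
A quick simulation of the clamping effect, assuming purely Gaussian read noise (an idealization; real dark frames also carry pattern noise):

Code:
import random, statistics

# Dark frame: true signal 0, Gaussian read noise sigma = 8 DN (made up).
random.seed(1)
sigma = 8.0
dark = [random.gauss(0.0, sigma) for _ in range(100_000)]

canon_style = [x + 2048 for x in dark]     # bias offset, nothing clipped
nikon_style = [max(0.0, x) for x in dark]  # negative values clamped to zero

print(statistics.pstdev(canon_style))  # ~8.0, full read noise preserved
print(statistics.pstdev(nikon_style))  # ~4.7, clamping shrinks the stdev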

neuroanatomist

  • CR GEEK
  • ********
  • Posts: 14045
    • View Profile
Re: More Big Megapixel Talk [CR1]
« Reply #188 on: October 04, 2012, 09:12:25 AM »
...physical saturation effects manifest as asymptotic bounds, not straight lines that run into a ceiling. So I'm a bit sceptical here. Could you point me to some reference that demonstrates this linearity?

http://scien.stanford.edu/pages/labsite/2007/psych221/projects/07/camera_characterization/sensor_linearity.html

Granted, it's not peer reviewed, and as a Cal alum I am properly skeptical of data from students of Leland Stanford Junior College University...  :P

But seriously, the photodiode fills in exactly that fashion - linear up to the full well capacity, then it hits a ceiling where no more charge can be collected (if it were a CCD sensor, that surplus charge would just spill over to adjacent photodiodes, i.e. blooming).  The ADC is linear by design (although nonlinearity is introduced to the image later, intentionally, as a gamma correction to simulate human visual processing, i.e. to 'make the picture look good').
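
In code, that response is just linear-until-clipped. A toy model, not measured data:

Code:
# Toy photodiode: perfectly linear until the well is full, then flat.
FULL_WELL_E = 60000  # hypothetical full well capacity (e-)
QE = 0.5             # hypothetical quantum efficiency

def electrons_collected(photons):
    return min(int(photons * QE), FULL_WELL_E)

for p in (1_000, 100_000, 200_000):
    print(p, "->", electrons_collected(p))
# 1000 -> 500, 100000 -> 50000, 200000 -> 60000 (clipped at full well)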
EOS 1D X, EOS M, and lots of lenses
______________________________
Flickr | TDP Profile/Gear List

nightowl

  • Guest
Re: More Big Megapixel Talk [CR1]
« Reply #189 on: October 04, 2012, 09:33:26 AM »
About DxO:
We can use the DxOMark data to derive the sensor parameters. From the DxOMark dynamic range we get the ratio of full-well signal to read noise. From the SNR plot we get the ratio between the signal and the total noise at a given exposure. By using the dynamic range and two SNR values (at full capacity and at one other level) from the full SNR plots, we can solve for the read noise, full well capacity, gain, etc.
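
A rough sensorgen-style version of that derivation, assuming the highlights are shot-noise limited (so peak SNR = sqrt(full well)); the input numbers below are illustrative, not DxO's:

Code:
import math

def sensor_params(dr_stops, snr_db_at_saturation):
    """Back out full well capacity and read noise from DR and peak SNR."""
    snr_linear = 10 ** (snr_db_at_saturation / 20)
    full_well = snr_linear ** 2             # e-, since SNR = sqrt(FWC)
    read_noise = full_well / 2 ** dr_stops  # e-, since DR = log2(FWC/RN)
    return full_well, read_noise

fwc, rn = sensor_params(dr_stops=11.0, snr_db_at_saturation=48.0)
print(f"full well ~{fwc:.0f} e-, read noise ~{rn:.1f} e-")  # ~63096 e-, ~30.8 e-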

5D Mk III read noise and dynamic range: http://www.sensorgen.info/CanonEOS_5D_MkIII.html
D800 read noise and dynamic range: http://www.sensorgen.info/NikonD800.html

And as a member mentioned here before, a very good and very expensive external 16-bit ADC could give Canon better DR, if they can deal with the banding.
« Last Edit: October 04, 2012, 09:39:06 AM by nightowl »

elflord

  • 5D Mark III
  • ******
  • Posts: 705
    • View Profile
Re: More Big Megapixel Talk [CR1]
« Reply #190 on: October 05, 2012, 10:31:19 PM »
But I think that is essentially what is going on, because DxOMark's print score is based on a normalization to 8 megapixels. I'd think that would buy you a gain of a stop or so (something like log2(sqrt(36 MP / 8 MP))).

That's the theoretical part, which would work in an ideal situation with ideal noise characteristics and real numbers instead of quantized integers.

AFAIK, Nikon RAWs currently clamp any digitized negative values to zero (compare that to Canon having a bias value of 2048 in the data). This roughly halves the stdev of the dark frame image captures in Nikon's case, in the end inflating the "measured DR" by roughly one stop.

I don't follow this at all. Luminance isn't negative, so why would it make sense to have negative numbers? If this clipping really takes place, does it show up on the SNR curves? I don't really buy that they can inflate the estimated dynamic range by clipping relatively high values (one problem with this is that it leaves some dynamic range on the table). I see other problems with this line of reasoning. For one, you don't halve the standard deviation by throwing away half the distribution, because the result is heavily skewed (e.g. the left tail is bounded and the right isn't). I might be missing something, but the above looks like nonsense to me.

Quote
Another result of this is that for values of low magnitude, oversampling the individual pixel values in SW does not result in the expected behavior of noise converging towards zero. Since the noise-converging-to-zero is a key assumption in the whole "increase-DR-by-binning" scenario, it's quite trivial to notice that the theory doesn't hold water in this case.

When we talk about how "theory" plays out in the real world, it is far from "trivial".

In the case of signal to noise and its application to dynamic range -- even if we fail to realize the "theoretical" black point due to quantization error (because the actual noise is less than the quantization error), we still increase usable dynamic range.

Suppose for example our "shadow noise level" (the noise at a signal level of 1) is 1 -- so 1 on our scale corresponds to the black point.  If we average, theoretically we could reduce the black point, but our error is stuck at 1 due to quantization. That, if I understand it, is your argument.

But let's step up a couple of stops. At a signal level of 4, our noise level is 2 (proportional to the sqrt of the signal), so at this level we would improve the signal-to-noise ratio by binning (that is, quantization error isn't the limiting factor).

So you will gain usable dynamic range by increasing resolution, even if quantization places a kind of floor on your black point.
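
A quick simulated check of that claim, with Gaussian noise and integer quantization (toy numbers as above):

Code:
import random, statistics

random.seed(2)

def quantized_sample(level, sigma):
    return round(random.gauss(level, sigma))

# Signal level 4, noise sigma = 2, quantized to integers.
singles = [quantized_sample(4, 2) for _ in range(100_000)]
averages = [sum(quantized_sample(4, 2) for _ in range(4)) / 4
            for _ in range(25_000)]

print(statistics.pstdev(singles))   # ~2.0
print(statistics.pstdev(averages))  # ~1.0, binning helps despite quantization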

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4158
  • POTATO
    • View Profile
    • Nature Photography
Re: More Big Megapixel Talk [CR1]
« Reply #191 on: October 07, 2012, 11:55:43 PM »
But I think that is essentially what is going on, because DxOMark's print score is based on a normalization to 8 megapixels. I'd think that would buy you a gain of a stop or so (something like log2(sqrt(36 MP / 8 MP))).

That's the theoretical part, which would work in an ideal situation with ideal noise characteristics and real numbers instead of quantized integers.

AFAIK, Nikon RAWs currently clamp any digitized negative values to zero (compare that to Canon having a bias value of 2048 in the data). This roughly halves the stdev of the dark frame image captures in Nikon's case, in the end inflating the "measured DR" by roughly one stop.

I don't follow this at all. Luminance isn't negative, so why would it make sense to have negative numbers? If this clipping really takes place, does it show up on the SNR curves? I don't really buy that they can inflate the estimated dynamic range by clipping relatively high values (one problem with this is that it leaves some dynamic range on the table). I see other problems with this line of reasoning. For one, you don't halve the standard deviation by throwing away half the distribution, because the result is heavily skewed (e.g. the left tail is bounded and the right isn't). I might be missing something, but the above looks like nonsense to me.

Quote
Another result of this is that for values of low magnitude, oversampling the individual pixel values in SW does not result in the expected behavior of noise converging towards zero. Since the noise-converging-to-zero is a key assumption in the whole "increase-DR-by-binning" scenario, it's quite trivial to notice that the theory doesn't hold water in this case.

When we talk about how "theory" plays out in the real world, it is far from "trivial".

In the case of signal to noise and its application to dynamic range -- even if we fail to realize the "theoretical" black point due to quantization error (because the actual noise is less than the quantization error), we still increase usable dynamic range.

Suppose for example our "shadow noise level" (the noise at a signal level of 1) is 1 -- so 1 on our scale corresponds to the black point.  If we average, theoretically we could reduce the black point, but our error is stuck at 1 due to quantization. That, if I understand it, is your argument.

But let's step up a couple of stops. At a signal level of 4, our noise level is 2 (proportional to the sqrt of the signal), so at this level we would improve the signal-to-noise ratio by binning (that is, quantization error isn't the limiting factor).

So you will gain usable dynamic range by increasing resolution, even if quantization places a kind of floor on your black point.

Only photon shot noise follows a Poisson distribution, and is therefore proportional to the sqrt of the signal. But we're not talking about photon shot noise...we're talking about read noise, the fixed amount of noise that exists in the lower range of the image signal, and how downscaling affects that kind of noise. I'm not sure you can simply and cleanly apply Poisson statistics to read noise, or how scaling affects read noise, especially considering that there is also photon shot noise to contend with at those levels. I think you're making the problem far simpler than it really is, and ignoring a key factor in the discussion of DR, and why a Sony Exmor sensor does have more DR than any other sensor on the market right now, but not as much as DxO's results indicate.
My Photography
Current Gear: Canon 5D III | Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: SBIG STT-8300M | Canon EF 300mm f/2.8 L II

straub

  • Guest
Re: More Big Megapixel Talk [CR1]
« Reply #192 on: October 08, 2012, 02:53:13 AM »
I don't follow this at all. Luminance isn't negative, so why would it make sense to have negative numbers? If this clipping really takes place, does it show up on the SNR curves? I don't really buy that they can inflate the estimated dynamic range by clipping relatively high values (one problem with this is that it leaves some dynamic range on the table).

Negative values are in reference to the RAW file black point, which for NEFs is 0. Canon RAWs use either 1024 or 2048. See e.g. http://theory.uchicago.edu/~ejm/pix/20d/posts/tests/D300_40D_tests/. The values clipped are not "relatively high"; they are on the low end of the scale.

Suppose for example our "shadow noise level" (the noise at a signal level of 1) is 1 -- so 1 on our scale corresponds to the black point.  If we average, theoretically we could reduce the black point, but our error is stuck at 1 due to quantization. That, if I understand it, is your argument.

No, it's about the clipping. See the curves in the link I posted. In the case of the 40D, the read noise follows a normal distribution. You can see that by binning pixels, the resultant noise will converge towards 1024, i.e. the black point. In Nikon's case it would approach something > 0, i.e. a value above the black point set in the NEF.
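
A short simulation of that convergence, again assuming Gaussian read noise:

Code:
import random, statistics

# Average ever more dark-frame pixels and watch where the mean settles.
random.seed(3)
sigma = 8.0
dark = [random.gauss(0.0, sigma) for _ in range(200_000)]

canon_mean = statistics.fmean(x + 1024 for x in dark)     # bias 1024, no clip
nikon_mean = statistics.fmean(max(0.0, x) for x in dark)  # clamped at 0

print(canon_mean)  # ~1024.0, converges to the black point
print(nikon_mean)  # ~3.2, i.e. sigma/sqrt(2*pi) above the black point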


straub

  • Guest
Re: More Big Megapixel Talk [CR1]
« Reply #193 on: October 08, 2012, 03:03:34 AM »
But let's step up a couple of stops. At a signal level of 4, our noise level is 2 (proportional to the sqrt of the signal), so at this level we would improve the signal-to-noise ratio by binning (that is, quantization error isn't the limiting factor).

So you will gain usable dynamic range by increasing resolution, even if quantization places a kind of floor on your black point.

Do you understand that on a 14-bit scale, the level 4 corresponds to an LV roughly 12 stops down from the maximum? Minimizing noise by pixel binning at LV -12 doesn't do anything to the resultant DR if the per-pixel DR is already over 13 stops. The quantization and 14-bit representation only limit the maximum attainable DR to 14 stops; the clipping of the bottom end limits it to something <14 stops.
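
The arithmetic behind that, for anyone checking:

Code:
import math
# Top of a 14-bit scale is 2^14; level 4 sits log2(16384/4) stops below it.
print(math.log2(2**14 / 4))  # 12.0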

luftweg

  • Guest
Re: More Big Megapixel Talk [CR1]
« Reply #194 on: October 11, 2012, 10:49:10 AM »
They should call it the EOS 1DZ.......   because all this talk is making me 'DZ'.......
