Author Topic: More Big Megapixel Talk [CR1]  (Read 85340 times)

elflord

  • 5D Mark III
  • ******
  • Posts: 693
Re: More Big Megapixel Talk [CR1]
« Reply #180 on: October 04, 2012, 07:24:42 AM »
It seems the conclusion though is that no, the number of bits in the ADC really is not a hard limit.

Agreed, at least that it's not a hard theoretical limit.  I believe it is a hard practical limit for the sensors under discussion, though.  Could they be nonlinear?  Unlikely - real data from previous sensors indicates linearity (e.g. Roger Clark's data),

This seems a bit counterintuitive -- because generally, physical saturation effects manifest as asymptotic bounds, not straight lines that run into a ceiling. So I'm a bit sceptical here. Could you point me to some reference that demonstrates this linearity?

Quote
As for binning, yes, you can gain DR, as well as sensitivity.  But you lose resolution - and since that's linear, not logarithmic, you trade a lot of resolution for that DR gain
...
If you interpolate back to full res, you get a softer image - and the D800 images are plenty sharp, so I don't think binning is going on here, either
But I think that is essentially what is going on, because DxOMark's print score is based on a normalization to 8 megapixels. I'd think that would buy you a gain of a stop or so (something like log2(sqrt(36 megapixels / 8 megapixels)) ≈ 1.1 stops).

As long as you're working with the premise that the viewing size of the image does not depend on the number of megapixels on the camera, the print number is the one you want. The "screen" number is nice to have also as it is (one of many) example(s) of DxO being thorough about documenting the intermediate steps of their process.
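As a back-of-the-envelope check, here is that normalization arithmetic as a minimal Python sketch; the 8 MP reference is DxO's stated target, and the sqrt(k) improvement assumes uncorrelated per-pixel noise:

```python
import math

def normalization_gain_stops(sensor_mp, reference_mp=8.0):
    """Stops of DR gained by averaging sensor_mp down to reference_mp."""
    # Averaging k pixels cuts random noise by sqrt(k), i.e. log2(sqrt(k)) stops.
    return math.log2(math.sqrt(sensor_mp / reference_mp))

print(normalization_gain_stops(36.0))  # D800 -> 8 MP: ~1.08 stops
```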

marinien

  • Rebel SL1
  • ***
  • Posts: 78
Re: More Big Megapixel Talk [CR1]
« Reply #181 on: October 04, 2012, 07:52:24 AM »
But...it really doesn't matter, so I'll not continue to respond to your obviously specious statements, because:

The D3X sensor is the same as in the Sony A900; there has been much discussion about whether it is a 12- or 14-bit column ADC and whether Nikon has changed it. In the Sony sensor it has been determined to be a 12-bit column ADC.

Ok, so Nikon says it's 14-bit, but you say Nikon is lying and it's actually 12-bit....


@generalstuff: you should do more research before jumping to any conclusion. That the Sony A900 has a 12-bit ADC does not mean that the sensor is limited to 12 bits! It means that Sony only includes a 12-bit ADC in their A900.
BTW, the Nikon D3X has a max of 5fps in 12-bit mode (Nikon specs) and about 2fps in 14-bit mode (estimation).
7D | EF-S 17-55 | EF 100mm f/2.8 L IS Macro | 580EX II | Benro C3780T + Markins M20

straub

  • Guest
Re: More Big Megapixel Talk [CR1]
« Reply #182 on: October 04, 2012, 08:05:55 AM »
But I think that is essentially what is going on, because DxOMark's print score is based on a normalization to 8 megapixels. I'd think that would buy you a gain of a stop or so (something like log2(sqrt(36 megapixels / 8 megapixels)) ≈ 1.1 stops).

That's the theoretical part, which would work in an ideal situation with ideal noise characteristics and real numbers instead of quantized integers.

AFAIK, Nikon RAWs currently clamp any digitized negative values to zero (compare that to Canon having a bias value of 2048 in the data). This roughly halves the stdev of the dark frame image captures in Nikon's case, in the end inflating the "measured DR" by roughly one stop.

Another result of this is that for values of low magnitude, oversampling the individual pixel values in SW does not result in the expected behavior of noise converging towards zero. Since the noise-converging-to-zero is a key assumption in the whole "increase-DR-by-binning" scenario, it's quite trivial to notice that the theory doesn't hold water in this case.
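A minimal numpy simulation of the clipping effect described above; the sigma and bias values are illustrative, not measured from any camera:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 8.0                                  # illustrative read noise, in DN
dark = rng.normal(0.0, sigma, 1_000_000)     # dark frame around the true black level

# Canon-style: a bias (here 2048) lets negative excursions survive digitization.
biased = np.round(dark) + 2048
print(np.std(biased))                        # ~8.0 DN: full noise preserved

# Nikon-style: black point at 0, negative values clamped before writing the file.
clipped = np.maximum(np.round(dark), 0.0)
print(np.std(clipped))                       # ~4.7 DN: about 0.58 * sigma

# Measured DR is log2(clip_point / read_noise), so an apparent read noise of
# ~0.58 * sigma inflates the figure by about log2(1 / 0.58) ~ 0.8 stop.
```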

neuroanatomist

  • CR GEEK
  • ************
  • Posts: 20051
Re: More Big Megapixel Talk [CR1]
« Reply #183 on: October 04, 2012, 09:12:25 AM »
...physical saturation effects manifest as asymptotic bounds, not straight lines that run into a ceiling. So I'm a bit sceptical here. Could you point me to some reference that demonstrates this linearity?

http://scien.stanford.edu/pages/labsite/2007/psych221/projects/07/camera_characterization/sensor_linearity.html

Granted, it's not peer reviewed, and as a Cal alum I am properly skeptical of data from students of Leland Stanford Junior College University...  :P

But seriously, the photodiode fills in exactly that fashion - linear up to the full well capacity, then it hits a ceiling where no more photons can be absorbed (if it were a CCD sensor, the surplus charge would just spill over to adjacent photodiodes, i.e. blooming).  The ADC is linear by design (although nonlinearity is introduced to the image later, intentionally, as a gamma correction to simulate human visual processing, i.e. to 'make the picture look good').
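A toy model of that fill behavior, with made-up full-well and quantum-efficiency numbers:

```python
import numpy as np

FULL_WELL = 60_000                 # illustrative full-well capacity, in electrons

def photodiode_response(photons, qe=0.5):
    """Linear in photon count up to the full well, then a hard ceiling."""
    electrons = qe * np.asarray(photons, dtype=float)
    return np.minimum(electrons, FULL_WELL)

# A straight line that runs into a ceiling, not an asymptote:
print(photodiode_response([10_000, 100_000, 1_000_000]))  # [5000. 50000. 60000.]
```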
EOS 1D X, EOS M2, lots of lenses
______________________________
Flickr | TDP Profile/Gear List

nightowl

  • Guest
Re: More Big Megapixel Talk [CR1]
« Reply #184 on: October 04, 2012, 09:33:26 AM »
About DxO:
We can use the DxOMark data to derive the sensor parameters. From the DxOMark dynamic range we get the ratio of full-well signal to read noise. From the SNR plot we get the ratio between the signal and the total noise at a given exposure. By using the dynamic range and two SNR values (at full capacity and one other level) from the full SNR plots, we can solve for the read noise, full-well capacity, and gain variance, etc.

5D Mk III read noise and dynamic range: http://www.sensorgen.info/CanonEOS_5D_MkIII.html
D800 read noise and dynamic range: http://www.sensorgen.info/NikonD800.html
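A sketch of the sort of inversion described above, under the simplifying assumption that SNR at saturation is purely shot-noise limited (SNR = sqrt(full well)); the inputs are illustrative, not sensorgen's actual fitted values:

```python
import math

def sensor_params(dr_stops, snr_at_full_db):
    """Recover full-well capacity and read noise from DR and saturation SNR.

    Assumes DR = log2(full_well / read_noise) and that SNR at saturation is
    shot-noise limited, i.e. SNR_dB = 10 * log10(full_well). Both are
    simplifications of what sensorgen actually fits.
    """
    full_well = 10 ** (snr_at_full_db / 10)
    read_noise = full_well / 2 ** dr_stops
    return full_well, read_noise

# Illustrative, roughly D800-like inputs:
fw, rn = sensor_params(dr_stops=13.2, snr_at_full_db=44.7)
print(f"full well ~{fw:.0f} e-, read noise ~{rn:.1f} e-")
```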

And as some members mentioned here before, a very good and very expensive external 16-bit ADC could give Canon better DR, if they can deal with the banding.
« Last Edit: October 04, 2012, 09:39:06 AM by nightowl »

elflord

  • 5D Mark III
  • ******
  • Posts: 693
Re: More Big Megapixel Talk [CR1]
« Reply #185 on: October 05, 2012, 10:31:19 PM »
But I think that is essentially what is going on, because DxOMark's print score is based on a normalization to 8 megapixels. I'd think that would buy you a gain of a stop or so (something like log2(sqrt(36 megapixels / 8 megapixels)) ≈ 1.1 stops).

That's the theoretical part, which would work in an ideal situation with ideal noise characteristics and real numbers instead of quantized integers.

AFAIK, Nikon RAWs currently clamp any digitized negative values to zero (compare that to Canon having a bias value of 2048 in the data). This roughly halves the stdev of the dark frame image captures in Nikon's case, in the end inflating the "measured DR" by roughly one stop.

I don't follow this at all. Luminance isn't negative, so why would it make sense to have negative numbers? If this clipping really takes place, does this show up on the SNR curves? I don't really buy that they can inflate the estimated dynamic range by clipping relatively high values (one problem with this is that it leaves some dynamic range on the table). I see other problems with this line of reasoning. For one, you don't halve the standard deviation by throwing away half the distribution, because it's heavily skewed (e.g. the left tail is bounded and the right isn't). I might be missing something, but the above looks like nonsense to me.

Quote
Another result of this is that for values of low magnitude, oversampling the individual pixel values in SW does not result in the expected behavior of noise converging towards zero. Since the noise-converging-to-zero is a key assumption in the whole "increase-DR-by-binning" scenario, it's quite trivial to notice that the theory doesn't hold water in this case.

When we talk about how "theory" plays out in the real world, it is far from "trivial".

In the case of signal to noise and its application to dynamic range -- even if we fail to realize the "theoretical" blackpoint due to quantization error (because the actual noise is less than the quantization error), we still increase usable dynamic range.

Suppose for example our "shadow noise level" (noise at a signal level of 1) is 1 -- so 1 on our scale corresponds to the blackpoint.  If we average, theoretically, we could reduce the blackpoint, but our error is stuck at 1 due to quantization. That, if I understand it, is your argument.

But let's step up a couple of stops. At a signal level of 4, our noise level is 2 (proportional to the sqrt of the signal), so at this level we would improve the signal-to-noise ratio by binning (that is, quantization error isn't the limiting factor).

So you will gain usable dynamic range by increasing resolution, even if quantization places a kind of floor on your blackpoint.
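A quick simulation of that argument, using the same made-up numbers (signal 4, noise 2, integer quantization):

```python
import numpy as np

rng = np.random.default_rng(1)

# Signal level 4 with noise 2, quantized to integer DN, in groups of 4 pixels.
pixels = np.round(rng.normal(4.0, 2.0, (1_000_000, 4)))

print(4.0 / np.std(pixels))                # per-pixel SNR: ~2
print(4.0 / np.std(pixels.mean(axis=1)))   # after 4-pixel binning: ~4

# The real noise (2 DN) dwarfs the quantization step (1 DN), so averaging
# still buys the full sqrt(4) = 2x SNR improvement at this level.
```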

jrista

  • Canon EF 600mm f/4L IS II
  • **********
  • Posts: 5336
  • EOL
    • Nature Photography
Re: More Big Megapixel Talk [CR1]
« Reply #186 on: October 07, 2012, 11:55:43 PM »
But I think that is essentially what is going on, because DxOMark's print score is based on a normalization to 8 megapixels. I'd think that would buy you a gain of a stop or so (something like log2(sqrt(36 megapixels / 8 megapixels)) ≈ 1.1 stops).

That's the theoretical part, which would work in an ideal situation with ideal noise characteristics and real numbers instead of quantized integers.

AFAIK, Nikon RAWs currently clamp any digitized negative values to zero (compare that to Canon having a bias value of 2048 in the data). This roughly halves the stdev of the dark frame image captures in Nikon's case, in the end inflating the "measured DR" by roughly one stop.

I don't follow this at all. Luminance isn't negative, so why would it make sense to have negative numbers? If this clipping really takes place, does this show up on the SNR curves? I don't really buy that they can inflate the estimated dynamic range by clipping relatively high values (one problem with this is that it leaves some dynamic range on the table). I see other problems with this line of reasoning. For one, you don't halve the standard deviation by throwing away half the distribution, because it's heavily skewed (e.g. the left tail is bounded and the right isn't). I might be missing something, but the above looks like nonsense to me.

Quote
Another result of this is that for values of low magnitude, oversampling the individual pixel values in SW does not result in the expected behavior of noise converging towards zero. Since the noise-converging-to-zero is a key assumption in the whole "increase-DR-by-binning" scenario, it's quite trivial to notice that the theory doesn't hold water in this case.

When we talk about how "theory" plays out in the real world, it is far from "trivial".

In the case of signal to noise and its application to dynamic range -- even if we fail to realize the "theoretical" blackpoint due to quantization error (because the actual noise is less than the quantization error), we still increase usable dynamic range.

Suppose for example our "shadow noise level" (noise at a signal level of 1) is 1 -- so 1 on our scale corresponds to the blackpoint.  If we average, theoretically, we could reduce the blackpoint, but our error is stuck at 1 due to quantization. That, if I understand it, is your argument.

But let's step up a couple of stops. At a signal level of 4, our noise level is 2 (proportional to the sqrt of the signal), so at this level we would improve the signal-to-noise ratio by binning (that is, quantization error isn't the limiting factor).

So you will gain usable dynamic range by increasing resolution, even if quantization places a kind of floor on your blackpoint.

Only photon shot noise follows a Poisson distribution, and is therefore proportional to the sqrt of the signal. But we're not talking about photon shot noise... we're talking about read noise, the fixed amount of noise that exists in the lower range of the image signal, and how downscaling affects that kind of noise. I'm not sure you can simply and cleanly apply Poisson statistics to read noise, or how scaling affects read noise, especially considering that there is also photon shot noise to contend with at those levels. I think you're making the problem far simpler than it really is, and ignoring a key factor in the discussion involving DR, and why a Sony Exmor sensor does have more DR than any other sensor on the market right now, but not as much as DxO's results indicate.
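For reference, the usual model treats the two sources as independent and adds them in quadrature; a minimal sketch with invented numbers (electron units):

```python
import numpy as np

def total_noise(signal_e, read_noise_e):
    """Shot noise (Poisson, variance = signal) plus read noise, in quadrature."""
    return np.sqrt(signal_e + read_noise_e ** 2)

# Read noise dominates in deep shadows; shot noise dominates in highlights.
for s in (4, 100, 10_000):
    print(s, total_noise(s, read_noise_e=3.0))
```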

straub

  • Guest
Re: More Big Megapixel Talk [CR1]
« Reply #187 on: October 08, 2012, 02:53:13 AM »
I don't follow this at all. Luminance isn't negative, so why would it make sense to have negative numbers? If this clipping really takes place, does this show up on the SNR curves? I don't really buy that they can inflate the estimated dynamic range by clipping relatively high values (one problem with this is that it leaves some dynamic range on the table).

Negative values in reference to the RAW file blackpoint, which for NEFs is 0. Canon RAWs use either 1024 or 2048. See e.g. http://theory.uchicago.edu/~ejm/pix/20d/posts/tests/D300_40D_tests/. The values clipped are not "relatively high"; they are at the low end of the scale.

Suppose for example our "shadow noise level" (noise at a signal level of 1) is 1 -- so 1 on our scale corresponds to the blackpoint.  If we average, theoretically, we could reduce the blackpoint, but our error is stuck at 1 due to quantization. That, if I understand it, is your argument.

No, it's about the clipping. See the curves in the link I posted. In the case of the 40D, the read noise follows a normal distribution. You can see that by binning pixels, the resultant noise will converge towards 1024, i.e. the blackpoint. In Nikon's case it would approach something > 0, i.e. a value above the black point set in the NEF.
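A small simulation of that convergence; the sigma value is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
dark = rng.normal(0.0, 8.0, 1_000_000)         # read noise around the true black

# 40D-style: a bias of 1024 keeps the noise symmetric around the black point,
# so heavy binning converges to 1024.
print(np.mean(np.round(dark) + 1024))          # ~1024.0

# NEF-style: negatives are clipped to 0 before the file is written, so
# binning converges to sigma / sqrt(2*pi) ~ 3.2, i.e. above the black point.
print(np.mean(np.maximum(np.round(dark), 0)))  # ~3.2
```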

straub

  • Guest
Re: More Big Megapixel Talk [CR1]
« Reply #188 on: October 08, 2012, 03:03:34 AM »
But let's step up a couple of stops. At a signal level of 4, our noise level is 2 (proportional to the sqrt of the signal), so at this level we would improve the signal-to-noise ratio by binning (that is, quantization error isn't the limiting factor).

So you will gain usable dynamic range by increasing resolution, even if quantization places a kind of floor on your blackpoint.

Do you understand that on a 14-bit scale, level 4 corresponds to an LV roughly 12 stops down from the maximum (log2(16383/4) ≈ 12)? Minimizing noise by pixel binning at LV -12 doesn't do anything to the resultant DR if the per-pixel DR is already over 13 stops. The quantization and 14-bit representation only limit the maximum attainable DR to 14 stops. The clipping of the bottom end limits it to something < 14 stops.

luftweg

  • Guest
Re: More Big Megapixel Talk [CR1]
« Reply #189 on: October 11, 2012, 10:49:10 AM »
They should call it the EOS 1DZ.......   because all this talk is making me 'DZ'.......

jbayston

  • Guest
Re: More Big Megapixel Talk [CR1]
« Reply #190 on: October 12, 2012, 08:33:22 AM »
The trouble with these massive file sizes is that they show the lenses to be less than perfect. I still think that a great lens and a reasonable-size sensor would beat an average lens and a massive sensor.

jrista

  • Canon EF 600mm f/4L IS II
  • **********
  • Posts: 5336
  • EOL
    • Nature Photography
Re: More Big Megapixel Talk [CR1]
« Reply #191 on: October 12, 2012, 06:44:33 PM »
The trouble with these massive file sizes is that they show the lenses to be less than perfect. I still think that a great lens and a reasonable-size sensor would beat an average lens and a massive sensor.

I would argue that it shows there is a renewed need for image stabilization. Or, alternatively, that the user doesn't have nearly as steady a hand as they think they do. ;)

I can't say much about consumer lenses, as they are mass produced and use lower quality materials for the optical glass. But professional grade lenses, particularly Canon L-series telephoto lenses, are made with much higher quality glass and usually hand crafted for precision. I believe Canon's latest Mark II telephotos are plenty capable of resolving enough detail for a 46.1mp sensor. I recently used the new EF 300mm f/2.8 L II IS, the successor to what was previously considered Canon's sharpest lens ever, period. The sharpness of that lens is unbelievable, and was fully capable of keeping up with my Canon 7D for birds (lots and lots of super fine feather detail). Even my slightly older Canon EF 16-35mm f/2.8 L II lens is capable of keeping up with the resolution of my 7D, and it isn't anywhere close to the engineering feat that the 300mm lens is.

The 7D, BTW, has a pixel pitch about the same as a 47.6mp FF sensor would have... So I seriously doubt anyone will have a problem with lens resolution, so long as they use professional-grade glass, and use newer lens models.
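The pixel-pitch comparison, worked through with the usual published sensor dimensions (a rough check, not an exact spec):

```python
import math

def pitch_um(width_mm, height_mm, megapixels):
    """Approximate pixel pitch in microns, assuming square pixels."""
    return math.sqrt(width_mm * height_mm / (megapixels * 1e6)) * 1000

p = pitch_um(22.3, 14.9, 18.0)               # Canon 7D: ~4.3 um
print(p)

# A full-frame (36 x 24 mm) sensor at the same pitch:
print(36.0 * 24.0 / (p / 1000) ** 2 / 1e6)   # ~47 MP
```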
