
Author Topic: Pixel density, resolution, and diffraction in cameras like the 7D II  (Read 28028 times)

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4626
  • EOL
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #90 on: March 13, 2013, 12:03:41 PM »
Relevant to this topic, but perhaps not the latest posts:
http://www.maxmax.com/olpf_study.htm

"In the 400X zoom pictire, you can better see the CFA, the amount of blur and the 10 micron scale.  For the Canon 5D II sensor, it appears that they displace the image approximately by one pixel.  The complete OLPF has two layers.  These pictures show 1 layer or 1/2 of the blur filter.  The 2nd part blurs the image 1 pixel in the vertical direction.  This means that for any one point of light, you end up with 4 points separated by 1 pixel or the same size as one R-G-G-B CFA square.  You have 4 points because the 1st layer gives you 2 points, and then the 2nd layer doubles those to 4 points. 

For another camera, the manufacturer might choose to displace the light differently.  For many 4/3 cameras, we see more blur than for APS and full frame sensors. Sometimes manufacturers make odd choices in the amount of blur.  For example, the APS Nikon D70 sensor had much less physical blur than the APS Nikon D200 sensor, despite the D70 having a pixel pitch of 7.8 microns and the D200 having a pixel pitch of 5.8 microns."
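
The two-layer filter described above is, in effect, a small convolution applied optically before sampling. A minimal sketch of that idea in Python, assuming an idealized 50/50 energy split per layer (real filters only approximate this):

```python
import numpy as np
from scipy.ndimage import convolve

# Idealized two-layer birefringent OLPF: layer 1 splits each ray into
# two points one pixel apart horizontally, layer 2 splits each of those
# one pixel apart vertically -- four points covering one R-G-G-B square.
# A 50/50 split per layer is assumed; real filters split unevenly.
olpf_kernel = np.array([[0.25, 0.25],
                        [0.25, 0.25]])

def apply_olpf(image):
    """Convolve a monochrome image with the 4-point OLPF kernel."""
    return convolve(image, olpf_kernel, mode="nearest")

# A single point of light spreads over a 2x2 block of pixels:
point = np.zeros((5, 5))
point[2, 2] = 1.0
print(apply_olpf(point))
```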

Very interesting. I wonder what kind of OLPF the 7D II will have...with such small pixels, I imagine it wouldn't need as strong a filter as the 7D or any FF sensor. What is also interesting is how much surface area on a sensor is still wasted, despite the use of microlenses. I always thought the microlenses were square...being round, they leave gaps of "unused surface area" at the intersection of every 2x2 set of pixels.

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4626
  • EOL
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #91 on: March 14, 2013, 04:25:13 PM »
16 bits is totally useless for digital imaging; there are a few large-cell sensors in the scientific field that can use 15 bits fully. They are usually actively cooled and have cells larger than 10x10µm.

This is another part of the digital image pipeline that is sorely misunderstood... Just getting more bits of data does not in any way mean that the image contains more actual information... No Canon camera today can actually use more than 12 bits fully - the last two bits are just A/D conversion "slop" margin and noise dither.

I guess I'd dispute that. The bit depth puts an intrinsic cap on the photographic dynamic range of the digital image. DXO "Screen DR" numbers are basically the "hardware" dynamic range numbers for the cameras they test. The D800 and D600 get something around 13.5 stops, thanks to the fact that they don't have nearly as much "AD conversion slop" as Canon sensors. Canon sensors definitely have a crapload of "AD conversion slop", which increases at lower ISO settings (ISO 100, 200, and usually 400 all have much more read noise than higher ISO settings on Canon cameras), which is why they have been unable to break the 12-stop DR barrier. Assuming Canon can flatten their read noise curve like Nikon and Sony have with Exmor, additional bit depth raises the ceiling on photographic DR in the RAW files.
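
To put rough numbers on that ceiling: engineering DR is limited both by the sensor's full-well-to-read-noise ratio and by the ADC's quantization, at roughly one stop per bit. A back-of-envelope sketch with purely illustrative numbers (none of these are measured values for any specific camera):

```python
import math

def sensor_dr_stops(full_well_e, read_noise_e, adc_bits):
    """Photographic DR is capped by the smaller of the analog DR
    (full well over read noise) and the ADC ceiling (~1 stop/bit)."""
    analog_dr = math.log2(full_well_e / read_noise_e)
    return min(analog_dr, adc_bits)

# Illustrative numbers only:
print(sensor_dr_stops(75000, 3.0, 14))   # low read noise: ADC-capped at 14
print(sensor_dr_stops(60000, 25.0, 14))  # noisy readout: ~11.2 stops
print(sensor_dr_stops(75000, 2.0, 16))   # 16-bit ADC: ~15.2 stops
```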

I would also dispute that Canon sensors can't get more than 12 bits of information. If you run Topaz DeNoise 5 on a Canon RAW file, the most heinous noise, horizontal and vertical banding, can be nearly eliminated. Before debanding, a Canon RAW usually has less than 11 stops, in some cases less than 10 stops, of DR ("Screen DR"-type DR, for correlating with DXO.) AFTER debanding with Topaz, a lot of information that would otherwise be "unrecoverable" because it was riddled with banding noise is now recoverable! I wouldn't say you have around 13.5 stops like a D800, but you definitely have a stop, maybe a stop and a half, more shadow recoverability than you did before...which might put you as high as 12.5 stops of DR.
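
Topaz's actual algorithm is proprietary, but the core idea behind most debanding, estimating and subtracting the correlated row or column offsets, fits in a few lines. A crude numpy sketch of that idea (not what Topaz actually does):

```python
import numpy as np

def deband(raw, axis=1):
    """Subtract each row's (axis=1) or column's (axis=0) offset from
    the global level. Removes only the correlated banding component;
    the random read noise itself is untouched."""
    offsets = np.median(raw, axis=axis, keepdims=True)
    return raw - (offsets - np.median(offsets))
```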

If we had 16-bit ADC, we could, theoretically, have over 15 stops of dynamic range. With Exmor technology, I don't doubt that a camera with a 16-bit ADC could achieve 15.3-15.5 stops of "Screen DR" on a DXO test. If Canon did such a thing, assuming they don't fix their horrid "AD conversion slop"...well, at least we might get 14 stops of DR out of a Canon camera, while the last two bits of information are riddled with noise. With some quality post-process debanding, we might get 15 stops of DR.

While most of what I do is bird and wildlife photography, and dynamic range is usually limited to 9 stops or less anyway...I do some landscape work. I'd probably do more landscapes if I had 30-50mp and 15 stops of DR, though. I could certainly see the benefits of having a high resolution 16-bit camera for landscape photography work, and it is the sole reason I would like to see full 16-bit ADC in the near future (hopefully with the big megapixel Canon camera that is forthcoming!)

This was written by John Sheehy, and like TheSuede, Emil Martinec, and BobN2, John has no emotional ties to his own camera brand, Canon.

Noise isn't monolithic. It comes in various types and sources.

The most universal noise is photon shot noise, which really isn't noise, per se, but is actually the texture of the signal, as light is a finite number of randomly timed events. The more light the sensor collects, the less grainy the capture and the closer it comes to a smooth thing, like you "see" in the real world (even though that smoothness is an illusion created by the brain). This type of noise will always be cleaner at ISO 100 than ISO 160, by 1/3 stop. Every stop of increased exposure increases the signal-to-noise ratio of photon noise by a half stop. This noise is only related to the sensor exposure, and has nothing directly to do with ISO settings.

Then, there is noise that is generated at the photosite while reading it. Again, this noise is independent of ISO setting, and related only to exposure. The difference between this read noise and shot noise is that it can have blotchier character and line noise or banding, usually only becoming an issue at high ISOs where it is amplified more. Also, unlike the shot noise, the SNR of read noise increases by a full stop when the sensor exposure is increased by one stop.
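
Those two scaling laws are easy to verify numerically: shot-noise SNR grows as the square root of the signal (half a stop per stop), while SNR against a fixed read-noise floor grows linearly (a full stop per stop). A quick Monte Carlo sketch, with an illustrative 5 e- read noise:

```python
import numpy as np

rng = np.random.default_rng(0)
read_noise_e = 5.0  # illustrative fixed read noise, in electrons

for mean_e in (100, 200, 400):                        # +1 stop each step
    photons = rng.poisson(mean_e, 1_000_000)          # shot noise: std ~ sqrt(mean)
    read = rng.normal(0.0, read_noise_e, 1_000_000)   # read noise: std fixed
    print(f"{mean_e:3d} e-: shot SNR {mean_e / photons.std():5.1f}, "
          f"read-noise-only SNR {mean_e / read.std():5.1f}")
```

Each doubling of exposure multiplies the first SNR by about 1.41 (half a stop) and the second by 2 (a full stop).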

Then, there is late-stage noise, which occurs after amplification of the photo-site readout. This is where the camera creates its greatest anomalies. Since it occurs after amplification, it is the same strength at all analog sensor amplifications, and exists relative to the digitized values, rather than the absolute sensor signal. It is what gives Canon DSLRs the lowest DR in the industry. Canon, rather than amplifying in 1/3 stop steps at the photosite, uses a very cheesy method to get 1/3 stop ISOs; it simply under-exposes or over-exposes the full-stop ISOs by 1/3 stop, and then multiplies the RAW data by 0.8 or 1.25 to make it look like normal RAW data. The problem with this is that the total read noise for ISOs 100, 200, and 400 are about the same, so when ISO 100 gain is used for ISO 125, the read noise of ISO 125 is actually greater than the read noise of ISO 400, and closer to the read noise of ISO 640 on most Canons! Conversely, ISO 160 is ISO 200 gain multiplied by 0.8, so the read noise is about 80% of that of ISO 100.
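
The arithmetic behind that is simple to sketch. Assuming a roughly constant late-stage noise floor in DN across the ISO 100-400 gains (the numbers here are made up for illustration), the digital multiplier scales that floor directly:

```python
# Toy model of Canon's 1/3-stop ISOs: intermediate ISOs reuse the
# neighboring full-stop analog gain and scale the raw data digitally,
# which scales the late-stage noise floor by the same factor.
read_noise_dn = 8.0  # illustrative late-stage noise, roughly constant
for iso, mult in [(100, 1.0), (125, 1.25), (160, 0.8), (200, 1.0)]:
    print(f"ISO {iso:3d}: noise floor ~{read_noise_dn * mult:.1f} DN")
```

ISO 125 ends up with the highest floor (8 x 1.25 = 10 DN) and ISO 160 the lowest (8 x 0.8 = 6.4 DN), matching the "about 80% of ISO 100" figure above.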

So basically, ISO 160 is cleaner in the deep shadows than ISO 100, by about 1/3 stop. In the highlights, however, which are dominated by photon shot noise, ISO 100 is actually 1/3 stop cleaner. Chances are, however, that you would not fully appreciate the benefits of ISO 100, compared to the benefits of ISO 160 in the shadows, as photon shot noise is very aesthetic noise, and does little to obscure image detail, as opposed to read noise which is often more like a cheese grater across your eyes.

However, if you are shooting RAW and "exposing to the right", you are already creating ISOs of 160, 180, 200, whatever, out of ISO 100 gain, and are moving the read noise floor down anyway. If you are shooting JPEGs or movies, then ISO 160 is the way to go for reduced noise: the camera was going to discard the 1/3 stop of extra highlight DR that ISO 100 has (the same 1/3 stop that ISO 160 moves to the shadows) anyway, so there is no loss.

I'm not sure what this has to do with the post of mine that you quoted. When it comes to ADC bit depth and "ADC slop", the kind of noise we are talking about is quite specifically read noise. The mechanism that Canon uses to achieve 1/3rd stop ISO settings doesn't matter, especially in a hypothetical context where Canon is using new sensor technology and active cooling.

TheSuede

  • PowerShot G1 X II
  • ***
  • Posts: 54
    • View Profile
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #92 on: March 15, 2013, 10:42:17 PM »
Yes, point - not line! - resolution is ultimately noise limited, not optically limited. Until you get to astro-size fixed installations, that is...

Canon's effective pixel area has never been 100%. In the 5D classic, I think we measured 47%; with the newest "100% coverage microlenses" I'd guess they reach maybe 80%. The collimated-angle light efficiency depends on how strong you make the micro-lens, and how far above the sensor surface proper it is situated. The actual light-sensitive area on the sensor is smaller than 50% in most CMOS cameras (excepting back-lit of course...). The microlens has to be absolutely centered above this area, with an angle compensation in the corners for an estimated average main-lens exit pupil distance (usually around 70-80mm).
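
For scale, using TheSuede's own estimates above (47% effective area on the 5D classic, maybe 80% with newer microlenses; his measurements and guesses, not official specs), the sensitivity cost is easy to compute:

```python
import math

# Stops of light lost to incomplete effective pixel area,
# using the estimated figures from the post above.
for name, fill in [("5D classic", 0.47), ("newer microlenses", 0.80)]:
    print(f"{name}: {fill:.0%} coverage = "
          f"{math.log2(1 / fill):.2f} stops lost")
```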

As soon as the ray angles stray outside of the optimal, the microlens starts to both reflect (due to a very high incident light angle) on the far side of the dome, and project outside the sensor's active surface on the near side of the dome. In Canon sensors, this usually sets in at about F2.4. In the F1.6 down to F1.2 region of angles, less than half of the original light amount reaches the active pixel surface. There's a built-in compensation for this in firmware that you can trace by looking at the gaps in the raw file caused by integer multiplications...
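
That last trick is worth spelling out: a post-ADC multiplication by a factor greater than 1 maps the ADC output onto a subset of the available raw code values, so the compensation shows up as periodic gaps in the raw histogram. A minimal sketch of how one might look for them (a hypothetical helper, not any vendor's tool):

```python
import numpy as np

def find_code_gaps(raw_values, max_code=16383):
    """Return ADC codes that never occur in the data. Digital scaling
    by a factor > 1 leaves codes that can never be produced, so the
    gaps betray firmware gain compensation."""
    hist = np.bincount(raw_values.ravel(), minlength=max_code + 1)
    return np.flatnonzero(hist == 0)

# Example: data scaled by 1.25 only ever lands on ~4 of every 5 codes.
codes = np.round(np.arange(1000) * 1.25).astype(np.int64)
print(find_code_gaps(codes, max_code=int(codes.max()))[:10])
```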

BTW, the MaxMax microscopy images show a line spread of about 0.4 pixels, not one full pixel... The full spread is about 0.8-0.9px, giving a +/-0.3px line spread after subtracting birefringence loss (the ghost image is almost 2 EV down from the non-refracted image). That is usually enough to give the interpolation engine some neighboring-area support to work with. This small increase in support can increase the interpolation accuracy by several hundred percent.

9VIII

  • 5D Mark III
  • ******
  • Posts: 666
    • View Profile
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #93 on: March 18, 2013, 05:03:03 PM »
Deconvolution in the Bayer domain (before interpolation, "raw conversion") is actually counterproductive, and totally destructive to the underlying information.

The raw Bayer image is not continuous; it is sparsely sampled. This makes deconvolution impossible, even in continuous-hue object areas containing "just" brightness changes. If the base signal is sparsely sampled and the underlying material is higher resolution than the sampling, you get an underdetermined system (http://en.wikipedia.org/wiki/Underdetermined_system). This is numerically unstable, and hence impossible to deconvolve.
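
The underdetermined-system point can be made concrete with a toy 1-D example: blur an n-sample scene, then keep only every other sample, as one CFA channel effectively does. The forward operator then has half as many equations as unknowns, so no unique deconvolution exists:

```python
import numpy as np

n = 16
# Circulant 1-D blur with a [0.25, 0.5, 0.25] kernel.
blur = np.zeros((n, n))
for i in range(n):
    for j, w in zip((-1, 0, 1), (0.25, 0.5, 0.25)):
        blur[i, (i + j) % n] = w

sampling = np.eye(n)[::2]      # keep even-indexed sites only (one CFA channel)
forward = sampling @ blur      # 8 equations, 16 unknowns
print(forward.shape, "rank:", np.linalg.matrix_rank(forward))
# -> (8, 16) rank: 8  -- rank-deficient, so inversion is ill-posed
```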
There is no doubt that the CFA introduce uncertainty compared to sampling all colors at each site. I believe I was thinking about cases where we have some prior knowledge, or where an algorithm or photoshop-dude can make correct guesses afterwards. Perhaps what I am suggesting is that debayer and deconvolution ideally should be done jointly.

If the scene is achromatic, then "demosaic" should amount to something akin to a global WB, and filtering might destroy recoverable detail - the CFA in itself does not reduce the amount of spatial information compared to a filterless sensor. If the channels are nicely separated in the 2-D DFT, you want to follow those segments when deconvolving?

-h
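
A minimal sketch of -h's achromatic case, assuming an RGGB mosaic and made-up WB gains: each CFA site then differs from full-resolution luminance only by its filter's transmission, so "demosaicing" reduces to per-channel scaling with no interpolation at all:

```python
import numpy as np

def demosaic_achromatic(bayer, gains=(2.0, 1.0, 1.5)):
    """For a purely achromatic scene, recover full-resolution luminance
    by multiplying each RGGB channel by a global WB gain. The gains
    here are illustrative; real multipliers depend on the illuminant
    and the CFA dyes."""
    r_gain, g_gain, b_gain = gains
    out = bayer.astype(np.float64)  # astype copies, so in-place ops are safe
    out[0::2, 0::2] *= r_gain   # R sites
    out[0::2, 1::2] *= g_gain   # G sites, even rows
    out[1::2, 0::2] *= g_gain   # G sites, odd rows
    out[1::2, 1::2] *= b_gain   # B sites
    return out
```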

On a per-pixel level, a Bayer sensor is only receiving 30-40% of the information an achromatic sensor is getting. That implies a LOSS of information is occurring due to the filtering of the CFA. You have spatial information, for the same number of samples over the same area...but the information in each sample is anemic compared to what you get with an achromatic sensor. That is the very reason we need to demosaic and interpolate information at all...that can't be a meaningless factor.

Off topic, sorry, but I just had to scratch the itch. We know the negatives (more processing, incompatibility with current standards, more focus should be spent on different sensor types), but every time I read something like this I can't help but think an RGBW array would be awesome.
-100% RAW-
