
Pixel density, resolution, and diffraction in cameras like the 7D II


jrista:
I'm starting this thread to continue a tangent from another one. Rather than derail that thread, and so as not to lose the discussion, I thought we could continue it here. I think there is important information to be gleaned from the discussion, which started when I responded to a comment by @rs:


--- Quote from: jrista on February 26, 2013, 10:47:50 PM ---
--- Quote from: rs on February 25, 2013, 06:41:45 PM ---Ps - I really hope Canon resist the temptation to take their 1.6x crop sensor up to 24mp. It'll suffer from softness due to diffraction from f6.0 onwards - mount an f5.6 lens on there and you've got little in the way of options. Even the legendary 300/2.8 II with a 2x TC III will underperform, and leave you with just one aperture option if you want to attempt to utilise all of those megapixels. Leave the MP lower, and let those lower processing overheads allow them to push the hardware of the small mirror and shutter to its limits.

--- End quote ---

Once again, this rhetoric keeps cropping up and it is completely incorrect! NEVER, in ANY CASE, are more megapixels bad because of diffraction!  :P That is so frequently quoted, and it is so frequently wrong.

--- End quote ---

You can follow the quote above to read the precursor comments on this topic. So, continuing on from the last reply by @rs:



--- Quote from: rs on February 27, 2013, 04:03:00 AM ---
--- Quote from: jrista on February 26, 2013, 10:47:50 PM ---Once again, this rhetoric keeps cropping up and it is completely incorrect! NEVER, in ANY CASE, are more megapixels bad because of diffraction!  :P That is so frequently quoted, and it is so frequently wrong.

--- End quote ---
I'm not saying it's worse, it's just that the extra MP don't make any difference to the resolving power once diffraction has set in. Take another example - scan a photo which was a bit blurry - if a 600dpi scan looks blurry on screen at 100%, you wouldn't then think 'let's find out if anyone makes a 10,000dpi scanner so I can make this look sharper'. You'd know it would offer no advantages - at that point you're resolving more detail than is available - weakest link in the chain and all that...

--- End quote ---

I think you are generally misunderstanding resolution in a multi-component system. It is not simply the weakest link that determines resolution...total system resolution is derived from the blur contributions of all the components, combined as a root sum of squares (the blurs add in quadrature). To keep things simple for this forum, and in general this is adequate for most discussion, we'll just factor in the lens resolution and sensor resolution, in terms of spatial resolution. The way I approach this is to determine the "system blur". Diffraction itself is what we call "blur" from the lens, assuming the lens is diffraction limited (and, for this discussion, we'll just assume the lens is always diffraction limited, as determining blur from optical aberrations is more complex), and it is caused by the physical nature of light. Blur from the lens changes depending on the aperture used, and as the aperture is stopped down, diffraction limits the maximum spatial resolution of the lens.

The sensor also introduces "blur", however this is a fixed, intrinsic factor determined by the size and spacing of the pixels, whether microlenses are used, etc. For the purposes of discussion here, let's just assume that 100% of the pixel area is utilized thanks to "perfect" microlensing. That leaves us with a sensor blur equal to the pixel pitch (the scalar size, horizontal or vertical, of each pixel) times two, which gives us lp/mm, or line pairs per millimeter, rather than simply l/mm, or lines per millimeter, when we take the reciprocal.
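To make that concrete, here is the pitch-to-resolution conversion as a couple of Python one-liners (a minimal sketch under the "perfect microlenses" assumption above; the two pitches are those of the 7D and of a hypothetical 24mp APS-C sensor, both discussed below):


--- Code: ---def sensor_blur_mm(pitch_um):
    return 2.0 * pitch_um / 1000.0           # blur = 2x pixel pitch, in mm

def sensor_lpmm(pitch_um):
    return 1.0 / sensor_blur_mm(pitch_um)    # reciprocal of the blur gives lp/mm

print(sensor_lpmm(4.3))   # ~116 lp/mm
print(sensor_lpmm(3.7))   # ~135 lp/mm
--- End code ---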

[NOTE: I assume MTF50 as that is the standard that historically represents what we perceive as clear, crisp, and sharp, with high microcontrast. MTF10, in contrast, is usually used to determine what might be considered the maximum resolution at the lowest level of contrast the human eye can detect...which might be useful for determining the resolution of barely perceptible features on the surface of the moon, assuming atmospheric conditions are perfect, but otherwise it is not really adequate for the discussion here. Maximum spatial resolution at MTF10 can be considerably higher than at MTF50, but there is no guarantee that the difference between one pixel and the next is detectable by the average person (the Rayleigh criterion, often described as the limit of visual acuity for 20/20 vision)...it is more of the "true" mathematical/theoretical limit of resolution at very low, barely detectable levels of contrast. MTF0 would be spatial resolution where contrast approaches zero, which is largely useless for general photography outside the context of astronomy, where minute changes in the shape and structure of the Airy disk of a star can be used to determine if it is a single, binary, or triple system...or other scientific endeavors where knowing the shape of an Airy disk at MTF0, or the Dawes' limit (the theoretical absolute maximum resolving power of an optical system at near-zero contrast), is useful.]

For starters, let's assume we have a perfect (diffraction-limited) lens at f/8, on a 7D sensor, which has a pixel pitch of 4.3 microns. The lens, at f/8, has a spatial resolution of 86 lp/mm at MTF50. The sensor has a raw spatial resolution of approximately 116 lp/mm (assuming the most ideal circumstances, and ignoring the difference between green and red or blue pixels.) Total system blur is derived by taking the root sum of squares of the blurs of each component in the system (the blurs add in quadrature). The formula for this is:


--- Code: ---tb = sqrt(lb^2 + sb^2)
--- End code ---

Where tb is Total Blur, lb is Lens Blur, and sb is Sensor Blur. We can convert spatial resolution, from lp/mm, into a blur circle in mm, by simply taking the reciprocal of the spatial resolution:


--- Code: ---blur = 1/sr
--- End code ---

Where blur is the diameter of the blur circle, and sr is the spatial resolution. We get 0.01163mm for the blur size of the lens @ f/8, and 0.00863mm for the blur size of the sensor. From these, we can compute the total blur of the 7D with an f/8 lens:


--- Code: ---tb = sqrt((0.01163mm)^2 + (0.00863mm)^2) = sqrt(0.0001353mm^2 + 0.0000745mm^2) = sqrt(0.0002098mm^2) = 0.01448mm
--- End code ---

We can convert this back into lp/mm simply by taking the reciprocal again, which gives us a total system spatial resolution for the 7D of ~69lp/mm. Seems surprising, given the spatial resolution of the lens...but then again, that is for f/8. If we move up to f/4, the spatial resolution of the lens jumps from 86lp/mm to 173lp/mm. Refining our equation to stay in lp/mm:


--- Code: ---tsr = 1/sqrt((1/lsr)^2 + (1/ssr)^2)
--- End code ---

Where tsr is total spatial resolution, lsr is lens spatial resolution, and ssr is sensor spatial resolution, plugging in 173lp/mm and 116lp/mm for lens and sensor respectively gets us:


--- Code: ---tsr = 1/sqrt((1/173)^2 + (1/116)^2) = 1/sqrt(0.0000334 + 0.0000743) = 1/sqrt(0.0001077) = 1/0.01038 = 96.4
--- End code ---

With a diffraction limited f/4 lens, the 7D is capable of achieving ~96lp/mm spatial resolution.
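If anyone wants to play with these numbers themselves, here is the whole chain in a few lines of Python. This is just a sketch under the same assumptions as above: a diffraction-limited lens in ~555nm green light, with MTF50 approximated as a fixed fraction (~38.4%) of the diffraction cutoff 1/(λN). That constant is my own pick, chosen because it reproduces the 86lp/mm and 173lp/mm figures used here; published values hover around 0.38-0.40.


--- Code: ---import math

WAVELENGTH_MM = 0.000555   # ~555nm green light

def lens_mtf50_lpmm(f_number):
    # ASSUMPTION: MTF50 ~= 38.4% of the diffraction cutoff 1/(lambda*N)
    return 0.384 / (WAVELENGTH_MM * f_number)

def sensor_lpmm(pitch_um):
    # Nyquist: one line pair per two pixels (same conversion as the snippet above)
    return 1.0 / (2.0 * pitch_um / 1000.0)

def system_lpmm(lsr, ssr):
    # tsr = 1/sqrt((1/lsr)^2 + (1/ssr)^2), i.e. blurs added in quadrature
    return 1.0 / math.sqrt((1.0 / lsr) ** 2 + (1.0 / ssr) ** 2)

print(system_lpmm(lens_mtf50_lpmm(8), sensor_lpmm(4.3)))   # ~69 lp/mm (7D @ f/8)
print(system_lpmm(lens_mtf50_lpmm(4), sensor_lpmm(4.3)))   # ~96 lp/mm (7D @ f/4)
--- End code ---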

The debate at hand is whether a 24.1mp APS-C sensor is "worth it", and whether it will provide any kind of meaningful benefit over something like the 7D's 18mp APS-C sensor. My response is absolutely!! However, we can prove the case by applying the math above. A 24.1mp APS-C sensor (Canon-style, 22.3mmx14.9mm dimensions) would have a pixel pitch of 3.7µm, or ~135lp/mm:


--- Code: ---(1/(pitch µm / 1000µm/mm)) / 2 l/lp = (1/(3.7µm / 1000µm/mm)) / 2 l/lp = (1/(0.0037mm)) / 2 l/lp = 270l/mm / 2 l/lp = 135 lp/mm
--- End code ---
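(If you want to check the pitch figure itself, it just comes from dividing the sensor width by the horizontal pixel count. A quick Python sketch, assuming a standard 3:2 aspect ratio:)


--- Code: ---import math

def pitch_um(sensor_width_mm, megapixels, aspect=1.5):
    width_px = math.sqrt(megapixels * 1e6 * aspect)   # horizontal pixel count
    return sensor_width_mm / width_px * 1000.0        # pitch in microns

print(pitch_um(22.3, 24.1))   # ~3.7µm
print(pitch_um(22.3, 18.0))   # ~4.3µm (the 7D)
--- End code ---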

Plugging that, for an f/4 lens, into our formula from above:


--- Code: ---tsr = 1/sqrt((1/173)^2 + (1/135)^2) = 1/sqrt(0.0000334 + 0.0000549) = 1/sqrt(0.0000883) = 1/0.0094 = 106.4
--- End code ---


The 24.1mp sensor, with the same lens, produces a better result...we gained 10lp/mm, up to 106lp/mm from 96lp/mm on the 18mp sensor. That is an improvement of 10%! Certainly nothing to sneeze at! But...the lens is outresolving the sensor...there wouldn't be any difference at f/8, right? Well...not quite. Because "total system blur" is a factor of all components in the system, we will still see improved resolution at f/8. Here is the proof:


--- Code: ---tsr = 1/sqrt((1/86)^2 + (1/135)^2) = 1/sqrt(0.0001352 + 0.0000549) = 1/sqrt(0.00019) = 1/0.0138 = 72.5
--- End code ---

Despite the fact that the theoretical 24.1mp sensor from the hypothetical 7D II is DIFFRACTION LIMITED at f/8, it still resolves more! In fact, it resolves about 5% more than the 7D at f/8. So, according to the theory, even if the lens is not outresolving the sensor, even if the lens and sensor are both thoroughly diffraction limited, a higher resolution sensor will always produce better results. The improvements will certainly be smaller and smaller as the lens is stopped down, thus producing diminishing returns. If we run our calculations for both sensors at f/16, the difference between the two is less than at f/8:

18.0mp @ f/16 = 40.3lp/mm
24.1mp @ f/16 = 41.0lp/mm

The difference between the 24mp sensor and the 18mp sensor at f/16 has shrunk to about a third of that, roughly 1.6%. By f/22, the difference is 30.2lp/mm vs. 30.5lp/mm, or an improvement of only 0.9%. Diminishing returns...however even at f/22, the 24mp sensor is still producing better results...not that anyone would really notice...but it is still producing better results.
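To put the diminishing returns in one place, here is a small loop over apertures that re-uses the helper functions from the Python sketch above (same assumptions apply; the exact values shift a little depending on the MTF50 constant you pick, but the trend does not):


--- Code: ---# Continues the sketch above (re-uses lens_mtf50_lpmm, sensor_lpmm, system_lpmm)
for f_number in (4, 5.6, 8, 11, 16, 22):
    lens = lens_mtf50_lpmm(f_number)
    r18 = system_lpmm(lens, sensor_lpmm(4.3))   # 18mp APS-C
    r24 = system_lpmm(lens, sensor_lpmm(3.7))   # 24.1mp APS-C
    print(f"f/{f_number}: {r18:.1f} vs {r24:.1f} lp/mm "
          f"(+{100.0 * (r24 / r18 - 1.0):.1f}%)")
--- End code ---

The gains run from about 10% at f/4 down to under 1% by f/22...exactly the diminishing returns pattern described above.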


--- Quote from: rs on February 27, 2013, 04:03:00 AM ---
--- Quote from: jrista on February 26, 2013, 10:47:50 PM ---The aperture used was f/9, so diffraction has definitely "set in" and is visible given the 7D's f/6.9 DLA. The subject, in this case a Juvenile Baird's Sandpiper, comprised only the center 25% of the frame, and the 300 f/2.8 II w/ 2x TC STILL did a superb job resolving a LOT of detail:

--- End quote ---
You've got some great shots there, very impressive  ;) - and it clearly does show the difference between good glass and great glass. But the f9 300 II + 2x shot isn't 100% pixel sharp like your native 500/4 shot is. I'm not saying there's anything wrong with the shot - it's great, and the detail there is still great. It's just not 18MP of perfection great. A 15MP sensor wouldn't have resolved any less detail behind that lens, but that wouldn't have made a 15MP shot any better. This thread is clearly going off on a tangent here, as pixel peeping is rarely anything to do with what makes a great photo - it's just that we are debating whether the extra MP are worth it. And just to re-iterate, great shots jrista  :)

--- End quote ---

No, it certainly isn't 18mp of perfection great, because it is only a quarter of the frame. It is more like 4.5mp "great". :P My 100-400 wouldn't do as well, not because it doesn't resolve as much (at f/9 it would resolve roughly the same), but because it would produce lower contrast. Microcontrast from the 300mm f/2.8 II is beyond excellent...microcontrast from the 100-400 is bordering on piss-poor. There are also the advancements in IS technology to consider. I forgot to mention this before, but Canon has greatly improved the image stabilization of their new generation of lenses. Where we MAYBE got two stops of hand-holdability before, we easily get at least four stops now, and I've managed to get some good shots at five stops. As a matter of fact, the Sandpiper photo was hand held (with me squatting in an awkward manner on soggy, marshy ground that made the whole thing a real pain), at 600mm, on a 7D, and the BARE MINIMUM shutter speed to get a clear shot in that situation is 1/1000s.

So, I still stress...there are very good reasons to have higher resolution sensors, and with the significantly advanced new generation of lenses Canon is releasing, I believe we have the optical resolving power to not only handle a 24mp APS-C sensor, but up to 65-70mp FF sensors, if not more, in the future.


--- Quote from: rs on February 27, 2013, 04:03:00 AM ---You've got some great shots there, very impressive  ;) -  /* ...clip... */ And just to re-iterate, great shots jrista  :)

--- End quote ---

Thanks!  ;D

dtaylor:
Excellent post. Thank you for digging up and laying out the formulas. I remember where they're at, but I was being too lazy to dig out the book and copy them. You posted them along with a clear explanation.

I would only add that post processing can recover details below MTF50, giving more potential to the 24 MP sensor past its diffraction "limit". And that diffraction is not the same for all wavelengths, something sensor designers are aware of and will likely exploit in future very high resolution sensors with very high speed in-camera processing. At that point you adjust the Bayer pattern to gain detail and process it all down to a file size smaller than the native sensor output, but with more detail than an image from a regular Bayer sensor.

Thanks again for the post!

wickidwombat:
You should post your birdy pics again, they help explain. However, it would also be good to show the same lenses shot on a FF, say a 5Dmk3, for comparison.

phoenix7:
http://www.anandtech.com/show/6777/understanding-camera-optics-smartphone-camera-trends

This seems to have some of the info about smaller photosites/greater megapixels for a given sensor size, at least as regards the smaller sensors in phones, but I think the math explanations about the light waves and such might help make things clear for those less mathematically inclined.

jrista:

--- Quote from: dtaylor on February 27, 2013, 10:00:23 PM ---Excellent post. Thank you for digging up and laying out the formulas. I remember where they're at, but I was being too lazy to dig out the book and copy them. You posted them along with a clear explanation.

--- End quote ---

No digging...that was straight out of my brain! :P (Honestly...I've written those formulas out so many times at this point, I remember them all by heart...and when I don't, it is just a matter of deriving them.) I just try to avoid the math when possible, as not everyone understands it. There was no real way to prove the notion that higher resolution sensors still offer benefits over lower resolution ones, even beyond the point of diffraction limitation, without the math, though.


--- Quote from: dtaylor on February 27, 2013, 10:00:23 PM ---I would only add that post processing can recover details below MTF50, giving more potential to the 24 MP sensor past its diffraction "limit". And that diffraction is not the same for all wavelengths, something sensor designers are aware of and will likely exploit in future very high resolution sensors with very high speed in-camera processing.

--- End quote ---

It is true that diffraction differs depending on the wavelength, which is why I stated I'm generally ignoring the nature of Bayer sensors and the difference in resolution of red and blue pixels vs. green. Green light is easy, ~555nm wavelength, and it falls approximately mid-way between red light and blue light. Red light, with its longer wavelength, diffracts more and produces a larger blur spot, while blue light diffracts less...which interacts with the fact that red and blue are each sampled more sparsely than green on a Bayer sensor. The math gets a lot more complex if you try to account for all three channels and cover spatial resolution for three wavelengths of light. I don't think that is quite appropriate for this kind of forum (and I don't have all of that memorized, either! :P)
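To give a rough sense of scale, though, the diameter of the Airy disk (out to its first minimum) for a diffraction-limited lens is 2.44·λ·N. A quick sketch at f/8, using representative wavelengths I've picked for each channel (this is only the geometric blur spot; proper per-channel MTF math is messier than this):


--- Code: ---# Airy disk diameter (first minimum) = 2.44 * wavelength * f-number
for name, wavelength_um in (("blue", 0.450), ("green", 0.555), ("red", 0.650)):
    print(f"{name}: {2.44 * wavelength_um * 8:.1f}µm Airy disk at f/8")
# blue ~8.8µm, green ~10.8µm, red ~12.7µm
--- End code ---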


--- Quote from: dtaylor on February 27, 2013, 10:00:23 PM ---At that point you adjust the Bayer pattern to gain detail and process it all down to a file size smaller than the native sensor output, but with more detail than an image from a regular Bayer sensor.

--- End quote ---

That is an option, however I am not sure it is the best one. A couple of things here. For one, people who have never done much processing with, say, mRAW or sRAW from a Canon camera don't realize how limited it is compared to a true RAW file. Neither mRAW nor sRAW are true raw images...they are far more like a JPEG than a RAW, in that the camera "bins" pixels (via software) and produces a losslessly compressed but high-precision 14-bit YCbCr image (JPEG is also a form of YCC, only it uses lossy compression). When I first got my 7D, I photographed in mRAW for a couple of weeks. I liked the quality of the output, it was sharp and clear...but after editing the images in LR for a while, I realized that the editing latitude was far lower. I couldn't push exposure as far without experiencing undesirable and uncorrectable shifts in color, getting banding, etc. The same went for white balance, color tuning, vibrancy and saturation, etc. Without the original digital signal, which can be reinterpolated as needed without ANY conversion and permanent loss of precision, you lose editing capabilities.

A 200mp sensor that uses hardware pixel binning sounds cool, and so long as you expose perfectly every time, the results would probably be great. But if you need or want that post-process exposure latitude (which, as dynamic range has moved well beyond the 8-10 stops of a modern computer screen, is almost essential regardless of any other reasons you may want it), the only way to get the same editing latitude as a 50mp RAW would be to have the 200mp image in true RAW form as well. There is only one RAW, and any processing a camera does to bin or downsample will eliminate the kind of freedom we have all come to expect when using a DSLR these days.

Second, I guess I should also mention...there is an upper limit on how much you can resolve with a sensor that is still reasonably priced. If we take a perfect f/4 lens, for example, you have an upper limit of 173lp/mm as far as the lens goes. That assumes optical aberrations contribute approximately nothing to blur, and that it is all caused by diffraction. I would say that Canon's 500mm f/4 II and 600mm f/4 II, as well as probably the 300mm f/2.8 II and 400mm f/2.8 II, fall into this category. In other words, not many lenses actually produce truly diffraction-limited results at f/4, or at least get close enough that they might as well be perfect.

The question is...what kind of sensor would it take to actually resolve all 173lp/mm from a total system spatial resolution standpoint? You mention a 200mp sensor as being the likely upper limit. From a cost standpoint ten years from now, that might be the case...but it would still be woefully insufficient to fully realize the potential of a perfect f/4 lens @ f/4. Theoretically speaking, system resolution asymptotically approaches the resolution of the weakest component...so you could never actually achieve 173lp/mm exactly, only approach it. Even a 650mp APS-C sensor (!!), with a ~0.72µm pixel pitch good for roughly 700lp/mm, would only bring the system to around 168lp/mm, about 97% of the lens' limit. In terms of FF, that would be a 1.6 GIGAPIXEL sensor, 49824 pixels wide by 33216 pixels tall!! That would be a roughly 10 GIGABYTE 16-bit TIFF file, assuming you could actually import the thing! :D
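You can see the asymptote by inverting the system formula to solve for the sensor resolution needed to hit a given target. Here is a sketch re-using the relationships above (same caveats apply); notice how quickly the required pixel count explodes as the target creeps toward the 173lp/mm limit:


--- Code: ---import math

def required_sensor_lpmm(target_lpmm, lens_lpmm):
    # Inverting tsr = 1/sqrt((1/lsr)^2 + (1/ssr)^2) for ssr;
    # diverges as target_lpmm approaches lens_lpmm
    return 1.0 / math.sqrt((1.0 / target_lpmm) ** 2 - (1.0 / lens_lpmm) ** 2)

for target in (150, 160, 168):
    ssr = required_sensor_lpmm(target, 173.0)
    pitch_um = 1000.0 / (2.0 * ssr)                        # back to pixel pitch
    mp = (22.3e3 / pitch_um) * (14.9e3 / pitch_um) / 1e6   # APS-C megapixels
    print(f"{target} lp/mm -> {ssr:.0f} lp/mm sensor, "
          f"{pitch_um:.2f}µm pitch, ~{mp:.0f}mp APS-C")
# ~120mp gets you to 150 lp/mm, ~235mp to 160, and ~660mp to 168
--- End code ---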

Such a sensor would really be pushing the limits as well, and probably wouldn't even be physically possible. The pixel pitch of such a sensor would be around 723 nanometers (0.723µm)!! The physical size of the photodiode on a 180nm process would probably be around 350nm...a size corresponding to wavelengths well into the ultraviolet!! Perhaps, with subwavelength technology, we might be able to capture the light...I don't know all that much about that field...however I can't imagine it being cheap, and that is on top of the cost of making pixels that small in the first place! (That is to say nothing of the noise or dynamic range at that density...I can't imagine the full-well charge capacity being high enough to be very useful at such a small pixel pitch.)
