ChristopherMarkPerez said:
Back in the day when I was trying to sort this out for myself, I had a long conversation with a physics professor. I was just an engineer, after all, and I wanted to make sure I fully understood the underlying physics, both practical and theoretical. He too had been looking at the question for years.
What it comes down to is this: the 1/R formula that is commonly cited (one example is on Fuji's website, of all places) to describe the relationship between light-capture media and optics is fundamentally flawed, and it is easily proven wrong through simple testing.
The problem is that the formula at the basis of nearly every one of these conversations/discussions/flame-wars is wrong. The issue is the misapplication of particle physics to light. The formula is simple, and it "feels" right, but it's not. Light (in terms of resolution) acts more like a wavefront than a particle. A much more complex equation would be needed to properly explain the relationship between optics and sensors.
To simplify, the professor said that from his perspective, corrected math fully supports his claim that at normal working apertures (wide open through f/11) and with commercially available components (i.e., anything coming from current camera manufacturers), film or digital sensors are the limiting factor and set the limits of image resolution.
Said simply, you can use any correctly manufactured, commonly available optic on any sensor currently made, and that lens will have more than adequate performance for your system. That leaves us with the straightforward exercise of calculating real-world resolution by looking only at the sensor.
If any of what I just said were wrong, you'd not be able to build the kinds of computer parts that enable this very discussion. Talk to any mask builder at Intel or AMD and you'll fully understand what I mean by this.
jrista said:
The point I was trying to make before was that you can't get 50mp out of any existing lens, and probably won't with any lens created within the next decade. The same goes for 36mp, 24mp, and 18mp. You cannot actually resolve those resolutions with ANY lens, even the best of the best of the best, because output resolution has an asymptotic relationship with the least-resolving component of the system...
Your professor is right, to a degree...however, I'd be curious to know exactly what he said, as his account is also incomplete. First, he is correct that light behaves as a wavefront. The particle nature of light is useful for describing in geometric terms how optics behave...but in reality, we don't work with individual particles of light. We work with a continuous, complex wavefront of light that produces a three-dimensional structure, a "light cone" for lack of a better term, within the lens that ultimately resolves, or focuses, at the sensor plane. Another critical point about a photonic wavefront is that diffraction is an inherent trait of it, not something like "light bending around an obstacle." If you want to learn more, you can read about it here:
http://www.telescope-optics.net/wave.htm
As for the resolving power of a camera, I've said it a thousand times on these forums already: lenses don't out-resolve sensors, and sensors don't out-resolve lenses. The two work together to convolve the output, which will have lower resolution than either, with an asymptotic relationship to the least-resolving component of the system.
To approximate how a lens and sensor will perform together, you can use the following formula:
sysRes = (1/SQRT((1/(lensRes*2))^2 + sensorPixelPitch^2))/2
Fundamentally, this formula calculates the system spot size: the convolved result of a single point of light passing through both the lens and the sensor. The rest simply converts from spatial resolution in lp/mm to spot size and back. So, take a lens at f/11; its diffraction-limited performance at MTF50 is about 63lp/mm. That is a pretty low resolution...there are actually lots of sensors with higher spatial resolution than that. However, let's say we take a 22.3mp, a 36.3mp, and a 50mp sensor, and use each of them with that diffraction-limited lens at f/11. Here are the results of measuring the MTF50 spatial resolution of the resulting images (a short sketch reproducing these numbers follows the list):
22.3mp: 49.45lp/mm
36.3mp: 53.60lp/mm
50mp: 55.97lp/mm
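Here is a minimal Python sketch of that formula. The pixel pitches are my assumptions for full-frame sensors (roughly 6.25µm, 4.88µm, and 4.14µm for 22.3mp, 36.3mp, and 50mp), and the function name is just for illustration; with those values it lands within a hair of the numbers above:

```python
import math

def system_res_lpmm(lens_res_lpmm, pixel_pitch_mm):
    """Same formula as above: convert the lens resolution to a spot size,
    root-sum-square it with the pixel pitch, then convert the combined
    spot back to spatial resolution in lp/mm."""
    lens_spot = 1.0 / (2.0 * lens_res_lpmm)             # mm per line
    system_spot = math.sqrt(lens_spot**2 + pixel_pitch_mm**2)
    return (1.0 / system_spot) / 2.0                    # back to lp/mm

# Assumed full-frame pixel pitches (not stated in the post), in mm:
sensors = {"22.3mp": 0.00625, "36.3mp": 0.00488, "50mp": 0.00414}

for name, pitch in sensors.items():
    print(f"{name}: {system_res_lpmm(63.0, pitch):.2f} lp/mm at f/11")
# -> roughly 49.5, 53.7, and 55.9 lp/mm
```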
As you can see, even at f/11, increasing the megapixel count (reducing pixel size) still lets the sensor pull more detail out of a fully diffraction-limited lens producing a large spot size. The differences are not large...probably not visible, and probably only measurable with software.
Now, f/11 is a very limited aperture: it resolves 63lp/mm. If we jump up to f/4, the diffraction-limited resolution jumps to 173lp/mm (a sketch of where these diffraction-limit figures come from follows the list below). Then we get the following results:
22.3mp: 72.6lp/mm
36.3mp: 88lp/mm
50mp: 99.6lp/mm
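For reference, the 63 / 173 / 346 lp/mm figures are consistent with estimating diffraction-limited MTF50 as roughly 0.38 times the cutoff frequency 1/(λN) for green light (λ ≈ 0.55µm). Those two constants are my assumptions, chosen because they reproduce the numbers used here:

```python
WAVELENGTH_MM = 0.00055   # assumed: 550nm green light, in mm
MTF50_FACTOR = 0.38       # assumed: MTF50 sits at ~38% of the diffraction cutoff

def diffraction_mtf50_lpmm(f_number):
    cutoff = 1.0 / (WAVELENGTH_MM * f_number)   # diffraction cutoff frequency, lp/mm
    return MTF50_FACTOR * cutoff

for n in (11, 4, 2):
    print(f"f/{n}: {diffraction_mtf50_lpmm(n):.0f} lp/mm")
# -> f/11: 63, f/4: 173, f/2: 345
```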
If we think about something like the Otus, which is diffraction-limited at even wider apertures, here is what we could resolve at f/2, where we have 346lp/mm of spatial resolution:
22.3mp: 78lp/mm
36.3mp: 98lp/mm
50mp: 115lp/mm
As you can see, even though sensors are not resolving as much, in terms of spatial resolution, as lenses, higher and higher resolution sensors are still capable of producing higher resolution results. It doesn't matter if you're at a heavily diffraction-limited f/11 or a minimally diffraction-limited f/2...you will still resolve more detail with a higher resolution sensor. The sensors will ultimately set the limit...however, we are currently far from reaching those limits with high-quality lenses such as the Otus (the best example, though there are other lenses out there that resolve near the diffraction limit wide open, such as most of Canon's great white supertelephoto lenses). You can also approach the true diffraction limit of the lens by increasing sensor resolution. You want to actually resolve 63lp/mm from an f/11 lens? Well, you'll need a sensor capable of resolving a few hundred lp/mm to get close...which means we'll need sensors capable of resolving hundreds of megapixels.
The diffraction-limited resolution of an f/4 lens is 173lp/mm. A 50mp sensor combined with such an f/4 lens is only resolving about 100lp/mm. That means we're barely more than halfway to the diffraction limit of the lens. We could still resolve more...with a 75mp sensor, a 100mp sensor, a 150mp sensor.
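To illustrate that asymptotic approach, here is a rough sketch sweeping hypothetical full-frame sensors against the 173lp/mm f/4 figure, using the same system formula as before. The 36×24mm frame size and the megapixel counts are my assumptions for illustration:

```python
import math

def ff_pixel_pitch_mm(megapixels):
    """Approximate pixel pitch for an assumed 36x24mm full-frame sensor."""
    return math.sqrt((36.0 * 24.0) / (megapixels * 1e6))

def system_res_lpmm(lens_res_lpmm, pixel_pitch_mm):
    lens_spot = 1.0 / (2.0 * lens_res_lpmm)
    return 1.0 / (2.0 * math.sqrt(lens_spot**2 + pixel_pitch_mm**2))

LENS_F4 = 173.0  # diffraction-limited MTF50 at f/4, from above
for mp in (50, 75, 100, 150, 300, 600):
    res = system_res_lpmm(LENS_F4, ff_pixel_pitch_mm(mp))
    print(f"{mp}mp: {res:.0f} lp/mm ({100 * res / LENS_F4:.0f}% of the lens limit)")
# -> ~99, 112, 121, 133, 149, 160 lp/mm: closer and closer, never quite there
```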
Regarding the kinds of UV lithography used to etch sensor parts at 22nm: you have to remember that the wavelengths of light used there are extremely short. Visible light runs from about 380nm at the near-UV boundary to roughly 750nm at the near-IR boundary. Deep UV is less than 200nm in wavelength, down to around 100nm, and EUV is a mere 13.5nm. The photomasks used in photolithography are also quite large, and the systems that actually etch silicon with UV light use the best optics on the planet (usually Zeiss) and are essentially perfectly diffraction-limited (and at such small wavelengths, a diffraction spot is very small). Large masks result in little diffraction, and ultra-short wavelengths smaller than the smallest transistor ensure we can actually create structures that small. We even have techniques that allow sub-wavelength etching, which is why we will ultimately be able to etch around 7nm and maybe even 4nm transistor sizes with a 13.5nm wavelength.
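To put rough numbers on how the diffraction spot scales with wavelength, here is a sketch using the textbook Airy-disk diameter of about 2.44·λ·N. Quoting an f-number (f/4 here) for lithography optics is a simplification on my part, since steppers are actually specified by numerical aperture, but the linear scaling with wavelength is the point:

```python
def airy_diameter_nm(wavelength_nm, f_number=4.0):
    # Airy-disk diameter ~ 2.44 * lambda * N for a circular aperture
    return 2.44 * wavelength_nm * f_number

for label, wl in [("green visible", 550.0), ("deep UV (ArF)", 193.0), ("EUV", 13.5)]:
    print(f"{label} ({wl} nm): spot ~ {airy_diameter_nm(wl):.0f} nm")
# -> ~5368 nm, ~1884 nm, and ~132 nm: the EUV spot is tiny by comparison
```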