I know, using the 7D, which has a higher pixel density than the D800 and is closer to the 45MP MF camera mentioned earlier, that to leverage the best out of the camera I need to use the best lenses I can afford... crap in, crap out... So comparing pixel densities, can it be said that Canon lenses, especially the L lenses, are better equipped for higher resolution cameras than Nikon's lenses, or am I reading too much into this?

I don't know a lot of specifics about Nikon glass; however, what I do know is that it's on par with Canon glass. Nikon lenses coated with *Nano Crystal Coat* would be very similar to Canon lenses with *SWC*.

When it comes to the 7D, there are probably few lenses that will help the 18mp sensor (which offers 115.97lp/mm) resolve as much as possible, and the number of apertures at which you could resolve close to the theoretical maximum sensor resolution would be very few as well. I don't know if any lenses really *achieve* "perfection" at apertures wider than around f/6.3 (where spatial resolution is 117.23lp/mm, about the same as the 7D's 18mp APS-C sensor). I think many get very close, but perfection is difficult to achieve, and some degree of optical aberrations will exist until you are really pretty much just capturing light from the center of the lens.

Resolution is a rather tricky thing, too. Sensors are capable of recording a fixed spatial resolution. Lenses achieve different spatial resolutions depending on their aperture, and their best resolution falls around one specific aperture, and falls off as you open wider or stop down narrower.

*The actual resolution you are recording*, however, is a combination of all of the elements in the system that may have an effect on spatial frequencies. You can be as general or as accurate as you want in those calculations, and if you want to get very accurate you'll need to account for usually unseen factors...like the vertical and horizontal low-pass filters, the IR filter, teleconverters, even the quality of the microlenses above each photodiode in each sensor pixel. Anything that behaves as an optical lens or filter affects spatial frequencies, and will have an impact on your final resolution. To keep things more practical, I usually *just account* for **lenses, teleconverters, and the sensor itself** (its theoretical maximum resolution, ignoring the effects of the filters in front of it).

Now let's get much more technical and explain everything with math (don't worry, the math is pretty simple, and it makes a LOT of sense in the end).

The **system resolution** is really what your camera as a whole is capable of recording into photographs, and it's always less than the lowest common denominator. System resolution can be derived by calculating the total "system blur", or the amount that each physical component of a system affects the reproduction of spatial frequencies in combination with the others. System blur is the square root of the sum of the squares of each independent blur factor in the system (i.e. the square of the sensor's blur + the square of the lens blur + the square of a TC blur). In other words:

`systemBlur = sqrt(blur1^2 + blur2^2 + ... + blurN^2)`
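As a quick sketch of that formula (Python, with hypothetical names; the example numbers come from the 7D figures worked out later in this post):

```python
import math

def system_blur(*blurs_um):
    """Combine independent blur contributions (in microns) as the
    square root of the sum of their squares."""
    return math.sqrt(sum(b * b for b in blurs_um))

# e.g. a 4.3um sensor blur combined with a 4.265um lens blur
print(round(system_blur(4.3, 4.265), 3))  # -> 6.056
```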

It's tough to figure out the blur of a lens when optical aberrations dominate. More than one type of aberration usually affects spatial resolution at that point, they are more mathematically complex, and on top of all the optical aberrations there is STILL diffraction to account for (which does affect the overall blur, however in and of itself it usually only contributes enough to cover a fraction of a pixel, so it doesn't matter much). When a lens becomes diffraction limited, optical aberrations are minimal (and possibly completely dominated by diffraction itself), and diffraction is easier to compute mathematically. Diffraction, the bending of light around edges (in this case, the edges of the diaphragm that border the aperture), produces a specific waveform, or **airy pattern**. The airy pattern is dominated by the **airy disk** in the center, which is primarily what you're resolving a point of light to. Around the airy disk are concentric rings of light with far lower intensity. As the size of the physical aperture is reduced, the size of the airy pattern increases, with the central disk increasing in size, and the concentric rings reaching farther and becoming more intense themselves. It's the growth of the airy disk that softens resolution at smaller apertures.

You can compute the size of the airy disk using the following formula:

`1.22 * lightWavelength * fNumber`

As you can see from that formula, different wavelengths of light produce different amounts of diffraction. It is easiest to just use the wavelength of green light to compute an average airy disk size that roughly accounts for middle-ground resolution. Bayer sensors are twice as sensitive to green light as they are to blue and red light, so it works out well in the end. If we take our f/6.3 aperture and the wavelength of green light (555nm) in millimeters, you can see that:

`1.22 * 0.000555mm * 6.3 = 0.004265mm`

To convert that to microns (which are easier to compare to sensor pixel pitch), you simply multiply by 1000um/mm:

`blurCircle um = blurCircle mm * 1000um/mm`

`0.0042653mm * 1000um/mm = 4.265um`
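Putting the diffraction formula and the unit conversion together (a Python sketch; `airy_disk_um` is my own name, not a standard API):

```python
def airy_disk_um(f_number, wavelength_mm=0.000555):
    """Airy disk diameter in microns: 1.22 * wavelength * f-number,
    converted from mm to um. Defaults to green light at 555nm."""
    return 1.22 * wavelength_mm * f_number * 1000.0

print(round(airy_disk_um(6.3), 3))  # -> 4.266 (the 4.265um figure above, rounded)
```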

Here is where things get very interesting. The 7D's 18mp APS-C sensor has 4.3 micron pixels...roughly the same size as the lens's blur circle at f/6.3. It should be easy to understand why the 7D is diffraction limited at about f/6.3...because that is exactly the point where the airy disk (the central bright point of light in the airy pattern resolved by a lens) is the same size as a sensor pixel. I believe officially the DLA (diffraction limited aperture) is f/6.9, which would allow the airy disk to be just slightly larger than a single pixel, capable of affecting other pixels, thereby reducing resolution. (It's a bit more technical than that in reality, but roughly speaking that's the deal.)

Finally, we can determine the theoretical maximum spatial resolution of a lens in line pairs per millimeter as so:

`spatialResolution lp/mm = 1000um/mm / (blurCircle um * 2)`

So plugging our numbers from above into that formula, we get:

`1000um/mm / (4.265um * 2) = 1000um/mm / 8.53um = 117.23 lp/mm`
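The same blur-circle-to-lp/mm conversion as a small Python helper (hypothetical name):

```python
def spatial_resolution_lpmm(blur_circle_um):
    """lp/mm from a blur circle in microns: one line pair
    spans two blur circles."""
    return 1000.0 / (blur_circle_um * 2)

print(round(spatial_resolution_lpmm(4.265), 2))  # -> 117.23
```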

A diffraction-limited lens at f/6.3 is capable of a maximum of 117.23 line pairs per millimeter. Similarly, the 7D is capable of about 116lp/mm. The spatial resolution of a sensor can be easily computed as so:

`sensor_resolution lp/mm = pixelRows l / sensorHeight mm / 2`

If we plug in the numbers for the 7D into that formula, we get:

`3456 l / 14.9mm / 2 = 231.946 l/mm / 2 = 115.973 lp/mm`
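That sensor calculation in Python form (again, the names are mine):

```python
def sensor_resolution_lpmm(pixel_rows, sensor_height_mm):
    """Nyquist-limited sensor resolution: two pixel rows
    are needed to record one line pair."""
    return pixel_rows / sensor_height_mm / 2

# 7D: 3456 pixel rows on a 14.9mm-tall APS-C sensor
print(round(sensor_resolution_lpmm(3456, 14.9), 3))  # -> 115.973
```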

At this point, you're probably figuring that you only need a diffraction limited lens at f/6.3 to resolve enough to make the most out of the 7D. That would be great, but it's still not quite that simple. Remember the concept of **system blur** from above:

`systemBlur = sqrt(blur1^2 + blur2^2 + ... + blurN^2)`

Both the lens and the sensor are capable of resolving the same amount of detail, however the final resolution of our photograph is actually going to be much less. Our total system blur boils down to:

`sqrt((4.3um)^2 + (4.265um)^2) = sqrt(18.49um^2 + 18.19um^2) = sqrt(36.68um^2) = 6.056um`

While the maximum theoretical blur circle for our sensor is 4.3um, allowing for 115.97lp/mm, and the maximum theoretical blur circle of our lens is 4.265um, allowing for 117.23lp/mm, the theoretical blur circle of the lens+camera is a full 41% larger. If we plug our system blur into the necessary formulas to get spatial resolution in line pairs / mm, we get:

`1000um/mm / (6.056um * 2) = 1000um/mm / 12.113um = 82.56 lp/mm`

Our system resolution is a meager 82.56 line pairs per millimeter!! Rather frustrating, however it might shed some light on why the 7D appears "soft" much of the time. The sensor itself is producing about 40% more pixels than it needs even at f/6.3, and the problem only gets worse the more you stop down. Similar things happen if you open the lens up too wide. If we open up to f/4, the picture improves, but it's still not perfect (or even ideal):

`F4Blur = 1.22 * 0.555um * 4 = 2.708um`

`F4Resolution = 1000um/mm / (2.708um * 2) = 184.6 lp/mm`

`systemBlurF4 = sqrt((4.3um)^2 + (2.708um)^2) = sqrt(18.49um^2 + 7.335um^2) = 5.08um`

`systemResolutionF4 = 1000um/mm / (5.08um * 2) = 1000um/mm / 10.163um = 98.389 lp/mm`
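Combining all of the pieces so far, a short Python sketch that reproduces both the f/6.3 and f/4 system numbers (constants and names are assumptions for this example; small rounding differences from the hand calculations above are expected):

```python
import math

PIXEL_PITCH_UM = 4.3  # 7D sensor blur (pixel pitch)
GREEN_UM = 0.555      # wavelength of green light in microns

def lens_blur_um(f_number):
    # Airy disk diameter of a diffraction limited lens
    return 1.22 * GREEN_UM * f_number

def system_resolution_lpmm(f_number):
    # Root-sum-square of sensor and lens blur, then blur -> lp/mm
    blur = math.sqrt(PIXEL_PITCH_UM**2 + lens_blur_um(f_number)**2)
    return 1000.0 / (blur * 2)

for f in (6.3, 4.0):
    print(f"f/{f}: {system_resolution_lpmm(f):.1f} lp/mm")
# f/6.3 -> ~82.5 lp/mm, f/4 -> ~98.4 lp/mm
```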

So what would it really take to make the BEST use of our 7D's 18mp APS-C sensor? Well, we could rearrange our blur formula a bit to compute a target blur:

`targetBlur = sqrt(lensBlur^2 + sensorBlur^2)`

`targetBlur^2 = lensBlur^2 + sensorBlur^2`

`lensBlur^2 = targetBlur^2 - sensorBlur^2`

If we plug in the numbers for our sensor and target blur when the target blur is the same as the sensor blur:

`lensBlur^2 = (4.3um)^2 - (4.3um)^2 = 0`

Well, bugger. We would need infinite lens resolution (airy disks that are essentially infinitely small points of light) to actually achieve a total system resolution that is the same as our sensor resolution. A better, more realistic way to put it is that it is impossible for any combination of system components to produce a system resolution that equals the lowest component resolution. System resolution has an *asymptotic relationship* with the lowest component resolution. Let's say we just want to get very close, and aim for 114 lp/mm with our lens+camera (a target blur of about 4.4um):

`lensBlur = sqrt((4.4um)^2 - (4.3um)^2) = sqrt(0.87um^2) = 0.9327um`

`lensResolution = 1000um/mm / (0.9327um * 2) = 1000um/mm / 1.8655um = 536.05 lp/mm`

We need an unbelievably stellar lens, capable of a whopping 536.05lp/mm, to improve system spatial resolution to 114lp/mm. That would be about 98% of the theoretical maximum. In terms of lens aperture, that would be:

`1.22 * 0.555um * fNumber = lensBlur = 0.9327um`

`fNumber = 0.9327um / (1.22 * 0.555um)`

`fNumber = 0.9327um / 0.6771um`

`fNumber = 1.377`
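The inverse calculation can be sketched in Python as well (assumed names; the 4.4um target blur corresponds to the ~114lp/mm goal above):

```python
import math

SENSOR_BLUR_UM = 4.3  # 7D pixel pitch
GREEN_UM = 0.555      # green light wavelength in microns

def required_f_number(target_blur_um):
    """Diffraction-limited f-number whose lens blur, combined with
    the sensor blur, yields the target system blur."""
    lens_blur = math.sqrt(target_blur_um**2 - SENSOR_BLUR_UM**2)
    return lens_blur / (1.22 * GREEN_UM)

print(round(required_f_number(4.4), 2))  # -> 1.38
```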

A perfect (i.e. diffraction limited rather than optical aberration limited) lens at an aperture of f/1.38 would be necessary to achieve 114lp/mm with the 7D sensor. To my knowledge, such a lens does not exist (or if it did, it would have to be an extreme supertelephoto lens, where most incident light is already collimated, producing very few optical aberrations to start with). When it comes to system resolution (or system blur), you get the most benefit by improving the lowest common denominator. In this case, the sensor is the lowest common denominator **when we are using f/4**. As such, we could gain more system resolution by increasing sensor resolution relative to lens resolution. If we used a sensor capable of 173lp/mm, the same as a diffraction limited lens at f/4, we would have a 40mp APS-C sensor. Our total system resolution would be about 122lp/mm at f/4. As you have probably figured, a 40mp APS-C sensor would be quite a feat to manufacture, would probably have some very undesirable noise and electronic characteristics, and would likely be extremely expensive. It would also likely exhibit similar softness, as ironically, the sensor would still be producing images with 40% more pixels than are necessary to produce a sharp photo.

Personally, I prefer my lenses to outresolve my sensors a bit, which means the sensors never produce more pixels than necessary to **create a sharp photo straight out of the camera**.

Outside of extremely fine detail (which for the most part full-frame cameras would be incapable of resolving in the first place), any comparison between such a camera and a full-frame with the same megapixels would make it seem like the higher resolution APS-C was "soft". In reality, larger details appear soft *relative to the full-frame*, but you're resolving finer detail overall. A real-world example (assuming all else being equal...i.e. our hypothetical 40mp sensors have the same noise characteristics, color fidelity, dynamic range, etc. despite being different physical sizes) might be shooting a portrait such that you wanted your subject's eyelashes to appear sharp, vs. shooting the same portrait where you wanted the fibers in your subject's iris to appear sharp. The eyelashes might appear a touch soft in the APS-C photo, compared to how they appeared in the FF photo...but the APS-C photo is resolving the iris itself in far more detail than the FF could ever aspire to. You could sharpen the APS-C photo a bit, and your eyelashes would be superb...although the iris may seem a bit over-sharpened now (and you're exposing every single blemish of your lovely model's face in depressing detail as well!) In the end, neither 40mp camera, FF or APS-C, really matters, since you're printing that amazing portrait photo on a full-page magazine spread that is a mere 10x12" in size...in which case, you could probably have done superbly well with a meager 10 or 12mp camera (possibly even less).