Pixel density, resolution, and diffraction in cameras like the 7D II

Status
Not open for further replies.

jrista

I'm starting this thread to continue a tangent from another. Rather than derail that thread, and so as not to lose the discussion, I thought we could continue it here in its own thread. I think there is important information to be gleaned from the discussion, which started when I responded to a comment by @rs:

jrista said:
rs said:
Ps - I really hope Canon resist the temptation to take their 1.6x crop sensor up to 24mp. It'll suffer from softness due to diffraction from f6.0 onwards - mount an f5.6 lens on there and you've got little in the way of options. Even the legendary 300/2.8 II with a 2x TC III will underperform, and leave you with just one aperture option if you want to attempt to utilise all of those megapixels. Leave the MP lower, and let those lower processing overheads allow them to push the hardware of the small mirror and shutter to its limits.

Once again, this rhetoric keeps cropping up and it is completely incorrect! NEVER, in ANY CASE, is more megapixels bad because of diffraction! :p That is so frequently quoted, and it is so frequently wrong.

You can follow the quote above to read the precursor comments on this topic. So, continuing on from the last reply by @rs:



rs said:
jrista said:
Once again, this rhetoric keeps cropping up and it is completely incorrect! NEVER, in ANY CASE, is more megapixels bad because of diffraction! :p That is so frequently quoted, and it is so frequently wrong.
I'm not saying it's worse, it's just that the extra MP don't make any difference to the resolving power once diffraction has set in. Take another example - scan a photo which was a bit blurry - if a 600dpi scan looks blurry on screen at 100%, you wouldn't then think 'let's find out if anyone makes a 10,000dpi scanner so I can make this look sharper?' You'd know it would offer no advantages - at that point you're resolving more detail than is available - weakest link in the chain and all that...

I think you are generally misunderstanding resolution in a multi-component system. It is not simply the weakest link that determines resolution...total system blur is the root sum of squares (the quadrature sum) of the blur from each component. To keep things simple for this forum, and in general this is adequate for most discussion, we'll just factor in the lens resolution and sensor resolution, in terms of spatial resolution. The way I approach this is to determine the "system blur". Diffraction itself is what we call "blur" from the lens, assuming the lens is diffraction limited (and, for this discussion, we'll just assume the lens is always diffraction limited, as determining blur from optical aberrations is more complex), and it is caused by the physical nature of light. Blur from the lens changes depending on the aperture used, and as the aperture is stopped down, diffraction limits the maximum spatial resolution of the lens.

The sensor also introduces "blur", however this is a fixed, intrinsic factor determined by the size and spacing of the pixels, whether microlenses are used, etc. For the purposes of discussion here, let's just assume that 100% of the pixel area is utilized thanks to "perfect" microlensing. That leaves us with a sensor blur equal to the pixel pitch (the scalar size, horizontal or vertical, of each pixel) times two (to get us lp/mm, or line pairs per millimeter, rather than simply l/mm, or lines per millimeter).

[NOTE: I assume MTF50 as that is the standard that historically represents what we perceive as clear, crisp, sharp, with high microcontrast. MTF10, in contrast, is usually used to determine what might be considered the maximum resolution at the lowest level of contrast the human eye could detect...which might be useful for determining the resolution of barely perceptible features on the surface of the moon...assuming atmospheric conditions are perfect, but otherwise it is not really adequate for the discussion here. Maximum spatial resolution at MTF10 can be considerably higher than at MTF50, but there is no guarantee that the difference between one pixel and the next is detectable by the average person (the Rayleigh criterion, often described as the limit of human visual acuity for 20/20 vision)...it is more of the "true mathematical/theoretical" limit of resolution at very low, barely detectable levels of contrast. MTF0 would be spatial resolution where contrast approaches zero, which is largely useless for general photography, outside of the context of astronomy endeavors where minute changes in the shape and structure of an Airy disk for a star can be used to determine if it is a single, binary, or tertiary system...or other scientific endeavors where knowing the shape of an Airy disk at MTF0, or Dawes' limit (the theoretical absolute maximum resolving power of an optical system at near-zero contrast), is useful.]

For starters, let's assume we have a perfect (diffraction-limited) lens at f/8, on a 7D sensor which has a pixel pitch of 4.3 microns. The lens, at f/8, has a spatial resolution of 86 lp/mm at MTF50. The sensor has a raw spatial resolution of approximately 116 lp/mm (assuming the most ideal circumstances, and ignoring the difference between green and red or blue pixels.) Total system blur is derived by combining the blurs of each component in the system in quadrature (root sum of squares). The formula for this is:

Code:
tb = sqrt(lb^2 + sb^2)

Where tb is Total Blur, lb is Lens Blur, and sb is Sensor Blur. We can convert spatial resolution, from lp/mm, into a blur circle in mm, by simply taking the reciprocal of the spatial resolution:

Code:
blur = 1/sr

Where blur is the diameter of the blur circle, and sr is the spatial resolution. We get 0.01163mm for the blur size of the lens @ f/8, and 0.00863mm for the blur size of the sensor. From these, we can compute the total blur of the 7D with an f/8 lens:

Code:
tb = sqrt((0.01163mm)^2 + (0.00863mm)^2) = sqrt(0.0001352mm^2 + 0.0000743mm^2) = sqrt(0.0002095mm^2) = 0.01448mm

We can convert this back into lp/mm simply by taking the reciprocal again, which gives us a total system spatial resolution for the 7D of ~69lp/mm. Seems surprising, given the spatial resolution of the lens...but then again, that is for f/8. If we move up to f/4, the spatial resolution of the lens jumps from 86lp/mm to 173lp/mm. Refining our equation to stay in lp/mm:

Code:
tsr = 1/sqrt((1/lsr)^2 + (1/ssr)^2)

Where tsr is total spatial resolution, lsr is lens spatial resolution, and ssr is sensor spatial resolution, plugging in 173lp/mm and 116lp/mm for lens and sensor respectively gets us:

Code:
tsr = 1/sqrt((1/173)^2 + (1/116)^2) = 1/sqrt(0.0000334 + 0.0000743) = 1/sqrt(0.0001077) = 1/0.01038 = 96.3

With a diffraction limited f/4 lens, the 7D is capable of achieving ~96lp/mm spatial resolution.
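
For anyone who wants to play with these numbers, here is a minimal Python sketch of the same arithmetic, using the figures above (a diffraction-limited MTF50 of 86 lp/mm at f/8 and 173 lp/mm at f/4, and the 7D's 4.3µm pitch):

Code:
import math

def sensor_lpmm(pitch_um):
    """Sensor spatial resolution in lp/mm: one line pair spans two pixels."""
    return 1.0 / (2 * pitch_um / 1000.0)

def system_lpmm(lens_lpmm, sens_lpmm):
    """Combine lens and sensor blur in quadrature, then convert back to lp/mm."""
    lens_blur = 1.0 / lens_lpmm       # blur circle diameter in mm
    sensor_blur = 1.0 / sens_lpmm
    total_blur = math.sqrt(lens_blur ** 2 + sensor_blur ** 2)
    return 1.0 / total_blur

pitch_7d = 4.3                                     # microns, 18mp APS-C
print(sensor_lpmm(pitch_7d))                       # ~116 lp/mm
print(system_lpmm(86, sensor_lpmm(pitch_7d)))      # ~69 lp/mm at f/8
print(system_lpmm(173, sensor_lpmm(pitch_7d)))     # ~96.5 lp/mm at f/4 (the ~96 above rounds the sensor to 116)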

The debate at hand is whether a 24.1mp APS-C sensor is "worth it", and whether it will provide any kind of meaningful benefit over something like the 7D's 18mp APS-C sensor. My response is absolutely!! However, we can prove the case by applying the math above. A 24.1mp APS-C sensor (Canon-style, 22.3mmx14.9mm dimensions) would have a pixel pitch of 3.7µm, or ~135lp/mm:

Code:
(1/(pitch µm / 1000µm/mm)) / 2 l/lp = (1/(3.7µm / 1000µm/mm)) / 2 l/lp = (1/(0.0037mm)) / 2 l/lp = 270l/mm / 2 l/lp = 135 lp/mm
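
As a sanity check, the 3.7µm / ~135 lp/mm figures can be derived straight from the megapixel count and the 22.3mm x 14.9mm dimensions (a sketch, assuming a plain 3:2 grid of square pixels):

Code:
import math

def pitch_from_mp(mp, width_mm=22.3, height_mm=14.9):
    """Approximate pixel pitch in microns for a square-pixel 3:2 sensor."""
    pixels_wide = math.sqrt(mp * 1e6 * width_mm / height_mm)
    return width_mm * 1000.0 / pixels_wide

pitch = pitch_from_mp(24.1)
print(pitch)                          # ~3.7 microns
print(1.0 / (2 * pitch / 1000.0))     # ~135 lp/mm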

Plugging that, for an f/4 lens, into our formula from above:

Code:
tsr = 1/sqrt((1/173)^2 + (1/135)^2) = 1/sqrt(0.0000334 + 0.0000549) = 1/sqrt(0.0000883) = 1/0.0094 = 106.4

The 24.1mp sensor, with the same lens, produces a better result...we gained 10lp/mm, up to 106lp/mm from 96lp/mm on the 18mp sensor. That is an improvement of 10%! Certainly nothing to sneeze at! But...at f/8 the sensor outresolves the lens...so there wouldn't be any difference there, right? Well...not quite. Because "total system blur" is a factor of all components in the system, we will still see improved resolution at f/8. Here is the proof:

Code:
tsr = 1/sqrt((1/86)^2 + (1/135)^2) = 1/sqrt(0.0001352 + 0.0000549) = 1/sqrt(0.00019) = 1/0.0138 = 72.5

Despite the fact that the theoretical 24.1mp sensor from the hypothetical 7D II is DIFFRACTION LIMITED at f/8, it still resolves more! In fact, it resolves about 5% more than the 7D at f/8. So, according to the theory, even if the lens is not outresolving the sensor, even if the lens and sensor are both thoroughly diffraction limited, a higher resolution sensor will always produce better results. The improvements will certainly be smaller and smaller as the lens is stopped down, thus producing diminishing returns. If we run our calculations for both sensors at f/16, the difference between the two is less than at f/8:

18.0mp @ f/16 = 40lp/mm
24.1mp @ f/16 = 41lp/mm

The difference between the 24mp sensor and the 18mp sensor at f/16 has shrunk by half to 2.5%. By f/22, the difference is 29.95lp/mm vs. 30.21lp/mm, or an improvement of only 0.9%. Diminishing returns...however even at f/22, the 24mp is still producing better results...not that anyone would really notice...but it is still producing better results.
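
For anyone who wants to reproduce the diminishing-returns numbers, here is a small sketch that loops over apertures, scaling the diffraction-limited MTF50 figure used above (173 lp/mm at f/4) inversely with the f-number - that scaling is an assumption carried over from the 86/173 figures quoted earlier:

Code:
import math

def system_lpmm(lens_lpmm, sens_lpmm):
    return 1.0 / math.hypot(1.0 / lens_lpmm, 1.0 / sens_lpmm)

SENSORS = {"18.0mp (4.3um)": 116.0, "24.1mp (3.7um)": 135.0}

for f in (4, 5.6, 8, 11, 16, 22):
    lens = 173.0 * 4.0 / f   # assumed: diffraction-limited MTF50 scales as 1/f-number from 173 lp/mm at f/4
    row = ", ".join(f"{name}: {system_lpmm(lens, s):5.1f} lp/mm" for name, s in SENSORS.items())
    print(f"f/{f}: {row}")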

rs said:
jrista said:
The aperture used was f/9, so diffraction has definitely "set in" and is visible given the 7D's f/6.9 DLA. The subject, in this case a Juvenile Baird's Sandpiper, comprised only the center 25% of the frame, and the 300 f/2.8 II w/ 2x TC STILL did a superb job resolving a LOT of detail:
You've got some great shots there, very impressive ;) - and it clearly does show the difference between good glass and great glass. But the f/9 300 II + 2x shot isn't 100% pixel sharp like your native 500/4 shot is. I'm not saying there's anything wrong with the shot - it's great, and the detail there is still great. It's just not 18MP-of-perfection great. A 15MP sensor wouldn't have resolved any less detail behind that lens, but that wouldn't have made a 15MP shot any better. This thread is clearly going off on a tangent here, as pixel peeping rarely has anything to do with what makes a great photo - it's just that we are debating whether the extra MP are worth it. And just to re-iterate, great shots jrista :)

No, it certainly isn't 18mp-of-perfection great, because it is only a quarter of the frame. It is more like 4.5mp "great". :p My 100-400 wouldn't do as well, not because it doesn't resolve as much - at f/9 it would resolve roughly the same - but because it would produce lower contrast. Microcontrast from the 300mm f/2.8 II lens is beyond excellent...microcontrast from the 100-400 is bordering on piss-poor. There are also the advancements in IS technology to consider. I forgot to mention this before, but Canon has greatly improved the image stabilization of their new generation of lenses. Where we MAYBE got two stops of hand-holdability before, we easily get at least four stops now, and I've managed to get some good shots at five stops. As a matter of fact, the Sandpiper photo was hand held (with me squatting in an awkward manner on soggy, marshy ground that made the whole thing a real pain), at 600mm, on a 7D, where the BARE MINIMUM shutter speed to get a clear shot in that situation is 1/1000s.

So, I still stress...there are very good reasons to have higher resolution sensors, and with the significantly advanced new generation of lenses Canon is releasing, I believe we have the optical resolving power to not only handle a 24mp APS-C sensor, but up to 65-70mp FF sensors, if not more, in the future.

rs said:
You've got some great shots there, very impressive ;) - /* ...clip... */ And just to re-iterate, great shots jrista :)

Thanks! ;D
 

dtaylor

Excellent post. Thank you for digging up and laying out the formulas. I remember where they're at, but I was being too lazy to dig out the book and copy them. You posted them along with a clear explanation.

I would only add that post processing can recover details <MTF50, giving more potential to the 24 MP sensor past its diffraction "limit". And that diffraction is not the same for all wavelengths, something sensor designers are aware of and will likely exploit in future very high resolution sensors with very high speed in-camera processing. At that point you adjust the Bayer pattern to gain detail and process it all down to a file size smaller than the native sensor output, but with more detail than an image from a regular Bayer sensor.

Thanks again for the post!
 
Upvote 0

jrista

dtaylor said:
Excellent post. Thank you for digging up and laying out the formulas. I remember where they're at, but I was being too lazy to dig out the book and copy them. You posted them along with a clear explanation.

No digging...that was straight out of my brain! :p (Honestly...I've written those formulas out so many times at this point, I remember them all by heart...and when I don't, it is just a matter of deriving them.) I just try to avoid the math when possible, as not everyone understands it. There was no real way to prove the notion that higher resolution sensors still offer benefits over lower resolution ones, even beyond the point of diffraction limitation, without the math, though.

dtaylor said:
I would only add that post processing can recover details <MTF50, giving more potential to the 24 MP sensor past its diffraction "limit". And that diffraction is not the same for all wavelengths, something sensor designers are aware of and will likely exploit in future very high resolution sensors with very high speed in-camera processing.

It is true that diffraction differs depending on the wavelength, which is why I stated I'm generally ignoring the nature of Bayer sensors and the difference in resolution of red and blue pixels vs. green. Green light is easy, ~555nm wavelength, and it falls approximately mid-way between red light and blue light. Diffraction gives a slight advantage to blue (a smaller Airy disk, so less blur) and a slight disadvantage to red (a larger Airy disk, so more blur)...relative to their lower spatial sampling in the sensor. The math gets a lot more complex if you try to account for all three channels and cover spatial resolution for three wavelengths of light. I don't think that is quite appropriate for this kind of forum (and I don't have all of that memorized, either! :p)
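
If you are curious how big the per-channel spread actually is, here is a quick sketch using the Rayleigh spacing (1.22 x N x wavelength, the same expression that comes up later in this thread); the red and blue wavelengths chosen are my own rough assumptions:

Code:
WAVELENGTHS_UM = {"blue": 0.47, "green": 0.555, "red": 0.64}   # assumed channel centers

def rayleigh_lpmm(f_number, wavelength_um):
    """lp/mm at the Rayleigh spacing for a diffraction-limited lens."""
    pair_mm = 1.22 * f_number * wavelength_um / 1000.0   # one line pair, in mm
    return 1.0 / pair_mm

for name, wl in WAVELENGTHS_UM.items():
    print(f"f/8, {name}: {rayleigh_lpmm(8, wl):.0f} lp/mm")
# blue ~218, green ~185, red ~160 lp/mm -- blue diffracts least, red most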

dtaylor said:
At that point you adjust the Bayer pattern to gain detail and process it all down to a file size smaller than the native sensor output, but with more detail than an image from a regular Bayer sensor.

That is an option, however I am not sure it is the best one. A couple of things here. For one, people who have never done much processing with, say, mRAW or sRAW from a Canon camera don't realize how limited it is compared to a true RAW file. Neither mRAW nor sRAW are true raw images...they are far more like a JPEG than a RAW, in that the camera "bins" pixels (via software) and produces a losslessly compressed but high-precision 14-bit YCbCr image (JPEG is also a form of YCC, only it uses lossy compression). When I first got my 7D, I photographed in mRAW for a couple weeks. I liked the quality of the output, it was sharp and clear...but after editing the images in LR for a while, I realized that the editing latitude was far lower. I couldn't push exposure as far without experiencing undesirable and uncorrectable shifts in color, getting banding, etc. The same went for white balance, color tuning, vibrancy and saturation, etc. Without the original digital signal that can be reinterpolated as needed without ANY conversion and permanent loss of precision, you lose editing capabilities.

A 200mp sensor that uses hardware pixel binning sounds cool, and so long as you expose perfectly every time the results would probably be great. But if you need or want that post-process exposure latitude (which, as dynamic range has moved well beyond the 8-10 stops of a modern computer screen, is almost essential regardless of any other reasons you may want it), the only way to get the same editing latitude as a 50mp RAW would be to have the 200mp image in true RAW form as well. There is only one RAW, and any processing a camera does to bin or downsample will eliminate the kind of freedom we have all come to expect when using a DSLR these days.

Second, I guess I should also mention...there is an upper limit on how much you can resolve with a sensor and still keep it reasonably priced. If we take a perfect f/4 lens, for example, you have an upper limit of 173lp/mm as far as the lens goes. That assumes optical aberrations contribute approximately nothing to blur, and that it is all caused by diffraction. I would say that Canon's 500mm f/4 II and 600mm f/4 II, as well as probably the 300mm f/2.8 II and 400mm f/2.8 II lenses, fall into this category. In other words, not many lenses actually produce truly diffraction-limited results, or at least get close enough that they might as well be perfect, at f/4.

The question is...what kind of sensor would it take to actually resolve all 173lp/mm from a total system spatial resolution standpoint? You mention a 200mp sensor as being the likely upper limit. From a cost standpoint ten years from now, that might be the case...but it would still be woefully insufficient to fully realize the potential of a perfect f/4 lens @ f/4. Theoretically speaking, system resolution only asymptotically approaches the resolution of the weakest component...so you could never actually achieve 173lp/mm exactly. You can only approach it...however, assuming we basically get there...172.9lp/mm. To really get there...you would need no less than a 650mp APS-C sensor!! In terms of FF, that would be a 1.6 GIGAPIXEL sensor, 49824 pixels wide by 33216 pixels tall!! That would be a roughly 10 GIGABYTE 16-bit TIFF file, assuming you could actually import the thing! :D
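
Here is roughly where those numbers come from, as a sketch - note the 4x oversampling factor is just reverse-engineered from the 723nm pitch figure, so treat it as an assumption rather than a derivation:

Code:
lens_lpmm = 173.0                    # perfect f/4 lens at MTF50 (from the earlier posts)
sensor_lpmm = 4 * lens_lpmm          # assumed oversampling factor implied by the 723nm figure
pitch_mm = 1.0 / (2 * sensor_lpmm)   # ~0.000723 mm, i.e. ~723 nm

apsc_w, apsc_h = 22.3, 14.9          # mm
ff_w, ff_h = 36.0, 24.0              # mm

apsc_mp = (apsc_w / pitch_mm) * (apsc_h / pitch_mm) / 1e6   # ~636 MP, the ~650mp ballpark
ff_px_w, ff_px_h = ff_w / pitch_mm, ff_h / pitch_mm         # ~49824 x ~33216 pixels
ff_mp = ff_px_w * ff_px_h / 1e6                             # ~1655 MP, i.e. ~1.6 gigapixels

tiff_gb = ff_px_w * ff_px_h * 3 * 2 / 1e9                   # 3 channels x 16 bits: ~10 GB
print(round(pitch_mm * 1e6), round(apsc_mp), round(ff_mp), round(tiff_gb, 1))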

Such a sensor would really be pushing the limits, as well, and probably wouldn't even be physically possible. The pixel pitch of such a sensor would be around 723 nanometers (0.723µm)!! The physical size of the photodiode on a 180nm process would probably be around 350nm...which is well into the ultraviolet spectrum!! Perhaps, with subwavelength technology, we might be able to capture the light...I don't know all that much about that field...however I can't imagine it being cheap. And on top of the cost of making pixels that small in the first place! (That is nothing to say of the noise or dynamic range at that density...I can't imagine full-well charge capacity being high enough to be very useful at such a small pixel pitch.)
 
Upvote 0

jrista

wickidwombat said:
You should post your birdy pics again, they help explain. However it would also be good to show the same lenses shot on a FF, say a 5Dmk3, for comparison.

Yeah...I'll post those images again. I may just have to rent the 5D III and a 600mm lens as well, and produce some examples with the 600mm on both the FF and APS-C (at the same distances.)
 
Upvote 0

dr croubie

Very nice reading for us nerdly types. I know not everyone around here is so inclined, so for a real-world example just read this, maybe not even the whole thing, just the bit halfway down to the bottom.

The same Tamron lens scores better on the D800E than on the 5D3, because more pixels mean better measured MTF. (I know he didn't test at allegedly "diffraction limited" apertures, but at least the bit about the denser sensor works IRL.)
 
Upvote 0
Here is my contribution. I still maintain FF delivers noticeably sharper images than 1.6 crop with current tech (who knows what future tech will bring; however, apply the same tech advances to FF that you apply to crop and FF will still be ahead). However, I believe that the law of diminishing returns will apply soon, and that in reality you won't be able to see the difference unless seriously pixel peeping.

Here are the comparison shots I did for another thread:
5Dmk3 + 300 f/4L IS + Canon 2x mk3
vs
EOS-M + 70-200 f/2.8L IS II + Canon 2x mk3

Both at f/8, shot on a tripod in Live View and manually focused.

I think we can all agree the 70-200 is a sharp lens with lots of resolving power;
the 300 f/4L IS is a much older optical design.

However, the FF combo is noticeably sharper.

These are 100% center crops.
 

Attachments

  • 5d3-600f8.jpg (301.6 KB · Views: 3,463)
  • EOS-M 400f8.jpg (252.1 KB · Views: 3,440)
Upvote 0

jrista

wickidwombat said:
Here is my contribution. I still maintain FF delivers noticeably sharper images than 1.6 crop with current tech (who knows what future tech will bring; however, apply the same tech advances to FF that you apply to crop and FF will still be ahead). However, I believe that the law of diminishing returns will apply soon, and that in reality you won't be able to see the difference unless seriously pixel peeping.

Here are the comparison shots I did for another thread:
5Dmk3 + 300 f/4L IS + Canon 2x mk3
vs
EOS-M + 70-200 f/2.8L IS II + Canon 2x mk3

Both at f/8, shot on a tripod in Live View and manually focused.

I think we can all agree the 70-200 is a sharp lens with lots of resolving power;
the 300 f/4L IS is a much older optical design.

However, the FF combo is noticeably sharper.

These are 100% center crops.

I don't disagree, actually. When it comes to the FF vs. APS-C argument, assuming you compose the scene the same (i.e. get closer or use a longer lens to achieve the same subject size in the frame), the higher pixel COUNTS of most FF sensors, along with the better performance of each pixel thanks to their larger size, definitely result in better IQ. This argument is better made in terms of pixels on subject rather than pixel density. Regardless of how big (or small) the pixels are, if you get more of them on the subject, and those pixels are better, then your subject overall will end up looking better. Assuming the same pixel performance between a 22.3mp FF and an 18mp APS-C (same amount of noise for any given exposure and ISO), if you frame the same subject the same size in the frame, the FF sensor should look about 24% better. It has 24% more pixels on the subject.

Now, assuming cost is no object, one can always pick up a 5D III, slap on a 600mm lens, and go zipping around photographing birds, wildlife, sports, air shows, whatever tickles your fancy and get better results than a 7D with a 400mm lens. You'll get roughly the same FF-effective FOV as the 600mm, but the larger, newer, better pixels of the 5D III, along with the fact that there are more of them in total, will just blow the 7D away.

The benefit of a high density APS-C really only comes into play when you can't afford that $13,000 600mm lens, meaning even if you had the 5D III, you could still only use that 400mm lens. You're in a focal-length-limited scenario now. It is in these situations, where you have both a high density APS-C and a lower density FF body, that something like the 18mp 7D or a 24mp 7D II really shines. Even though their pixels aren't as good as the 5D III's (assuming there isn't some radical new technology that Canon brings to the table with the 7D II), you can get more of them on the subject. You don't need to crop as much on the high density APS-C as with the lower density FF. On a size-normalized basis, the noise of the APS-C should be similar to the FF, as the FF would be cropped more (by a factor of 1.6x), so the noise difference can be greatly reduced or eliminated by scaling down.

I challenge you to re-do your test with the 5D III and EOS-M. However, this time use the 300mm on both cameras, with the cameras at the same distance to the subject. The EOS-M should end up looking sharper on a size-normalized basis (either scale its image down to a crop of the same area as the 5D III...or scale a matching crop of the 5D III up to the EOS-M image.) If you use the 300mm on both, and frame the subject in-camera the same for both, the 5D III again should end up being the winner, because it gets more pixels on subject.
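
To put a rough number on the pixels-on-subject point, here is a quick sketch - the 5D III and EOS-M pixel pitches (roughly 6.25µm and 4.3µm) are my own assumed figures, not measured values. With the same 300mm lens at the same distance, the subject covers the same physical area on both sensors, so pixels on subject scales with the inverse square of the pitch:

Code:
pitch_5d3_um = 6.25    # assumed: 22.3mp full frame (36mm / 5760 px)
pitch_eosm_um = 4.3    # assumed: 18mp APS-C, same pitch class as the 7D

# Same lens, same distance: the subject's image is the same physical size on both
# sensors, so pixels-on-subject goes as 1 / pitch^2.
ratio = (pitch_5d3_um / pitch_eosm_um) ** 2
print(f"EOS-M puts ~{ratio:.1f}x more pixels on the subject")   # ~2.1x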
 
Upvote 0
jrista said:
Such a sensor would really be pushing the limits, as well, and probably wouldn't even be physically possible. The pixel pitch of such a sensor would be around 723 nanometers (0.723µm)!! The physical size of the photodiode on a 180nm process would probably be around 350nm...which is well into the ultraviolet spectrum!! Perhaps, with subwavelength technology, we might be able to capture the light...I don't know all that much about that field...however I can't imagine it being cheap. And on top of the cost of making pixels that small in the first place! (That is nothing to say of the noise or dynamic range at that density...I can't imagine full-well charge capacity being high enough to be very useful at such a small pixel pitch.)

Actually, we would be using BSI tech with such a sensor for sure. Fabrication process would probably be better than 180nm as well...maybe down to a 64nm process by then. So, assuming BSI, such a sensor could be barely viable...but there would still be the full-well capacity issues, and dynamic range issues, and noise issues, and read-out performance issues (1.6 GIGAPIXELS...assuming similar scaling in image processing chips...we might get...what...1fps?!?)

Cheers! :)
 
Upvote 0
hjulenissen said:
I believe that the "diffraction limit" or Rayleigh criterion is a somewhat arbitrary criterion where two impulses can be resolved at a certain contrast. There is nothing fundamental about this limit AFAIK.

True, it is. It is explicitly described as:

Rayleigh criterion: Imaging process is said to be diffraction-limited when the first diffraction minimum of the image of one source point coincides with the maximum of another.

I am trying to correlate the resulting image from a DSLR exposed at Rayleigh to how well a viewer of that image at its native print size could resolve detail at an appropriate viewing distance, hence the reference to vision. In and of itself, Rayleigh is not a constant or anything like that. The reason MTF 80, 50, 10 (really 9%, Rayleigh) and 0 (or really just barely above 0%, Dawes') are used is that they correlate to specific aspects of human perception regarding a print of said image. MTF 50 is generally referred to as the best measure of resolution that produces output we consider well resolved...sharp...good contrast and high acutance. MTF10 would be the limit of useful resolution, and does not directly correlate with sharpness or IQ...simply the finest level of detail resolvable such that each pixel could be differentiated (excluding any effects image noise may have, which can greatly impact the viability of MTF10, another reason it is not particularly useful for measuring photographic resolution.)

hjulenissen said:
If you use your camera to shoot images of stars (impulse-like), I believe that you can resolve information far beyond the Rayleigh criterion. If your camera had very little noise and/or high-frequency details had very high contrast, then I believe that good deconvolution could resolve beyond Rayleigh for general images.

This is true. Since stars are bright points of light on a very deep, dark background, assuming low noise, you can resolve stars down to Dawes' limit...which is MTF 0. I touched on that in the first post of this thread. The only real use for assuming MTF0 that I know of is the detection of multi-star stellar systems. At Dawes' limit, Airy discs are barely separated, much less so than at Rayleigh, but enough that they affect the presentation of the Airy disc. Study of star Airy discs, with an understanding of how diffraction from multiple very close points of light interacts, allows astronomers to detect binary, tertiary, and even planetary systems.

From an artistic photographic perspective, resolution at Dawes' limit is generally useless. Outside of astrophotography, where any meaningful light comes from otherwise widely spaced point sources, one could never see the difference between points so closely spaced, nor observe Airy disc shapes...it would all just be one big blur. :p

hjulenissen said:
Instead of treating these rules-of-thumb as absolute limits, I think it makes more sense to treat them as rules-of-thumb: as your sensel density approaches x and your aperture is y, it is going to be increasingly difficult to obtain substantially more detail.

I'm not proposing these as absolutes that one should take into account in their day to day photography. I was simply trying to prove the case that, at least for the foreseeable future, continued increases in sensor resolution still provide value, and that even in diffraction-limited scenarios, a higher density sensor WILL produce better results than a lower density sensor.
 
Upvote 0
wickidwombat said:
....here is the comparison shots i did.... the FF combo is noticably sharper...
Yes, but could you convince anyone that they were real?

Jrista, nice post. In an ideal world, we'd all be shooting with the world's best equipment. But as mentioned in one example above, due to financial constraints, many people are focal length limited or shooting with crop-sensor cameras. So it's interesting to read about the positives of increased megapixels.

If people like you are taking a real interest in how some aspects of the 7D II sensor might perform, I hope the Canon engineers are taking it seriously too. Wouldn't the world be an interesting place if the 7D II had a spectacularly good sensor that rivalled some FF cameras at low ISOs?
 
Upvote 0
Can jrista take account of the heavy AA filters on Canon sensors in his calcs or have I missed something?

On pentaxforums you will see comparisons between the K5-II and the filterless K5-IIs. Both have the same 16MP sensor, but the -IIs is a whole lot sharper and is claimed to be equivalent to a filtered 24MP camera.

Does this matter in the real world? Well, yes, maybe, if you want to use 100% crops.

Will Canon introduce something like the D7100 in their well-trailed upcoming 7D2, 70D, 700D series?

Somehow doubt it.

Excellent stuff jrista - thanks for posting.
 
Upvote 0

dtaylor

wickidwombat said:
here is the comparison shots i did for another thread
5Dmk3 + 300f4L IS + Canon 2X mk3
vs
EOS-M + 70-200 f2.8L IS II + Canon 2X mk3

While I agree in principle that, out of camera and all other things equal, FF shots are sharper, you've got way too many differences in this test.

But even here, if you sharpen the crop sample they look identical. Sharpening is not an unlimited good, but as long as the gap between component A and B is below a certain amount, post processing can eliminate the gap. This is true for FF v. crop at low ISO, but not at high ISO.

wickidwombat said:
I think we can all agree the 70-200 is a sharp lens with lots of resolving power;
the 300 f/4L IS is a much older optical design

For testing purposes you can't make those kinds of assumptions. How do you know that your copy of lens A is sharper than your copy of lens B when used with your copy of teleconverter C at aperture D? Eliminate all relevant differences in the test to isolate the variable you're testing for.

But again I agree with the point. Out of camera, all other things equal, FF is sharper.
 
Upvote 0
Can I make a little objection?
You're talking about resolution, but what about the light sensitivity?
Is it right that doubling the MP would halve the amount of light every pixel gets to see?

Or is the increase in megapixels always connected to a higher amplification per pixel (which would increase noise)?

This is what worries me when talking about a 24MP APS-C sensor.

Sorry for Offtopic!
 
Upvote 0
I need to support thewaywewalk:

Though I completely support jrista in arguing that increased pixel density is always a resolution advantage (Zeiss have also published two excellent papers on that subject), there are other disadvantages to smaller pixels.

Reduced energy to each pixel, leading to a necessary increase in signal amplification, again leading to increased noise, is one of these disadvantages. Fortunately technology is constantly improving in that area, but at present it is a practical tradeoff.
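
The shot-noise half of this tradeoff is easy to sketch (a toy model only, ignoring read noise and amplification, which are exactly the practical caveats above; the illumination level is an arbitrary assumption): halving the pixel area halves the photons per pixel, so per-pixel SNR drops, but the total light over any fixed patch of the image is unchanged.

Code:
import math

photons_per_um2 = 500.0    # assumed illumination level, photons per square micron

for pitch_um in (4.3, 3.7):
    per_pixel = photons_per_um2 * pitch_um ** 2    # mean signal per pixel
    snr_pixel = math.sqrt(per_pixel)               # shot-noise-limited SNR = signal / sqrt(signal)
    snr_per_mm2 = math.sqrt(photons_per_um2 * 1e6) # any fixed 1 mm^2 patch collects the same total light
    print(f"{pitch_um}um pixels: per-pixel SNR {snr_pixel:.0f}, per-mm^2 SNR {snr_per_mm2:.0f}")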
 
Upvote 0

RGF

hjulenissen said:
yyz said:
Reduced energy to each pixel, leading to a necessary increase in signal amplification, again leading to increased noise, is one of these disadvantages. Fortunately technology is constantly improving in that area, but at present it is a practical tradeoff.
How much of a practical disadvantage does this give the Nikon D800 vs the D600?

http://www.dxomark.com/index.php/Cameras/Compare-Camera-Sensors/Compare-cameras-side-by-side/(appareil1)/834|0/(brand)/Nikon/(appareil2)/792|0/(brand2)/Nikon

Dynamic range->print

-h

This leads me to a question to which I have not yet received a satisfactory answer.

Consider an exposure at ISO 800: why is it that we get better results by setting the ISO to 800 in camera (amplification within the camera via electronics - analog?) versus taking the same picture at ISO 100 and adjusting the exposure on the computer? Of course, I am talking about a raw capture.

In both cases the amount of light hitting the sensor will be the same, so the signal and S/N will be the same(?), but amplifying the signal in the camera via electronics seems to give a cleaner image.

Thanks
 
Upvote 0
Some interesting points here, though not all of them are 100% based on optical reality (which has a measurable match in optical theory to 99+% most of the time, as long as you stay within Fraunhofer and quantum limits)

Rayleigh is indeed the limit where MTF has sunk to 7% (or 13%, depending on whether your target is sine or sharp-edge, OTF) - a point where it is very hard to recover any detail by sharpening if your image contains any higher levels of noise. It's hard even at low levels of noise. And Rayleigh is defined as: "When the peak of one point's maximum coincides with the first Airy disk null of the neighboring point".

Consider again what this means.
You have two points, at X distance p-p. The distance you're interested in finding is where they begin to merge enough to totally mask out the void in between. The Rayleigh distance gives "barely discernible differentiation": you can JUST make out that there are two spots, not one wide spot.

But this does not make Rayleigh the p-p distance of the pixels needed to register that resolution; to register the void in the first place you have to have one row of pixels between the two points. That means that Rayleigh is a line-pair distance, not a line distance. If Rayleigh is 1mm and the sensor 10mm, you need to have 20 pixels to resolve it.
 
Upvote 0
So, the real "optical limit in a perfect lens" is:
(For green light, 0.55µm - at F4.0)
1.22 x F4.0 x 0.55µm = 2.684µm is a line PAIR. >>> The pixels have to be 1.34µm

That gives 372lp/mm, or
(24mm x 16mm) / (0.00134mm)^2 = 213MP on an APS sensor (x2.56 = ~550MP FF).
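
That calculation as a quick sketch (same assumptions: 0.55µm green light, F4.0, the 1.22 factor treated as a line pair, and a 24x16mm APS sensor):

Code:
wavelength_um = 0.55
f_number = 4.0

pair_um = 1.22 * f_number * wavelength_um    # 2.684 um per line pair
pixel_um = pair_um / 2.0                     # 1.342 um pixels
lpmm = 1000.0 / pair_um                      # ~373 lp/mm

aps_mp = (24000.0 / pixel_um) * (16000.0 / pixel_um) / 1e6   # ~213 MP on 24x16mm
print(round(lpmm), round(aps_mp), round(aps_mp * 2.56))      # 373 213 546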

This is for F4.0 - and that seems to be a reasonable aperture to choose. Lower aperture values are EXTREMELY seldom diffraction limited in normal photographic lenses. Higher aperture values often are, at least in the center of the image field.

This is easily verifiable in reality, by using a camera like the small Pentax Q (which has 1.55µm pixels) with a Canon EF adapter. Many current Canon lenses outresolve the little Pentax sensor (giving moiré and aliasing effects) at F4.0. You can also verify it by putting the lens in a real traversing-slit MTF bench, some of which have upper image formation resolution limits well below 1µm.
...........

A couple of counter / reinforcing arguments for higher resolution (no noise involved, yet...!):
Reinforcing:
1) The actual resolution of a Bayer sensor, for a randomly oriented image detail, is roughly what you would get from a pixel pitch sqrt(2) larger. Not all lines or details line up perfectly with "pixel pairs". This increases the needed MP by a factor of two, since spatial resolution has to be sqrt(2) higher.
2) As long as RN + downstream electronic noise is kept low (or you have a hardware-level binning scheme), image detail ACCURACY (not resolution) continues to increase measurably and visibly for about one further twofold increase in MP past the theoretical limit. This is maybe the most important part - for me. You get more ACCURATE detail, not just "more" detail.

counters:
1) Lenses are very rarely truly diffraction limited at - or below - F4.0. The best lens we've ever measured - that can be used on a FF sensor - has a "loss factor" of about 1.1. This means that diffraction + aberration losses can be approximated by [diffraction(actual f/# x 1.1)]. This is valid for large apertures if the lens is still considered sharp (like an 85L at F2.8: very sharp, but definitely not "perfect"). Add in global contrast loss factors too.
2) Sharpness is also limited by shutter speed and vibration in various ways (at some shutter speeds with lighter lenses, the shutter actually induces more vibration into the image than the mirror-slap...!).
3) Sharpness is also limited by subject movement / shutter speed.
4) Sharpness is often limited by the NEED to have a deeper DoF, i.e smaller aperture - more diffraction.
.....................

From a purely practical PoV (well, I do quite a lot of actual shooting too...) I generally say that a factor of two times more MP than the largest presentation size you need is optimal. This includes both the electrical and the optical side of the equation.

If you NEVER use anything larger than 10MP output sizes, 20MP is good enough. Then there's very little actual - practical - gain to be had to go to 30 or 40MP (or higher...). This has to do with practical noise considerations.
(And then I'll immediately contradict myself - this isn't valid for FF bodies, if they're less than 16-18MP. At this point, so many lenses are so much sharper than what the camera can accurately resolve that you get aliasing and moire problems very easily...)
 
Upvote 0
thewaywewalk said:
Can I make a little objection?
You're talking about resolution, but what about the light sensitivity?
Is it right that doubling the MP would halve the amount of light every pixel gets to see?

Or is the increase in megapixels always connected to a higher amplification per pixel (which would increase noise)?

This is what worries me when talking about a 24MP APS-C sensor.

Sorry for Offtopic!
Hi,
IMHO, I'm not worried about a new 24MP APS-C sensor... I think it should perform better than the current 60D 18MP APS-C sensor.

I've got a Canon G15... a 1/1.7" (7.44mm x 5.58mm) 12MP (4000 x 3000) sensor, and the result (noise performance) at ISO 1600 is not that far behind the 60D's 18MP APS-C sensor. If an APS-C sensor were based on the G15 pixel size, it would be a 90+MP APS-C sensor, so a new 24MP APS-C should not be that bad.
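
Roughly the arithmetic behind that 90+MP figure, as a sketch (the ~1.86µm pitch is just the quoted 7.44mm width divided by 4000 pixels):

Code:
g15_pitch_um = 7.44 * 1000 / 4000    # ~1.86 um, from the quoted 7.44mm / 4000px width
apsc_w_mm, apsc_h_mm = 22.3, 14.9

mp = (apsc_w_mm * 1000 / g15_pitch_um) * (apsc_h_mm * 1000 / g15_pitch_um) / 1e6
print(round(mp))                      # ~96 MP at G15 pixel density on APS-C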

The attached is the 100% crop shot using the G15 and 60D with the below settings:
ISO 1600, 1/6s and f/2.8, NR Standard (the G15's NR at high ISO cannot be turned off when shooting RAW). I processed both using the same settings in DPP (Faithful, sharpening all set to 0 and NR all set to 0) and exported to PS as 16-bit TIFF. Then I cropped both in PS, copied, pasted, and saved as JPEG.

Have a nice day.
 

Attachments

  • G15vs60D.jpg (98.3 KB · Views: 1,085)
Upvote 0