First - you can get more spatial detail from adding TCs, even to already-slow optics. This is a $60 Bayer-sensor webcam, not some high-end astronomical sensor. Pixel size is 5.6 microns - about the same as the 40D. f/30 on the left, f/15 on the right. According to you, the f/30 shot couldn't possibly be better, but it is.
I've highlighted where you've misunderstood me. That's not my argument, and it's probably where we've made a disconnect. First off, "better" is such a subjective and broad term that it's a terrible word for this conversation. Obviously magnifying a subject makes it "better" in the sense that you're recording more detail of that subject. Again, that is not my argument. My argument is that neither the lens nor the sensor records at a "higher spatial resolution" when magnification increases (which is what it sounds like you are saying with 1.4x TC's == 141mp; increasing megapixel count in the same sensor area increases the spatial resolution the system can record, but I'm arguing that is not what happens when you tack on teleconverters). They are recording a larger subject at the same (or slightly lower, given the math on total system blur) spatial resolution.
To put it another way: the IMAGE RESOLUTION (the width and height of your subject in pixels) increases, for the same SPATIAL RESOLUTION.
Second, and this is going to be a little hard for you to accept, but it's fact, so I suggest you listen carefully. You're thinking of a TC as a device that increases focal length and decreases aperture. First of all, it doesn't decrease aperture: f-stop = focal length / aperture diameter. A TC can be thought of as increasing focal length while keeping the aperture the same, thus increasing the f-stop.
Sure - f-stop, relative aperture, same thing. I know the absolute aperture, the physical diameter in mm, stays the same when increasing focal length with TCs. However, due to the increased focal length, diffraction is magnified right along with everything else. The effects of a TC on diffraction are real, regardless of how the sensor may appear when looking through the TC.
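To put rough numbers on diffraction being magnified: the minimum resolvable spot (Rayleigh) scales linearly with f-number, so each 1.4x TC enlarges it by 1.4x right along with the subject. A quick sketch - the 550 nm wavelength and the f/4 base lens here are just illustrative assumptions:

```python
# Sketch of how a TC magnifies diffraction along with the subject.
# ASSUMPTIONS: 550 nm green light; a hypothetical f/4 base lens with
# one and then two stacked 1.4x teleconverters.

WAVELENGTH_MM = 0.000550  # 550 nm green light, expressed in mm

def rayleigh_spot_um(f_number):
    """Minimum resolvable spot (Rayleigh): 1.22 * wavelength * N, in microns."""
    return 1.22 * WAVELENGTH_MM * f_number * 1000  # mm -> microns

for label, n in [("bare lens", 4.0), ("+1.4x TC", 5.6), ("+1.4x +1.4x", 8.0)]:
    print(f"{label:12s} f/{n:g}: spot ~{rayleigh_spot_um(n):.1f} um")
```

Doubling the f-number doubles the spot size on the sensor - which is exactly the sense in which a TC "magnifies diffraction".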
However, and this is important, this is only true from the camera's point of view. From the lens' point of view, its focal length, aperture and f-stop remain the same. The TC is, after all, mounted behind it. From the lens' point of view, the TC has changed the camera. How, you might ask? By shrinking the sensor and the pixels on it. If you don't believe me, try this little experiment yourself.
Sure, the size of the pixels, as well as the size of the whole sensor, shrinks when viewed through the TC...but the density of the sensor does not increase for the same size. The former changes magnification; the latter would change the spatial resolution of the system. We are on the same page here, and I think I described this effect well when I described magnification as being the same as recording an image with a much larger (in physical size) 141mp sensor and cropping out the center 18mp. IMAGE resolution of the subject increases; SPATIAL resolution of the system remains the same or decreases with a TC. The image you linked does a good job demonstrating that everything viewed through the TC shrinks in size. If it were a sensor, the whole sensor...not just the pixel pitch...would shrink relative to whatever was being projected through the lens. I call that magnification. The actual width and height of the sensor in pixels have not changed. The distance between pixels has not changed. So spatial resolution is the same. (Again, I'm not sure we're on different pages here...I think maybe we have been discussing the same thing from different angles.)
The point is, increasing focal length while preserving aperture (and thus increasing f-stop) and decreasing pixel size are equivalent. Here's an example of that:
Not disputing that. It again boils down to the comparison extra_magnification == more_megapixels, which sounds like extra_magnification == increased_spatial_resolution...assuming the rest of the system does not change. That comparison sounds like adding a TC increased the spatial resolution from 116lp/mm to 315.5lp/mm. Even if we do assume that DSLR sensors are capable of resolving fine detail at MTF 9%, at f/16 for a 200 + 1.4x + 1.4x on a 7D, the total system blur (ignoring any additional aberrations from the TC's, of which there probably aren't any at a physical aperture of f/8, and assuming about twice the 7D's pixel pitch for the blur of the sensor itself) is sqrt(10.8^2 + 8.4^2) = 13.7um, which translates into a system spatial resolution of about 73lp/mm. Let's ignore the nature of Bayer sensors and assume the 7D is capable of resolving 4.3 micron Airy discs. Our system blur becomes sqrt(10.8^2 + 4.3^2) = 11.6um, or a system spatial resolution of about 86lp/mm. Decreasing pixel size relative to the subject could also be termed increasing the subject size relative to the pixel. Either way, that's magnification...the increase of object dimensions, or image resolution...not an increase in spatial resolution. The only reason you are resolving more detail is because the subject is larger...that's it.
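For reference, here's that blur math as a small script. The 10.8um lens blur and the 8.4um/4.3um sensor blur figures are the assumed inputs from the paragraph above:

```python
# Root-sum-square system blur, as used in the figures above.
# ASSUMED INPUTS: 10.8 um lens (diffraction) blur at f/16; 8.4 um and
# 4.3 um for the two sensor blur estimates.
import math

def system_blur_um(lens_blur_um, sensor_blur_um):
    """Combine independent blur sources in quadrature."""
    return math.sqrt(lens_blur_um**2 + sensor_blur_um**2)

def system_lp_per_mm(blur_um):
    """One line pair per blur diameter: 1000 um/mm divided by the blur."""
    return 1000.0 / blur_um

for sensor_blur in (8.4, 4.3):
    blur = system_blur_um(10.8, sensor_blur)
    print(f"sensor blur {sensor_blur} um -> system blur {blur:.1f} um, "
          f"~{system_lp_per_mm(blur):.0f} lp/mm")
```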
Finally, if you want to see the effect of diffraction while shrinking pixel size, have a look at the link below. If you prefer, you can think of these as APS-C sensors with 8MP, 16MP, 32MP, and 64MP, all at f/11. The one on the bottom is for reference when using a larger aperture that isn't diffraction-limited. As you can see, even at f/11, resolving power goes up in each case, by ever-decreasing amounts (the so-called law of diminishing returns), just as theory would indicate. I've tested this all the way to oblivion (1.1 micron pixels at f/11), and the MTF 0 spatial cutoff formula I gave you from Wikipedia matches well with real-world testing.
I'm not really sure what those images demonstrate, outside of the fact that the 6.4 micron pixels are simply incapable of resolving enough detail in the first place. I've never claimed that increasing pixel density doesn't help increase spatial resolution; I've only argued that beyond a certain point...a sensor with roughly 2x the resolution of the optical image it's recording...increasing resolution stops having a decent cost/value ratio. I don't generally consider f/11 to be severely detrimental to IQ. I consider f/22 and beyond to be detrimental to IQ, regardless of sensor density.
As for how good our optics are, I tested my version 1 70-200/2.8L IS at different apertures by mounting telescope eyepieces to it and trying to split double stars. Essentially, this is a test of the Rayleigh criterion (MTF 9). I found that it isn't diffraction-limited at f/2.8 but, amazingly, it is diffraction-limited at f/4 - I could split a double at exactly the Rayleigh separation angle for a 50mm aperture with that lens, given sufficient optical magnification. I think you'd agree that we have several lenses in the line-up that are better at f-stops faster than f/4 than the version 1 70-200 is.
I do agree; I've mentioned many lenses that I think are capable of stellar optical characteristics at maximum aperture. Again, that's not really the point of my argument.
For a little more evidence of that, compare the Jupiter shot I posted above, taken with 125mm of aperture in a diffraction-limited f/15 Maksutov-Cassegrain telescope, with one posted yesterday, also taken with 125mm of aperture, this time in the form of a wide-open 500/4. The detail retained is very, very similar, providing further evidence that the 500/4 is diffraction-limited wide open.
Tough to evaluate this stuff, since the images were the result of some extensive stacking. Stacking completely changes the game and allows things like superresolution and extreme noise reduction, pushing image resolution well beyond what is possible spatially with physical hardware. That's all a discussion for another day, and I doubt we would disagree much about the benefits of stacking and superresolution.
The take-home lessons are:
- We can extract more detail at finer resolutions than the MTF50 diffraction limit even with Bayer sensors with AA filters.
- We have optics that are diffraction-limited at f-stops much faster than f/8.
- Because of those two facts, we can make use of sensors much more densely packed than current 18MP APS-C sensors.
1. Still not sure about the first point. I'll make my arguments again below in the final quote.
2. Totally agree we have diffraction-limited lenses at f-stops faster than f/8. That was never a point of argument...I simply used f/8 in my prior posts because that was the aperture you mentioned using for your 200+1.4+1.4 setup to capture the moon. I never intended to portray that I thought f/8 was the first diffraction-limited aperture in any lens. I think, outside of some of Canon's top L-series glass starting around 135mm, most of their lenses seem to achieve "best" resolution (where optical aberrations balance with diffraction) somewhere around f/4 (a little less in some cases, a little more in others). Super-fast lenses, like the 50mm and 85mm f/1.2 or f/1.4 lenses, seem to achieve that balance around f/2.8-f/3.5; however, they are rather wide and don't magnify their subjects enough for it to matter in the context of this discussion.
3. Agreed that more densely packed sensors are not "bad". Agreed that they can help us oversample the optical resolution enough to eliminate sensor aberrations and capture a little more detail. Agreed that as we approach the "pixel pitch" of rods and cones in the 2° foveal spot of the human eye (where clear, high-detail vision occurs), 0.5um, our ability to resolve fine detail at lower contrast will increase. I'm not sure I agree that we can do that today at 9% contrast, although I'm happy to offer that we probably can resolve fine detail at a contrast level below 50%.
Let's work on that a bit. Let's do f/4 since we have several lenses that are diffraction-limited by then. Let's use green light (conservative). Let's be further conservative and use Rayleigh instead of MTF5 or MTF0.
MTF 9 = Rayleigh = 1/(1.22 * 0.000550mm * 4) = 373 lp/mm
We need roughly 3 pixels per line pair to resolve them on a Bayer sensor with AA filter.
3 * 373 * 22.3mm = 24,954 horizontal pixels
2/3 of 24,954 = 16,636 vertical pixels
24954*16636 = 415,134,744 pixels, or 415.1MP.
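For what it's worth, that arithmetic checks out. A quick sanity check, assuming (per the quote) 550 nm light at f/4, a 22.3mm-wide APS-C sensor, 3 pixels per line pair, and a 3:2 aspect ratio - the small differences from the quoted figures are just rounding:

```python
# Re-running the quoted arithmetic as a sanity check.
# ASSUMPTIONS (per the quote): 550 nm light, f/4, a 22.3 mm wide APS-C
# sensor, 3 pixels per line pair, and a 3:2 aspect ratio.

WAVELENGTH_MM = 0.000550
F_NUMBER = 4
SENSOR_WIDTH_MM = 22.3
PIXELS_PER_LINE_PAIR = 3

lp_per_mm = 1 / (1.22 * WAVELENGTH_MM * F_NUMBER)  # Rayleigh limit
h_pixels = PIXELS_PER_LINE_PAIR * lp_per_mm * SENSOR_WIDTH_MM
v_pixels = h_pixels * 2 / 3  # 3:2 aspect ratio
megapixels = h_pixels * v_pixels / 1e6

print(f"{lp_per_mm:.0f} lp/mm -> {h_pixels:.0f} x {v_pixels:.0f} "
      f"~= {megapixels:.0f} MP")
```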
Here is where your argument breaks down, at least as I am interpreting it. Let's get back to facts:
FACT: A 415mp APS-C sensor with dimensions of 24954x16636 DOES NOT EXIST. It never has existed, and will very likely not exist in the coming decades.
FACT: The 7D's 18mp APS-C sensor is certainly not capable of resolving 415mp worth of spatial resolution.
FACT: For a lens that resolves 373lp/mm, the sensor is the LIMITING FACTOR (from both a resolution standpoint and a contrast standpoint).
FACT: At 373lp/mm, the Airy disc (Rayleigh spot separation) is 2.7 microns.
FACT: An 18mp APS-C sensor like the 7D's has a minimum resolvable spot of 4.3 microns, assuming monochrome.
FACT: An 18mp APS-C sensor like the 7D's has a minimum resolvable full-detail spot of about 2x the pixel pitch, or 8.4-8.7 microns, assuming a Bayer array, low-pass filter, IR filters, etc.
FACT: The system blur of such a system would be about 8.9 microns.
FACT: The system spatial resolution of such a system would be about 112lp/mm.
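The last two facts are just the same root-sum-square blur math as before. A minimal check, assuming the 2.7 micron diffraction spot and 8.5 microns for the sensor (the midpoint of the 8.4-8.7 range):

```python
# The last two facts as arithmetic: quadrature blur combination.
# ASSUMPTIONS: 2.7 um diffraction spot (Rayleigh at f/4) and 8.5 um
# sensor blur (midpoint of the 8.4-8.7 um range).
import math

diffraction_um = 2.7
sensor_um = 8.5
system_um = math.sqrt(diffraction_um**2 + sensor_um**2)

print(f"system blur ~{system_um:.1f} um -> ~{1000 / system_um:.0f} lp/mm")
```

Note how little the 2.7 micron optical spot adds once it is combined with an 8.5 micron sensor blur - the sensor dominates the total.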
Equating the increase in subject detail that results from adding teleconverters to a lens (for a given sensor) with having a sensor with more megapixels in the same physical area presumes an increase in the spatial resolution of the sensor as a result of increased optical magnification, which is obviously impossible...sensors have a fixed resolution (both spatially and dimensionally).
So, I still don't understand your insistence on the formula TCs == denser sensor. At best, given the math, your 373lp/mm lens setup and 116lp/mm APS-C sensor boil down to 112lp/mm of system spatial resolution (and I'm still ignoring the fact that adding TC's has some drag on IQ, even though it may be minimal at f/4).
Here's a sample of the equivalent of 288MP:
Obviously, we have plenty of room between the real limits and the 18MP we have today.
And just in case you think I'm alone in thinking this, here's some information on gigapixel sensors from the guy that invented the type of sensors we use in our cameras:
I still don't see 288mp in that image. If you could show me an image that literally had the necessary dimensions of around 20756x13844, then I might have to change my mind. The moon image you linked has an image resolution of 1000x1500 pixels (1.5mp), and coming from the 7D, I am going to assume the original non-cropped version was 5184x3456 pixels (which is still 18mp). There is a serious disconnect in the thinking that more magnification equals increased system spatial resolution.

Even assuming the optics are capable of FAR superior spatial resolution (which at MTF 9% they are...I still dispute the idea that CFA Bayer sensors are even close to resolving detail at that level of contrast), the camera (being the sensor, low-pass filters, and any other filtration devices between the virtual image projected by the lens and the sensor) is going to severely limit the total system resolution you are capable of actually recording. What you actually record is what matters, regardless of how much spatial resolution exists in the virtual image the lens may be projecting at any level of contrast. The more you oversample the lens, the softer an image will appear at 100%, so while things may theoretically be "better", you require greater and greater downsampling to achieve a sharply detailed image (which is really the ultimate goal anyway, unless you have scientific aims). As of today, it is not possible to record 415mp, let alone 288mp, with a single sensor in a single shot using a DSLR. The closest thing I've ever seen to several-hundred-megapixel images of stellar bodies are mosaics of hundreds of shots of the moon, usually taken with telescopes with rather high light-gathering capabilities.
So, I don't disagree with you on every point. Yes, there is plenty of room for improvement, obviously (on both the lens and sensor fronts). But I strongly disagree with you on the point in your last quote...that TC == denser sensor. That's just plain and simply not based in fact.