You still have a very skewed idea, or simply bad terminology, in describing what you are actually experiencing with a TC, though, Lee Jay. Your previous argument in that other thread, that the virtual image of the sensor shrinks when it is observed by looking through the lens into the camera, is not indicative of what is really occurring. A teleconverter does not change how many megapixels you have, nor does it change the resolution of the lens.
No, but it has the same effect as doing either one.
The effects are different.
They are the same. Smaller pixels and longer focal length, both given the same aperture diameter, do the same thing. See for yourself:
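The angular-sampling equivalence being claimed here can be sketched numerically (this is an editorial illustration with made-up numbers, not whatever the original "see for yourself" pointed to):

```python
import math

ARCSEC_PER_RAD = math.degrees(1) * 3600  # ~206265

def arcsec_per_pixel(pitch_mm, focal_mm):
    """Angle of the scene subtended by one pixel (small-angle approximation)."""
    return (pitch_mm / focal_mm) * ARCSEC_PER_RAD

# Hypothetical numbers: 4 um pixels on a 300 mm lens.
baseline   = arcsec_per_pixel(0.004, 300)
half_pitch = arcsec_per_pixel(0.002, 300)  # pixels half the size, same lens
double_fl  = arcsec_per_pixel(0.004, 600)  # same pixels, 2x focal length (2x TC)

# half_pitch == double_fl: identical angular sampling per pixel. And if the
# aperture DIAMETER is held constant, the Airy disc subtends the same angle
# in both cases, so the diffraction-vs-sampling trade-off matches too.
```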
You are only thinking pixel size, which I guess is one way to look at it. The OUTPUT of the two systems is entirely different, though. In the case of a denser sensor, you get a more detailed image of a LARGER area of your subject. In the case of a less dense sensor combined with a TC, you get a more detailed image of a SMALLER area of your subject. The two are not the same, even if in an abstract context the arc seconds per pixel is equivalent. In terms of the actual product of the two systems, the higher density sensor is always the better system. Additionally, adding a TC does not increase "spatial" resolution, it increases "system" resolution, which is a different concept.
If you have an 18mp sensor and a 36mp sensor and use the same lens on both, switching from the 18mp sensor to the 36mp sensor doubles the pixel count over the entire area of the object being photographed (roughly a 1.4x linear gain in spatial resolution). (Let's assume for a moment that you have a perfect lens at a very wide aperture, so diffraction is not a problem.) On the other hand, adding a 2x teleconverter has the effect of enlarging the subject, such that a smaller area of that subject is being photographed at the same spatial resolution.
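The field-of-view half of this trade-off is easy to quantify (editorial sketch, illustrative APS-C-ish numbers that are not from either post):

```python
import math

def fov_deg(sensor_width_mm, focal_mm):
    """Horizontal angle of view for a given sensor width and focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

SENSOR_W = 22.3  # mm, roughly APS-C width

fov_bare = fov_deg(SENSOR_W, 300)  # same lens on either body: ~4.26 degrees
fov_tc   = fov_deg(SENSOR_W, 600)  # same body + 2x TC: ~2.13 degrees

# A denser sensor samples the FULL fov_bare field more finely; the 2x TC
# also samples more finely per degree, but only across half the field.
```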
That's a separate issue (FOV/vignetting) having nothing to do with resolving power.
If you are referring to optical system resolution, rather than spatial resolution, then I agree. However, you keep applying the units "lp/mm" to system resolution, which feels like a major conflation to me. Assuming the optical spatial resolution of the entire lens setup (original lens + TC) remains the same (which is generally impossible when adding a TC, as it reduces your RELATIVE aperture, which implicitly means the optical spatial resolution of THE ENTIRE LENS SETUP is reduced), the final system spatial
resolution will be lower than that of either the lens or the sensor, as the blurs of the individual components combine in quadrature (root-sum-square).
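The combine-in-quadrature rule being described can be sketched as follows (illustrative lp/mm figures, not the ones under debate; this is a common rule of thumb, not an exact optical result):

```python
import math

def system_lpmm(lens_lpmm, sensor_lpmm):
    """Combine component resolutions in quadrature (root-sum-square of the
    component blurs): 1/R_sys^2 = 1/R_lens^2 + 1/R_sensor^2."""
    return 1.0 / math.sqrt(1.0 / lens_lpmm ** 2 + 1.0 / sensor_lpmm ** 2)

# Illustrative numbers: an 80 lp/mm lens setup on a 100 lp/mm sensor.
combined = system_lpmm(80, 100)  # ~62 lp/mm: lower than EITHER component
```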
Yeah...that's the definition of MTF 0. It's the asymptote.
As we discussed in our last debate, the spatial resolution of whatever is projected by the lens, as well as the spatial resolution of the sensor, are pretty limited. If you use a TC or multiple TC's that reduce your aperture to f/8, then according to the laws of physics spatial resolution becomes limited (specifically to around 86lp/mm), which is WELL below the fixed luminance spatial resolution of pretty much any APS-C sensor these days.
f = 1/(λ × N) = 1/(0.00055mm × 8) ≈ 227lp/mm at MTF = 0. Using MTF=50, as you did above, is arbitrary and of little value in this context.
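Both figures in this exchange, the ~227 lp/mm cutoff and the ~86 lp/mm f/8 limit mentioned earlier, fall out of the standard diffraction MTF formula for an ideal circular aperture (green light assumed; the exact numbers shift with wavelength):

```python
import math

def diffraction_mtf(freq_lpmm, f_number, wavelength_mm=0.00055):
    """MTF of an ideal, aberration-free lens with a circular aperture.
    Contrast falls to zero at the cutoff f_c = 1 / (wavelength * N)."""
    fc = 1.0 / (wavelength_mm * f_number)
    x = freq_lpmm / fc
    if x >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

cutoff = 1.0 / (0.00055 * 8)  # ~227 lp/mm: the MTF-0 asymptote at f/8
mid = diffraction_mtf(86, 8)  # ~0.53: so 86 lp/mm sits near MTF 50 at f/8
```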
The notion that a consumer-grade camera can resolve anything at MTF ZERO is ludicrous.
Yes, it is the asymptote. It is also a purely theoretical construct. I read an interesting quote today:
In theory there is no difference between theory and practice. In practice there is.
You have to take into account the realities that exist in practice that don't exist in pure theory. In reality, the average consumer-grade camera couldn't record detail at MTF 0% as a strong enough signal for it to be differentiated from noise (photon noise). At MTF 9%, you would be hard pressed to know for certain what detail was noise and what wasn't; the two are going to interfere with each other a lot.
In the context of scientific astrophotography, the imaging devices used are orders of magnitude more expensive than a consumer-grade sensor. They are supercooled, have quantum efficiencies that surpass 80%, and S/N ratios that would make some of the Nikon D800 fanboys' eyes pop out of their skulls. The analysis of star Airy patterns at MTF 0% requires some pretty sophisticated, and incredibly expensive, equipment. It isn't valid in the context of discussions about consumer-grade gear that is barely reaching 50% Q.E. and has relatively atrocious S/N ratios.
The notion that a camera can usefully resolve anything at MTF 9% (Rayleigh) is also pretty ridiculous.
Except that we do so all the time, in astro stuff.
You need to back that up with some actual examples that are properly analyzed for MTF. MTF 0% means 0% contrast. At that point, you are literally analyzing the specific shape of the spot resolved for a point light source like a star to make EDUCATED GUESSES about the nature of that star. Is it a single star? Is it a binary star? Might it be a tertiary star system? Those analyses are also performed algorithmically by computers, and the stars need to be isolated against a dark backdrop, so the shape and waveform of the Airy disc that was resolved is as clear and separate from background noise as possible. It has no application in general "photography", where we are resolving a system of point light sources to create a continuous signal. Even in the case of hobbyist astrophotography, you are not resolving point light sources for a scientific purpose
...you are resolving stars (plural), nebulae, galaxies, novae, etc. to produce a photograph for aesthetic purposes.
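For reference on the "Rayleigh" figure being argued over: the Rayleigh criterion corresponds to roughly a 9% contrast dip between two overlapping Airy peaks, and the limiting separation depends only on aperture diameter and wavelength. A quick editorial sketch with illustrative numbers:

```python
import math

ARCSEC_PER_RAD = math.degrees(1) * 3600

def rayleigh_limit_arcsec(aperture_mm, wavelength_nm=550):
    """Rayleigh criterion: angular separation at which two point sources are
    'just resolved' (about a 9% contrast dip between the two Airy peaks,
    hence the MTF 9% figure in this thread)."""
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return theta_rad * ARCSEC_PER_RAD

# Illustrative: a 100 mm diameter objective, green light.
sep = rayleigh_limit_arcsec(100)  # ~1.4 arc-seconds
```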
Different contexts. And, therefore, different STANDARD systems by which we measure spatial resolution. I use MTF 50% because it is the industry-standard MTF that major products, like Imatest, use.
MTF 50% is of specific value because it IS STILL used today, and WILL CONTINUE to be used, as the standard benchmark for image resolution of meaningful sharpness, whether by a lens or a sensor.
"Meaningful image sharpness" is a meaningless term. People who espouse this ridiculous property also claim that reducing pixel count gets you a sharper image, which is impossible.
If no one ever complained about the sharpness of properly stabilized photos taken with a camera like the 7D, then I would agree with you. The simple fact of the matter is that once your sensor's spatial resolution starts to outresolve your subject, details DO appear less sharp than if the same photo, with the same lens, were taken with a sensor with larger pixels. It is an AESTHETIC, real-world thing, not a theoretical thing. It is a matter of perception, not statistical measurement. Statistically, no matter how you slice and dice it, the 7D resolves more, and has the capability to resolve detail as sharply as a sensor with larger pixels. Perceptually, the 7D tends to produce softer results in a non-normalized context (i.e. pixel peeping) than sensors with larger pixels.
Context, man. You have to discuss (and understand) things in context.
Spatial resolution is not increasing, magnification is increasing.
Same thing, since the optics (the lens) didn't change.
That is 100% factually incorrect. The optics DID change...you added a teleconverter.
I added it behind the lens. It doesn't change the performance of the lens at all.
Ok, now you are mincing words. Let's be specific and accurate here. The sensor, that tiny, wondrous little device sitting inside the mirror box of your camera... that is what is actually resolving the image projected by your "lens setup". It doesn't care whether the "original lens" remained unchanged. It cares about the entire "lens setup", which includes not only the "original lens" but also a "teleconverter". The entire "lens setup" is what matters in the context of the discussion at hand... the SPATIAL resolution of sensors, lens setups, and the optical system as a whole.
In the context of the SENSOR, if you add a TELECONVERTER to a LENS, the optics ABSOLUTELY DO CHANGE. The sensor doesn't sit between the lens and the TC... the sensor sits behind BOTH the lens and the TC. Let's stop playing games now.