You are seriously talking past each other now, and things are mixed up beyond belief.
Disregarding the real-world effects of a TC (increased reflection and absorption losses, decreased sharpness due to optical imperfections) from now on through this entire post: Yes, of course a TC magnifies the diffraction circle by exactly the same amount as the rest of the image. But that is also the point; the object-referred diffraction is already determined at the front of the optical system, by the entrance pupil (as long as we're within reasonably Gaussian systems; for microscopes and other applications with very high magnification you need to look at angular aperture instead of numerical aperture). A teleconverter will magnify both this object-referred diffraction and the target detail; a wide-converter will decrease the magnification of both. It varies the projected image magnification, not the angular, object-referred diffraction, which is what optically limits your target resolution.
In astro (which is a purely Gaussian-limited application with ordinary systems, with infinity-focus targets) this is extremely important, since the angular resolution in front of the lens is determined by the entrance pupil. NOTHING you do behind that can make things better, in any way.
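(For reference, here is that entrance-pupil claim in numbers, just a quick sketch of my own; the ~590nm wavelength is an assumption I picked because it matches the 2.9 and 5.8 micron figures used below. The Rayleigh criterion gives the object-side angular resolution from the entrance pupil diameter alone, so a 2x TC, which leaves the pupil at 100mm, changes nothing.)

```python
# Quick sketch (my numbers, not from the post): object-side angular
# resolution from the Rayleigh criterion, theta = 1.22 * lambda / D,
# where D is the entrance pupil diameter. The 590 nm wavelength is an
# assumption, chosen to match the 2.9 / 5.8 micron figures used below.
WAVELENGTH_MM = 590e-6  # 590 nm expressed in mm

def entrance_pupil_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

def rayleigh_angle_rad(pupil_mm):
    return 1.22 * WAVELENGTH_MM / pupil_mm

bare_lens = entrance_pupil_mm(400, 4)    # 400mm f/4 -> 100 mm pupil
with_2x_tc = entrance_pupil_mm(800, 8)   # same glass + 2x TC -> still 100 mm

print(rayleigh_angle_rad(bare_lens))     # ~7.2e-06 rad
print(rayleigh_angle_rad(with_2x_tc))    # identical: the TC sits behind the pupil
```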
Well, that is only true when diffraction, rather than the sensor, is the limiting factor in terms of spatial resolution. If, assuming an astro context, diffraction is only 5.8 microns (the 173 lp/mm spatial limit of an f/4 aperture) but your sensor uses 6.95 micron pixels (such as the 1D X), then switching to a sensor with 5.8 micron pixels (a hypothetical ~26mp sensor...a change in something behind the diaphragm) would indeed improve the detail and quality of the RAW image actually produced by the camera. Would it not?
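To spell those numbers out (a back-of-the-envelope sketch of my own; again, the ~590nm wavelength is an assumption that happens to reproduce the 2.9 micron / 5.8 micron / 173 lp/mm figures):

```python
# Back-of-the-envelope check (my numbers, not from the post itself).
# Assumes ~590 nm light, which reproduces the 5.8 micron / 173 lp/mm
# f/4 figures quoted above.
WAVELENGTH_UM = 0.59

def diffraction_line_pair_um(f_number):
    # Two Rayleigh spacings (1.22 * lambda * N) taken as one line pair.
    return 2 * 1.22 * WAVELENGTH_UM * f_number

def sensor_nyquist_lp_per_mm(pixel_pitch_um):
    # One line pair needs at least two pixels.
    return 1000.0 / (2 * pixel_pitch_um)

lp_f4 = diffraction_line_pair_um(4)
print(lp_f4, 1000.0 / lp_f4)           # ~5.8 um -> ~173 lp/mm at f/4

print(sensor_nyquist_lp_per_mm(6.95))  # 1D X pitch: ~72 lp/mm (sensor limits first)
print(sensor_nyquist_lp_per_mm(5.8))   # finer pitch: ~86 lp/mm (closer to the lens)
```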
Let me ask: do you believe adding a 2x TC to a 400mm f/4 lens (making it an 800mm f/8) is the same as using just the 400mm f/4 lens on a sensor with half the pixel pitch, in a general photographic frame of reference (vs. just the astrophotography frame of reference)? Or would you agree that using the 400mm f/4 lens with a sensor twice as dense will produce equally detailed output that encompasses a wider field of view (greater total area) than the TC setup?
In the former case, an 800mm f/8 lens on a FF sensor with, say, a 5.8 micron pixel pitch. Diffraction won't affect resolution enough to matter on that sensor, as the pixel is roughly the same size as the Airy disc radius.
In the latter case, a 400mm f/4 lens on a FF sensor with, say, a 2.9 micron pixel pitch. Again, diffraction won't affect resolution enough to matter on this sensor, as the pixel is roughly the same size as the Airy disc radius.
Suppose you use both setups to photograph a landscape of some kind...a small waterfall at some distance, and that the entire waterfall fits in the FOV of the 400mm lens. Would you agree that the 800mm f/8 5.8um setup would capture only 1/4 of the total area of the waterfall? Would you agree that the 400mm f/4 2.9um setup would not only capture the entire waterfall, but that it would also capture the same 1/4 area as the 800mm setup in nearly the same detail?
My first key point here is not so much that the 800mm f/8 setup is capable of reproducing that 1/4 area of the waterfall in high detail. I've never disputed that (I believe my post at #78 entirely agrees with you on that point, actually). My point is that the 800mm f/8 5.8um setup is capable of reproducing only 1/4 the area of the waterfall, while the 400mm f/4 2.9um setup is capable of reproducing the ENTIRE waterfall, with roughly the same amount of detail in that same 1/4 area, as well as roughly the same amount of detail in any other 1/4 area that you could crop from the original frame.
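Here is that comparison as a quick sketch with my own numbers (a 36x24mm sensor is assumed, and "detail" is approximated as the angle a single pixel subtends on the subject):

```python
import math

# Sketch comparison (my numbers): per-pixel angular sampling and field of
# view for the two setups. Assumes a 36 x 24 mm full-frame sensor.
SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0

def pixel_angle_urad(pixel_pitch_um, focal_length_mm):
    # Angle on the subject covered by one pixel, in microradians.
    return pixel_pitch_um * 1e-3 / focal_length_mm * 1e6

def fov_deg(focal_length_mm):
    h = 2 * math.degrees(math.atan(SENSOR_W_MM / (2 * focal_length_mm)))
    v = 2 * math.degrees(math.atan(SENSOR_H_MM / (2 * focal_length_mm)))
    return round(h, 1), round(v, 1)

print(pixel_angle_urad(5.8, 800), fov_deg(800))  # 7.25 urad/pixel, ~2.6 x 1.7 deg
print(pixel_angle_urad(2.9, 400), fov_deg(400))  # 7.25 urad/pixel, ~5.2 x 3.4 deg
# Same sampling of any given patch of the waterfall; the 400mm/2.9um frame
# simply covers roughly 4x the area.
```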
My second key point here is that no matter what you do with any number of TCs...the spatial resolution of the recorded image at the plane of focus (the sensor) is intrinsically limited by the spatial resolution of the sensor you are actually using. Saying that a TC added to a lens on an 18mp sensor suddenly gives you the same "resolution" as a 369mp sensor of the same dimensions is a fallacy.
(At least, in the frame of reference of sensors, whose resolutions are always measured in terms of spatial resolution. If you wish to move to a different frame of reference and use a different measure of resolution, such as angular resolution, you need to make all of that very clear, and make sure you transform EVERYTHING, all numbers and units for all participating elements of the discussion, into the same frame of reference...I'm not really sure how you would measure a sensor in terms of angular resolution. Additionally, it is the sensor that "sees" in a camera, not something external, not even the front lens element that is gathering the light...it is the sensor that sees and records an image. So it seems logical to me to remain in the original frame of reference: spatial resolution at the sensor.)
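If it helps, here is the transform between those two frames of reference spelled out (my own sketch, using the 5.8 micron sensor as the example):

```python
# Sketch of the frame-of-reference transform (mine, not from the post).
# Sensor frame: spatial resolution in lp/mm, fixed by the pixel pitch.
# Object frame: angular resolution in lp/mrad, which also depends on the
# focal length placed in front of that sensor.

def sensor_nyquist_lp_per_mm(pixel_pitch_um):
    return 1000.0 / (2 * pixel_pitch_um)

def angular_nyquist_lp_per_mrad(pixel_pitch_um, focal_length_mm):
    # 1 mrad of subject angle spans (focal_length_mm * 1e-3) mm on the sensor.
    return sensor_nyquist_lp_per_mm(pixel_pitch_um) * focal_length_mm * 1e-3

PITCH_UM = 5.8  # same sensor either way

print(sensor_nyquist_lp_per_mm(PITCH_UM))          # ~86 lp/mm, TC or no TC
print(angular_nyquist_lp_per_mrad(PITCH_UM, 400))  # ~34 lp/mrad, bare 400mm
print(angular_nyquist_lp_per_mrad(PITCH_UM, 800))  # ~69 lp/mrad, with the 2x TC
# The TC doubles the object-referred (angular) sampling, but the sensor's
# own spatial resolution never changes - it does not become a denser sensor.
```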
To keep things consistent if the discussion continues, can we use the following sensors, cameras, and lenses?
Sensor A with 5.8 micron pixels (25.6mp FF)
Sensor B with 2.9 micron pixels (102.7mp FF)
Lens A is 800mm f/8 (400mm f/4 lens with 2x TC)
Lens B is 400mm f/4
Camera A with Sensor A and Lens A
Camera B with Sensor B and Lens B
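And just so the basic numbers for those combinations are on the table, a small reference sketch (my assumption: 36x24mm full-frame dimensions for both sensors):

```python
# Small reference calculator for the proposed setups (my sketch; assumes
# a 36 x 24 mm full-frame sensor for both bodies).
SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0

def megapixels(pitch_um):
    return (SENSOR_W_MM * 1000 / pitch_um) * (SENSOR_H_MM * 1000 / pitch_um) / 1e6

def pixel_angle_urad(pitch_um, focal_mm):
    return pitch_um * 1e-3 / focal_mm * 1e6

setups = {
    "Camera A (Sensor A, 5.8um + Lens A, 800mm f/8)": (5.8, 800),
    "Camera B (Sensor B, 2.9um + Lens B, 400mm f/4)": (2.9, 400),
}

for name, (pitch, focal) in setups.items():
    print(f"{name}: {megapixels(pitch):.1f} MP, "
          f"{pixel_angle_urad(pitch, focal):.2f} urad/pixel")
# Camera A: ~25.7 MP, 7.25 urad/pixel
# Camera B: ~102.7 MP, 7.25 urad/pixel -> same per-pixel detail, ~4x the area
```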