According to Canon themselves, their BEST L-series lenses are only capable of resolving about 45MP of detail in a full-frame image circle.
If Canon ever said that, whoever said it was about as clueless as clueless can get. A 1.4x TC is about the same as doubling pixel count (1.4 squared is about 2, and matching a linear magnification with pixels alone takes its square in pixel count). Do you seriously believe that a 400/2.8 can only support a single 1.4x teleconverter on a 5D II? If so, look at this:
That's 4x worth of TCs on a 400/2.8 on a 7D! That's the equivalent of 737MP on a full-frame sensor (18MP x 4^2 for the TCs x 1.6^2 for the crop).
I routinely use a 2x on my old 70-200/2.8 on a 20D - the equivalent of 32MP on 1.6-crop or 84MP on full-frame - and get pixel-sharp shots (which shows I'm still undersampling). The new 70-200 is much better than the old one, and has been shown to support stacked 2x and 1.4x on the 7D (the equivalent of 144MP on 1.6-crop or 369MP on full-frame). The best primes are better.
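To make the arithmetic above easy to check, here's a minimal sketch of the conversion being used - the pixel counts are Canon's published figures for the 20D (8.2MP) and 7D (18MP), and the rule that a TC of factor f is worth f^2 in pixel count is the approximation from this post, not an exact law:

```python
# Sketch: a teleconverter magnifies linearly by its factor, so matching that
# extra detail with pixels alone takes factor**2 as many of them.
mp_20d, mp_7d = 8.2, 18.0   # Canon 20D and 7D pixel counts (MP)
crop = 1.6                  # APS-C crop factor; area scales as crop**2

print(mp_20d * 2**2)              # 2x TC on the 20D: ~33MP in 1.6-crop terms
print(mp_20d * 2**2 * crop**2)    # ...~84MP in full-frame terms

# Stacked 2x + 1.4x on the 7D, treating the 1.4x as a clean pixel-count doubling:
print(mp_7d * 4 * 2)              # 144MP in 1.6-crop terms
print(mp_7d * 4 * 2 * crop**2)    # ~369MP in full-frame terms

print(mp_7d * 4**2 * crop**2)     # 4x worth of TCs on the 7D: ~737MP full-frame
```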
I'm not certain what you're getting at; however, if I understand correctly, I think you're conflating two disjoint concepts: magnification and resolution.
No, I'm not.
There are two ways to change the real resolving power of an imaging system of a given aperture - change the focal length or change the pixel size. They are equivalent, with all else equal (sensor efficiency, read noise, processing, etc.).
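A minimal sketch of that equivalence, assuming the small-angle approximation and a hypothetical 6.4-micron pixel (roughly 20D-sized): the angular detail one pixel samples depends only on the ratio of pixel pitch to focal length, so doubling the focal length and halving the pixel pitch are interchangeable:

```python
import math

def arcsec_per_pixel(pitch_um, focal_mm):
    # Angle subtended by one pixel, small-angle approximation.
    return math.degrees(pitch_um * 1e-6 / (focal_mm * 1e-3)) * 3600

print(arcsec_per_pixel(6.4, 400))   # 6.4um pixels behind a bare 400mm
print(arcsec_per_pixel(6.4, 800))   # same pixels behind 400mm + 2x TC
print(arcsec_per_pixel(3.2, 400))   # half-pitch pixels, bare 400mm: identical
```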
I don't believe adding on a teleconverter does anything to optical resolution, other than possibly reducing it a bit, as you're adding more optical elements into the light path, each of which has the potential to diminish resolution with aberrations, flare, etc.
We're not talking about optical resolution; we're talking about image resolution. You can have the sharpest lens in the world, but on a sensor with 1 pixel the image won't have any resolution. The resolution (not pixel count - resolution) of the final image is what matters in a resolving power/aperture/focal length/magnification limited situation.
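One common rule of thumb (my illustration, not something from this thread) combines lens blur and sensor blur in quadrature; it shows how a coarse sensor throttles even a perfect lens:

```python
# Rough rule of thumb: combine lens and sensor resolution in quadrature.
def system_lp_mm(lens_lp_mm, pixel_pitch_um):
    sensor_lp_mm = 1000.0 / (2 * pixel_pitch_um)   # sensor's Nyquist limit
    return (lens_lp_mm**-2 + sensor_lp_mm**-2) ** -0.5

print(system_lp_mm(100, 6.4))    # sharp lens + fine pixels: ~62 lp/mm
print(system_lp_mm(100, 1000))   # same lens + absurdly coarse pixels: ~0.5 lp/mm
```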
If you are referring to the resolution an image of the entire moon would have when imaged at that magnification, then yes... you might have to produce an image mosaic 737MP in size to capture the entire moon at that magnification. But that's NOT the same thing as resolving 737MP worth of detail in the same image circle.
Yes, it is. Well, off-axis performance can be an issue, but aside from that, it's the same.
Think of doubling pixel count as a built-in 1.0-1.4x zoom teleconverter with no optical aberrations.
Think of quadrupling pixel count as a built-in 1.0-2.0x zoom teleconverter with no optical aberrations.
More pixels also means better noise performance. Yes, I know, many here and elsewhere think that's backwards, but they're wrong in nearly every case. Sure, it means more noise at the 100% crop or 1:1 pixel level, but it also means lower noise in the overall image. This is for a simple and (should be) obvious reason - modern noise reduction algorithms are spectacularly more efficient at eliminating noise and preserving detail than simple block averaging is. Larger pixels do nothing but block average.
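A minimal simulation of that point, assuming pure photon shot noise (the exposure numbers are hypothetical): split one big pixel's area into 16 small ones and the small pixels look far noisier at 1:1, but block-averaged back to the big-pixel scale the noise is identical - so anything smarter than block averaging comes out ahead:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 400.0   # hypothetical mean photons per big-pixel area

big   = rng.poisson(lam,        size=100_000)        # one big pixel
small = rng.poisson(lam / 16.0, size=(100_000, 16))  # same area, 16 small pixels

print(big.std() / big.mean())       # ~5% relative shot noise for the big pixel
print(small.std() / small.mean())   # ~20% per small pixel: noisier at 100% crop

summed = small.sum(axis=1)          # plain block averaging back to big-pixel scale
print(summed.std() / summed.mean()) # ~5% again: no penalty for the extra pixels
```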
Think about this - the ideal sensor would capture each photon's location accurately and individually. This is the most information you can capture and it corresponds to infinite pixel count. In such a situation, each pixel would be extremely noisy - just on or off.
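A sketch of that thought experiment, with made-up numbers: carve one output pixel's area into 100,000 binary sub-pixels, each of which only reports photon-or-no-photon, and each sub-pixel alone is nearly pure noise while the aggregate recovers the signal:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n = 400.0, 100_000   # photons over the area, number of binary sub-pixels

# Each sub-pixel sees Poisson(lam/n) photons - essentially just on or off.
hits = rng.poisson(lam / n, size=n) > 0

print(hits[:10])      # individual sub-pixels: just on/off, i.e. noise
print(hits.sum())     # ...but the sum recovers ~400, the original signal
```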
The main disadvantages of more pixels are: more relative read noise in photon-starved situations (though read noise has improved greatly of late), lower sensor yield, higher demands on the readout electronics and processing pipeline, and larger storage and post-processing requirements. Image degradation isn't a disadvantage of more pixels until your process technology can't produce good fill factors at the smaller sizes. At the moment, that point is at about 1.5 micron pixels, which are an order of magnitude smaller in area than the ones in the 7D.
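The read-noise point can be made concrete with a toy calculation (the electron figures below are hypothetical): aggregating k small pixels stacks k independent read-noise draws, so the total grows as the square root of k, which only hurts when the photon signal itself is small:

```python
import math

def aggregated_read_noise(read_noise_e, k):
    # Read noise (electrons) over one big-pixel area built from k small pixels.
    return read_noise_e * math.sqrt(k)

print(aggregated_read_noise(3.0, 16))  # 16 small pixels at 3 e- each: 12 e- total
print(aggregated_read_noise(5.0, 1))   # one big pixel at 5 e-: wins when light is scarce
# At 400 photons, shot noise is already 20 e-, so in decent light this washes out.
```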
I can prove - and have proven - the substance of what I just said with real-world photographic testing of real cameras with real optics.