Further to this topic, I've checked some more of The Digital Picture's sample crops, this time for the 70-200 f4 L IS (a lens that I own): http://www.the-digital-picture.com/Reviews/ISO-12233-Sample-Crops.aspx?Lens=404&Camera=736&Sample=0&FLI=3&API=2&LensComp=404&CameraComp=453&SampleComp=0&FLIComp=4&APIComp=2 compared on a 60D crop body and a 1Ds Mk III full frame body. The situation is the same: the full frame crop is sharper everywhere, and indeed this seems to be a general thing: the same lens used on a FF body appears sharper than when used on a crop body. In the comparison the aperture is set at f5.6, so diffraction should not be having any effect on the crop sensor result.

I understand that the full frame sensor will deliver more total resolution, and that the 1Ds III has 21.1 megapixels compared to the 60D's 18, but can somebody explain why the full frame crop is sharper everywhere? Surely it must be possible to achieve sharp results at a lower total resolution? If pixels are a dimensionless unit, then surely it cannot be because the crop sensor image requires more magnification to appear the same size as the full frame image.

I apologise if this has been discussed before, but I'm really trying to understand the reasons for these differences and how much difference going full frame would make to me.
I think I explained the reason for this pretty clearly above; however, some people seem to be getting hung up on details that don't get to the crux of why FF cameras tend to have better image sharpness (image sharpness being distinct from, though related to, lens sharpness and sensor resolution).
Imagine a world where sensors were perfect, and the only degradation to the image came from the lens (1). If you used the same FF lens on an FF body and an APS-C body at the same subject distance, the image formed on the APS-C sensor would be exactly the same as what you would get by taking the FF image and cropping it (now you see why APS-C sensors are called crop sensors). If you blew both images up to the same size, you would magnify the lens aberrations to a greater extent for the APS-C image than for the FF image, so the APS-C image would look worse (just as happens when you blow up a crop of an image). If you magnified the images by the same ratio, the APS-C image would look like a crop of the FF image; details would look the same, but the FF image would give a wider FOV.
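To put a rough number on the enlargement argument, here is a small sketch. The 12-inch print width and the sensor widths (36 mm for FF, 22.3 mm for a Canon APS-C body) are my assumed figures:

```python
# Enlargement needed to bring each format up to a 12-inch (304.8 mm)
# wide print; assumed sensor widths: 36 mm (FF), 22.3 mm (Canon APS-C).
PRINT_WIDTH_MM = 304.8

ff_mag = PRINT_WIDTH_MM / 36.0    # ~8.5x enlargement
apsc_mag = PRINT_WIDTH_MM / 22.3  # ~13.7x enlargement

# The crop image's lens aberrations get magnified ~1.6x more
# for the same print size.
print(round(ff_mag, 1), round(apsc_mag, 1), round(apsc_mag / ff_mag, 2))
```

The ratio between the two enlargement factors is, of course, just the crop factor.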
The same thing happens if you take the photos with the same FOV (either by moving farther away with the APS-C camera, or by using a shorter focal length), with the obvious caveats about perspective and about differing lens performance at different focal lengths.
For a concrete example, imagine a lens that delivers a uniform resolution of 100 lp/mm at MTF-50. In the scenario above, the FF image (24 mm tall) could show 2400 line pairs vertically, while the APS-C image (about 15 mm tall on a Canon 1.6x body) could only show about 1500 line pairs at the same contrast. So, of course, the FF image would look sharper.
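As a sanity check, that arithmetic can be written out in a few lines. The sensor heights (24 mm for FF, ~15 mm for a Canon 1.6x crop body) are my assumed figures:

```python
# Line pairs resolvable across the image height for a lens delivering
# a uniform 100 lp/mm at MTF-50.
LENS_RES_LP_PER_MM = 100

ff_height_mm = 24.0     # full frame sensor height
apsc_height_mm = 15.0   # 24 mm / 1.6 crop factor (Canon APS-C)

ff_lp = LENS_RES_LP_PER_MM * ff_height_mm      # line pairs per picture height, FF
apsc_lp = LENS_RES_LP_PER_MM * apsc_height_mm  # line pairs per picture height, APS-C

print(round(ff_lp), round(apsc_lp))
```

Same lens, same lp/mm; the only difference is how many millimetres of sensor those line pairs get spread across.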
If the above isn't crystal clear, don't worry about the other issues raised in this thread.
And with respect to the 'pixels are a dimensionless unit so surely that can't be the reason...' stuff: it hurts my head too much trying to figure out enough about what you do not understand to explain what's going on (3). Specifying a number of pixels and a format also gives you the pixel pitch, which has the dimension of length (cf. the dimensions of lens resolution). But on an even more fundamental level, just because a quantity is dimensionless doesn't mean that changes in it can't explain a phenomenon (2).
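To make the pixel pitch point concrete, here is a sketch. The sensor dimensions are the commonly quoted figures for these two bodies, and the square-pixel simplification is mine:

```python
import math

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Approximate pixel pitch in microns, assuming square pixels
    and the stated megapixel count spread over the sensor area."""
    pixels = megapixels * 1e6
    area_mm2 = width_mm * height_mm
    pitch_mm = math.sqrt(area_mm2 / pixels)
    return pitch_mm * 1000.0

# 1Ds Mark III: 36 x 24 mm, 21.1 MP -> roughly 6.4 um pitch
# 60D:          22.3 x 14.9 mm, 18 MP -> roughly 4.3 um pitch
print(round(pixel_pitch_um(36.0, 24.0, 21.1), 1))
print(round(pixel_pitch_um(22.3, 14.9, 18.0), 1))
```

Pixel count alone is dimensionless, but combined with the format it yields a length, which is exactly the quantity you compare against lens resolution.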
(1) Somehow I can only read that sentence in Don LaFontaine's voice...
(2) Just search for "Reynolds number", which is part of the introduction to dimensional analysis in any fluids course.
(3) Charles Babbage could have phrased this more elegantly.