Actually Lee Jay is the one who is more correct here.
The system resolution of the object will be the same with an ideal 1.4x as with doubling the number of pixels. If you can gain object resolution with a TC, you can do the same by increasing the number of pixels.
Assuming an aperture wide enough that the lens does not lose resolution to optical aberrations, yet is not yet blurring detail beyond the diffraction limit of the sensor, sure.
Just reminding you that you agreed before - with Tuggem, not myself - that increasing focal length with a TC is equivalent to shrinking the pixels on the sensor.
You also agreed with me on that point:
The point is, increasing focal length while preserving aperture (and thus increasing f-stop) and decreasing pixel size are equivalent.
To which you replied:
Not disputing that.
Yes, IN REALITY, not virtually by looking at the sensor through the front of the lens!! I agree that LITERALLY using a sensor with more megapixels with the original lens, without TCs, and cropping the image produced by the higher-density sensor is similar to using a TC with a smaller sensor: both of them magnify the subject relative to spatial resolution. I've never agreed about anything else.
Using a higher-density sensor, though, is not exactly the same. A 116 lp/mm 18 MP APS-C sensor can double-sample an image produced by a lens+TC combo at f/13 (MTF 50, which is about 56 lp/mm). That lens+sensor system, despite the fact that the sensor is double-sampling the virtual image, still only achieves a total resolution of 52.6 lp/mm. Let's drop the TC and double the number of pixels. We are now at 165 lp/mm for the sensor and 82 lp/mm for the lens. The sensor is oversampling by almost a factor of three; however, our system resolution, while definitely an improvement over the system with a TC and a lower-resolution sensor, is still only 73.5 lp/mm.

I believe it was Tuggem who actually said doubling the number of pixels is better than using a 1.4x TC, and quadrupling pixels is better than using a 2x TC. Crunching the numbers, that does indeed appear to be true (double the pixels is about 40% better than using a 1.4x TC)...assuming it's actually a LITERAL INCREASE, as in, you physically use a sensor with double the number of pixels.
For some reason, you dispute the next step, namely that shrinking the pixels results in more pixels in the same area. Thus, adding a teleconverter is just like increasing pixel count in the same space. I don't understand what could be clearer than that.
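To make that arithmetic concrete, here's a quick Python sketch. The pixel pitch and feature size are illustrative numbers I picked, not any particular camera; the point is that a 1.4x magnification of the image and a 1.4x shrink of the pixel pitch put the same number of pixels across a subject feature.

```python
# Illustrative numbers (assumptions, not a specific camera):
pitch_um = 4.3            # pixel pitch in microns, roughly 18 MP APS-C
feature_um = 100.0        # size of a subject feature projected on the sensor

# Option A: 1.4x teleconverter magnifies the projected feature.
px_with_tc = (feature_um * 1.4) / pitch_um

# Option B: same lens, pixel pitch shrunk by 1.4x
# (which is 1.4^2 ~= 2x the pixel count in the same sensor area).
px_with_smaller_pixels = feature_um / (pitch_um / 1.4)

print(px_with_tc, px_with_smaller_pixels)  # identical sampling of the feature
print(round(1.4 ** 2, 2))                  # pixel-count multiplier: 1.96, ~2x
```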
I also want to point out your own statement that started this:
The 7D is already pretty maxed out when it comes to resolution as well with 18mp in an APS-C format. You might gain a bit more by going to 20 or 22mp, but that's going to make it really hard to get sharp shots right down to the pixel level...and you would only be able to do so at a very narrow range of apertures at the center of the lens before diffraction or optical aberrations kill you.
That statement is in error, and more to the point, it's irrelevant. "Sharp shots down to the pixel level" means you are throwing away detail, by definition. If people really want that, then we have an education problem.
I'm not so sure it's as much an education problem as it is a mental and perceptual problem. People don't spend money on 18 MP, 22.3 MP, or even 36.3 MP to get what they perceive as soft photos (regardless of how irrational that may be). That's more than readily apparent in how much flak the 7D gets from people complaining about how soft it is (at apertures much above or below f/4). Personally, while intellectually I fully understand the value of oversampling the optics and downsampling the results to achieve the level of sharpness I want, that whole psychological bent towards wanting sharp results straight out of the camera still exists (and is actually a necessity if you shoot JPEG for immediate review and publishing).
Regardless, even if we use MTF 50, and we agree that many lenses can achieve diffraction-limited performance at f/4 like my now-discontinued 70-200/2.8L IS can, the resolution is:
0.38 / (0.000550 * 4) = 172.7 lp/mm
Given 3 pixels to resolve 1 line pair, that's 11,554 x 7,702 = 89 megapixels on APS-C.
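For anyone who wants to check the arithmetic, here's a small Python sketch of the same calculation (550 nm light, f/4, 3 pixels per line pair, and the 22.3 x 14.9 mm APS-C dimensions):

```python
# Diffraction-limited MTF50 resolution: R ~= 0.38 / (lambda * N)
wavelength_mm = 0.000550   # 550 nm green light, expressed in mm
f_number = 4

lp_per_mm = 0.38 / (wavelength_mm * f_number)
print(round(lp_per_mm, 1))            # 172.7 lp/mm at f/4

# Pixels needed on APS-C (22.3 x 14.9 mm) at 3 pixels per line pair:
px_per_mm = lp_per_mm * 3
w, h = 22.3 * px_per_mm, 14.9 * px_per_mm
print(round(w * h / 1e6, 1))          # ~89 megapixels
```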
Optically, 172.7 lp/mm at f/4 is certainly possible; again, I never disputed that (I believe I've used the number 173 lp/mm for f/4 on several occasions in my previous posts). I've tried to account for low-pass filters (particularly the stronger one on the 7D...although low-pass filters are a bit of a wildcard and can never be fully accounted for without a full understanding of their impact on spatial frequencies), the fact that Bayer sensors require interpolation to produce "full color" (or final RGB) pixels once processed, and the fact that the spatial phase of the sensor is not always aligned with the spatial phase of the virtual image, by using 4 pixels per line pair. Perhaps a tad conservative (although it is general practice to assume 4 px/lp when discussing Bayer sensors, even among astrophotography enthusiasts); however, I think using 3 pixels per line pair is a bit aggressive. Perhaps a happy medium of 3.5 would be more realistic. Either way, sure, at the sharpest aperture, after optical aberrations are eliminated and before diffraction sets in and starts diminishing maximum potential resolution, you could keep pushing sensor resolution. It's a very narrow window within which you can achieve more resolution, and there are rather few lenses that are not aberration-bound at wider apertures that can currently support higher resolutions than are attainable at f/4 (the majority of which cost a small to moderate fortune).
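Since the pixels-per-line-pair assumption moves the megapixel answer quite a bit, here's a quick sweep over 3, 3.5, and 4 px/lp using the same f/4 diffraction figure:

```python
# Megapixel requirement on APS-C (22.3 x 14.9 mm) as a function of the
# assumed pixels-per-line-pair sampling factor.
lp_per_mm = 0.38 / (0.000550 * 4)          # ~172.7 lp/mm, diffraction MTF50 at f/4

for px_per_lp in (3.0, 3.5, 4.0):
    mp = (22.3 * lp_per_mm * px_per_lp) * (14.9 * lp_per_mm * px_per_lp) / 1e6
    print(px_per_lp, round(mp), "MP")      # ~89, ~121, ~159 MP
```

So the choice between 3 and 4 px/lp nearly doubles the implied sensor resolution, which is why the assumption matters.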
My comment above about 20-22 MP is inaccurate on a scientific level, but it wasn't originally intended to be hard-core scientific (my desire to be more accurate only came after you decided to conflate the use of TCs with literal increases in sensor spatial resolution). I think I originally made that comment in the context of the average photographer, who usually does suffer from the "psychological impairment" discussed above. I still agree that, for most apertures of the lenses the majority of 7D photographers are likely to use, the softness of photos caused either by optical aberrations or diffraction on either side of the f/4 "sweet spot" is going to be a turn-off for more photographers than not. I've spent more time arguing that the 7D is not sharpness-impaired, it just frequently outresolves the lens at the apertures used, than I have arguing my points in this particular thread.
In that context, I don't think pushing APS-C resolution much beyond where we have it now is going to buy the average photographer all that much. The window around that sweet spot, wherein you can get very sharp images that keep increasing in detail as you use better and better optics, will shrink the closer you get to 70, 80, or 90 megapixels packed into the space of an APS-C sensor. (We haven't even touched on the fact that for all but a few of Canon's L-series lenses, such resolution is also only really possible at the center of the lens, and falloff, sometimes severe, towards the corners reduces resolution well below theoretical perfection. There would need to be considerable center-to-corner improvement in lenses to fully support "perfection" across the area of the lens, and in many cases where trade-offs are required...such as wide-angle zoom lenses...perfection may be unattainable, at least for the refractive optics of DSLR cameras.)
I think far greater benefits can be gained at the resolution we're at now by improving ISO performance, lowering noise, etc., instead of chasing a difficult-to-attain perfect resolution in a very narrow aperture range, where all other apertures achieve less and produce softer and softer photos at higher and higher resolutions. Yes, it's all psychological, but that's really what matters beyond the rather narrow bounds we've been arguing within lately. That's all beside the points I've been making thus far, however. And hopefully my comment on the original quote at the beginning of this post clears up my position.
So, two conclusions:
- Teleconverters are just like multipliers on pixel count (1.4x = 2x pixel count), but with reduced field of view which is irrelevant if you're looking at a small portion of the frame anyway.
- We still have plenty of room to grow pixel counts before we are capturing nothing but additional gray sludge.
I'd love the next 7D to be 24MP on APS-C and have f/8 AF sensors. Such a system would almost achieve MTF50 at f/8 - certainly it would achieve way above the commonly-used extinction points of MTF9, MTF5 or MTF0. If they can't provide f/8 AF sensors, 32MP or more would be nice.
You are certainly free to conclude all you want; however, I still think it's factually invalid. The sensor's pixel count DOES NOT increase, two-fold or any other fold. That is a fixed constant for any given camera; no amount of optics will ever change it. Neither will optics improve the spatial resolution of the system at large, unless you replace the lens with something closer to perfection than its predecessor (and probably replace the teleconverters as well). Even if you do replace a less-than-perfect lens with a perfect lens, the final spatial resolution of the system will never surpass that of the sensor itself...you can only approach it.
To turn things around, let's assume the lens is the limiting factor rather than the sensor. The same rules apply to increasing the physical resolution of a sensor. If you are using a lens that is only capable of resolving 150 lp/mm, using a sensor capable of 300 lp/mm (2x oversampling) will still result in a system resolution of only 134.3 lp/mm. Doubling the sensor resolution again, to 600 lp/mm, still only gets you to 145.7 lp/mm, and the returns diminish well beyond the realm of reason as you try to approach 149.99 lp/mm. The only thing you can do to increase the resolution of the system as a whole at this point is to physically improve the least effective component...a better lens, in this case. With a perfect lens capable of 173 lp/mm and your 300 lp/mm sensor, your system resolution is now 149.5 lp/mm.
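For reference, figures like these follow from the root-sum-square rule commonly used to combine component resolutions (a real system cascades full MTF curves, so treat this as a sketch; it reproduces the numbers above to within a fraction of a line pair per millimetre):

```python
import math

def system_res(lens_lp, sensor_lp):
    # Root-sum-square combination of the two component resolutions:
    # 1 / R_sys^2 = 1 / R_lens^2 + 1 / R_sensor^2
    return 1.0 / math.sqrt(1.0 / lens_lp**2 + 1.0 / sensor_lp**2)

# A 150 lp/mm lens with ever-higher-resolution sensors:
for sensor_lp in (300, 600, 1200):
    print(sensor_lp, round(system_res(150, sensor_lp), 1))
# The system resolution stays pinned below the 150 lp/mm lens limit
# no matter how far the sensor is pushed.
```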
There is no one-piece magic bullet that can instantly improve the resolving power of a system by orders of magnitude. If you want to achieve 173 lp/mm, you need to improve each and every component of the system, raising the lowest common denominator high enough that it surpasses your target resolution by a sufficient margin. To actually achieve a system resolution of 173 lp/mm, you would need both a lens and a sensor capable of 247 lp/mm, and that imposes a maximum f-number of f/2.8 on a perfect lens (or an even wider aperture on a less-than-perfect lens). It would be at this point, with a perfect f/2.8 lens, that we could finally use everything an 80 MP APS-C sensor had to offer.
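The same root-sum-square rule gives the per-component requirement directly: with two equal components, each must resolve the target times the square root of two. A quick sketch (it lands at about 245 lp/mm, the same ballpark as the 247 lp/mm figure above, the small gap being rounding):

```python
import math

target = 173.0                         # desired system resolution, lp/mm

# Two equal components R combined by 1/R_sys^2 = 2/R^2 gives R = target * sqrt(2):
component = target * math.sqrt(2)
print(round(component))                # ~245 lp/mm per component

# Widest f-number at which a perfect lens can still deliver that,
# from the MTF50 diffraction approximation 0.38 / (lambda * N):
max_f = 0.38 / (0.000550 * component)
print(round(max_f, 1))                 # ~f/2.8
```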
A note on low-contrast photography and astronomical bodies: the use of extremely low-contrast photography, at the Rayleigh limit or even as low as the Dawes limit, by astronomers is to determine whether two extremely close points of starlight are indeed separate points...such as a binary star. Using photography for such purposes is not an average case; it is a rather specialized and scientific one. One usually has to explicitly understand contrast, diffraction, MTF, etc., and know exactly what one is looking for, to be able to use MTF 0% photography to identify distant binary stars and the like. Special processing techniques are also usually involved when working at MTF 0%, such as multiple-image stacking and superresolution, which use computation and algorithms to produce a final high-resolution image that is more likely to show the 5% dip between the peaks of two low-contrast points of light at the Dawes limit. NorthLight Images has this to say about digital photography down to MTF 0 (emphasis added):
So to resolve all data up to a frequency corresponding to 4000 lines (the Rayleigh criterion) would require a Nyquist frequency of 8000 vertical lines, corresponding to 100 megapixels.
The Rayleigh criterion was derived based on a simple model that correctly predicted what astronomers could see. More recent astrophotographic techniques allow stars to be distinguished up to the point that MTF drops to zero. This is about 20-25% closer spacing than the Rayleigh criterion, and is referred to as the Dawes limit. If we wished to use this as the criterion for resolution, then the required sensor resolution would be about 150 megapixels. It is also possible for astronomers to detect whether a star image is a single star or a binary star even if there is no separation between the two adjacent maxima: the form of the merged maximum can still be indicative of a binary subject. But there is a catch to the latter method: you have to know in advance that you are looking for two closely separated points. If you have no a priori information about what the subject is, this method won't work. So it is pretty much useless for normal photography.
At current digital camera resolutions of 20-35 MP, which are well below the 150 MP necessary for Dawes-level photography (0% contrast) and the 100 MP necessary for Rayleigh-level photography (9% contrast), we may be able to achieve useful resolution at less than MTF 50; but outside of specialized cases and potentially specialized processing, we still need considerably more contrast than at either Rayleigh or Dawes for the average type of photography, where sharpness straight out of the camera is highly desirable.
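As a sanity check on the NorthLight figures, here's the back-of-envelope version. The 3:2 frame shape and the two-samples-per-resolved-line Nyquist factor are my assumptions, not NorthLight's:

```python
# Back-of-envelope check of the megapixel figures quoted above,
# assuming a 3:2 frame and Nyquist sampling (2 samples per resolved line).
rayleigh_lines = 4000                  # resolvable lines at the Rayleigh limit (quoted)
nyquist_v = rayleigh_lines * 2         # 8000 vertical samples required

mp_rayleigh = nyquist_v * (nyquist_v * 3 / 2) / 1e6
print(round(mp_rayleigh))              # 96 MP, i.e. the "100 megapixels" quoted

dawes_v = nyquist_v / 0.8              # Dawes limit: ~20-25% closer spacing
mp_dawes = dawes_v * (dawes_v * 3 / 2) / 1e6
print(round(mp_dawes))                 # 150 MP, matching the quoted figure
```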
It should also be noted that much of the Rayleigh- and Dawes-criterion astrophotography aimed at identifying binary stars started out on film. Film has better characteristics for gathering detail at low contrast than digital sensors do, so it is more capable of resolving fine detail at MTF 9. Conversely, digital sensors are better than film at resolving contrasty detail, closer to MTF 40-50. For specialized photography, such as astrophotography, where low-contrast detail is supreme, there is probably far more room to grow in terms of digital sensor resolution than there is for general forms of photography, where sharpness and contrast are frequently more important.