The Megapixels are Coming [CR1]

Lee Jay said:
Fine - show me the images with the same spatial resolution as those shot the way I said. I'll help you out - you can't. I've already done this experiment, and the teleconverters do indeed drastically improve the overall system spatial resolution despite very slightly decreasing the optical resolution. This is exactly why we need more pixels, and a whole lot more - so we aren't undersampling the optics in the first place.

Have you ever wondered why the best amateur planetary imagers operate pixels that are about the size of those on the 40D through optics set at f/30? According to you, they're way, way beyond the capability of those optics, yet they increased focal length to that level in an effort to preserve maximum detail. Why would they use expensive barlows (Televue Powermates) if those small pixels were extracting all the detail from their bare f/11 optics in the first place? Answer - they don't. And that's with monochrome sensors with no OLPFs!!!

Have a look. This was shot at about f/30 with pixels that are about 40D sized:

http://damianpeach.com/barbados10/2010_09_12pic.jpg

Ok, this is my last attempt. Words are certainly insufficient, so hopefully some visual demonstrations will clear things up. Some facts:

1. Diffraction limits resolution at narrow apertures
2. Optical aberrations limit resolution at wide apertures
3. The more lens elements, the more optical aberrations introduced
4. The longer the focal length for a fixed physical aperture, the smaller the relative aperture (i.e. add TC's)

Let's assume we have a hypothetical 200mm lens capable of producing a 1"x1" image circle. Let's assume the lens is capable of 1.97 lp/mm of spatial frequency in the virtual image at the sensor, which would roughly translate into a 50x50 "pixel" area within which our subject is resolved. Let's assume spatial resolution is not impacted by the addition of teleconverters. Let's assume our sensor resolution is infinite in the context of this discussion, so we don't have to factor in its effects on resolution. We are JUST talking lens resolution in this case.
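As a quick sanity check on that 50x50 figure (a sketch of mine, assuming the 1" image circle is exactly 25.4mm and treating one resolved line pair as one "pixel", as the post does):

```python
# Hypothetical lens from above: 1" (25.4 mm) image circle,
# resolving 1.97 lp/mm at the sensor plane.
IMAGE_CIRCLE_MM = 25.4
LP_PER_MM = 1.97

# Treating one resolved line pair as one "pixel" across the circle:
pixels_across = LP_PER_MM * IMAGE_CIRCLE_MM
print(round(pixels_across))  # 50, i.e. a 50x50 "pixel" area
```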

Our subject is a small moon.

At 200mm without TC's, the moon is 14 "pixels" in size in the center of our frame. If we slap on a 2x TC and a 1.4x TC, our subject grows to 44 "pixels" in size, nearly filling the frame. Our SPATIAL RESOLUTION is CONSTANT, however we are suddenly able to observe FAR GREATER detail in our subject. If we reduce our spatial resolution by 50%, the more magnified subject IS STILL MORE DETAILED than the original, unmagnified subject (an exaggerated example of the effect of stacking on multiple TC's, which at the very least are going to increase diffraction and therefore reduce spatial resolution).

This effect can be seen below in this simple animated gif (frame 2, unmagnified; frame 3, magnified at the same spatial resolution; frame 4, magnified with 50% less spatial resolution). Note, I've purposely kept resolution the same or lower to demonstrate the effect of, say, magnifying Jupiter such that it fills the frame (rather than being a small dot in the center of a largely empty frame) without changing spatial resolution:

[animated GIF attachment: HbLb6.gif]


Two TC's are added to a lens increasing magnification, spatial resolution remains constant, yet we are capable of "seeing" more detail in our much larger subject, even at a LOWER spatial resolution. Magnification and spatial resolution are not the same; they are disjoint concepts that can vary independently. Increasing magnification by adding teleconverters, while keeping spatial resolution constant, DOES increase the apparent detail we are capable of observing...because OUR SUBJECT IS LARGER RELATIVE TO THE FRAME FOR A GIVEN RESOLUTION.

Well, that's the best I can do. If a small animated picture isn't worth 4000 words, then no amount of proof in this case will sway your opinion. I do indeed believe science backs up what I've said here.
 
Upvote 0
gene_can_sing said:
They need to hurry with the 4K C-DSLR. Sony is kicking Canon's A$$ now in video and a LOT of people have switched to the Sony FS-100, especially now that the Metabones adaptor allows full electronic interface with the FS-100 and Canon lenses. Canon could have owned that market, but they slept on it, so Sony took advantage and came out on top.

The 5D3 is shaping up to be a real disappointment in video since it still has the soft, up-res'd false HD (at least in the early test models). So yeah, the 4K C-DSLR really needs to come out. And have a FLIP SCREEN for God's sake. We need it for video.

If it's not soon, I'm going to have to buy the Sony FS-100. I'm not thrilled about it since I hate shooting with the video-camera form factor, but I need a better picture.

If the 4K DSLR came out at NAB in April, I would buy it the next day even if it is $6000. I need a better video solution and I cannot wait much longer.

And NO, that is not going to be the high-megapixel camera from Canon. Video requires low megapixels because then the processor has less work to do to down-scale the image into a 2K (or in this case 2K to 4K) output.

I can't imagine the 4k DSLR coming out before Winter for one reason:
Summer is the only time you can shoot (without actors whining/dying) in much of the world.

I'm expecting around $6k for 4k as well, but I don't see it happening...
unless they offer $13,000 rebates on the stupid C300.

Don't buy the FS-100.
Buy 10 Gh2s instead.

Nobody can wait much longer for a solution, especially Canon. The ONLY reason the 5Dii broke every camera sales record ever was video. "Video cameras" were poised to destroy film, but they forgot to make them anything like film.

DSLRs swooped up everyone looking for the "film look" for not the price of a car ("video" cams) and not the price of a house (film).
But it's only a matter of days before phones are shooting 4k, global shutter.

We'll be rotobrush/gaussian blurring phone footage to look like the DSLR that never was.
Hollywood is a perpetually dazed Raiden.
FINISH HIM!!

=)
 
Upvote 0
Tuggem said:
Fact is that you will get a better result by doubling the number of pixels than by using a 1.4x converter, and quadrupling the number of pixels will be better than a 2x converter. Only if the converters were ideal could they compete with increasing the number of pixels.

Yup...100% right! Fortunately, TCs are pretty good.
 
Upvote 0
jrista said:
Well, that's the best I can do. If a small animated picture isn't worth 4000 words, then no amount of proof in this case will sway your opinion. I do indeed believe science backs up what I've said here.

Believe me, I understand the science here quite well. You just aren't reading what I wrote.

"It (the addition of TCs) does increase system spacial resolution if you are undersampling the optics without it. That's exactly what we're doing."

Note the word "system", meaning lens + sensor + processing. Ideal TCs can only preserve optical resolution (as you seem fond of pointing out, regardless of the fact that this point is not in dispute), not increase it, but they can increase system resolution if the sensor is undersampling the optics without them. Since we can extract more real detail with TCs than without them even on our existing 1.6-crop 18MP sensors, we are obviously undersampling the optics without the TCs. The question is, by how much? And the answer is, A LOT. I can easily demonstrate that we are undersampling by at least a factor of 4 in pixel count, and others have shown up to a factor of 16 on the best lenses. So the idea that 18MP (or 20, or 22 or whatever) is all we'll ever need to squeeze everything out of the best optics is just not even close to correct.
 
Upvote 0
Lee Jay said:
jrista said:
Well, that's the best I can do. If a small animated picture isn't worth 4000 words, then no amount of proof in this case will sway your opinion. I do indeed believe science backs up what I've said here.

Believe me, I understand the science here quite well. You just aren't reading what I wrote.

"It (the addition of TCs) does increase system spacial resolution if you are undersampling the optics without it. That's exactly what we're doing."

Note the word "system", meaning lens + sensor + processing. Ideal TCs can only preserve optical resolution (as you seem fond of pointing out, regardless of the fact that this point is not in dispute), not increase it, but they can increase system resolution if the sensor is undersampling the optics without them. Since we can extract more real detail with TCs than without them even on our existing 1.6-crop 18MP sensors, we are obviously undersampling the optics without the TCs. The question is, by how much? And the answer is, A LOT. I can easily demonstrate that we are undersampling by at least a factor of 4 in pixel count, and others have shown up to a factor of 16 on the best lenses. So the idea that 18MP (or 20, or 22 or whatever) is all we'll ever need to squeeze everything out of the best optics is just not even close to correct.

Here is some reality:

Fact: A Canon 5D III full-frame sensor resolves (at best) 80lp/mm.
Fact: A "perfect" 200mm f/2 lens @ f/8 is physically capable of resolving an absolute maximum of 86lp/mm.
Fact: Two "perfect" 1.4x TCs attached to that perfect 200mm lens reduces f/8 to f/22, limiting the physically possible absolute maximum resolution to 31lp/mm.

Fact: At apertures of f/9 or narrower, even the least-dense sensors of today outresolve lenses.
Fact: Artifacts caused by waveform interference, such as moire, could be minimized or eliminated by increasing sensor resolution up to 2x beyond lens resolution, however sharpness and contrast will not increase (and likely decrease relative to 1x sampling at 100% crop.)
Fact: A sensor with 2x lens resolution will produce photos that, when viewed at 100% crop, appear very soft.
Fact: Increasing sensor resolution beyond 2x lens resolution produces minimal or imperceptible returns at increasingly disproportionate cost.

Fact: An oversampled photo at 80lp/mm would have to be reduced to an image size with equivalent resolution to 31lp/mm in order to restore original sharpness that was lost to diffraction and oversampling.

FACT: A "perfect" 560mm lens achieved via combining a 200mm f/2 with two 1.4x TCs shot at an aperture of f/8 (which is an effective aperture of f/22) can resolve at best 31lp/mm, and a sensor capable of recording 80lp/mm is capable of resolving every last scrap of detail from that lens...and then some. Total system resolution is 31lp/mm of (sharp) actual resolution when the final output is downsampled to the lens native resolution, or 62lp/mm of (soft) oversampled resolution without artifacts when left at native camera resolution.
 
Upvote 0
Lee Jay, you were THIS close to having it right ;D

Let's go back to the question you asked: why does the magnification seem to make the image come out better?

You guys contest whether the lens or the sensor is the limiting factor in this system. For the sake of argument and example, let's assume the lens has just a bit of headroom left above the sensor.

In that case the magnification will improve (here is the important part) the resolution of your OBJECT OF INTEREST.
It will do this while simultaneously reducing your system resolution through additional artifact-producing elements in the LOS.
The (reduced) system resolution is better applied to your magnified object.

Now if we blow away the starting premise, that the lens has some headroom over the sensor, the mag doesn't do a bloody thing for you; it just gives you more precision with less accuracy.

Regarding sensor or lens in this case? Sounds to me like both of you guys are operating within the margin of error.

Great posts though, plenty of people reading them would have found the thread to be a good mini-tutorial.

cheers
-E
 
Upvote 0
@ebrakus: Good way to explain it with "resolution of the OBJECT OF INTEREST". ;) Like that.

@those-still-interested:

For reference, here is a little bit of math on "system" resolution. Every element of a system has its own independent resolution; these combine to create the final system resolution. System resolution is never as high as the independent resolutions of each component. Converting element resolution into a "blur circle" gives you the finest size "dot" that can be resolved with that element, regardless of the actual size of the source dot that is being resolved. The total blur of a system is the square root of the sum of the squares of each element's blur circle. In other words:

Code:
totalBlur = sqrt(blur1^2 + blur2^2 + ... + blurN^2)

Total blur increases as you add elements to the system, based on the formula above. Assume we have a lens @ f/4 and a sensor that each produce a 5.3 micron blur circle. Total system blur is then greater than 5.3 microns:

Code:
sqrt(5.3um^2 + 5.3um^2) = 7.5um

Add a teleconverter that produces the same 5.3 micron blur, and your total system blur increases:

Code:
sqrt(5.3um^2 + 5.3um^2 + 5.3um^2) = 9.2um

Given that there are 1000 um per mm, and that twice the blur circle diameter makes up one line pair at MTF 50 (um/lp), we can calculate spatial resolution in lp/mm as such:

Code:
lp/mm = 1000/(totalBlur * 2)

With our base lens + sensor system, spatial resolution at f/4 would be:

Code:
1000/(7.5 * 2) = 1000/15 = 66.67 lp/mm

When we stack on the TC (let's say 2x), we either lose resolution due both to the additional lens element and to the narrower effective aperture (since our focal length increased), or we lose resolution due both to the additional lens element and to the need to use a wider aperture. Assuming we actually have a perfect lens and use a wider aperture, we would then only lose resolution due to the additional lens element. We can demonstrate the first and last cases (as computing lens aberrations is a lot more complex than diffraction). If we leave the aperture setting as-is, the blur circles for the two optical elements increase to 10.65um:

Code:
sqrt(10.65um^2 + 5.3um^2 + 10.65um^2) = 16um (total blur)
1000/(16 * 2) = 1000/32 = 31.25 lp/mm

Adding a single 2x TC has cost us over 50% resolution when keeping the aperture setting the same. If we widen the aperture by 2x to compensate for the TC, assuming perfect optics:

Code:
1000/(9.2 * 2) = 1000/18.4 = 54.35 lp/mm

Even with perfect optics, adding on a TC and increasing the aperture has cost over 18% resolution.
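The whole worked example above can be collapsed into a few lines (a sketch of mine; the small differences from the post's figures come from rounding the intermediate blur values):

```python
import math

# Combine per-element blur circles in quadrature, then convert the
# total blur to spatial resolution, per the formulas above.
def total_blur(*blurs_um):
    return math.sqrt(sum(b * b for b in blurs_um))

def lp_per_mm(blur_um):
    return 1000 / (blur_um * 2)

base = total_blur(5.3, 5.3)             # lens + sensor
print(round(base, 1), round(lp_per_mm(base), 1))   # 7.5 um, 66.7 lp/mm

tc = total_blur(5.3, 5.3, 5.3)          # + perfect TC, aperture widened
print(round(tc, 1), round(lp_per_mm(tc), 1))       # 9.2 um, ~54 lp/mm

narrow = total_blur(10.65, 5.3, 10.65)  # + TC, same aperture setting
print(round(narrow), round(lp_per_mm(narrow), 1))  # 16 um, ~31 lp/mm
```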
 
Upvote 0
Actually Lee Jay is the one who is more correct here.
The system resolution of the object will be the same with an ideal 1.4x as with doubling the number of pixels. If you can gain object resolution with TC you can do the same by increasing the number of pixels.

It has also been shown that the 7D with its 18MP APS-C sensor doesn't fully oversample a lens at f/22; some more MP are needed, according to the link above.

Lens resolution and sensor resolution are just two completely different things.
 
Upvote 0
Tuggem said:
Actually Lee Jay is the one who is more correct here.
The system resolution of the object will be the same with an ideal 1.4x as with doubling the number of pixels. If you can gain object resolution with TC you can do the same by increasing the number of pixels.

Assuming a wide enough aperture that does not lose resolution to optical aberrations, and is not yet blurring detail beyond the diffraction limit of the sensor, sure. If you start out with a sensor that is limited by diffraction at f/8, doubling its resolution would approximately halve its DLA to around f/4.5. You probably could gain more resolution that way, although I would expect the TC to outperform a denser sensor unless you also widened the aperture when shooting with the denser sensor (and you run the risk of introducing optical aberrations at that point.) As such, I find converting magnification into megapixels to be misleading on both ends...the TC reduces effective aperture, and a denser sensor requires a wider aperture, to normalize to the same results...so it's a very rough approximation. Saying your 2x + 1.4x TC stack on a 200mm lens resolves 144mp is extremely misleading, when you are actually "resolving" less with the TC. That's all my point has really been.

It has also been shown that the 7D with its 18MP APS-C sensor doesn't fully oversample a lens at f/22; some more MP are needed, according to the link above.

That would only be the case at 9% contrast, which would resolve 68lp/mm, requiring 136lp/mm for 2x oversampling (which is a bit higher resolution than the 7D offers). Detail with that low contrast is barely discernible by the human eye. The primary vision center of the human eye is packed with 0.5 micron cones and rods, some 8.6 times smaller and considerably more sensitive than the 4.3 micron photosites in the 7D. If you account for the random distribution of cones and rods, and the varying degree of separation of cones from each other, anywhere from less than 0.1 micron to as much as 1 micron, you would have to figure the minimum blur circle for the eye is around 0.7um (for color vision). That would lead to a spatial resolution of about 714lp/mm. That's just about right to oversample by 2x the 370lp/mm resolution our lenses project at the average pupil diameter of 4mm (f/4.1) in normal light.
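A quick check of the 0.7um eye figure (my arithmetic, using the same blur-to-lp/mm conversion from earlier in the thread):

```python
# The thread's blur-circle-to-resolution conversion, applied to the
# estimated ~0.7 um minimum blur circle of the eye's central vision.
def lp_per_mm(blur_um):
    return 1000 / (blur_um * 2)

print(round(lp_per_mm(0.7)))  # 714 lp/mm
# 2x oversampling of a ~370 lp/mm aerial image calls for ~740 lp/mm,
# so ~714 lp/mm is indeed "just about right".
```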

It's not surprising the eye can discern detail at only 9% contrast, but it's harder for a digital sensor to do the same, and if you could, you'd probably only do it at a very low ISO where noise has the least impact. I guess parts of the surface of the moon (such as a mare) might be a subject from which you could extract and record greater spatial resolution than your average photographic subject. Whether you could actually extract 68lp/mm...I'd need to do some careful experimentation and measurement to be convinced (assuming a 4.3 micron 7D pixel...despite the greater sensitivity of, say, a 5D III or 1D X pixel, they have considerably lower spatial resolution, and are insufficient to double-oversample 68lp/mm.) The sample moon photos that Lee Jay has linked a few times, however, clearly exhibit very pronounced diffraction blurring and considerable loss of detail (I assume due to the f/22 aperture.) Some of that may be due to noise reduction (I have to imagine a higher ISO was used at 560mm f/22, so there was probably a lot of noise, and thus a lot of noise reduction.) I have a hard time believing 68lp/mm system resolution there, although I guess I wouldn't be surprised if more than 30lp/mm was indeed being resolved.

Lens resolution and sensor resolution are just two completely different things.

Yes, they are. Each has its own degrading factors that operate in different ways to limit resolution. However, their spatial capabilities can both be described the same way using the same units. The same would go for film, even though film is also a very different thing with its own nuances.
 
Upvote 0
jrista said:
Here is some reality:

Fact: A Canon 5D III full-frame sensor resolves (at best) 80lp/mm.
Fact: A "perfect" 200mm f/2 lens @ f/8 is physically capable of resolving an absolute maximum of 86lp/mm.

Not sure where you got that.

1/(0.000550*8) = 227 lp/mm at MTF 0. I'm guessing you're using MTF 50 which is crazy talk for extinction. Even the Rayleigh criterion is MTF 9, and many people use MTF 5 for extinction. Plus, people do shoot at faster than f/8, and many lenses, such as the 200/2, are diffraction limited at much faster apertures than f/8.
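The 227 lp/mm figure comes from the standard diffraction (MTF 0) cutoff frequency, 1/(λN) with λ in mm (my transcription of the calculation above):

```python
# MTF 0 diffraction cutoff for a perfect lens: 1 / (λ * N),
# with λ in mm. Lee Jay's numbers: 550 nm green light at f/8.
WAVELENGTH_MM = 0.000550

def cutoff_lp_per_mm(f_number):
    return 1 / (WAVELENGTH_MM * f_number)

print(round(cutoff_lp_per_mm(8)))  # 227 lp/mm
```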

Fact: Two "perfect" 1.4x TCs attached to that perfect 200mm lens reduces f/8 to f/22

Uh...no, that would be f/16, not that it's relevant.

Fact: At apertures of f/9 or narrower, even the least-dense sensors of today outresolve lenses.

That's not a fact. Well, it might be if you use the crazy MTF 50 as your criteria. But that's not remotely realistic.

Fact: Artifacts caused by waveform interference, such as moire, could be minimized or eliminated by increasing sensor resolution up to 2x beyond lens resolution, however sharpness and contrast will not increase (and likely decrease relative to 1x sampling at 100% crop.)

This is where your thinking has gone off the rails.

Fact: If you are getting pixel sharp shots at 100%, you are undersampling the optics.

In such a case, using more pixels of a smaller size would get you better detail resolving power.
 
Upvote 0
@Lee Jay: Yes, I use MTF 50, although not necessarily for extinction. (Perhaps that's our disconnect...I've been referring to "useful" resolution, not the point where all detail becomes gray mud.) Using MTF 0 for extinction for digital SLR cameras is insane. Even using MTF 9 is pretty crazy...unless you're talking about specialized CCD cameras designed for high-end astrophotography, in which case I don't know much about them, and I honestly can't speak to that. However, that is an entirely different story from DSLR resolution.

If you have some example studies done that actually demonstrate Canon glass like the 200/2, used on a Canon sensor, consistently resolving detail at 9% contrast, I'd like to see it (honest request there). I use MTF 50 because that's what is normally used to demonstrate camera resolution. Anything I say about glass is relative to a sensor. Glass is a different story, and obviously a LENS in isolation is capable of being measured at any level of contrast. I'm not surprised at all that the best GLASS, such as the 135/2, 300/2.8, or any of Canon's supertelephotos (like the 600/4 L II), is capable of resolving 370lp/mm at f/4 or 700lp/mm+ at f/2 at MTF 9.

The limiting factor is the sensor. Sensors are not like human eyes, and can't resolve the same amount of detail at low contrast. Now, there have certainly been advancements, such as gapless microlenses, or even multiple layers of gapless microlenses; foveon-type stacked photodiodes; hardware level noise reduction mechanisms capable of transporting purer pixel data, etc. However not all of those things are perfect, noise still exists, and the majority of sensors are not monochrome or foveon-style, where every row of pixels is fully capable of resolving all of the image detail they receive. Bayer sensors still impose a significant hit to resolution (despite their densities), and their nature impacts their ability to resolve at lower contrast levels.

I'll certainly give a bit in the argument here...sensors as dense as Canon's 18mp APS-C, Sony's 24mp APS-C, and now Sony/Nikon's 36.6mp FF sensor are probably capable of resolving at a lower contrast than MTF 50. I'm still very skeptical they are capable of resolving consistent, accurate data at MTF 9 (especially without a low-pass filter like the D800). I don't have readily available charts for MTF 40 or MTF 20, so I don't frequently quote those numbers. I could probably generate some rough mathematical data for those contrast levels for conversation's sake, but they would only be accurate on a purely hypothetical level. The MTF and ISO 12233 chart data available today for lenses indicates even the highest-density sensors start losing spatial accuracy and useful detail around 100-115lp/mm for APS-C sensors. You still get "spotty" detail at finer resolutions, but it's often obscured or otherwise limited by moire, color moire, and noise.

I'd love to know exactly what Canon glass is really capable of OUTSIDE the context of their cameras or theoretical, mathematically generated MTF charts (which don't even use adequate test images to start with, limited at most to 30lp/mm detail). All "real-world" lens tests that I know of are still performed using a DSLR camera, which always limits system resolution. Sometimes they indicate the lens is the limiting factor (especially for tests done at or near maximum aperture, which really makes those tests rather useless), sometimes they indicate the sensor is the limiting factor. I've spent a lot of time looking for accurate MTF charts for Canon glass, and so far it's lacking. The tests available indicate that something is limiting system resolution (approaching extinction) to numbers closer to the MTF 50 category than MTF 9, though.

If you are sitting on some magical database full of accurate MTF data for lenses, cameras, and telescopes, please, let me know. I'd love to have more accurate information to work with. I'm often a skeptic, but if I have concrete evidence of something, I'll happily change my mind. (BTW, the photos of Jupiter and the moon you have linked a few times appear very, very soft to me. The moon photo seems to lack a lot of the fine detail I've often seen in other high-res photos of the moon. I'm not sure those are the best examples of system spatial resolution. If you have something that demonstrates, say, two comparable shots of the moon, taken with the same 200 + 1.4x + 1.4x, one on a sensor that resolves the same resolution as the lens and one that oversamples the lens, such that they can be compared one on top of the other, I think that would be a much better demonstration of your point that increased sensor resolution does help increase system resolution.

I'm not sure anything would demonstrate an increase in spatial resolution when adding TC's, however...I'm still holding steadfast on that point.)

Lee Jay said:
This is where your thinking has gone off the rails.

Fact: If you are getting pixel sharp shots at 100%, you are undersampling the optics.

In such a case, using more pixels of a smaller size would get you better detail resolving power.

Sure, not arguing that point, really (I think we diverged a bit too far from the original complaint I had, which was equating spatial resolution to magnification...something I still assert is very, very inaccurate. At the moment, I don't think we really disagree on most of the points you made in your last post.) Scientifically, using more sensor resolution is a good thing, and will usually produce better results (even if you push well past 2x oversampling, more resolution will still increase system resolution (reduce totalBlur) from the standpoint of resolving closer to the resolution limit of a lens...ignoring issues like noise, sensitivity, and DR, which will all eat into those gains as you keep decreasing pixel pitch.)

However, most people WANT sharp pictures at 100%. This forum has had several threads in the not too distant past where people have complained a LOT about the Canon 7D and how horribly soft its photos are. I found the complaints to largely be due to a lack of understanding about how very dense the 7D sensor really is (and the likelihood that it probably has a slightly overaggressive low-pass filter.) I argued the point that the 7D is an excellent camera and that you have to downsample to fully realize its potential, since it's oversampling (although probably not 2x oversampling) the lens at apertures wider than f/4 and narrower than about f/5.6 or so (using MTF 50 numbers anyway...which certainly seem to jibe with the reality there). Personally, I like that trait in the 7D, as I have to downsample a bit for 13x19" prints (and it's certainly more than enough resolution to upsample for large prints viewed at a greater distance.)

That doesn't change the fact, though, that people want SHARP photos AT 100%. So it doesn't surprise me that companies like Canon aim to provide such results as much as they can (which is why I don't suspect they will push APS-C resolution much past 22mp or so, unless they are able to produce a sensor that is far more capable of resolving low-contrast detail). It's also not surprising that they are producing sensors with middle-ground resolution at 18-22mp for their full-frame cameras. Those cameras will undersample lenses by a considerable degree, and there is little chance of the sensor significantly oversampling such that all photos come out looking soft at 100% crop. It's a sad psychological problem, but far too many photographers don't understand the concept of image size normalization when comparing results, and if it doesn't look perfect at 100%, then it's crap. Again, personally, I wouldn't mind sensors that twice-oversampled the lens (and at f/2 to boot); I'd prefer it, as it does increase system resolution. Practical issues come to mind (immense file sizes and very slow post-processing times), so again, it's doubtful we'll really get sensors that oversample lenses all that much on a regular basis.
 
Upvote 0
jrista said:
Two TC's are added to a lens increasing magnification, spatial resolution remains constant, yet we are capable of "seeing" more detail in our much larger subject, even at a LOWER spatial resolution. Magnification and spatial resolution are not the same; they are disjoint concepts that can vary independently. Increasing magnification by adding teleconverters, while keeping spatial resolution constant, DOES increase the apparent detail we are capable of observing...because OUR SUBJECT IS LARGER RELATIVE TO THE FRAME FOR A GIVEN RESOLUTION.

Well, that's the best I can do. If a small animated picture isn't worth 4000 words, then no amount of proof in this case will sway your opinion. I do indeed believe science backs up what I've said here.

You seem to be arguing over things he didn't claim, missing his point, and fixating on keeping the sensor disjoint.
 
Upvote 0
First - you can get more spatial detail from adding TCs, even to already-slow optics. This is a $60 Bayer-sensor webcam, not some high-end astronomical sensor. Pixel size is 5.6 microns - about the same as the 40D. f/30 on the left, f/15 on the right. According to you, the f/30 shot couldn't possibly be better, but it is. I took these:
http://photos.imageevent.com/sipphoto/samplepictures/Jupiter%20f30%20versus%20f15%20comparison.jpg
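One back-of-the-envelope way to see why the f/30 frame can hold more detail (a sketch of mine, assuming 550nm light, the MTF 0 cutoff 1/(λN), and the 2-pixels-per-line-pair Nyquist convention; the exact numbers are illustrative, not from the post):

```python
# Compare the lens's diffraction cutoff to the Nyquist limit of the
# 5.6 um webcam pixels, at f/15 and at f/30 (after the Barlow).
WAVELENGTH_MM = 0.000550
PIXEL_UM = 5.6

nyquist = 1000 / (2 * PIXEL_UM)  # ~89 lp/mm sensor sampling limit
for f_number in (15, 30):
    cutoff = 1 / (WAVELENGTH_MM * f_number)  # MTF 0 diffraction cutoff
    state = "undersampled" if cutoff > nyquist else "fully sampled"
    print(f"f/{f_number}: lens {cutoff:.0f} lp/mm vs sensor {nyquist:.0f} lp/mm -> {state}")
# At f/15 the optics project detail past the sensor's Nyquist limit;
# at f/30 the sensor finally samples everything the optics deliver.
```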

Second, and this is going to be a little hard to accept for you, but it's fact so I suggest you listen carefully. You're thinking of a TC as a device that increases focal length and decreases aperture. First of all, it doesn't decrease aperture. f-stop = focal length / aperture diameter. A TC can be thought to increase focal length while keeping aperture the same, thus increasing f-stop. However, and this is important, this is only true from the camera's point of view. From the lens' point of view, its focal length, aperture and f-stop remain the same. The TC is, after all, mounted behind it. From the lens' point of view, the TC has changed the camera. How, you might ask? By shrinking the sensor and the pixels on it. If you don't believe me, try this little experiment yourself.

http://photos.imageevent.com/sipphoto/samplepictures/Teleconverter%20optical%20reduction.jpg

The point is, increasing focal length while preserving aperture (and thus increasing f-stop) and decreasing pixel size are equivalent. Here's an example of that:

http://photos.imageevent.com/sipphoto/samplepictures/Pixel%20density%20versus%20teleconverters.jpg
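That equivalence can be sketched numerically (toy numbers of mine, assuming 40D-like 5.7um pixels; the figures are illustrative, not from the post):

```python
# From the lens's point of view a TC of factor t leaves the lens alone
# and "shrinks" the camera: effective pixel pitch is divided by t.
# From the camera's point of view, the f-stop is multiplied by t.
def with_tc(pixel_pitch_um, f_number, tc_factor):
    return pixel_pitch_um / tc_factor, f_number * tc_factor

print(with_tc(5.7, 8.0, 1.4))  # (~4.07 um pixels, ~f/11.2)
print(with_tc(5.7, 8.0, 2.0))  # (2.85 um pixels, f/16.0)
```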

Finally, if you want to see the effect of diffraction while shrinking pixel size, have a look at the link below. If you prefer, you can think of these as APS-C sensors with 8MP, 16MP, 32MP, and 64MP, all at f/11. The one on the bottom is for reference when using a larger aperture that isn't diffraction-limited. As you can see, even at f/11, resolving power goes up in each case, by ever-decreasing amounts (the so-called law of diminishing returns), just as theory would indicate. I've tested this all the way to oblivion (1.1 micron pixels at f/11), and the MTF 0 spatial cutoff formula I gave you from Wikipedia matches well with real-world testing.

http://photos.imageevent.com/sipphoto/samplepictures/Diffraction%20pixel%20size%20test%202.jpg

As for how good our optics are, I tested my version 1 70-200/2.8L IS at different apertures by mounting telescope eyepieces to it and trying to split double stars. Essentially, this is a test of the Rayleigh criterion (MTF 9). I found that it isn't diffraction-limited at f/2.8 but, amazingly, it is diffraction-limited at f/4 - I could split a double at exactly the Rayleigh equivalent separation angle for a 50mm aperture with that lens given sufficient optical magnification. I think you'd agree that we have several lenses in the line-up that are better at f-stops faster than f/4 than the version 1 70-200 is.

For a little more evidence of that, compare the Jupiter shot I posted above, taken with 125mm of aperture of a diffraction-limited f/15 Maksutov-Cassegrain telescope, with one posted yesterday also taken with 125mm of aperture, this time in the form of a wide-open 500/4. The detail retained is very, very similar, providing further evidence that the 500/4 is diffraction-limited wide open.

http://forums.dpreview.com/forums/read.asp?forum=1029&message=40928248

The take-home lessons are:

- We can extract more detail at finer resolutions than the MTF50 diffraction limit even with Bayer sensors with AA filters.
- We have optics that are diffraction-limited at f-stops much faster than f/8.
- Because of those two facts, we can make use of sensors much more densely packed than current 18MP APS-C sensors.
 
Upvote 0
jrista said:
Tuggem said:
Actually Lee Jay is the one who is more correct here.
The system resolution of the object will be the same with an ideal 1.4x as with doubling the number of pixels. If you can gain object resolution with TC you can do the same by increasing the number of pixels.

Assuming a wide enough aperture that does not lose resolution to optical aberrations, and is not yet blurring detail beyond the diffraction limit of the sensor, sure.

Good. Glad we agree.

Let's work on that a bit. Let's do f/4 since we have several lenses that are diffraction-limited by then. Let's use green light (conservative). Let's be further conservative and use Rayleigh instead of MTF5 or MTF0.

MTF 9 = Rayleigh = 1/(1.22 * 0.00055mm * 4) = 373 lp/mm (at N = 4)

We need roughly 3 pixels per line pair to resolve them on a Bayer sensor with AA filter.

3 * 373 * 22.3mm = 24,954 horizontal pixels
2/3 of 24,954 = 16,636 vertical pixels

24954*16636 = 415,134,744 pixels, or 415.1MP.
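That arithmetic can be reproduced in a few lines. This is just a re-run of the numbers above (f/4, green light, Rayleigh, 3 pixels per line pair, 22.3mm-wide APS-C frame), not new data:

```python
# Sketch: pixel count needed to sample a diffraction-limited f/4 lens at the
# Rayleigh limit on a Bayer sensor with an AA filter (3 pixels per line pair).

F_NUMBER = 4
WAVELENGTH_MM = 0.00055   # green light
PIXELS_PER_LP = 3
SENSOR_WIDTH_MM = 22.3    # APS-C

rayleigh_lp_mm = 1.0 / (1.22 * WAVELENGTH_MM * F_NUMBER)  # ~372.6 lp/mm
lp = round(rayleigh_lp_mm)                                # rounded to 373, as in the post
h_pixels = round(PIXELS_PER_LP * lp * SENSOR_WIDTH_MM)    # horizontal pixel count
v_pixels = round(h_pixels * 2 / 3)                        # 3:2 aspect ratio
megapixels = h_pixels * v_pixels / 1e6

print(lp)              # 373
print(h_pixels, v_pixels)   # 24954 x 16636
print(megapixels)      # ~415.1
```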

Here's a sample of the equivalent of 288MP:
http://forums.dpreview.com/forums/read.asp?forum=1029&message=37493247

Obviously, we have plenty of room between the real limits and the 18MP we have today.

And just in case you think I'm alone in thinking this, here's some information on gigapixel sensors from the guy that invented the type of sensors we use in our cameras:

http://forums.dpreview.com/forums/read.asp?forum=1000&message=30006322
 
Upvote 0
Lee Jay said:
First - you can get more spatial detail from adding TCs, even to already-slow optics. This is a $60 Bayer-sensor webcam, not some high-end astronomical sensor. Pixel size is 5.6 microns - about the same as the 40D. f/30 on the left, f/15 on the right. According to you, the f/30 shot couldn't possibly be better, but it is.

I've highlighted where you've misunderstood me. That's not my argument, and probably where we've disconnected. First off, "better" is such a subjective and broad term that it's a terrible one for this conversation. Obviously magnifying a subject makes it "better" in the sense that you're recording more detail of that subject. Again, that is not my argument. My argument is that neither the lens nor the sensor records at a higher spatial resolution when you increase magnification (which is what it sounds like you are saying when you equate 1.4x TCs with 141MP; increasing the megapixel count in the same sensor area does increase the spatial resolution the system can record, but I'm arguing that is not what happens when you tack on teleconverters). They record a larger subject at the same (or, given the math on total system blur, slightly lower) spatial resolution. To put it another way: the IMAGE RESOLUTION, the width and height of your subject, increases, at the same SPATIAL RESOLUTION.

Lee Jay said:
Second, and this is going to be a little hard to accept for you, but it's fact so I suggest you listen carefully. You're thinking of a TC as a device that increases focal length and decreases aperture. First of all, it doesn't decrease aperture. f-stop = focal length / aperture diameter. A TC can be thought to increase focal length while keeping aperture the same, thus increasing f-stop.

Sure, f-stop, relative aperture, same thing. I know the absolute aperture, the physical diameter in mm, stays the same when increasing focal length with TCs. However due to the increased focal length, diffraction is magnified right along with everything else. The effects of a TC on diffraction are real, regardless of how the sensor may appear when looking through the TC.

Lee Jay said:
However, and this is important, this is only true from the camera's point of view. From the lens' point of view, its focal length, aperture and f-stop remain the same. The TC is, after all, mounted behind it. From the lens' point of view, the TC has changed the camera. How, you might ask? By shrinking the sensor and the pixels on it. If you don't believe me, try this little experiment yourself.

http://photos.imageevent.com/sipphoto/samplepictures/Teleconverter%20optical%20reduction.jpg

Sure, the size of the pixels, as well as the size of the whole sensor, shrinks when viewed through the TC... but the density of the sensor does not increase for the same physical size. The former changes magnification; the latter would change the spatial resolution of the system. We are on the same page here, and I think I described this effect well when I described magnification as being the same as recording an image with a much larger (in physical size) 141MP sensor and cropping out the center 18MP. IMAGE resolution of the subject increases; SPATIAL resolution of the system remains the same, or decreases, with a TC. The image you linked does a good job of demonstrating that everything viewed through the TC shrinks in size. If it were a sensor, the whole sensor, not just the pixel pitch, would shrink relative to whatever is being projected through the lens. I call that magnification. The actual width and height of the sensor in pixels has not changed. The distance between pixels has not changed. So spatial resolution is the same. (I'm again not sure we're on different pages here... I think maybe we have been discussing the same thing from different angles.)

Lee Jay said:
The point is, increasing focal length while preserving aperture (and thus increasing f-stop) and decreasing pixel size are equivalent. Here's an example of that:

http://photos.imageevent.com/sipphoto/samplepictures/Pixel%20density%20versus%20teleconverters.jpg

Not disputing that. It again boils down to the comparison extra_magnification == more_megapixels, which sounds like extra_magnification == increased_spatial_resolution... assuming the rest of the system does not change. That comparison makes it sound like adding a TC increased the spatial resolution from 116lp/mm to 315.5lp/mm. Even if we assume DSLR sensors are capable of resolving fine detail at MTF 9, at f/16 for a 200 + 1.4x + 1.4x on a 7D, the total system blur (ignoring any additional aberrations from the TCs, and there probably aren't any at a physical aperture of f/8, and assuming about twice the 7D's pixel pitch for the blur of the sensor itself) is sqrt(10.8^2 + 8.4^2) = 13.7um, which translates into a system spatial resolution of about 73lp/mm. Let's ignore the nature of Bayer sensors and assume the 7D is capable of resolving 4.3 micron Airy discs: our system blur becomes sqrt(10.8^2 + 4.3^2) = 11.6um, or a system spatial resolution of about 86lp/mm. Decreasing pixel size relative to the subject could also be termed increasing the subject size relative to the pixel. Either way, that's magnification, an increase in subject dimensions (image resolution), not an increase in spatial resolution. The only reason you are resolving more detail is that the subject is larger... that's it.
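The root-sum-square blur combination used in the paragraph above can be sketched like this. The 10.8um diffraction blur and the 8.4um / 4.3um sensor spot sizes are the figures quoted in the post, not independently derived here:

```python
import math

def system_blur_um(*blurs_um):
    """Combine independent blur diameters in quadrature (root-sum-square)."""
    return math.sqrt(sum(b * b for b in blurs_um))

def spatial_res_lp_mm(blur_um):
    """Approximate system spatial resolution implied by a total blur diameter."""
    return 1000.0 / blur_um

# 200mm + 2x 1.4x TC on a 7D: ~10.8um diffraction blur at effective f/16
bayer = system_blur_um(10.8, 8.4)   # Bayer + AA filter spot assumed ~2x pixel pitch
mono = system_blur_um(10.8, 4.3)    # idealized single-pixel (4.3um) resolving spot
print(bayer, spatial_res_lp_mm(bayer))  # ~13.7um -> ~73 lp/mm
print(mono, spatial_res_lp_mm(mono))    # ~11.6um -> ~86 lp/mm
```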

Lee Jay said:
Finally, if you want to see the effect of diffraction while shrinking pixel size, have a look at the link below. If you prefer, you can think of these as APS-C sensors with 8MP, 16MP, 32MP, and 64MP, all at f/11. The one on the bottom is for reference when using a larger aperture that isn't diffraction-limited. As you can see, even at f/11, resolving power goes up in each case, by ever-decreasing amounts (the so-called law of diminishing returns), just as theory would indicate. I've tested this all the way to oblivion (1.1 micron pixels at f/11), and the MTF 0 spatial cutoff formula I gave you from Wikipedia matches well with real-world testing.

http://photos.imageevent.com/sipphoto/samplepictures/Diffraction%20pixel%20size%20test%202.jpg

I'm not really sure what those images demonstrate, beyond the fact that 6.4 micron pixels are simply incapable of resolving enough detail in the first place. I've never claimed that increasing pixel density doesn't increase spatial resolution; I've only argued that past a certain point, roughly a sensor with 2x the resolution of the optical image it's recording, further increases stop having a decent cost/value ratio. I don't generally consider f/11 severely detrimental to IQ. I consider f/22 and beyond detrimental to IQ, regardless of sensor density.

Lee Jay said:
As for how good our optics are, I tested my version 1 70-200/2.8L IS at different apertures by mounting telescope eye pieces to it and trying to split double stars. Essentially, this is a test of the Rayleigh criterion (MTF 9). I found that it isn't diffraction-limited at f/2.8 but, amazingly, it is diffraction-limited at f/4 - I could split a double at exactly the Rayleigh equivalent separation angle for a 50mm aperture with that lens given sufficient optical magnification. I think you'd agree that we have several lenses in the line-up that are better at faster f-stops than f/4 than the version 1 70-200 is.

I do agree; I've mentioned many lenses that I think are capable of stellar optical characteristics at maximum aperture. Again, that's not really the point of my argument.

Lee Jay said:
For a little more evidence of that, compare the Jupiter shot I posted above, taken with 125mm of aperture of diffraction-limited f/15 Maksutov-Cassegrain telescope, with one posted yesterday also taken with 125mm of aperture this time in the form of a wide-open 500/4. The detail retained is very, very similar providing further evidence that the 500/4 is diffraction-limited wide open.

http://forums.dpreview.com/forums/read.asp?forum=1029&message=40928248

Tough to evaluate this stuff, since the images were the result of extensive stacking. Stacking completely changes the game: it allows things like superresolution and extreme noise reduction, pushing image resolution well beyond what is possible spatially with the physical hardware alone. That's all a discussion for another day, and I doubt we would disagree much about the benefits of stacking and superresolution.

Lee Jay said:
The take-home lessons are:

- We can extract more detail at finer resolutions than the MTF50 diffraction limit even with Bayer sensors with AA filters.
- We have optics that are diffraction-limited at f-stops much faster than f/8.
- Because of those two facts, we can make use of sensors much more densely packed than current 18MP APS-C sensors.

1. Still not sure about the first point. I'll make my arguments again below in the final quote.

2. Totally agree that we have diffraction-limited lenses at f-stops faster than f/8. That was never a point of argument... I simply used f/8 in my prior posts because that was the aperture you mentioned for your 200+1.4+1.4 setup for capturing the moon. I never intended to suggest that I thought f/8 was the first diffraction-limited aperture in any lens. Outside of some of Canon's top L-series glass starting around 135mm, most of their lenses seem to achieve "best" resolution (where optical aberrations balance diffraction) somewhere around f/4 (a little less in some cases, a little more in others). Super-fast lenses, like the 50mm and 85mm f/1.2 or f/1.4, seem to reach that balance around f/2.8-f/3.5; however, they are rather wide, and don't magnify their subjects enough to matter in the context of this discussion.

3. Agreed that more densely packed sensors are not "bad". Agreed that they can oversample the optical resolution enough to eliminate sensor aberrations and capture a little more detail. Agreed that as we approach the "pixel pitch" of the rods and cones in the eye's 2° foveal spot (in which clear, high-detail vision occurs), about 0.5um, our ability to resolve fine detail at lower contrast will increase. I'm not sure I agree that we can do that today at 9% contrast, although I'm happy to concede that we can probably resolve fine detail at contrast levels below 50%.

Lee Jay said:
Let's work on that a bit. Let's do f/4 since we have several lenses that are diffraction-limited by then. Let's use green light (conservative). Let's be further conservative and use Rayleigh instead of MTF5 or MTF0.

MTF 9 = Rayleigh = 1/(1.22 * 0.00055mm * 4) = 373 lp/mm (at N = 4)

We need roughly 3 pixels per line pair to resolve them on a Bayer sensor with AA filter.

3 * 373 * 22.3mm = 24,954 horizontal pixels
2/3 of 24,954 = 16,636 vertical pixels

24954*16636 = 415,134,744 pixels, or 415.1MP.

Here is where your argument breaks down, at least as I am interpreting it. Let's get back to facts:

FACT: A 415MP APS-C sensor with the dimensions 24954x16636 DOES NOT EXIST. It never has, and very likely will not in the coming decades.
FACT: The 7D's 18MP APS-C sensor is certainly not capable of resolving 415MP worth of spatial resolution.
FACT: For a lens that resolves 373lp/mm, the sensor is the LIMITING FACTOR (both in resolution and in contrast).
FACT: At 373lp/mm, the diffraction spot (Airy disc radius) is 2.7 microns.
FACT: An 18MP APS-C sensor like the 7D's has a minimum resolvable spot of 4.3 microns, assuming monochrome.
FACT: An 18MP APS-C sensor like the 7D's has a minimum resolvable full-detail spot of about 2x the pixel pitch, or 8.4-8.7 microns, given a Bayer array, low-pass filter, IR filter, etc.
FACT: The system blur of such a system would be about 8.9 microns.
FACT: The system spatial resolution of such a system would be about 112lp/mm.
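The last two figures in the list above follow from the earlier ones; a quick sketch of that arithmetic (the ~8.5um sensor spot is my midpoint of the 8.4-8.7um range quoted above):

```python
import math

# Pair a 373 lp/mm diffraction-limited lens spot with an 18 MP APS-C Bayer
# sensor whose full-detail spot is ~2x its 4.3um pixel pitch.
lens_spot_um = 1.22 * 0.55 * 4   # Rayleigh/Airy radius at f/4, 550nm -> ~2.7um
sensor_spot_um = 8.5             # assumed midpoint of the 8.4-8.7um range

system_blur = math.sqrt(lens_spot_um**2 + sensor_spot_um**2)  # ~8.9um
system_res = 1000.0 / system_blur                             # ~112 lp/mm
print(system_blur, system_res)
```

Note how thoroughly the sensor term dominates the quadrature sum: the lens contributes almost nothing to the total blur, which is the point of the FACT list.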

Equating the increase in subject detail resulting from adding teleconverters to a lens, for a given sensor, with literally having a sensor with more megapixels in the same physical area is incorrect. It presumes an increase in the spatial resolution of the sensor as a result of increased optical magnification, which is obviously impossible... sensors have a fixed resolution (both spatially and dimensionally).

So I still don't understand your insistence on the formula TCs == denser sensor. At best, given the math, your 373lp/mm lens setup and 116lp/mm APS-C sensor boil down to 112lp/mm of system spatial resolution (and I'm still ignoring the fact that adding TCs drags on IQ, even if only minimally at f/4).

Lee Jay said:
Here's a sample of the equivalent of 288MP:
http://forums.dpreview.com/forums/read.asp?forum=1029&message=37493247

Obviously, we have plenty of room between the real limits and the 18MP we have today.

And just in case you think I'm alone in thinking this, here's some information on gigapixel sensors from the guy that invented the type of sensors we use in our cameras:

http://forums.dpreview.com/forums/read.asp?forum=1000&message=30006322


I still don't see 288MP in that image. If you could show me an image that literally had the necessary dimensions, around 20756x13844, I might have to change my mind. The moon image you linked has an image resolution of 1000x1500 pixels (1.5MP), and coming from the 7D, I assume the original uncropped version was 5184x3456 (which is still 18MP).

There is a serious disconnect in the thinking that more magnification equals increased system spatial resolution. Even assuming the optics are capable of far superior spatial resolution (which at MTF 9% they are... I still dispute the idea that Bayer CFA sensors come anywhere close to resolving detail at that level of contrast), the camera (the sensor, low-pass filter, and any other filtration between the virtual image projected by the lens and the sensor) severely limits the total system resolution you can actually record. What you actually record is what matters, regardless of how much spatial resolution exists in the virtual image the lens projects at any contrast level. The more you oversample the lens, the softer the image appears at 100%, so while things may theoretically be "better", you need greater and greater downsampling to achieve a sharply detailed image (which is really the ultimate goal anyway, unless you have scientific aims).

As of today, it is not possible to record 415MP, let alone 288MP, with a single sensor in a single shot from a DSLR. The closest thing I've seen to several-hundred-megapixel images of celestial bodies are mosaics of hundreds of shots of the moon, usually taken with telescopes with rather high light-gathering capability.

So, I don't disagree with you on every point. Yes, there is plenty of room for improvement, obviously (on both the lens and sensor fronts). But I strongly disagree with the point in your last quote, that TC == denser sensor. That's simply not based in fact.
 
Upvote 0
jrista said:
Sure, the size of the pixels, as well as the size of the whole sensor, shrink when viewed through the TC...but the total density of the sensor does not increase for the same size.

The second part of that sentence directly contradicts the first. Pixel shrink = increased density.

The former changes magnification,

You seem to have a problem with the term "magnification".

Magnification = image size on sensor / object size
Enlargement = size of final print or image / image size on sensor

Lee Jay said:
The point is, increasing focal length while preserving aperture (and thus increasing f-stop) and decreasing pixel size are equivalent. Here's an example of that:

http://photos.imageevent.com/sipphoto/samplepictures/Pixel%20density%20versus%20teleconverters.jpg

Not disputing that.

Good, since it's correct. The problem is, you do dispute it below:

Equating the increase in subject detail as resulted from the addition of teleconverters to a lens for a given sensor as literally having a sensor with more megapixels in the same physical area is incorrect.

So, make up your mind. Correctly, would be nice.

Smaller pixels = more pixel density = higher pixel counts = higher spatial resolution, if the optics can support it. And we now agree (I think) that many of the optical devices we have available can support it.
 
Upvote 0
traveller said:
What are your takes?

He equated focal length with pixel pitch just like I did, because they have the same effect on resolving power.

300mm f/2.8, 4.3 micron pixels: 3.0 arc seconds/pixel
500mm f/4, 7.4 micron pixels: 3.1 arc seconds/pixel

600mm f/5.6, 4.8 micron pixels: 1.65 arc seconds/pixel
1000mm f/8, 8.2 micron pixels: 1.7 arc seconds/pixel

Teleconverter = longer focal length = smaller pixels = more pixels in the same area.
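The pairings in the list above all come from the same angular pixel-scale formula; a quick sketch reproducing them (206.265 is arcseconds per radian divided by 1000, converting microns over millimeters to arcseconds):

```python
# Angular size of one pixel on the sky: scale = 206.265 * pitch_um / focal_mm.
# Longer focal length and smaller pixel pitch trade off identically here,
# which is the equivalence being claimed.

def pixel_scale_arcsec(pitch_um, focal_length_mm):
    """Arcseconds of sky subtended by one pixel."""
    return 206.265 * pitch_um / focal_length_mm

for focal, pitch in [(300, 4.3), (500, 7.4), (600, 4.8), (1000, 8.2)]:
    print(f"{focal}mm @ {pitch}um: "
          f"{pixel_scale_arcsec(pitch, focal):.2f} arcsec/pixel")
```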
 
Upvote 0
Status
Not open for further replies.