The Megapixels are Coming [CR1]

Lee Jay said:
jrista said:
GL said:
I think the key word here is FF. I can see Canon pulling a rabbit out of the hat with a 30-40MP 7D II, similar specs to the current 7D, lower high ISO IQ than any of the FF cams, but decent low-to-midrange IQ and best-in-class resolution. That way it doesn't step on any "pro" toes in terms of ultimate IQ/build, and undercuts Nikon's high-res cam by $1000 or more.

No way that'll ever happen. The 7D's core value is in its APS-C sensor...that will never change. The entire reason people buy the 7D is for the extra reach with pro-grade features. The 7D is already pretty maxed out when it comes to resolution as well with 18mp in an APS-C format. You might gain a bit more by going to 20 or 22mp, but that's going to make it really hard to get sharp shots right down to the pixel level...and you would only be able to do so at a very narrow range of apertures at the center of the lens before diffraction or optical aberrations kill you.

So, what you're saying is teleconverters are useless on a 7D (a 1.4x is like doubling pixel count in the center).

Sorry, but you are dead wrong. I've used stacked 2.8x worth of TCs on an 18MP 1.6-crop sensor with results better than just with a 2x. 2.8x^2*18=141MP.

I believe you have made this argument before. Let's see if we can end the debate once and for all here with a little bit of math and fact.

I was referring to spatial resolution. The spatial resolution of an 18mp APS-C sensor (3456 lines @ 14.9mm high) is 116lp/mm, whereas the spatial resolution of a 22mp APS-C sensor (3820 lines @ 14.9mm high) is 128lp/mm. Real-world MTF tests on Canon lenses show that rather few of them are even capable of resolving 116lp/mm at around f/4-5.6 (where optical aberrations and diffraction tend to normalize in the average lens), and at f/5.6, the maximum spatial resolution physically possible is 123lp/mm, some 5lp/mm less than what a 22mp APS-C sensor would be capable of.
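Here is the quick Nyquist arithmetic behind those numbers, as a little Python sketch (back-of-the-envelope only; it ignores the CFA and AA filter, and the row counts and 14.9mm height are the figures above):

# Nyquist-limited spatial resolution: half the pixel rows per mm of sensor height.
def nyquist_lp_per_mm(pixel_rows, sensor_height_mm):
    return pixel_rows / 2.0 / sensor_height_mm

# 3456 rows (18mp APS-C) and 3820 rows (22mp APS-C), both 14.9mm high:
for rows in (3456, 3820):
    print(rows, "rows ->", round(nyquist_lp_per_mm(rows, 14.9)), "lp/mm")
# Prints roughly 116 lp/mm and 128 lp/mm, the figures quoted above.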

Magnification and spatial resolution are NOT the same thing. Adding on a 1.4x TC changes magnification, it does not increase spatial resolution. Saying one thing "is like" another does not mean you "actually" have the alternative. It's simply "like" doubling pixel count. All things being equal, tacking on a TC to a high quality lens is more likely to reduce its spatial resolution than increase it due to the greater number of optical elements. That's to say nothing of the increased effects of diffraction you have to endure with an effective aperture 2.8 times smaller (i.e. f/8 becomes f/22).

The notion that more magnification ~= more megapixels is simply a rough way to estimate how much detail you might be gaining (ignoring the effects of increased diffraction...in reality you could never gain as much detail as converting magnification into megapixels suggests). Using the nomenclature "2.8x^2 * 18mp" to get "141mp" simply means that IF you had a lens with a large enough unmagnified image circle to project your entire subject onto a 141mp sensor with the same pixel pitch as the original 18mp sensor, then cropping out the 18mp center would produce an identical image to simply using a smaller lens and 2.8x greater magnification on an 18mp sensor in the first place. That's a really convoluted way to measure magnification...simply saying 2.8x greater magnification is usually enough, and a better approach would be to say your subject covers 7.84 times more frame area than without two 1.4x TCs.

Sorry...but if anything is dead wrong here, it's the use of the notation:

additional_magnification^2 * orig_mp = effective_mp_assuming_the_impossible
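For concreteness, that notation amounts to this trivial Python sketch (my shorthand for the conversion being disputed; note it ignores TC aberrations and the extra diffraction from the smaller effective aperture):

def equivalent_mp(extra_magnification, original_mp):
    # "Equivalent" pixel count if the whole original frame were sampled at the
    # same pixel pitch: framed area scales with magnification squared.
    return extra_magnification ** 2 * original_mp

print(equivalent_mp(2.8, 18))  # ~141 "equivalent" mp for 2.8x worth of stacked TCs
print(equivalent_mp(1.4, 18))  # ~35 "equivalent" mp for a single 1.4x TC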
 
Upvote 0
Here is something to think about

A 28MP APS-C sensor of 6480 x 4320 would allow 4x4 pixel binning for 1080p video (is that correct?).
This would beat Sony on megapixels and would probably make the video people happy (correct me if I'm wrong).
On a Rebel with a flip screen it would probably sell well.

Extrapolate that to full frame and you get over 71MP, almost double the D800...

I would guess that the resolving power of the best Canon glass would really be at its limit, if not exceeding it, but the fact is megapixels sell cameras.
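A quick Python check of that arithmetic (my own back-of-the-envelope; note that a straight 4x4 bin of 6480x4320 gives 1620x1080, which is 3:2 rather than 1080p's 1920x1080, so you'd need roughly 3.375x horizontal binning or a 16:9 crop):

w, h = 6480, 4320                  # the proposed 28MP APS-C sensor above
print(w * h / 1e6)                 # ~28.0 MP
print(w / 4, h / 4)                # 4x4 binning -> 1620 x 1080 (3:2)
print(w / 1920, h / 1080)          # ~3.375x by 4x to land exactly on 1920 x 1080
print(w * 1.6 * h * 1.6 / 1e6)     # same pixel pitch scaled to full frame -> ~71.7 MP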
 
Upvote 0
A 28MP APS-C sensor? ewwww.

I'd love their new wide-angle zoom. If it could match the MTF goodness of the new 24-70 then it's going to be one wicked zoom.

I'm going to get two 5D3's (should hopefully have one next week). That'll do me for everything I do including landscapes. Once I've got all the lenses I could possibly want, and then want to blow away some cash on a third body, I'd be mildly tempted by a 36MP full-frame monster, although at that stage I'd probably be more interested in medium-format digital.
 
Upvote 0
Gcon said:
A 28MP APS-C sensor? ewwww.

I'd love their new wide-angle zoom. If it could match the MTF goodness of the new 24-70 then it's going to be one wicked zoom.

I'm going to get two 5D3's (should hopefully have one next week). That'll do me for everything I do including landscapes. Once I've got all the lenses I could possibly want, and then want to blow away some cash on a third body, I'd be mildly tempted by a 36MP full-frame monster, although at that stage I'd probably be more interested in medium-format digital.

Hey, I'm not saying it's desirable, just possible :D
 
Upvote 0
justsomedude said:
moreorless said:
The rest of your post just seems like pointless speculation to me.

I don't know why you think it's pointless. Evaluating the competition and understanding customers' desires is what spurs innovation. And if you don't innovate, you can't succeed. Right now, the dSLR market is all about who is making the biggest advancements in digital imaging technology. And you can be damn sure Canon wants to be #1 in that arena.

At least I hope they do.

His post was based on his guesses about the internal workings of Canon, which he seemed to be treating as facts. While it's not impossible that he may be right, I don't think Westerfall's comments really provide much evidence either way; they are simply the words of corporate promotion that's not going to highlight the weaknesses of a new product.

I'd say your second point is questionable as well. In terms of drumming up interest on the net, new and exciting tech always wins (often interest from people who'll never own it, though), but on the ground there's evidence the market is reaching saturation in certain areas. In such a market I'd say that targeting weaknesses becomes more important than innovation, which is what Canon seems to be focused on with the 5D Mk III.
 
Upvote 0
please stop mixing the concepts of 4K video and a high-MP sensor: they definitely don't mix well

4K resolution is 4096x2304, so less than 10 Mpix
if you definitely have to go 3:2 aspect ratio on the sensor, it's still a bit over 11 Mpix
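the pixel-count arithmetic, as a quick Python check (same figures as above):

print(4096 * 2304 / 1e6)                  # native 4K frame: ~9.4 Mpix
print(4096 * round(4096 * 2 / 3) / 1e6)   # same 4096-pixel width on a 3:2 sensor: ~11.2 Mpix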

and in any case Canon has already said a lot of things about the 4K DSLR: it will have a full frame sensor, but only record video out of a windowed area of about APS-H size, then store that as MJPEG
to me, that sounds like it will use the sensor in the 1D X (18mpix, very fast readout times)
 
Upvote 0
Come on, come on, give us a high-megapixel camera...at the very least more than 28MP. The 5D3's 22MP is, to tell the truth, a little low and a rather awkward number; sometimes it's just not quite enough pixels for a photo. For later printing and output I want to have the final say, but Canon doesn't seem to know who to listen to, and the result is the somewhat underwhelming 5D2/3...silent... :)
 
Upvote 0
jrista said:
I believe you have made this argument before. Let's see if we can end the debate once and for all here with a little bit of math and fact.

I was referring to spatial resolution. The spatial resolution of an 18mp APS-C sensor (3456 lines @ 14.9mm high) is 116lp/mm, whereas the spatial resolution of a 22mp APS-C sensor (3820 lines @ 14.9mm high) is 128lp/mm.
Sorry, wrong already. You're assuming a monochrome sensor with no microlenses and no AA filter. All three assumptions are wrong. The real world, with Bayer masks, microlenses, and AA filters, means you have to divide those numbers by something like 1.5.
Real-world MTF tests on Canon lenses show that rather few of them are even capable of resolving 116lp/mm at around f/4-5.6 (where optical aberrations and diffraction tend to normalize in the average lens), and at f/5.6, the maximum spatial resolution physically possible is 123lp/mm, some 5lp/mm less than what a 22mp APS-C sensor would be capable of.

Also wrong. Those tests were either shot on film that can't resolve better than that or shot on digital through an AA filter. Thus, they aren't lens tests but system tests.

Also, your diffraction-limit calculation is wrong. See here:

http://en.wikipedia.org/wiki/Spatial_cutoff_frequency

"As an example, a telescope having an f/6 objective and imaging at 0.55 micrometers has a spatial cutoff frequency of 303 cycles/millimeter."

Some of our lenses are now diffraction-limited at f/2.8. That's 650 cycles/mm at 550nm.
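That figure comes straight from the incoherent cutoff formula, 1 / (wavelength x f-number); a quick Python check:

def cutoff_lp_per_mm(f_number, wavelength_mm=0.00055):
    # Spatial cutoff frequency of an ideal (diffraction-limited) lens.
    return 1.0 / (wavelength_mm * f_number)

print(cutoff_lp_per_mm(6))    # ~303 cycles/mm, the Wikipedia telescope example
print(cutoff_lp_per_mm(2.8))  # ~650 cycles/mm at f/2.8 and 550nm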

Magnification and spatial resolution are NOT the same thing. Adding on a 1.4x TC changes magnification, it does not increase spatial resolution.

It does increase system spatial resolution if you are undersampling the optics without it. That's exactly what we're doing.

If you don't believe me, go outside with any 200mm lens you like attached to a Canon 1.6-crop 18MP sensor with no TCs and see if you can get a picture of Jupiter that looks like this:
http://photos.imageevent.com/sipphoto/samplepictures/T2i__3105%20old.jpg
Or a picture of the moon that looks like this:
http://photos.imageevent.com/sipphoto/samplepictures/T2i__3054%20edited.jpg
 
Upvote 0
Lee Jay said:
jrista said:
I believe you have made this argument before. Let's see if we can end the debate once and for all here with a little bit of math and fact.

I was referring to spatial resolution. The spatial resolution of an 18mp APS-C sensor (3456 lines @ 14.9mm high) is 116lp/mm, whereas the spatial resolution of a 22mp APS-C sensor (3820 lines @ 14.9mm high) is 128lp/mm.
Sorry, wrong already. You're assuming a monochrome sensor with no microlenses and no AA filter. All three assumptions are wrong. The real world, with Bayer masks, microlenses, and AA filters, means you have to divide those numbers by something like 1.5.
Real-world MTF tests on Canon lenses show that rather few of them are even capable of resolving 116lp/mm at around f/4-5.6 (where optical aberrations and diffraction tend to normalize in the average lens), and at f/5.6, the maximum spatial resolution physically possible is 123lp/mm, some 5lp/mm less than what a 22mp APS-C sensor would be capable of.

Sure, you can't get exactly correct numbers with Bayer sensors, as they have CFAs and low-pass filters. The numbers I've listed are the theoretical maximums for green pixels, but that's largely beside the point. I'm not arguing 99% vs. 100% accuracy here, I'm just arguing about the way you seem to abuse the conversion of magnification into megapixels, and the much larger inaccuracies of doing so.

Lee Jay said:
Also wrong. Those tests were either shot on film that can't resolve better than that or shot on digital through an AA filter. Thus, they aren't lens tests but system tests.

Also, your diffraction-limit calculation is wrong. See here:

http://en.wikipedia.org/wiki/Spatial_cutoff_frequency

"As an example, a telescope having an f/6 objective and imaging at 0.55 micrometers has a spatial cutoff frequency of 303 cycles/millimeter."

That's the absolute maximum for diffraction, with the MTF at the Dawes criterion. At that level, a sensor would image nothing but smooth, solid, flat gray. Sensors need a greater separation of Airy discs for there to be enough contrast in spatial frequencies to be recorded usefully by the sensor. The human eye is, at best, JUST BARELY able to resolve detail with diffraction at 9% contrast...and that is generally too low for a digital sensor to resolve. Usually, an MTF of around 50% contrast is necessary for film or a digital sensor to resolve useful, unmuddied detail. If you have a subject with particularly low contrast, you might get away with slightly less, but as a general rule, MTF 50% is used to determine lens and sensor resolution in a spatial context.

I usually use a table from Luminous Landscape as a quick reference for spatial resolutions at acceptable contrast levels for photography:
http://www.luminous-landscape.com/tutorials/resolution.shtml

The table there is based on Norman Koren's work. If you have issues with his work, you better take it up with him, as he is highly respected when it comes to lenses, film & sensors, resolving power, sharpness, contrast, etc.

http://www.normankoren.com/Tutorials/MTF6.html

Lee Jay said:
Some of our lenses are now diffraction-limited at f/2.8. That's 650lp/mm.

LOL! Yes, a PERFECT f/2.8 lens is capable of 649lp/mm...at just infinitesimally above 0% contrast! You wouldn't get any useful detail from such a lens at that level of contrast...other than flat, solid, unbroken consistency of a single tone. The human eye can't even resolve detail if contrast is less than 9%, and at that level (well below what a Bayer CMOS or CCD sensor can resolve) you're down to 532lp/mm. Detail is still rather close to monotone at that level of contrast. At 50% contrast, which would be necessary for a sensor to resolve USEFUL detail (i.e. detail where all line pairs are resolved with enough clarity to consistently tell them apart), you are down to 247lp/mm. That is the number most people would normally use when talking about spatial resolution.
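To put numbers on the contrast argument, here is a short Python sketch using the textbook diffraction MTF for a circular aperture (an idealization, so the exact thresholds can differ by a few percent from the tabulated figures I quoted):

import math

def diffraction_mtf(lp_per_mm, f_number, wavelength_mm=0.00055):
    # Diffraction-limited MTF of an ideal circular aperture at a given spatial frequency.
    s = lp_per_mm * wavelength_mm * f_number  # frequency as a fraction of the cutoff
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

# Contrast left by diffraction alone for an ideal f/2.8 lens at 550nm:
for freq in (247, 532, 649):
    print(freq, "lp/mm ->", round(diffraction_mtf(freq, 2.8) * 100), "% contrast")
# Roughly 53% at 247lp/mm, 9% at 532lp/mm, and essentially 0% at the 649lp/mm cutoff.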

As for DSLR lenses that are actually diffraction-limited at f/2.8...there are VERY FEW. Zeiss may have a lens or two that are diffraction limited at around f/1.7...however I believe those were cinema lenses, not DSLR lenses...and highly specialized to boot. MTFs provided by Canon are THEORETICAL most of the time (I believe their book on lens technology may have included a few real MTF charts for some lenses), generated by computing optical performance with computer models of their lenses, not actual lenses. Their MTF charts depict reproduction accuracy of 10lp/mm (for contrast) and 30lp/mm (for sharpness) meridional and sagittal line pairs. Even with a relatively low f/2.8 resolution (such as 50-70lp/mm, which many Canon lenses ARE capable of) you can get very high marks when the finest detail you are resolving is 30lp/mm.

Real-world tests of Canon lenses at maximum aperture have NEVER demonstrated resolutions much above 70lp/mm, so it's highly doubtful we're getting 247lp/mm out of any Canon lenses...let alone 532, or in your rather humorous case, 650! :D

Lee Jay said:
Magnification and spatial resolution are NOT the same thing. Adding on a 1.4x TC changes magnification, it does not increase spatial resolution.

It does increase system spatial resolution if you are undersampling the optics without it. That's exactly what we're doing.

If you don't believe me, go outside with any 200mm lens you like attached to a Canon 1.6-crop 18MP sensor with no TCs and see if you can get a picture of Jupiter that looks like this:
http://photos.imageevent.com/sipphoto/samplepictures/T2i__3105%20old.jpg
Or a picture of the moon that looks like this:
http://photos.imageevent.com/sipphoto/samplepictures/T2i__3054%20edited.jpg

You're still talking about magnification, not spatial resolution. You can magnify a subject and still project it through a lens at THE SAME spatial resolution. Magnification and resolution are disjoint concepts, and as such, they can vary independently of each other. You can magnify a subject to a greater extent while also reducing spatial resolution...and you will see greater apparent detail of your larger subject...even though your actual resolving power is lower. (This is normally the case to a small degree when tacking on teleconverters...the additional optical elements each have their own optical aberrations that reduce resolving power...simple matter of physics there, until you stop down to smaller apertures...wherein the longer focal length results in a smaller effective aperture, so diffraction takes a much larger toll than without a TC.) It's like moving closer to a highly detailed 600ppi print. If you view it at 8 feet, it looks nice as a whole (i.e. looking at the moon with a 100mm lens); however, walk closer to 4 feet, and you can see finer details (i.e. looking at the moon with a 200mm lens). Just because you walked closer to the print doesn't mean your eyes are magically capable of resolving more detail, either optically or via your retina...both remain exactly the same as they are...the subject is simply larger, so given a CONSTANT spatial resolution, more detail can be observed.

This is all pretty basic physics. I recommend reading Norman Koren's work...solid stuff, should clear things up.
 
Upvote 0
NormanBates said:
please stop mixing the concepts of 4K video and a high-MP sensor: they definitely don't mix well

4K resolution is 4096x2304, so less than 10 Mpix
if you definitely have to go 3:2 aspect ratio on the sensor, it's still a bit over 11 Mpix

and in any case Canon has already said a lot of things about the 4K DSLR: it will have a full frame sensor, but only record video out of a windowed area of about APS-H size, then store that as MJPEG
to me, that sounds like it will use the sensor in the 1D X (18mpix, very fast readout times)

You are correct that native 4k 1:1 sampled video is only about 10mp. However, there IS a direct link between 4k video and high resolution sensors: how many source pixels you can sample to produce the final pixels in 4k video output. This relates to the 4:2:2 subsampling ratio you may have seen (kind of a holy grail for DSLR 4k video sampling...it produces excellent quality at very reasonable space savings). Full, uncompressed chroma would be 4:4:4 sampling (chroma sampled at full resolution for every output pixel), and that can produce very large files. A 4:2:2 subsampling ratio means that the luma channel is fully sampled, while the Cr and Cb channels are sampled at half resolution. To get full 4k chroma sampling from a Bayer sensor, you need roughly four times as many photosites as native 4k video resolution, so a sensor of around 40mp would be necessary to achieve that.

http://en.wikipedia.org/wiki/Chroma_subsampling
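As a rough Python sketch of that pixel budget (my own arithmetic, assuming a Bayer sensor where full per-pixel colour effectively comes from a 2x2 block of photosites):

out_w, out_h = 4096, 2304              # the 4k output size mentioned earlier (~9.4mp)
bayer_block = 2 * 2                    # one 2x2 Bayer block per fully sampled output pixel
print(out_w * out_h * bayer_block / 1e6)
# ~38 MP of photosites, i.e. roughly the "40mp sensor" figure above.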
 
Upvote 0
Lee Jay said:
jrista said:
GL said:
I think the key word here is FF. I can see Canon pulling a rabbit out of the hat with a 30-40MP 7D II, similar specs to the current 7D, lower high ISO IQ than any of the FF cams, but decent low-to-midrange IQ and best-in-class resolution. That way it doesn't step on any "pro" toes in terms of ultimate IQ/build, and undercuts Nikon's high-res cam by $1000 or more.

No way that'll ever happen. The 7D's core value is in its APS-C sensor...that will never change. The entire reason people buy the 7D is for the extra reach with pro-grade features. The 7D is already pretty maxed out when it comes to resolution as well with 18mp in an APS-C format. You might gain a bit more by going to 20 or 22mp, but that's going to make it really hard to get sharp shots right down to the pixel level...and you would only be able to do so at a very narrow range of apertures at the center of the lens before diffraction or optical aberrations kill you.

So, what you're saying is teleconverters are useless on a 7D (a 1.4x is like doubling pixel count in the center).

Sorry, but you are dead wrong. I've used stacked 2.8x worth of TCs on an 18MP 1.6-crop sensor with results better than just with a 2x. 2.8x^2*18=141MP.

The fact is that you will get a better result by doubling the number of pixels than by using a 1.4x converter, and quadrupling the number of pixels will be better than a 2x converter. Only if the converters were ideal could they compete with increasing the number of pixels.
 
Upvote 0
jrista said:
Lee Jay said:
jrista said:
I believe you have made this argument before. Let's see if we can end the debate once and for all here with a little bit of math and fact.

I was referring to spatial resolution. The spatial resolution of an 18mp APS-C sensor (3456 lines @ 14.9mm high) is 116lp/mm, whereas the spatial resolution of a 22mp APS-C sensor (3820 lines @ 14.9mm high) is 128lp/mm.
Sorry, wrong already. You're assuming a monochrome sensor with no microlenses and no AA filter. All three assumptions are wrong. The real world, with Bayer masks, microlenses, and AA filters, means you have to divide those numbers by something like 1.5.
Real-world MTF tests on Canon lenses show that rather few of them are even capable of resolving 116lp/mm at around f/4-5.6 (where optical aberrations and diffraction tend to normalize in the average lens), and at f/5.6, the maximum spatial resolution physically possible is 123lp/mm, some 5lp/mm less than what a 22mp APS-C sensor would be capable of.

Sure, you can't get exactly correct numbers with Bayer sensors, as they have CFAs and low-pass filters. The numbers I've listed are the theoretical maximums for green pixels, but that's largely beside the point. I'm not arguing 99% vs. 100% accuracy here, I'm just arguing about the way you seem to abuse the conversion of magnification into megapixels, and the much larger inaccuracies of doing so.

Lee Jay said:
Also wrong. Those tests were either shot on film that can't resolve better than that or shot on digital through an AA filter. Thus, they aren't lens tests but system tests.

Also, your diffraction-limit calculation is wrong. See here:

http://en.wikipedia.org/wiki/Spatial_cutoff_frequency

"As an example, a telescope having an f/6 objective and imaging at 0.55 micrometers has a spatial cutoff frequency of 303 cycles/millimeter."

That's the absolute maximum for diffraction, with the MTF at the Dawes criterion. At that level, a sensor would image nothing but smooth, solid, flat gray. Sensors need a greater separation of Airy discs for there to be enough contrast in spatial frequencies to be recorded usefully by the sensor. The human eye is, at best, JUST BARELY able to resolve detail with diffraction at 9% contrast...and that is generally too low for a digital sensor to resolve. Usually, an MTF of around 50% contrast is necessary for film or a digital sensor to resolve useful, unmuddied detail. If you have a subject with particularly low contrast, you might get away with slightly less, but as a general rule, MTF 50% is used to determine lens and sensor resolution in a spatial context.

I usually use a table from Luminous Landscape as a quick reference for spatial resolutions at acceptable contrast levels for photography:
http://www.luminous-landscape.com/tutorials/resolution.shtml

The table there is based on Norman Koren's work. If you have issues with his work, you better take it up with him, as he is highly respected when it comes to lenses, film & sensors, resolving power, sharpness, contrast, etc.

http://www.normankoren.com/Tutorials/MTF6.html

Lee Jay said:
Some of our lenses are now diffraction-limited at f/2.8. That's 650lp/mm.

LOL! Yes, a PERFECT f/2.8 lens is capable of 649lp/mm...at just infinitesimally above 0% contrast! You wouldn't get any useful detail from such a lens at that level of contrast...other than flat, solid, unbroken consistency of a single tone. The human eye can't even resolve detail if contrast is less than 9%, and at that level (well below what a Bayer CMOS or CCD sensor can resolve) you're down to 532lp/mm. Detail is still rather close to monotone at that level of contrast. At 50% contrast, which would be necessary for a sensor to resolve USEFUL detail (i.e. detail where all line pairs are resolved with enough clarity to consistently tell them apart), you are down to 247lp/mm. That is the number most people would normally use when talking about spatial resolution.

As for DSLR lenses that are actually diffraction-limited at f/2.8...there are VERY FEW. Zeiss may have a lens or two that are diffraction limited at around f/1.7...however I believe those were cinema lenses, not DSLR lenses...and highly specialized to boot. MTFs provided by Canon are THEORETICAL most of the time (I believe their book on lens technology may have included a few real MTF charts for some lenses), generated by computing optical performance with computer models of their lenses, not actual lenses. Their MTF charts depict reproduction accuracy of 10lp/mm (for contrast) and 30lp/mm (for sharpness) meridional and sagittal line pairs. Even with a relatively low f/2.8 resolution (such as 50-70lp/mm, which many Canon lenses ARE capable of) you can get very high marks when the finest detail you are resolving is 30lp/mm.

Real-world tests of Canon lenses at maximum aperture have NEVER demonstrated resolutions much above 70lp/mm, so it's highly doubtful we're getting 247lp/mm out of any Canon lenses...let alone 532, or in your rather humorous case, 650! :D

Lee Jay said:
Magnification and spatial resolution are NOT the same thing. Adding on a 1.4x TC changes magnification, it does not increase spatial resolution.

It does increase system spatial resolution if you are undersampling the optics without it. That's exactly what we're doing.

If you don't believe me, go outside with any 200mm lens you like attached to a Canon 1.6-crop 18MP sensor with no TCs and see if you can get a picture of Jupiter that looks like this:
http://photos.imageevent.com/sipphoto/samplepictures/T2i__3105%20old.jpg
Or a picture of the moon that looks like this:
http://photos.imageevent.com/sipphoto/samplepictures/T2i__3054%20edited.jpg

You're still talking about magnification, not spatial resolution. You can magnify a subject and still project it through a lens at THE SAME spatial resolution. Magnification and resolution are disjoint concepts, and as such, they can vary independently of each other. You can magnify a subject to a greater extent while also reducing spatial resolution...and you will see greater apparent detail of your larger subject...even though your actual resolving power is lower. (This is normally the case to a small degree when tacking on teleconverters...the additional optical elements each have their own optical aberrations that reduce resolving power...simple matter of physics there, until you stop down to smaller apertures...wherein the longer focal length results in a smaller effective aperture, so diffraction takes a much larger toll than without a TC.) It's like moving closer to a highly detailed 600ppi print. If you view it at 8 feet, it looks nice as a whole (i.e. looking at the moon with a 100mm lens); however, walk closer to 4 feet, and you can see finer details (i.e. looking at the moon with a 200mm lens). Just because you walked closer to the print doesn't mean your eyes are magically capable of resolving more detail, either optically or via your retina...both remain exactly the same as they are...the subject is simply larger, so given a CONSTANT spatial resolution, more detail can be observed.

This is all pretty basic physics. I recommend reading Norman Koren's work...solid stuff, should clear things up.

Interesting information
http://forums.dpreview.com/forums/read.asp?forum=1019&message=35719448
 
Upvote 0
Tuggem said:

If you are referring to the discussion about how the 50/1.4 is "sharp" in the corners of a 5D II, that's not surprising. The 5D II, given its pixel pitch, only resolves about 72lp/mm (not accounting for things like the low-pass filter, which probably pushes that down to 70lp/mm or so). According to DXO's wide-aperture lens tests, the Sigma 50mm f/1.4 resolves 60lp/mm at f/1.4 on a 5D II (and at most 63lp/mm in their best test case). It is not surprising that, at that level of resolving power, the Sigma 50/1.4 appears sharp (it's resolving just about as much as the sensor can handle). In the grand scheme of things, 60lp/mm is only about 12% of the resolution that a perfect f/1.4 lens is capable of resolving...that should give you an idea of how much potential room for improvement there is for lenses at very wide apertures (and how hard it really is to achieve perfection in the face of overpowering optical aberrations).

As for the comment about 300mp in FF to show pixel softness center-frame...I'm rather skeptical of that. I'll have to see if I can find a real-world MTF that actually indicates center-frame sharpness is that high for that lens. If it were possible, it would be reached only at the VERY CENTER pinpoint of the lens, and it would likely fall off rapidly from there due to optical aberrations. It's highly unlikely a 286mp APS-C sensor would be sharp at a pixel level...the pinpoint center lens resolution is only the absolute maximum, and the bulk of a 286mp APS-C photo would...when pixel peeping...appear extremely soft. Measuring resolution across the center area of the lens is more useful...and tends to be between 50 and 70 line pairs/millimeter for most wide, fast lenses. When you move into telephoto territory, where the widest apertures are f/2.8, and supertelephoto territory, where the widest apertures are f/4 or f/5.6, diffraction limits your resolution from the get-go, even though it's easier to get closer to perfection with such lenses. Assuming you did have a "perfect" f/4 lens, you would only realize that much resolution at a single aperture. Stop down to f/5.6 and you can't resolve more than about 22mp APS-C worth, and beyond that, current sensors are already out-resolving the lenses.
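A rough Python check of the 12% and 22mp figures above (my own sketch; it uses the common rule of thumb that diffraction-limited MTF falls to 50% at about 0.4x the cutoff frequency, so treat the exact numbers loosely):

def mtf50_lp_per_mm(f_number, wavelength_mm=0.00055):
    # Diffraction-limited MTF drops to ~50% at roughly 0.4x the cutoff frequency.
    return 0.4 / (wavelength_mm * f_number)

def aps_c_megapixels_at(lp_per_mm, width_mm=22.3, height_mm=14.9):
    # APS-C pixel count whose Nyquist limit (2 pixels per line pair) matches a given lp/mm.
    return (2 * lp_per_mm * width_mm) * (2 * lp_per_mm * height_mm) / 1e6

limit_f56 = mtf50_lp_per_mm(5.6)
print(round(limit_f56), "lp/mm at f/5.6 ->", round(aps_c_megapixels_at(limit_f56)), "MP APS-C")
print(round(60.0 / mtf50_lp_per_mm(1.4) * 100), "% of a perfect f/1.4 lens at MTF50")
# Roughly 130 lp/mm -> ~22 MP APS-C at f/5.6, and 60 lp/mm is ~12% of a perfect f/1.4 lens.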
 
Upvote 0
Kahuna said:
wickidwombat said:
Kahuna said:
I wonder what life would be like if Steve Jobs was involved with DSLRs?
then we wouldn't be able to save photos, only view them on iCloud at $1 per view

Ouch. Just came crashing back down to earth. I'll think before I post something so stupid next time.

You are correct sir.

Don't worry, I enjoyed that one anyway. I was thinking he's probably too caught up arguing with St Peter about the Pearly Gates having rounded corners to have much time to take pictures. He'll have to make the case by himself, all the lawyers are in the other place.
 
Upvote 0
jrista said:
I'm not arguing 99% vs. 100% accuracy here,

I was talking about more like 67% versus 100%, which is more like a factor of 2 in pixel count. You're ignoring that.

The table there is based on Norman Koren's work. If you have issues with his work, you better take it up with him, as he is highly respected when it comes to lenses, film & sensors, resolving power, sharpness, contrast, etc.

I've read, created spreadsheets from, and quoted Norman's work many times. What I'm saying is entirely consistent with that work.

Real-world tests of Canon lenses at maximum aperture have NEVER demonstrated resolutions much above 70lp/mm,

Shot through an Optical Low-Pass Filter! Do you not see the difficulty in that approach?

Magnification and spatial resolution are NOT the same thing. Adding on a 1.4x TC changes magnification, it does not increase spatial resolution.

It does increase system spatial resolution if you are undersampling the optics without it. That's exactly what we're doing.

If you don't believe me, go outside with any 200mm lens you like attached to a Canon 1.6-crop 18MP sensor with no TCs and see if you can get a picture of Jupiter that looks like this:
http://photos.imageevent.com/sipphoto/samplepictures/T2i__3105%20old.jpg
Or a picture of the moon that looks like this:
http://photos.imageevent.com/sipphoto/samplepictures/T2i__3054%20edited.jpg
You're still talking about magnification, not spatial resolution.

Fine - show me images with the same spatial resolution as those, shot the way I said. I'll help you out - you can't. I've already done this experiment, and the teleconverters do indeed drastically improve the overall system spatial resolution despite very slightly decreasing the optical resolution. This is exactly why we need more pixels, and a whole lot more - so we aren't undersampling the optics in the first place.

Have you ever wondered why the best amateur planetary imagers use pixels that are about the size of those on the 40D with optics set at f/30? According to you, they're way, way beyond the capability of those optics, yet they increased focal length to that level in an effort to preserve maximum detail. Why would they use expensive Barlows (TeleVue Powermates) if those small pixels were extracting all the detail from their bare f/11 optics in the first place? Answer - they don't. And that's with monochrome sensors with no OLPFs!!!
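The sampling arithmetic behind that practice, as a quick Python sketch (my assumptions: roughly 5.7 micron photosites for 40D-class pixels, green light at 0.55 micron):

pixel_um, wavelength_um = 5.7, 0.55

sensor_nyquist = 1000.0 / (2 * pixel_um)           # lp/mm the sensor can actually record
f_to_match_cutoff = 2 * pixel_um / wavelength_um   # f-ratio where the diffraction cutoff equals Nyquist

print(round(sensor_nyquist), "lp/mm Nyquist, cutoff matched at about f/", round(f_to_match_cutoff, 1))
# ~88 lp/mm and about f/21: at much faster f-ratios these pixels undersample the optics,
# which is why planetary imagers push to around f/30 to be safely oversampled.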

Have a look. This was shot at about f/30 with pixels that are about 40D sized:

http://damianpeach.com/barbados10/2010_09_12pic.jpg
 
Upvote 0
KitH said:
Kahuna said:
wickidwombat said:
Kahuna said:
I wonder what life would be like if Steve Jobs was involved with DSLRs?
then we wouldn't be able to save photos, only view them on iCloud at $1 per view

Ouch. Just came crashing back down to earth. I'll think before I post something so stupid next time.

You are correct sir.

Don't worry, I enjoyed that one anyway. I was thinking he's probably too caught up arguing with St Peter about the Pearly Gates having rounded corners to have much time to take pictures. He'll have to make the case by himself, all the lawyers are in the other place.

LOL
Actually I think you'll find the Pearly Gates are now a svelte brushed aluminium with a cool white glowing logo,
and St Peter now wears a blue t-shirt that says "Genius".
 
Upvote 0