This is pretty straightforward. The Canon LENS is better than the Nikon LENS. They got the same LENS score. Yep, pretty much sums it up.
Pi said:
jrista said:
Simple fact of the matter is, a better lens will perform better on ALL sensors: 20MP, 30MP, or 50MP. The problem with DXO's tests is they quite simply don't give you a reasonable camera-agnostic basis from which to compare lenses.
Actually, they do. There is a way to extract the pure lens resolution from the data they used to publish (full MTF curves, not the nonsense they publish now).
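Concretely, the extraction Pi has in mind can be sketched under one big assumption: the usual linear model, in which the system MTF factors as MTF_system(f) = MTF_lens(f) * MTF_sensor(f), so a modeled sensor MTF can be divided out. A minimal Python sketch; the sensor model (square pixel aperture plus Gaussian AA filter) and every parameter value are illustrative assumptions, not DXO's published procedure:

```python
import numpy as np

# A minimal sketch of the factorization being discussed, assuming
# MTF_system(f) = MTF_lens(f) * MTF_sensor(f). The sensor model and all
# parameter values are assumptions for illustration.

def sensor_mtf(f, pitch_mm=0.0043, aa_sigma_mm=0.002):
    """Modeled sensor MTF: |sinc| from the square pixel aperture times a
    Gaussian for the AA filter. f is in cycles/mm."""
    aperture = np.abs(np.sinc(f * pitch_mm))  # np.sinc(x) = sin(pi*x)/(pi*x)
    aa = np.exp(-2.0 * (np.pi * aa_sigma_mm * f) ** 2)
    return aperture * aa

def estimate_lens_mtf(f, mtf_system):
    """Divide the measured system MTF by the modeled sensor MTF.
    Mask the region where the sensor MTF is tiny: there the division
    amplifies noise, which is the instability objected to below."""
    s = sensor_mtf(f)
    lens = np.full_like(mtf_system, np.nan)
    ok = s > 0.05                      # assumed noise-floor threshold
    lens[ok] = mtf_system[ok] / s[ok]
    return lens

f = np.linspace(0.1, 120.0, 400)                        # cycles/mm
true_lens = np.exp(-(f / 90.0) ** 2)                    # made-up lens MTF
mtf_measured = sensor_mtf(f) * true_lens                # fake "measurement"
print(np.nanmax(np.abs(estimate_lens_mtf(f, mtf_measured) - true_lens)))
```

The `ok` mask is where the two sides of this exchange actually meet: the recovery is well conditioned only while the modeled sensor MTF stays well above the noise floor.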
Pi said:
The Nikon 500/4 performs "on par" (tongue in cheek) with the Canon 500/4 solely because of the higher resolution sensor. That sort of tells you that the Canon lens is particularly good, because it is performing so well on a worse sensor...but you don't really have any exact way of comparing. You only get a "feeling" that it performs so well.
Why in the world would you want to know how a Canon compares to a Nikon without a body? For bragging rights? They tell you what is achievable with the current bodies on which the lens works, the way it is designed to work. A better lens on one body will be better on future bodies as well.
jrista said:
Pi said:
jrista said:
Simple fact of the matter is, a better lens will perform better on ALL sensors: 20MP, 30MP, or 50MP. The problem with DXO's tests is they quite simply don't give you a reasonable camera-agnostic basis from which to compare lenses.
Actually, they do. There is a way to extract the pure lens resolution from the data they used to publish (full MTF curves, not the nonsense they publish now).
Umm, no...sorry. The final image is a convolved result...one could not extract a "pure" lens resolution...you could only approximate it. (For the very same reason one cannot perfectly extract noise from a noisy image...it is part of a convolution produced by a complex real-world system. Too much uncertainty and a loss of information prevents perfect noise removal.)
One wouldn't, necessarily. But you're missing the point. The point is to call out DXO's BS approach to performing lens tests. The point is to clearly note that those tests are "camera system" tests...they are neither lens tests nor sensor tests. I wouldn't go so far as to say it is 100% useless, but it is certainly biased the way DXO does it, and there is a suspiciously long-term bias towards a particular manufacturer by DXO. (Not just away from Canon, either...even the Sony lens, which actually has better transmission, should have scored better...but it was limited by a sensor!)
Pi said:
But the numbers are meaningful; or should I say, were meaningful, before they decided that we are too stupid to understand what MTF means and switched to an undocumented metric.
Pi said:
But a better lens on, say, the 5D3 will perform better on the 5D4; you know that for sure. Again, where is the problem exactly?
Pi said:
jrista said:
Pi said:
jrista said:
Simple fact of the matter is, a better lens will perform better on ALL sensors: 20MP, 30MP, or 50MP. The problem with DXO's tests is they quite simply don't give you a reasonable camera-agnostic basis from which to compare lenses.
Actually, they do. There is a way to extract the pure lens resolution from the data they used to publish (full MTF curves, not the nonsense they publish now).
Umm, no...sorry. The final image is a convolved result...one could not extract a "pure" lens resolution...you could only approximate it. (For the very same reason one cannot perfectly extract noise from a noisy image...it is part of a convolution produced by a complex real-world system. Too much uncertainty and a loss of information prevents perfect noise removal.)
You are wrong on that. I am not saying that you can remove the AA filter/sensor blur from the image. I am saying that you can find (estimate, if you wish) the strength of the sensor blur. If you are interested in the math, go to my profile, click on the link, etc. Deconvolution is a very different process, very unstable, but you do not need to deconvolve to estimate the effect of the sensor blur. You get instability only if you use sensors with such low resolution that the lenses you want to compare look the same (and they are not).
The problem with all that is that even if you are going to get the pure lens resolution somehow, you still need to consider the blurring effect of a future sensor, and compute the combined resolution again. So my question stands: are you sure you know how to do that?
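Under the same assumed model as the earlier sketch, Pi's follow-up step is equally mechanical: multiply the estimated lens MTF by the new sensor's modeled MTF and read off a summary figure. A hypothetical sketch; the pixel pitches, the bare-sinc sensor model, and the MTF50 criterion are all illustrative choices, not anyone's published method:

```python
import numpy as np

def sensor_mtf(f, pitch_mm):
    # square pixel aperture only; AA filter omitted for brevity (an assumption)
    return np.abs(np.sinc(f * pitch_mm))

def system_mtf50(f, lens_mtf, pitch_mm):
    """Combine a lens MTF with a modeled sensor and return the frequency
    (cycles/mm) where the combined MTF first drops below 0.5."""
    combined = lens_mtf * sensor_mtf(f, pitch_mm)
    below = np.nonzero(combined < 0.5)[0]
    return f[below[0]] if below.size else f[-1]

f = np.linspace(0.1, 200.0, 2000)          # cycles/mm
lens = np.exp(-(f / 110.0) ** 1.5)         # made-up lens MTF curve
for pitch_um in (6.25, 4.1):               # a current and a denser "future" pitch
    print(pitch_um, system_mtf50(f, lens, pitch_um / 1000.0))
```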
Pi said:
One wouldn't, necessarily. But you're missing the point. The point is to call out DXO's BS approach to performing lens tests. The point is to clearly note that those tests are "camera system" tests...they are neither lens tests nor sensor tests. I wouldn't go so far as to say it is 100% useless, but it is certainly biased the way DXO does it, and there is a suspiciously long-term bias towards a particular manufacturer by DXO. (Not just away from Canon, either...even the Sony lens, which actually has better transmission, should have scored better...but it was limited by a sensor!)
Of course those are lens+camera tests, and DXO never said otherwise.
DxOMark's comprehensive camera lens test result database allows you to browse and select lenses for comparison based on its characteristics, brand, type, focal range, aperture and price.
jrista said:
It doesn't matter what kind of sensor you have: low resolution, high resolution, or tomorrow's resolution. A convolved result is a convolved result, and in this case stability (or the lack thereof) doesn't really apply like it might when trying to denoise or deblur.
You are talking about reverse engineering the actual lens PSF from an image produced by a grid of spatially incongruent red, green, and blue pixels (likely covered by additional microlenses), then further interpolated by software to produce the kind of RGB color pixels we see on a screen and analyze with tools like Imatest (or DXO's software).
Your article is an interesting start, but you are assuming a Gaussian PSF. An actual PSF is most definitely not Gaussian, nor is it constant across the field (i.e. it changes as you leave the center and approach the corners...do a search for "spot diagram" to see actual lens PSFs produced mathematically from detailed and accurate lens specifications...even for the best of lenses, outside of the most central on-axis results, a PSF can be wildly complicated).
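The Gaussian objection is easy to make concrete. Even in the friendliest possible case (a perfect, diffraction-limited lens on axis) the MTF has a closed form that no Gaussian reproduces, and aberrated off-axis PSFs only deviate further. A small sketch with assumed wavelength and f-number:

```python
import numpy as np

# Even a perfect, diffraction-limited lens has a distinctly non-Gaussian
# MTF. Wavelength and f-number below are assumed example values.
lam_mm = 550e-6                         # 550 nm, in mm
N = 4.0                                 # f/4
f_cut = 1.0 / (lam_mm * N)              # diffraction cutoff, ~455 cycles/mm

f = np.linspace(0.0, f_cut, 500)
x = f / f_cut
mtf_diff = (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x * x))

# Gaussian matched at MTF50: exp(-ln2 * (f/f50)^2) == 0.5**((f/f50)**2)
f50 = f[np.argmin(np.abs(mtf_diff - 0.5))]
mtf_gauss = 0.5 ** ((f / f50) ** 2)
print(np.max(np.abs(mtf_diff - mtf_gauss)))   # worst-case mismatch of the fit
```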
Not to mention the fact that you have to guess the kernel in the first place, so whatever your result, it is immediately biased by what you think the lens is capable of.
Personally, I wouldn't trust any site that provided "lens resolution" results reverse engineered from an image produced by any sensor. I would actually rather take the "camera system" tests than have someone telling me what their best guess is for lens performance.
Pi said:
This is nitpicking. Try to get the resolution numbers. You cannot get them without choosing a body, and the results are always displayed with the body well visible. Their articles are poorly written, but it is not rocket science to realize what you are looking at.

Hmm, DXO's own description on the lens tests page begs to differ:
DxOMark's comprehensive camera lens test result database allows you to browse and select lenses for comparison based on its characteristics, brand, type, focal range, aperture and price.
ankorwatt said:
Well, as I have declared before in many discussions here at CR, a real MTF test from the Hasselblad MTF lab shows that there is no significant difference between the Nikon super teles and Canon's, even though Canon uses fluorite elements compared to Nikon's super ED glass.
http://www.dxomark.com/index.php/Publications/DxOMark-Reviews/Nikon-AF-S-Nikkor-300mm-and-400mm-f-2.8G-ED-VR-lens-reviews-legendary-performers-in-the-range
And the 200/2.0 from Nikon: I personally rank this lens as better than the 200/1.8 and 2.0 from Canon, and so does http://www.lenstip.com/index.html?test=obiektywu&test_ob=325
http://www.dxomark.com/index.php/Publications/DxOMark-Reviews/Nikon-AF-S-Nikkor-200mm-f-2.0G-ED-VR-II-lens-review/Nikkor-AF-S-Nikkor-200mm-f-2G-ED-VR-II-versus-competition
ankorwatt said:
msm said:
ankorwatt said:
Well, as I have declared before in many discussions here at CR, a real MTF test from the Hasselblad MTF lab shows that there is no significant difference between the Nikon super teles and Canon's, even though Canon uses fluorite elements compared to Nikon's super ED glass.
http://www.dxomark.com/index.php/Publications/DxOMark-Reviews/Nikon-AF-S-Nikkor-300mm-and-400mm-f-2.8G-ED-VR-lens-reviews-legendary-performers-in-the-range
And the 200/2.0 from Nikon: I personally rank this lens as better than the 200/1.8 and 2.0 from Canon, and so does http://www.lenstip.com/index.html?test=obiektywu&test_ob=325
http://www.dxomark.com/index.php/Publications/DxOMark-Reviews/Nikon-AF-S-Nikkor-200mm-f-2.0G-ED-VR-II-lens-review/Nikkor-AF-S-Nikkor-200mm-f-2G-ED-VR-II-versus-competition
I'll take this page showing real ISO crops over some MTF lab test any day when I'm buying a lens to take pictures with:
http://www.the-digital-picture.com/Reviews/ISO-12233-Sample-Crops.aspx?Lens=458&Camera=453&Sample=0&FLI=0&API=0&LensComp=648&CameraComp=0&FLIComp=0&APIComp=0
I'll worry about the "real MTF" test next time I buy a lens to take to the lab.
That's not a real MTF test of the lens; it is the same way LensTip, Photozone, and the others measure lenses.
msm said:
Never said it was; I just tried to make the point that MTF does not tell the entire story, and I prefer equipment that produces the best images to my eyes and couldn't care less about numbers from a lab. And the Nikon 200 f/2 looks softer in the corner compared to the Canon on those actual images.
Pi said:
msm said:
Never said it was; I just tried to make the point that MTF does not tell the entire story, and I prefer equipment that produces the best images to my eyes and couldn't care less about numbers from a lab. And the Nikon 200 f/2 looks softer in the corner compared to the Canon on those actual images.
Not that I disagree with you about the visual evidence vs. the lab results... but if your goal is to get the whole PSF (the image of an ideal point), then the MTF is essentially giving you its Fourier transform, and from there you get the PSF in a direct and stable way. This does not work in such a straightforward way when the PSF is too concentrated near a single pixel. Still, the whole MTF is in some sense the "complete data" that DXO is hiding from us.
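Pi's "direct and stable" step can be sketched in a few lines, with the caveat built in: the MTF is only the modulus of the optical transfer function, so recovering the PSF this way silently assumes zero phase, i.e. a symmetric blur. Off axis, where coma and the like make the PSF asymmetric, that assumption fails. A minimal illustration with a made-up MTF curve:

```python
import numpy as np

# For a symmetric PSF the optical transfer function is real and equals
# the MTF, so an inverse FFT of the MTF recovers the line spread
# function directly. With an asymmetric PSF (coma, etc.) the phase is
# lost in the MTF and this shortcut fails.
n = 256
f = np.fft.rfftfreq(n, d=0.005)          # 5 um sample spacing -> cycles/mm
mtf = np.exp(-(f / 80.0) ** 2)           # stand-in "measured" MTF curve

lsf = np.fft.irfft(mtf, n=n)             # zero phase assumed
lsf = np.fft.fftshift(lsf)               # center the recovered peak
print(lsf.argmax(), round(lsf.max(), 4)) # peak lands at the center bin
```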
msm said:
Pi said:
msm said:
Never said it was; I just tried to make the point that MTF does not tell the entire story, and I prefer equipment that produces the best images to my eyes and couldn't care less about numbers from a lab. And the Nikon 200 f/2 looks softer in the corner compared to the Canon on those actual images.
Not that I disagree with you about the visual evidence vs. the lab results... but if your goal is to get the whole PSF (the image of an ideal point), then the MTF is essentially giving you its Fourier transform, and from there you get the PSF in a direct and stable way. This does not work in such a straightforward way when the PSF is too concentrated near a single pixel. Still, the whole MTF is in some sense the "complete data" that DXO is hiding from us.
It's been ages since I had Fourier analysis, but as far as I recall that theory is based on point sampling. An image sensor uses area sampling. Has it been shown that Fourier analysis still applies?
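The textbook answer (sketched below, not a proof) is that area sampling is point sampling in disguise: integrating over a pixel of width p equals convolving the image with a box of width p and then point-sampling at the pixel centers. Fourier analysis therefore still applies, with an extra |sinc| factor folded into the pre-sampling blur. Pixel width below is an assumed example value:

```python
import numpy as np

# The box-aperture contribution to the pre-sampling MTF: area sampling =
# convolution with a box of width p, then classical point sampling.
p = 0.005                                 # 5 um pixel width, assumed
f = np.linspace(0.0, 1.0 / p, 200)        # up to the sampling frequency
aperture_mtf = np.abs(np.sinc(f * p))     # np.sinc(x) = sin(pi*x)/(pi*x)
print(aperture_mtf[0], aperture_mtf[-1])  # 1.0 at DC, ~0 at f = 1/p
```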
Mika said:
As far as I know, digital image sensors are a bit more complicated than the classical sampling theorem would predict. First of all, it is important to understand that the captured image is a three-dimensional signal (x, y, and intensity) and how the eye sees it.
Using the classical sampling theorem, the maximum resolvable frequency could be found by taking the inverse of (2*pixel pitch), which leads to the Nyquist cut-off frequency. However, this is not the case, as in the measurements the image sensor tends to see further, as explained in [1] and published in [2].
As a short version, if one is able to align the pixel array exactly in the direction of the bar patterns, the classical Nyquist frequency holds. However, it is very difficult to do this, and thus what is actually seen is a result of sub-pixel sampling, which is then averaged by the eye and interpreted as a distinguishable bar. If one were to take only a single line of the image, I'm not sure the result in that case would be classified as distinguishable.
Add on top of that the question of whether we want to represent the actual shape of the subject at the maximum resolvable frequency even when it lands between the pixels, and it can be seen that there can be a need for three to five times oversampling. Unfortunately I don't have a good link to show this; I'll try to look for it and post it if I can find it. However, this tends to be a way of selling more pixels, too.
EDIT: Ah, found it, the PDF was by Andor [3]. What I want to say with all this is that it is actually not that well defined what is meant by "resolving something" with image sensors.
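For concreteness, the 1/(2*pitch) figure and the 3-5x oversampling margin above, worked through for an assumed 4.3 µm pitch (an illustrative value, not a specific camera):

```python
# Classical figures from the pixel pitch; 4.3 um is an assumed example.
pitch_mm = 0.0043
f_sampling = 1.0 / pitch_mm            # ~233 samples/mm
f_nyquist = f_sampling / 2.0           # ~116 cycles/mm, the classical limit
# with a 3-5x oversampling margin, shape is rendered faithfully only up to:
print(f_nyquist, f_sampling / 3.0, f_sampling / 5.0)
```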
Pi said:
Mika said:
As far as I know, digital image sensors are a bit more complicated than the classical sampling theorem would predict. First of all, it is important to understand that the captured image is a three-dimensional signal (x, y, and intensity) and how the eye sees it.
Using the classical sampling theorem, the maximum resolvable frequency could be found by taking the inverse of (2*pixel pitch), which leads to the Nyquist cut-off frequency. However, this is not the case, as in the measurements the image sensor tends to see further, as explained in [1] and published in [2].
As a short version, if one is able to align the pixel array exactly in the direction of the bar patterns, the classical Nyquist frequency holds. However, it is very difficult to do this, and thus what is actually seen is a result of sub-pixel sampling, which is then averaged by the eye and interpreted as a distinguishable bar. If one were to take only a single line of the image, I'm not sure the result in that case would be classified as distinguishable.
Add on top of that the question of whether we want to represent the actual shape of the subject at the maximum resolvable frequency even when it lands between the pixels, and it can be seen that there can be a need for three to five times oversampling. Unfortunately I don't have a good link to show this; I'll try to look for it and post it if I can find it. However, this tends to be a way of selling more pixels, too.
EDIT: Ah, found it, the PDF was by Andor [3]. What I want to say with all this is that it is actually not that well defined what is meant by "resolving something" with image sensors.
Those links have nothing to do with the sampling theorem. The latter does not care whether you image bars, etc.; it tells you how to sample an a priori band limited signal (the bars are NOT that) and how to reconstruct it. The modification I mentioned is simple and must have been done by somebody already. In short, if your image is band limited already (this is what the AA filter does, together with the lens) and you have a good estimate of what that limit is, you know how many pixels you need.
Do not confuse a convenient resolution test (bars) with the sampling theorem.
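Pi's distinction is worth pinning down: the sampling theorem promises exact reconstruction of a signal that is band limited *before* sampling (a bar target is not), via Whittaker-Shannon interpolation. A small numerical check with assumed example numbers; the finite sum truncates the ideal infinite interpolation, so agreement is close rather than exact:

```python
import numpy as np

# Whittaker-Shannon reconstruction of a band limited signal from its
# samples. Band limit, test frequency, and sample count are assumptions.
B = 50.0                                        # band limit, cycles/mm
dx = 1.0 / (2.0 * B)                            # sample at the Nyquist rate
n = np.arange(-2000, 2001)
samples = np.cos(2.0 * np.pi * 37.0 * n * dx)   # 37 cycles/mm < B

def reconstruct(x):
    # sum of samples weighted by shifted sinc kernels
    return np.sum(samples * np.sinc((x - n * dx) / dx))

x0 = 0.0123                                     # an off-grid position, in mm
print(reconstruct(x0), np.cos(2.0 * np.pi * 37.0 * x0))
```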
Mika said:
Pi said:
Mika said:
As far as I know, digital image sensors are a bit more complicated than the classical sampling theorem would predict. First of all, it is important to understand that the captured image is a three-dimensional signal (x, y, and intensity) and how the eye sees it.
Using the classical sampling theorem, the maximum resolvable frequency could be found by taking the inverse of (2*pixel pitch), which leads to the Nyquist cut-off frequency. However, this is not the case, as in the measurements the image sensor tends to see further, as explained in [1] and published in [2].
As a short version, if one is able to align the pixel array exactly in the direction of the bar patterns, the classical Nyquist frequency holds. However, it is very difficult to do this, and thus what is actually seen is a result of sub-pixel sampling, which is then averaged by the eye and interpreted as a distinguishable bar. If one were to take only a single line of the image, I'm not sure the result in that case would be classified as distinguishable.
Add on top of that the question of whether we want to represent the actual shape of the subject at the maximum resolvable frequency even when it lands between the pixels, and it can be seen that there can be a need for three to five times oversampling. Unfortunately I don't have a good link to show this; I'll try to look for it and post it if I can find it. However, this tends to be a way of selling more pixels, too.
EDIT: Ah, found it, the PDF was by Andor [3]. What I want to say with all this is that it is actually not that well defined what is meant by "resolving something" with image sensors.
Those links have nothing to do with the sampling theorem. The latter does not care whether you image bars, etc.; it tells you how to sample an a priori band limited signal (the bars are NOT that) and how to reconstruct it. The modification I mentioned is simple and must have been done by somebody already. In short, if your image is band limited already (this is what the AA filter does, together with the lens) and you have a good estimate of what that limit is, you know how many pixels you need.
Do not confuse a convenient resolution test (bars) with the sampling theorem.
There is a misunderstanding somewhere here; to me it sounds like we are talking about different things or using different terms. I'm well aware of the different nature of the problem described in [3]. However, what I meant to say with that is related to your earlier PSF considerations: when characterizing the PSF, the energy in the typical photographic objective's spot falls within a region of 1-3 camera-body pixels, with a central core of the energy (something like 80%) in a single pixel.
So in that case, you would be quite subject to errors in estimating the PSF due to the effect shown in [3]. And you really don't know the PSF beforehand. Only near the edges of the image (or with fast lenses) may the PSF become large enough to be sampled well by the camera sensor. If you use a different bench for estimating the PSF with magnification, you then lose the effect of the AA filter as well.
Also, the photographic objective MTF isn't typically evaluated from a PSF (I haven't seen this used in many places) but from an edge or line spread function, which allows sub-pixel sampling and is more robust against positioning with respect to the sampling grid. Astronomical telescopes may be a different matter; I don't have experience in designing them.
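For readers following along, the edge-spread route Mika describes compresses to a few lines once the hard parts (slant-angle estimation and projection into sub-pixel bins) are assumed already done. This sketch starts from a synthetic 4x-oversampled edge profile, which is an assumption, not a measurement; real slanted-edge implementations (e.g. ISO 12233) do considerably more:

```python
import numpy as np

# Edge spread function -> line spread function -> MTF, assuming the
# projection/binning step has produced a 4x oversampled edge profile.
oversample = 4
x = np.arange(-64, 64) / oversample             # position in pixels
esf = 0.5 * (1.0 + np.tanh(x / 0.7))            # synthetic edge profile

lsf = np.gradient(esf)                          # derivative of ESF -> LSF
lsf *= np.hanning(lsf.size)                     # window to tame the ends
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                   # normalize to 1 at DC

freq = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)  # cycles/pixel
print(freq[np.argmax(mtf < 0.5)])               # MTF50 in cycles/pixel
```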
The point of [1] was to show that, for example, depending on the angle at which the camera is mounted with respect to the bar chart target, your micro-contrast figures may change slightly.
None of this actually matters to actual photography, though. I don't know whether we should continue via private messages; I suppose this is going to get technical, and most people probably aren't interested in seeing it.
Mika said:
There is a misunderstanding somewhere here; to me it sounds like we are talking about different things or using different terms. I'm well aware of the different nature of the problem described in [3]. However, what I meant to say with that is related to your earlier PSF considerations: when characterizing the PSF, the energy in the typical photographic objective's spot falls within a region of 1-3 camera-body pixels, with a central core of the energy (something like 80%) in a single pixel.
So in that case, you would be quite subject to errors in estimating the PSF due to the effect shown in [3]. And you really don't know the PSF beforehand. Only near the edges of the image (or with fast lenses) may the PSF become large enough to be sampled well by the camera sensor. If you use a different bench for estimating the PSF with magnification, you then lose the effect of the AA filter as well.
Also, the photographic objective MTF isn't typically evaluated from a PSF (I haven't seen this used in many places) but from an edge or line spread function, which allows sub-pixel sampling and is more robust against positioning with respect to the sampling grid.
None of this actually matters to actual photography, though. I don't know whether we should continue via private messages; I suppose this is going to get technical, and most people probably aren't interested in seeing it.
Mika said:
Sorry about the delay in replying; the weather has been (almost too) good this week.
When it comes to slanted-edge testing, this is where I disagree (partially). If we consider a slanted-edge test with a body+lens setup, there are several issues that I'd consider deal breakers for recovering the real point spread function as I know it.
First, the pixel pitch typically does not actually support sufficient sampling.
Second, the slanted edge is considerably larger, and thus the average of the line spread functions is taken over a comparatively large image block where the PSF has probably changed by some amount (if this isn't done, there will be uncertainty in the slant angle and the sub-pixel sampling is affected).
Third, given that the slant angle is small, this test methodology cannot differentiate between the imaging quality of the tangential and sagittal axes and can miss changes in the averaging direction completely.
For an extreme example, it would report the MTF of a cylinder lens system as equal to a spherical lens system if the cylinder were aligned along the imaging axis. This mistake, of course, is hard to imagine happening in real life, but extending the thought a bit, it is easier to understand that elements decentered along one axis could be missed. For this reason, the lens would need to be turned 90 degrees to determine both directions.
The bar chart quality assurance benches that I have seen are used as an OK/NOK step in quality control. The actual MTF measurement benches magnify a known spot with a high-quality microscope objective, so that measurement of the MTF is much more local, and for that reason I accept it as a representative PSF. The only people I know to have sampled the PSF directly are astronomers.