DxO tests Canon/Nikon/Sony 500mm's

I checked through some of the late-night photos I have taken with the 28/1.8 and the 40D to see how large the star points actually are on the sensor. With 8-15 second exposures, I'm seeing that the star spots fall within 4x3 or 5x4 pixels (the extension along one dimension is dominant here, and is due to the exposure time and Earth's rotation).
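The trailing contribution from Earth's rotation can be estimated directly. Here is a minimal sketch, assuming the sidereal rate of about 15 arcseconds per second of time and a pixel pitch of roughly 5.7 µm for the 40D (both figures are my assumptions, not taken from the post):

```python
import math

# Sidereal rate: the sky moves ~15.04 arcseconds per second of time.
SIDEREAL_ARCSEC_PER_S = 15.04

def trail_length_px(focal_mm, exposure_s, pitch_um, declination_deg=0.0):
    """Star trail length on the sensor in pixels (worst case at the
    celestial equator, shorter at higher declinations)."""
    trail_arcsec = SIDEREAL_ARCSEC_PER_S * exposure_s * math.cos(math.radians(declination_deg))
    trail_rad = math.radians(trail_arcsec / 3600.0)
    trail_um = math.tan(trail_rad) * focal_mm * 1000.0  # focal length converted to µm
    return trail_um / pitch_um

# 28 mm lens, 10 s exposure, ~5.7 µm pitch: a few pixels of trailing,
# consistent with spots extending from ~3 to ~4-5 pixels along one axis.
print(round(trail_length_px(28.0, 10.0, 5.7), 1))  # 3.6
```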

This was measured from a straight-out-of-camera JPEG, at F/3.5, with a lens that had a slight amount of frost on the front element. Even then the spots are quite limited. I do think that had the conditions been better (no frost, a better lens with an equal aperture, and better high-ISO performance than the 40D has) and had I taken RAWs, most of the star images would fall within a 3x3 region (as I said earlier). The spread is mainly because of the AA filter; otherwise stars should fall within a single pixel, assuming any kind of reasonable performance from the objective and F-numbers below 5.6. With a PSF of size 3x3 pixels, it is hard for me to see how this could be used to compute the MTF even with sub-pixel sampling without averaging over a larger area.

For example, 100 pixels with a 5 µm pitch represents about 0.5 mm, which is significant. If we are talking about a smaller averaging distance, for example 30 pixels, the uncertainty of the slant angle itself would be about 2 degrees. I haven't seen many error estimates for the slanted edge method, but I'm afraid I'll have to do that myself in the near future in a publication.
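The 2-degree figure can be reproduced with a back-of-the-envelope estimate: if the edge position at the ends of the averaging span is only known to about a pixel, the slant angle over a span of N pixels is uncertain by roughly atan(1/N). A sketch, where the one-pixel localization error is my assumption:

```python
import math

def slant_angle_uncertainty_deg(span_px, edge_loc_err_px=1.0):
    """Rough slant-angle uncertainty when the edge position is known to
    about edge_loc_err_px over an averaging span of span_px pixels."""
    return math.degrees(math.atan(edge_loc_err_px / span_px))

print(round(slant_angle_uncertainty_deg(30), 2))   # 1.91 degrees
print(round(slant_angle_uncertainty_deg(100), 2))  # 0.57 degrees
```

Longer spans buy angular accuracy at the cost of averaging the PSF over more of the image field, which is exactly the trade-off discussed above.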

It is great to hear about your background in tomography; it helps me understand how you think about these issues. But I have to remind you that we are talking about optical systems in the visual wavelength range, where things are quite a bit different from radio waves (MRI), the THz region, or ultrasound. From my hazy memory, MRI actually measures time differences on different detectors, but it's been 11 years since I last needed to think about anything related to NMR. These wavelengths are used in medicine mainly because of the requirement of non-invasiveness, and that's why you need to rely so heavily on image processing techniques.

I think a better equivalent would be the image processing techniques used with astronomical telescopes to get state-of-the-art results in the visible wavelength range. Adaptive optics correction of the PSFs allows ground-based telescopes just to match Hubble over a smaller field of view on good nights.
 
Mika said:
With a PSF of size 3x3 pixels, it is hard for me to see how this could be used to compute the MTF even with sub-pixel sampling without averaging over a larger area.

And still they do it. Here is a good read:

http://www.imatest.com/docs/sharpness/
http://www.imatest.com/docs/sharpness/#calc

Quote: Briefly, the ISO-12233 slanted edge method calculates MTF by finding the average edge (4X oversampled using a clever binning algorithm), differentiating it (this is the Line Spread Function (LSF)), then taking the absolute value of the Fourier transform of the LSF. The edge is slanted so the average is derived from a distribution of sampling phases (relationships between the edge and pixel locations).

Of course, this measures the combined lens+AA filter MTF (a bit more complicated than that, actually). Different lenses show results different enough to make the test meaningful. They do average over some area; that is the whole idea of the slanted edge test, as opposed to measuring what happens near a single pixel.
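The algorithm in the quoted passage can be sketched end-to-end in a few lines of NumPy. This is a simplified illustration of the projection-and-binning idea, not Imatest's actual implementation; the synthetic edge, bin width, and windowing choices are all mine:

```python
import numpy as np

def slanted_edge_mtf(img, angle_deg, oversample=4):
    """Minimal ISO-12233-style slanted edge MTF: project every pixel onto
    the edge normal, bin the values at sub-pixel resolution to build an
    oversampled edge spread function (ESF), differentiate it to get the
    line spread function (LSF), then take the magnitude of its FFT."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    t = np.tan(np.radians(angle_deg))
    dist = (x - w / 2) - t * (y - h / 2)        # signed distance from the edge
    bins = np.floor(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    esf = np.bincount(bins.ravel(), weights=img.ravel()) / counts
    lsf = np.diff(esf)
    lsf *= np.hanning(lsf.size)                 # window against noise leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freq = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)   # cycles/pixel
    return freq, mtf

# Synthetic 5-degree edge with a soft (logistic) transition
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
dist = (x - w / 2) - np.tan(np.radians(5)) * (y - h / 2)
edge = 1.0 / (1.0 + np.exp(-dist / 0.8))

freq, mtf = slanted_edge_mtf(edge, 5.0)
print(float(mtf[np.argmin(np.abs(freq - 0.5))]) < 0.2)  # True: soft edge, low MTF at Nyquist
```

The slant is what makes the 4x oversampling possible: each image row samples the edge at a slightly different phase, so the pooled projected distances fill in sub-pixel positions that a vertical edge could never provide.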

It is great to hear about your background in tomography; it helps me understand how you think about these issues. But I have to remind you that we are talking about optical systems in the visual wavelength range, where things are quite a bit different from radio waves (MRI), the THz region, or ultrasound.

My background is in math, which (fortunately for me) is useful regardless of whether you have microwaves, visual wavelengths, etc.
 
Sorry again about the delay, I was on a vacation trip.

I have not said that the MTF computed using the slanted edge method isn't useful. What I have said is that the MTF calculated with this method isn't accurate enough if you want absolute, scientifically guaranteed accuracy. The problem with the averaging is that it tends to lose information about the spot itself: while the averages along two orthogonal directions are computed with sufficient sampling, pretty much nothing is said about what happens between the orthogonal directions.

For this reason, I don't believe it would be possible to reconstruct an accurate PSF with the slanted edge method, and thus the measured MTF must be slightly invalid as well. You can think of this from the dimensional reduction point of view: it is generally not possible to recreate a 2D function from two 1D projections. Higher-order aberrations do give rise to all sorts of interesting spot shapes and orientations when elements are decentered.
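The dimensional-reduction point can be made concrete with a toy example: two clearly different 2D "spots" whose projections along both axes are identical, so no pair of orthogonal profiles can tell them apart.

```python
import numpy as np

# Two different toy "PSFs" on a 2x2 grid
a = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Their projections (marginals) along both axes are identical...
print(np.array_equal(a.sum(axis=0), b.sum(axis=0)))  # True
print(np.array_equal(a.sum(axis=1), b.sum(axis=1)))  # True
# ...yet the spots themselves are different.
print(np.array_equal(a, b))                          # False
```

In tomographic terms these are two functions with the same Radon transform at two angles; distinguishing them requires measurements along additional directions.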

But as I said, the slanted edge method allows comparative MTF measurements and is very good at that; it does not, however, allow absolute measurements where you have to guarantee the results.

It is relatively easy to think that there are no differences in the behavior of rays when shifting from one wavelength range to another. I hear this argument quite often, and this may sound like blasphemy to some, but I disagree with it. For example, there is a considerable difference in ray propagation physics between a visual wavelength range camera (typically not diffraction limited) and a THz system, and you have to take those differences into account when designing them.
 
Pi said:
Mika said:
Sorry about the delay in replying, the weather has been (almost too) good this week.

When it comes to slanted edge testing, this is where I disagree (partially). If we consider a slanted edge test with a body+lens setup, there are several issues that I'd consider deal breakers for recovering the real point spread function as I know it.

First, the pixel pitch typically does not actually support sufficient sampling.

It does, but not of the PSF directly - of the PSF convolved with "something" derived from the pixel size and the AA filter. That "something" is a known quantity, depending on the angle as well. From there, you can get the PSF. Again, the computation is NOT the same as deconvolution; still, it is not going to be too accurate if the PSF is too concentrated compared to one pixel, but the instability is far from the (exponential) one for deconvolution. The lens PSF convolved with the effect of the AA filter, however, is not "too concentrated" (the reason the AA filter exists in the first place), and can be well reconstructed. Factoring out the effect of the AA filter itself is trickier, but if you keep the same sensor, you want to keep that effect in place.
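The point that the sensor sees the lens PSF convolved with a pixel-plus-AA kernel can be sketched with a toy 1D forward model. The Gaussian lens LSF, the box pixel aperture, and the two-spot AA kernel with 0.5 px spot separation are all illustrative assumptions, not measured values:

```python
import numpy as np

# 1D forward model on a grid oversampled 8x relative to the pixel pitch.
OS = 8
x = np.arange(-8 * OS, 8 * OS + 1) / OS           # position in pixels

lens_lsf = np.exp(-0.5 * (x / 0.4) ** 2)          # assumed Gaussian lens LSF, sigma = 0.4 px
lens_lsf /= lens_lsf.sum()

pixel = (np.abs(x) <= 0.5).astype(float)          # box aperture, one pixel wide
pixel /= pixel.sum()

# Beam-splitter AA filter collapsed to 1D: two spots 0.5 px apart
aa = np.zeros_like(x)
aa[np.argmin(np.abs(x + 0.25))] = 0.5
aa[np.argmin(np.abs(x - 0.25))] = 0.5

system_lsf = np.convolve(np.convolve(lens_lsf, pixel, "same"), aa, "same")

def width(f):
    """RMS width of a normalized 1D profile, in pixels."""
    m = np.sum(f * x) / np.sum(f)
    return np.sqrt(np.sum(f * x**2) / np.sum(f) - m**2)

# The known pixel and AA terms broaden the spot the sensor actually records
print(width(lens_lsf) < width(system_lsf))        # True
```

Because the pixel and AA terms are fixed properties of the sensor, the slanted edge test always measures this broadened system response; recovering the lens-only term is a separate step.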

If you are reverse engineering a PSF which is the convolution of lens + AA filter, then that would be the exact issue I was trying to point out before. The AA filter is designed to limit the resolution of the image that reaches the sensor plane by filtering out higher frequencies while leaving lower frequencies intact. If you are reverse engineering the image post-AA, then it has an intrinsic upper limit on resolution. The lens could very well resolve more than what the lens+AA combination resolves (and in the case of a good lens like the EF 500/4 L II, most likely does).

If you knew the exact nature of the AA filter, you could probably exclude its effect from the PSF and arrive at a result much closer to what the lens itself is actually capable of. If you leave the convolution with the AA filter in, then you haven't really reverse engineered the lens MTF; you've only reverse engineered the lens+AA filter MTF. That may be useful for comparison, but it really doesn't tell you all that much about the lens itself.
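If the sensor terms really were known exactly, dividing them out of the measured MTF would recover a lens-only estimate, since convolution becomes multiplication in the frequency domain. A hedged sketch using standard textbook models - |sinc(f)| for a one-pixel box aperture and |cos(pi f d)| for a two-spot AA filter with an assumed spot separation d; the "measured" system MTF below is synthetic:

```python
import numpy as np

f = np.linspace(0.0, 0.45, 10)                  # spatial frequency, cycles/pixel

mtf_pixel = np.abs(np.sinc(f))                  # box aperture of one pixel: |sinc(f)|
d = 0.5                                         # assumed AA spot separation in pixels
mtf_aa = np.abs(np.cos(np.pi * f * d))          # two-spot AA filter: |cos(pi f d)|

true_lens = np.exp(-(f / 0.35) ** 2)            # hypothetical lens-only MTF
mtf_system = mtf_pixel * mtf_aa * true_lens     # what the slanted edge test would report

# Divide out the known sensor terms to estimate the lens-only MTF
mtf_lens = mtf_system / (mtf_pixel * mtf_aa)
print(np.allclose(mtf_lens, true_lens))         # True
```

In practice this division is only stable where the sensor terms are well above zero and the measurement noise is low, which is part of why factoring out the AA filter is tricky.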
 
Mika said:
Sorry again about the delay, I was on a vacation trip.

I have not said that the MTF computed using the slanted edge method isn't useful. What I have said is that the MTF calculated with this method isn't accurate enough if you want absolute, scientifically guaranteed accuracy. The problem with the averaging is that it tends to lose information about the spot itself: while the averages along two orthogonal directions are computed with sufficient sampling, pretty much nothing is said about what happens between the orthogonal directions.

For this reason, I don't believe it would be possible to reconstruct an accurate PSF with the slanted edge method, and thus the measured MTF must be slightly invalid as well. You can think of this from the dimensional reduction point of view: it is generally not possible to recreate a 2D function from two 1D projections. Higher-order aberrations do give rise to all sorts of interesting spot shapes and orientations when elements are decentered.

We are getting here into more philosophical questions. In science, we use modeling. We make some a priori assumptions, ignore this and that, and then build a model which we analyze. That model is never perfect, and it can't be. We must be aware of its limitations. But saying "you can never have a perfect model, so why bother with science at all" is not the right conclusion.

A long time ago, Riemann suggested modeling anisotropic phenomena by a quadratic form. The level curves (in 2D) of that form are ellipses. Two measurements are then enough. In some sense, this is equivalent to taking a truncated Taylor expansion of a more complicated function.

Going back to photography: when the PSF is well concentrated, approximation by a quadratic form is OK. When it is not, you can see it with no need for sophisticated methods (like my 35L at f/1.4 in the corners, wide open). Those are some of the limitations in this case.

So when I say you can get the PSF from the MTF, I always assume some reasonable model in place. Also, I do not mean that you must use the DXO test, necessarily. If you are curious enough, you can rotate your target and get more directions.
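A small sketch of the quadratic-form idea combined with the rotated-target suggestion: if the PSF's directional second moment is modeled as u^T S u for a symmetric 2x2 matrix S, then three slant directions determine all three unknowns of S, including the cross term that two orthogonal directions alone cannot see. S_true here is a made-up anisotropic example, not data from any test:

```python
import numpy as np

# Model the PSF second moments by a quadratic form (symmetric matrix S).
# The measured width along direction u = (cos t, sin t) is u^T S u, so
# three slant directions suffice to solve for the three unknowns of S.
S_true = np.array([[0.30, 0.08],
                   [0.08, 0.15]])      # hypothetical rotated, anisotropic PSF

angles = np.radians([0.0, 45.0, 90.0])
rows, meas = [], []
for t in angles:
    u = np.array([np.cos(t), np.sin(t)])
    rows.append([u[0]**2, 2 * u[0] * u[1], u[1]**2])
    meas.append(u @ S_true @ u)        # simulated directional measurement

sxx, sxy, syy = np.linalg.solve(np.array(rows), np.array(meas))
print(np.allclose([[sxx, sxy], [sxy, syy]], S_true))   # True
```

With only the 0 and 90 degree measurements, sxx and syy are fixed but sxy is completely undetermined, which is precisely the information a rotated target adds.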

It is relatively easy to think that there are no differences in the behavior of rays when shifting from one wavelength range to another. I hear this argument quite often, and this may sound like blasphemy to some, but I disagree with it.

In the good old times, DXO published MTF charts for each of the (RAW) RGB channels. They were different enough, indeed; actually, sometimes too much. They reported some kind of average, weighted heavily towards green if I remember correctly, because people want simple answers. If you dig deeper into that, the spectral decomposition of the light in the test would play a role too, etc. In principle, the camera projects an infinite-dimensional color space onto a 3D one, so full spectral information is lost anyway.
 