
**Lenses / Re: Dxo tests canon/nikon/sony 500mm's**

« **on:** July 25, 2013, 04:34:02 PM »

As far as I know, digital image sensors are a more complicated case than the classical sampling theorem would predict. First of all, it is important to understand that the captured image is a ~~two~~ three-dimensional signal (x, y and intensity), and how the eye perceives it.

Using the classical sampling theorem, the maximum resolvable frequency would be the inverse of (2 * pixel pitch), i.e. the Nyquist cut-off frequency. However, this is not quite the case: in measurements the image sensor tends to see further, as explained in [1] and published in [2].
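Just to make the arithmetic concrete, here's a quick sketch of that cut-off calculation. The 4.3 µm pixel pitch is only an illustrative assumption on my part, not a figure from any of the linked measurements:

```python
# Illustrative assumption: a sensor with a 4.3 micrometre pixel pitch.
pixel_pitch_mm = 0.0043  # 4.3 um expressed in millimetres

# Classical sampling theorem: the highest resolvable spatial frequency
# is 1 / (2 * pixel pitch), in line pairs per millimetre (lp/mm).
nyquist_lp_per_mm = 1.0 / (2.0 * pixel_pitch_mm)

print(round(nyquist_lp_per_mm, 1))  # ~116.3 lp/mm
```

The point of the rest of the post is that a real sensor can appear to resolve detail beyond this number, depending on how the pattern lands on the pixel grid.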

As a short version: if one manages to align the pixel array exactly with the bar pattern, the classical Nyquist frequency holds. In practice this is very difficult to do, so what is actually seen is the result of sub-pixel sampling, which the eye then averages and interprets as a distinguishable bar. If one took only a single line of the image, I'm not sure the result would still count as distinguishable.
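The alignment effect is easy to demonstrate with a toy simulation. This is just a sketch: a sinusoidal "bar pattern" exactly at the Nyquist frequency (one cycle per two pixels), sampled at two different sub-pixel phases of my own choosing:

```python
import math

def sample_bars(phase, pixels=8):
    # Bar pattern exactly at the Nyquist frequency: one full cycle
    # spans two pixels. 'phase' shifts the pixel grid relative to
    # the bars, in units of the pixel pitch.
    return [math.sin(math.pi * (i + phase)) for i in range(pixels)]

aligned = sample_bars(0.5)  # pixel centres land on peaks and troughs
offset = sample_bars(0.0)   # pixel centres land on the zero crossings

def contrast(samples):
    return max(samples) - min(samples)

print(contrast(aligned))  # full swing: the bars are resolved
print(contrast(offset))   # essentially zero: the same pattern vanishes
```

Same pattern, same sensor, and the recorded contrast goes from maximal to nothing depending purely on where the grid sits, which is why "resolved at Nyquist" is such a slippery claim.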

Add on top of that the question of whether we want to represent the actual shape of the subject at the maximum resolvable frequency, regardless of whether it lands between the pixels, and it can be seen that three to five times oversampling may be needed. Unfortunately I don't have a good link to show this; I'll try to look for it and post it if I can find it. However, this also tends to be a way of selling more pixels.

EDIT: Ah, found it, the PDF was by Andor [3]. What I want to say with all this is that it is actually not that well defined what "resolving something" means with image sensors.
