Stay at home
- Aug 16, 2012
Knowing that linear resolution varies as the square root of the number of pixels for a given sensor size is very useful in practice. Suppose, for example, that you have a 48 Mpx and a 24 Mpx FF camera. The 48 Mpx has sqrt(2) times the linear resolution, i.e. about 1.4x. In practice, and all other things being equal, a 500mm lens on the 48 Mpx body has the same resolution as a 1.4x500mm, i.e. 700mm, lens on the 24 Mpx body, and puts as many pixels on the duck. Put another way, you have to mount a 1.4x TC on the 24 Mpx to give it the reach of the 48 Mpx.

Neither method of modeling and measuring resolution is strongly related to human impressions of photographs, nor are human impressions a valid way to judge one superior to the other. Given the right subject, there's practically no difference between even 50 Mpx FF and 18 Mpx APS-C at 24x36. Given a different subject, the difference is subtle. Given a third subject, the difference can be quite large and immediately obvious to a casual observer.
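The square-root relationship above can be sketched in a few lines of Python. The function names are mine, chosen for illustration; the math is just the ratio of pixel counts. Note that sqrt(2) is closer to 1.414, so the exact equivalent of 500mm is about 707mm; the text rounds to 1.4x and 700mm.

```python
import math

def linear_resolution_gain(mpx_high, mpx_low):
    """Linear resolution scales with the square root of pixel count."""
    return math.sqrt(mpx_high / mpx_low)

def equivalent_focal_length(focal_mm, mpx_high, mpx_low):
    """Focal length needed on the lower-Mpx body to put as many
    pixels on the subject as focal_mm does on the higher-Mpx body."""
    return focal_mm * linear_resolution_gain(mpx_high, mpx_low)

gain = linear_resolution_gain(48, 24)          # sqrt(2), about 1.414
reach = equivalent_focal_length(500, 48, 24)   # about 707 mm
```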
I would agree that, from a photographic perspective, there will generally be only a subtle difference between 24 Mpx FF and 20 Mpx FF, all other factors being equal. I would also say that the 24 Mpx sensor is collecting 20% more data and is therefore able to resolve a 2D subject 20% "better" in a strictly technical sense.
Insisting that the 24 Mpx sensor only resolves about 9.5% more because we traditionally measure linearly for convenience is missing the forest for the trees. It's also a predictive failure in other applications where an array of sensors is deployed.