Theoretical and practical limits of high-megapixel cameras

AlanF

With more extremely high-resolving lenses becoming available now, I actually do think that diffraction is a practical limiting factor.

Admittedly, for most lenses, stopping down past the diffraction limit will still improve sharpness.

But if we take a theoretical "perfectly sharp" lens, then we actually are limited by diffraction. The R5 with 45 MP has a diffraction limit of about f/7.1. We can already see this with the extremely sharp RF 135mm f/1.8 here: hover with your mouse over the picture to see f/8, and move the mouse away from the picture to see f/5.6. f/8 is noticeably softer and less contrasty.
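For anyone who wants to check that ~f/7.1 figure, here is a minimal back-of-the-envelope sketch in Python. It assumes one common rule of thumb (my assumption, not Canon's definition): diffraction starts to bite roughly where the Airy disk radius (1.22·λ·N) reaches one pixel pitch, with λ ≈ 0.5 µm for green light and nominal 36 × 24 mm sensor dimensions.

```python
# Rough estimate of the "diffraction-limited aperture" (DLA) of a sensor.
# Assumption: diffraction starts to matter roughly where the Airy disk
# radius (1.22 * wavelength * N) reaches one pixel pitch.
import math

def pixel_pitch_um(sensor_width_mm, sensor_height_mm, megapixels):
    """Pixel pitch in micrometres for a given sensor size and resolution."""
    area_um2 = sensor_width_mm * sensor_height_mm * 1e6   # mm^2 -> um^2
    return math.sqrt(area_um2 / (megapixels * 1e6))

def diffraction_limited_aperture(pitch_um, wavelength_um=0.5):
    """f-number at which the Airy disk radius equals the pixel pitch."""
    return pitch_um / (1.22 * wavelength_um)

pitch = pixel_pitch_um(36.0, 24.0, 45.0)            # R5-style 45 MP full frame
print(f"pixel pitch ~{pitch:.2f} um")               # ~4.38 um
print(f"DLA ~f/{diffraction_limited_aperture(pitch):.1f}")   # ~f/7.2
```

With those assumptions it lands at roughly f/7.2, close to the figure quoted above.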


So while diffraction might not seem relevant right now, today's 45 MP cameras are, in theory, absolutely limited by lenses like the 100-500mm f/7.1, the 200-800mm f/9, and the 600mm and 800mm f/11. It just isn't apparent because the lenses aren't perfectly resolving the sensors, so stopping down still improves the image more than diffraction degrades it.

This might mean there is not much reason to increase the megapixel count, given the current market preference for small and lightweight lenses that need small apertures to achieve that form factor. I wouldn't be surprised if the R5 II stays in the 45 MP range. The same goes for the Sony a7R line; I don't think it will go much beyond 61 MP.

This leads me to question where camera makers will head next. We already have really good, ergonomic camera bodies, more than enough frames per second for most applications, and a practically sufficient number of megapixels. I think the R5 II will be a big tell for where the camera industry is heading, as I don't think we will see much more meaningful improvement for most people. I hope it will be a great camera, but I'm unsure what could meaningfully be improved, except for a stacked sensor and maybe a bit more dynamic range.
The RF 100-500mm f/7.1 and RF 100-400mm f/8 both resolve about 25% more on the R7 than on the R5, and the R7 is equivalent to an 82 Mpx FF sensor. One reason for this is that the Bayer filter decreases the effective resolution of the sensor by about 30%, so the onset of diffraction limitation appears later with a higher Mpx count. Without going into more theory, that should be enough to tell you that there is more headroom for increasing the Mpx count with the narrow-aperture lenses, and the f/2.8 lenses will benefit even more.
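A quick sketch of the arithmetic behind that R7 figure, using approximate sensor dimensions and the same Airy-radius rule of thumb as in the earlier sketch (both are assumptions on my part):

```python
# Crop-factor arithmetic behind "the R7 is equivalent to an ~82 Mpx FF
# sensor", plus the R7's approximate pixel pitch and nominal DLA.
import math

crop_factor = 1.6
r7_megapixels = 32.5
ff_equivalent_mp = r7_megapixels * crop_factor**2
print(f"FF-equivalent pixel count: ~{ff_equivalent_mp:.0f} MP")   # ~83 MP

# Pixel pitch of the R7 (22.3 x 14.8 mm APS-C, approximate dimensions)
area_um2 = 22.3 * 14.8 * 1e6
pitch_um = math.sqrt(area_um2 / (r7_megapixels * 1e6))
print(f"R7 pixel pitch: ~{pitch_um:.2f} um")                      # ~3.2 um

# With the same Airy-radius rule of thumb (lambda ~0.5 um), the smaller
# pitch pushes the nominal DLA down to roughly f/5.2.
print(f"nominal DLA: ~f/{pitch_um / (1.22 * 0.5):.1f}")
```

It lands within rounding of the 82 Mpx figure above, and shows why the nominal onset of diffraction sits at a wider aperture on the denser sensor.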
 
That's interesting. Thanks for that input.

How come I can seemingly see diffraction in the provided link, then?
 

AlanF

You will see the effects of diffraction at any aperture: it's physics. Diffraction progressively lowers the MTF of any lens with increasing f-number, from f/1 to f/1.2 to f/1.4 to f/2 to f/2.8 and so on. The ideal MTF for a uniformly illuminated circular aperture is given by:
MTF = 2(φ - cos φ sin φ)/π
where φ = cos⁻¹(λν/2NA), ν is the spatial frequency in cycles per millimeter, λ is the wavelength of illumination in mm, and NA is the numerical aperture.
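As a numeric illustration of that formula, the sketch below evaluates the diffraction-limited MTF at the Nyquist frequency of a 45 Mpx full-frame sensor (about 114 lp/mm for a ~4.4 µm pitch) for a few apertures; the 550 nm wavelength and the NA ≈ 1/(2N) approximation are example choices of my own.

```python
# Diffraction-limited MTF from the formula above, evaluated at the Nyquist
# frequency of a 45 MP full-frame sensor for several f-numbers.
import math

def diffraction_mtf(nu_lp_per_mm, f_number, wavelength_mm=0.00055):
    """MTF = 2(phi - cos(phi)sin(phi))/pi with phi = acos(lambda*nu/(2*NA)).
    For a lens at f-number N, NA ~ 1/(2N), so lambda*nu/(2*NA) = lambda*nu*N."""
    x = wavelength_mm * nu_lp_per_mm * f_number
    if x >= 1.0:
        return 0.0                      # beyond the diffraction cutoff
    phi = math.acos(x)
    return 2.0 * (phi - math.cos(phi) * math.sin(phi)) / math.pi

pixel_pitch_mm = 0.00438                # ~4.38 um (45 MP full frame)
nyquist = 1.0 / (2.0 * pixel_pitch_mm)  # ~114 lp/mm
for n in (2.8, 5.6, 7.1, 8, 11):
    print(f"f/{n}: MTF at Nyquist ~ {diffraction_mtf(nyquist, n):.2f}")
```

With these assumptions the MTF at Nyquist falls from roughly 0.78 at f/2.8 to about 0.56 at f/5.6, 0.39 at f/8 and 0.20 at f/11, which is consistent with f/8 looking visibly softer than f/5.6 in the comparison linked earlier.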

What this means in practice can be seen in the graph below, which I posted very recently here, of the resolution of the sharpest lenses measured on the 5DSR. The question is not why you can see diffraction, but where diffraction limits the resolving power of the lens to the extent that increasing the resolution of the sensor no longer increases the resolving power of the whole camera system.

5DSR_ephotozine_New.jpg
 
Jul 28, 2015
This leads me to question where camera makers will head next.
The main differentiator, I think, will be dynamic range (especially for wildlife and sports) to allow low-light images without flash.
But in general, I think Olympus is showing the way, and it is mainly around compositional functions and computational photography. Things like pre-capture are becoming more common in new models from all manufacturers. Olympus also has a neutral density filter simulator; Live Bulb, where you see a long exposure developing before your eyes and stop the exposure when you have what you want; and Live Composite, where you leave the shutter open for a firework display and it does not overexpose the background but records the new fireworks and overlays them. In-camera editing will probably improve, as will live links to a phone.
In the days of film, when you chose the film you wanted, cameras were differentiated by functions and build quality. With high quality sensors now appearing in even basic cameras, it will not be long before we are back to that situation.
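Conceptually, the Live Composite behaviour described above is a running "lighten" blend: keep the brightest value each pixel has seen across a stream of short exposures. A toy sketch of that idea (my own illustration, not Olympus's actual implementation):

```python
# "Live Composite"-style lighten blend: keep the per-pixel maximum across a
# stream of short exposures, so a static background does not accumulate but
# new bright fireworks are added. Illustrative only.
import numpy as np

def live_composite(frames):
    """Return the per-pixel maximum across an iterable of exposures."""
    composite = None
    for frame in frames:
        composite = frame.copy() if composite is None else np.maximum(composite, frame)
    return composite

# Toy example: a dim constant background plus a few frames with bright bursts.
rng = np.random.default_rng(0)
background = np.full((4, 4), 0.1)
frames = []
for _ in range(5):
    burst = np.zeros((4, 4))
    burst[rng.integers(4), rng.integers(4)] = 0.9   # one bright "firework" pixel
    frames.append(background + burst)

print(live_composite(frames))   # background stays ~0.1; burst pixels reach ~1.0
```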
 

AlanF

DR at low light is the least likely to be improved, as sensors are pretty close to maximum quantum efficiency. There hasn't been an improvement there for several years, and at high light levels (low ISO) the top models are converging.

From photonstophotos

Screenshot 2024-02-09 at 22.15.49.png
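A back-of-the-envelope illustration of the quantum-efficiency point above: photon shot noise caps SNR at √(QE × photons), so even a jump from, say, 60% QE to a perfect 100% buys well under half a stop. The QE values and photon count below are example numbers of my own.

```python
# Why near-maximal quantum efficiency (QE) leaves little room for low-light
# improvement: photon shot noise limits SNR to sqrt(QE * photons).
import math

photons_per_pixel = 1000          # a dim-light exposure (example value)
for qe in (0.6, 0.8, 1.0):
    electrons = qe * photons_per_pixel
    snr_db = 20 * math.log10(math.sqrt(electrons))   # shot-noise-limited SNR
    print(f"QE {qe:.0%}: SNR ~{snr_db:.1f} dB")

# Going from 60% QE to a perfect 100% improves the shot-noise-limited SNR by
# only sqrt(1.0/0.6) ~ 1.29x, i.e. well under half a stop.
gain = math.sqrt(1.0 / 0.6)
print(f"max possible gain over 60% QE: {gain:.2f}x (~{math.log2(gain):.2f} stop)")
```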
 
I agree, it's probably going to go that route of cameras being differentiated by functions rather than sensors. I believe we can already see it with Nikon and the Z9 and Z8: same sensor, different body.

Historically we always had the sports bodies with better AF and more fps on one side, and the higher-MP bodies on the other, but now with high-MP stacked sensors both are possible, so we might see a convergence here.

I wouldn't be surprised if the R1 and R5 II use the same stacked sensor, with only the bodies and features being more specialised.

What do you think?
 
I'd be very, very surprised.
It would be a first for Canon, that's for sure, but we have now seen it with Nikon.

Sensor development seems to be extremely expensive, judging by how often the same sensor technology gets reused (e.g. the reuse of the sensor in the Sony a7R IV and V, and the use of the same pixel pitch/technology across the Sony a7R IV/V, Fuji GFX100, Hasselblad 100 MP medium format, Sony APS-C, etc.).
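A worked example of that shared-pixel-pitch observation: with a ~3.76 µm pixel design, the pixel counts of a 61 MP full-frame body, a 102 MP 44 × 33 medium-format body and a 26 MP APS-C body all fall out of the sensor area. The dimensions below are approximate and my own.

```python
# One shared ~3.76 um pixel design across different sensor sizes.
# Sensor dimensions are approximate; the real bodies are roughly
# 61 MP (a7R IV/V), 102 MP (GFX100 class) and 26 MP (Sony APS-C).
pitch_um = 3.76
sensors = {
    "full frame (a7R IV/V class)": (35.7, 23.8),
    "44x33 medium format (GFX100 class)": (43.8, 32.9),
    "APS-C": (23.5, 15.6),
}
for name, (w_mm, h_mm) in sensors.items():
    pixels = (w_mm * 1e3 / pitch_um) * (h_mm * 1e3 / pitch_um)
    print(f"{name}: ~{pixels / 1e6:.0f} MP")
```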

I could see a world in which it's no longer financially worth it to design a sensor just for a single camera. We'll see.

It would be nice, though, if the R1 got a unique sensor. It just seems like sharing a sensor is a real possibility, as we can see from Nikon.
 
Jul 21, 2010
Not really a first: the 5D II and 1Ds III used essentially the same sensor. But that was before the 1D and 1Ds lines were merged into the lower-MP, higher-performance 1D X line. I expect Canon will continue that theme into the RF 1-series bodies.
 
It would be interesting to better understand how 'oversampling'/binning could be made to work with Canon's digital lens corrections to produce better final sharpness.

Our expectation at the moment is that the advertised resolution of a sensor makes its way 1:1 into the final image. However, behind the scenes that isn't what happens, especially considering the intense de-warping applied to some of Canon's new glass (especially the wides). This results in reduced corner resolution with these lenses, due to those gross corrections.

Some people seem to get upset at the 'poor design choice' Canon makes here, but from an engineering perspective it's reasonable to move optical corrections into the digital domain if that produces a better set of compromises around lens size, performance and cost.

Would we be able to get better edge resolution, though, if we had much higher-resolution sensors (and more data) feeding into this pipeline?
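A toy sketch of the "oversample, correct, then downsample" idea being asked about: apply an interpolation-based geometric correction (here just a 0.5-pixel sub-pixel shift standing in for de-warping) either before or after 2×2 binning, and compare how much edge contrast survives. This is purely illustrative and not Canon's actual correction pipeline.

```python
# Compare "correct at full resolution, then bin" against "bin, then correct"
# on a synthetic hard edge, using a sub-pixel shift as a stand-in for the
# interpolation that de-warping requires.
import numpy as np

def subpixel_shift(img, dx):
    """Linear-interpolation horizontal shift by a fractional number of pixels."""
    frac = dx % 1.0
    shifted = np.roll(img, int(np.floor(dx)), axis=1)
    return (1 - frac) * shifted + frac * np.roll(shifted, 1, axis=1)

def bin2x2(img):
    """Average 2x2 blocks (simple binning / downsampling)."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def edge_contrast(img):
    """Max horizontal gradient: a crude sharpness proxy for a vertical edge."""
    return np.abs(np.diff(img, axis=1)).max()

# Synthetic target: a hard vertical black/white edge.
target = np.zeros((64, 64))
target[:, 32:] = 1.0

correct_then_bin = bin2x2(subpixel_shift(target, 0.5))
bin_then_correct = subpixel_shift(bin2x2(target), 0.5)

print("edge contrast, correct at full res then bin:", edge_contrast(correct_then_bin))
print("edge contrast, bin first then correct:      ", edge_contrast(bin_then_correct))
```

In this toy case, correcting at the captured resolution and then binning keeps noticeably more edge contrast than correcting after the downsample, which is the intuition behind feeding the correction pipeline with more pixels.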
 