I think he had a point, but just didn't say it clearly.
He had a point, but not a correct one.
"Lower MP cameras have larger pixels with wider center-to-center spacing" would be a clearer statement. In the case of DPAF, each pixel is split in half, with one half looking right and the other looking left. Larger pixels (with larger center spacing between the DPAF half pixels) would seem to have an advantage both from a light-gathering perspective (less noise at any given ISO) and from an angular resolution perspective (needed for PDAF). I think this is why you are seeing faster AF acquisition on your R5 vs. your R7, even though the R7 arguably has a generational advantage with its AF system derived from the R3.
First off, he said nothing about AF speed, but regardless, that red herring has already been pickled in readout-speed brine by @koenkooi and @AlanF.
Yes, a larger pixel gathers more light than a smaller pixel. Yes, more light means lower noise. But a full-frame sensor filled with smaller pixels gathers the same total amount of light as a full-frame sensor filled with larger pixels. Same total light means the same noise when the images are compared at the same output size.
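A rough back-of-the-envelope sketch of that argument, with made-up photon counts and shot noise only (read noise ignored):

```python
import math

sensor_area_mm2 = 36 * 24            # full-frame sensor area
photons_per_mm2 = 1_000_000_000      # hypothetical photons per mm^2 for a given exposure

for megapixels in (24, 45):
    n_pixels = megapixels * 1_000_000
    per_pixel = photons_per_mm2 * sensor_area_mm2 / n_pixels
    per_pixel_snr = math.sqrt(per_pixel)          # Poisson shot noise: SNR = sqrt(N)
    frame_snr = math.sqrt(per_pixel * n_pixels)   # compare over the same total area
    print(f"{megapixels} MP: per-pixel SNR {per_pixel_snr:.0f}, whole-image SNR {frame_snr:.0f}")
```

The big pixel wins per pixel, but once the two sensors are compared over the same total area (the same print or screen size), the shot-noise SNR comes out identical.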
Many years ago, that was not true. Each pixel had only a small light-sensitive area, so more pixels in a given area meant less light collected and more noise. The non-photosensitive area of a pixel was in the 60-80% range, meaning a lot of light was lost. But with the gapless microlenses and light guides used in modern sensors, that’s no longer the case. Essentially the full surface of a pixel is photosensitive, because the microlenses funnel light from the whole pixel area onto the photodiode.
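To put rough numbers on the fill-factor point (hypothetical values for illustration, not measurements of any particular sensor):

```python
pixel_area_um2 = 36.0        # e.g. a ~6 micron pixel
photons_per_um2 = 1_000      # arbitrary exposure

# Assumed fill factors, illustrative only.
for label, fill_factor in (("old sensor, no microlenses", 0.3),
                           ("modern sensor, gapless microlenses", 0.98)):
    collected = photons_per_um2 * pixel_area_um2 * fill_factor
    print(f"{label}: ~{collected:,.0f} of {photons_per_um2 * pixel_area_um2:,.0f} photons collected")
```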
_________________________
Since this thread is also discussing pixel shift, it’s worth mentioning that the same gapless microlenses that obviate the effect of pixel size on image noise also reduce the benefit of pixel shift for spatial resolution.
Over 20 years ago, I had Zeiss cameras with pixel shift. The sensors lacked gapless microlenses. Using pixel shift to capture a 2x2 full-pixel array increased color resolution by sampling the same subject area separately in the R/G/B channels. Using a 2x2 or 3x3 sub-pixel shift increased spatial resolution: moving the roughly one-third of the pixel area that was photosensitive around the pixel’s footprint sampled most of that area (2x2) or all of it (3x3).
Modern sensors already sample essentially the entire pixel area in a single capture, thanks to gapless microlenses. Thus, pixel shift in today’s cameras increases color resolution but has much less effect on spatial resolution. So when you use it with a Fuji GFX100, for example, the multiple captures with the 100 MP sensor result in a 400 MP image, but that image has lower spatial resolution than you’d get from a true 400 MP sensor.
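A rough way to see this numerically: the pixel’s light-collecting aperture acts as a box filter, and its MTF is a sinc of the aperture width. Half-pixel shifts raise the sampling rate, but the aperture stays pixel-sized, so fine detail is still smeared. The toy model below (my own sketch, nothing to do with Fuji’s actual processing) compares the aperture MTF of a shifted full-width pixel against a native sensor with half-pitch pixels, at a spatial frequency only the denser sampling could record at all:

```python
import numpy as np

def aperture_mtf(f, width):
    """MTF of a square pixel aperture: |sin(pi*f*w)/(pi*f*w)| (numpy's normalized sinc)."""
    return abs(np.sinc(f * width))

pitch = 1.0          # original pixel pitch, arbitrary units
f = 0.8 / pitch      # above the single-shot Nyquist (0.5/pitch), below the shifted Nyquist (1/pitch)

print(f"pixel shift, full-width aperture: MTF ~ {aperture_mtf(f, pitch):.2f}")
print(f"native sensor, half-pitch pixels: MTF ~ {aperture_mtf(f, pitch / 2):.2f}")
```

Both configurations sample finely enough to record that detail, but the pixel-shifted capture delivers it at roughly a third of the contrast, which is why the 400 MP file doesn’t look like the output of a true 400 MP sensor.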