Mikehit said:
haggie said:
So you see, how the diffraction caused by the lens is recorded (captured) by the camera’s sensor DOES depend on the size of the photosensitive elements in the sensor.
I agree. But the two photodiodes lie under a common microlens and output their image signal as a single output, and so act as one unit - pixel size and (more importantly) pixel pitch in a 30MP sensor are the same with or without DPAF, so in that respect the way they record diffraction is identical. So as far as I can see, any softening of the image due to DPAF must come from other properties.
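To make the readout argument concrete, here is a minimal conceptual sketch in Python/NumPy. The array names, shapes and values are my own illustrative assumptions, not any vendor’s actual pipeline; the point is only that the two photodiodes are read separately for AF but summed into one sample per site for the image:

```python
import numpy as np

# Conceptual sketch of Dual Pixel readout (illustrative only).
rng = np.random.default_rng(0)
h, w = 4, 6                                   # a toy 4x6 patch of sensor sites

left = rng.uniform(0.0, 0.5, size=(h, w))     # left-half photodiodes
right = rng.uniform(0.0, 0.5, size=(h, w))    # right-half photodiodes

# Autofocus: the two half-images are compared separately (phase detection);
# a defocused lens shifts one relative to the other.
af_signal = left - right

# Image capture: the halves are summed -> ONE sample per microlens site, so
# pixel pitch (and therefore how finely diffraction is sampled) is unchanged.
image = left + right
print(image.shape)                            # (4, 6): one value per site
```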
haggie said:
I never wrote that the physical phenomenon of diffraction is caused by the lens and the pixel size, as you suggest. You just mix it all up.
I know you did not say it directly, but I wanted to state the obvious to make it clear no assumptions were being made. It still surprises me how many people talk about high-MP sensors having more diffraction. They don't. The misunderstanding arises from the fact that higher-density sensors make it easier to see diffraction when viewing at 1:1.
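Some back-of-the-envelope numbers make this visible. The Airy disk formula (diameter ≈ 2.44·λ·N) is standard; the 36 x 24 mm sensor size and the 30/60 MP figures below are illustrative assumptions, not any specific camera:

```python
# The Airy disk diameter depends only on wavelength and f-number, NOT on the
# sensor's pixel count; a denser sensor merely samples the same blur more finely.
wavelength_um = 0.55                          # green light, in micrometres
f_number = 8.0
airy_um = 2.44 * wavelength_um * f_number     # classic Airy disk diameter formula
print(f"Airy disk at f/{f_number:g}: {airy_um:.1f} um (same for ANY sensor)")

for mp in (30, 60):
    pixels_x = (mp * 1e6 * 3 / 2) ** 0.5      # horizontal pixels for a 3:2 sensor
    pitch_um = 36_000 / pixels_x              # pixel pitch in micrometres
    print(f"{mp} MP full frame: pitch ~ {pitch_um:.2f} um, "
          f"Airy disk spans ~ {airy_um / pitch_um:.1f} pixels")
```

The blur is physically identical in both cases (about 10.7 µm at f/8); the denser sensor just spreads it over more pixels (≈2.8 instead of ≈2.0 here), which is exactly why it stands out when viewing at 1:1.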
However, whether or not the photosensitive elements share a microlens has nothing to do with them capturing the effect of the diffraction caused by the lens. It is all about their size.
If they lie under the same microlens and output a single image signal, how do they 'record' separate images to be able to show diffraction?
By your logic it would be a 60MP image, not a 30MP image.
I wrote that I wanted to avoid the definitions discussion about “pixels”, “dots”, “subpixels”, etc. But apparently that is not possible, so the concepts behind these terms must be made clear. The following points are not scientific definitions, but they should help.
-A “Pixel” in a photograph is the smallest entity that can hold color (among other properties).
-To make a single Pixel visible, multiple composing entities are used. E.g. on a monitor screen a pixel is formed using a Red dot AND a Green dot AND a Blue dot. In a printed photograph a Yellow dot AND a Magenta dot AND a Cyan dot are used (Black is also present). To summarize: to ‘make’ a single Pixel, multiple composing items are required (I just call them ‘dots’ here).
-When capturing an image, something similar takes place. A photon well by itself is not sensitive to color; it just responds to the amount of light that hits it. A photon well can be made sensitive to only one composing color. This is achieved by placing a colored filter above it, thus making each photon well sensitive to only one specific basic color (often, but not necessarily, Red, Green or Blue). This means that the output of a photon well is not a "pixel" as usually perceived by photographers. A photon well captures one of the multiple composing elements for what is yet to become (!) a pixel.
-The next step is where the camera’s firmware makes a computation (and by the way, this is NOT a straightforward average, due to the human eye’s color sensitivity among other things). This constructs the entity you might call a “pixel”. Therefore, ONLY after this step can you talk about a “Pixel” in the usual sense (see the sketch after this list).
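To make that photon-well-to-Pixel step concrete, here is a minimal sketch of the simplest possible demosaicing (3x3 averaging over an RGGB Bayer mosaic). Real camera firmware uses far more sophisticated, proprietary algorithms that also weight for the eye’s sensitivity, so treat this only as an illustration of the concept:

```python
import numpy as np

def demosaic_crude(raw: np.ndarray) -> np.ndarray:
    """raw: 2-D array of photon-well values laid out in an RGGB Bayer pattern.
    Returns an (H, W, 3) RGB image: each channel is a 3x3 average of the
    same-colour wells around each site (crude, but it shows the idea)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    # Masks marking which wells carry which colour filter (RGGB tiling).
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        known = np.where(mask, raw, 0.0)      # keep only this colour's wells
        weight = mask.astype(float)
        # Sum values and counts over a 3x3 neighbourhood of same-colour wells.
        val = sum(np.roll(np.roll(known, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        cnt = sum(np.roll(np.roll(weight, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        rgb[..., ch] = val / np.maximum(cnt, 1)
    return rgb

raw = np.random.default_rng(1).uniform(size=(4, 4))  # toy 4x4 RGGB mosaic
print(demosaic_crude(raw).shape)             # (4, 4, 3): one RGB Pixel per site
```

Only the output of this computation is a “Pixel” in the photographer’s sense; the raw input holds one colour sample per photon well.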
Therefore, when you write “If they lie under the same microlens and output a single image signal” when talking about the photosensitive elements of a sensor, you completely misrepresent how it works.
And from this misrepresentation you then draw a conclusion that is even more wrong… and that you attribute to… me!
When you write “Taking your logic it seems to be a 60MP image not a 30MP image”, you completely miss both the concept of “Pixel” versus “photon well” (again, the notion of a “Pixel” does not exist at that stage in the process) and the concept of what Dual Pixel means after AF has taken place and the image is recorded by the sensor and formed by the camera’s firmware.
Furthermore, although it is hard to find any information about a specific camera, it seems very unlikely that the microlenses are one simple geometric shape positioned above the two photon wells that form what Canon in its PR calls a “Dual Pixel” sensor. That also makes your assumption a bit risky.
PM 1 In the specialist area of sensor technology, the “photon well” itself is often called a “pixel”. But that should not be confused with what is meant by a “Pixel” in photography and graphics.
PM 2 On the internet there is also quite a bit of misunderstanding going around on the subject of diffraction. This often results in the misconception that diffraction has no real negative effect on images from a higher-resolution sensor.
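A rough way to see that negative effect is the common rule of thumb of combining diffraction blur and pixel pitch in quadrature. This is an approximation, not rigorous optics, and the pitches are illustrative full-frame values (≈5.4 µm for 30 MP, ≈3.8 µm for 60 MP), so treat the numbers as a sketch:

```python
# Rule-of-thumb blur model: combine diffraction and sampling in quadrature.
def effective_blur_um(airy_um: float, pitch_um: float) -> float:
    """Total blur spot, combining the two contributions in quadrature."""
    return (airy_um ** 2 + pitch_um ** 2) ** 0.5

for f_number in (4, 8, 16):
    airy = 2.44 * 0.55 * f_number             # Airy disk diameter, micrometres
    for pitch, label in ((5.4, "30 MP FF"), (3.8, "60 MP FF")):
        print(f"f/{f_number:>2}, {label}: total blur ~ "
              f"{effective_blur_um(airy, pitch):.1f} um")
```

At f/4 the denser sensor clearly resolves more; by f/16 diffraction dominates and the two deliver nearly the same blur. The high-resolution image is never actually worse, but diffraction has erased most of its resolution advantage, which is exactly the real negative effect.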
A textbook on physics and optics is the only way to avoid wrong interpretations and faulty simplifications, but that usually requires some background in mathematics. I googled to find this web page, which describes more eloquently what I just wrote (or at least meant to write) and also offers some figures to illustrate it:
https://photographylife.com/what-is-diffraction-in-photography