Dare I wade into the pizza war?
Perhaps I can translate it into a wooden pizza to fit one of my other hobbies: if I have a 15" maple disc, cutting it into 6 pieces WOULD give me more maple surface area than cutting it into 8 pieces. Why? Because of waste from blade kerf. With a 1/8" kerf, I lose an approximately 1/8"-wide strip of material with each cut. Now let's say we fill each cut with a 1/8" strip of ebony so we don't lose overall surface area when we glue it all up. The disc keeps its original surface area, but there is still less maple surface area with 8 slices than with 6. Using a 1/16" kerf blade will increase the ratio of maple surface area to ebony, but there will still be less maple surface area with 8 slices than with 6.
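For what it's worth, here's a rough back-of-the-envelope of the kerf math in Python (my own simplification: each slice costs one radius-long kerf strip, and I'm ignoring the small overlap where the cuts meet at the center):

```python
import math

def maple_area(n_slices, kerf, diameter=15.0):
    """Maple left after cutting the disc into n_slices radial pieces.

    Rough model: each slice costs one radius-long kerf strip;
    the overlap where cuts meet at the center is ignored.
    """
    radius = diameter / 2
    disc = math.pi * radius ** 2
    waste = n_slices * radius * kerf
    return disc - waste

for n in (6, 8):
    for kerf in (1 / 8, 1 / 16):
        print(f'{n} slices, {kerf:.4f}" kerf: {maple_area(n, kerf):.2f} sq in of maple')
```

With the 1/8" kerf it works out to roughly 171 vs 169 square inches of maple for 6 vs 8 slices; the 1/16" blade narrows the gap, but 8 slices still comes out behind.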
Now imagine the disc is actually a rectangle, and the pieces are squares instead of pizza slices. The maple is the photo-sensitive portion of the sensor, and the ebony is the border around each pixel. If sensor size and transistor size are constant, doesn't increasing the number of pixels increase the number of borders and transistors, and doesn't that reduce the portion of the overall sensor that receives light? Is moving from a 500nm process to a 180nm process like going from a 1/8" kerf to a 9/200" kerf (the same 500-to-180 shrink)?
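Putting my own analogy into numbers (a toy model, not real sensor geometry; the pitches and border widths below are made up just to show the trend):

```python
def fill_factor(pitch_um, border_um):
    """Fraction of a square pixel that is photosensitive,
    modeling the border as a fixed-width frame around each pixel."""
    active = (pitch_um - border_um) ** 2
    return active / pitch_um ** 2

# Purely illustrative numbers: same sensor, more pixels -> smaller pitch;
# the border widths are stand-ins for a coarser vs. finer process.
for pitch_um in (8.0, 6.0, 4.0):
    for border_um in (0.5, 0.18):
        print(f"pitch {pitch_um} um, border {border_um} um: "
              f"fill factor {fill_factor(pitch_um, border_um):.1%}")
```

The trend matches the wooden disc: shrinking the pitch with a fixed border eats into the fill factor, and shrinking the border (the finer process) claws some of it back.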
I'm obviously not a sensor geek, so I might be completely misunderstanding pixels, borders, et cetera. What am I missing in this analogy?
What you're missing is gapless microlenses, which render the "blade kerf" largely moot by concentrating the light that would have fallen on the borders into the light-sensitive area between the "kerf lines".
Interesting. So you're saying that the microlenses can redirect virtually all the light (that would have fallen on the border) into the photo site (or that if any is lost, the final result is not appreciably different)? Good to know.
So moving to a smaller process to shrink the borders does not affect the amount of light captured for each pixel because of the gapless microlenses?
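Here's how I'm picturing it, as a toy model (assuming a perfectly gapless, lossless microlens, which I gather real ones only approximate):

```python
def photons_per_pixel(pitch_um, border_um, flux=1000.0, microlens=True):
    """Photons collected per pixel (flux is photons per square micron).

    Toy model: a perfectly gapless, lossless microlens funnels everything
    landing on the full pitch**2 footprint onto the photodiode; without
    one, only the active (pitch - border)**2 area collects light.
    """
    if microlens:
        return flux * pitch_um ** 2
    return flux * (pitch_um - border_um) ** 2

for border_um in (0.5, 0.18):
    print(f"border {border_um} um: "
          f"with microlens {photons_per_pixel(6.0, border_um):.0f}, "
          f"without {photons_per_pixel(6.0, border_um, microlens=False):.0f}")
```

If that's right, the border width drops out entirely once the microlens is in play; without it, the border is exactly the kerf loss from my analogy.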
If microlens size = photo site + border, then it would seem that a larger pixel-with-microlens would gather more light than a smaller pixel-with-microlens. Are you saying that, given the same sensor dimensions, the smaller pixels yield higher resolution, so when you downsample the image to match the resolution of the sensor with the larger (fewer) pixels, the light/data collected by the multiple smaller pixels adds up and produces essentially the same image quality? Am I understanding this right? Does this mean that if I want to enjoy the same image quality as the sensor with fewer pixels, I have to downsample my images to match?
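I tried to convince myself with a little simulation (pure photon shot noise, gapless microlenses assumed, numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000
big_mean = 4000  # mean photons on one big pixel (made-up number)

# One big pixel vs. four small pixels covering the same footprint,
# each seeing a quarter of the light; photon arrival is Poisson.
big = rng.poisson(big_mean, n_trials)
binned = rng.poisson(big_mean / 4, (n_trials, 4)).sum(axis=1)

for name, x in (("one big pixel", big), ("4 small pixels, binned", binned)):
    print(f"{name}: mean {x.mean():.0f}, SNR {x.mean() / x.std():.1f}")
```

Both come out to the same SNR of about sqrt(4000) ≈ 63, which I take to mean the downsampled small-pixel image should match the big-pixel image, at least in this idealized picture.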
One other thought: microlenses perfectly focusing the light on the photo site sounds great on paper. How precisely do the lenses do this in the real world? If they're nearly perfect, how in the world do they accomplish such precision on such a small scale? Simply amazing to me...
If the microlenses do their job, then I guess it's not light/surface-area that makes the difference between crop and full frame. Could it be that for the smaller pixels, there's more opportunity for noise to be introduced by the supporting circuitry? Something must be happening, because sensors with larger pixels do seem to do better for noise at high ISO.
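Extending my little simulation with one guess at a mechanism (the read-noise value, and the assumption that read noise per pixel doesn't shrink along with pixel size, are mine, not gospel):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 100_000
signal = 40        # dim scene: mean photons on the big pixel's footprint
read_noise = 3.0   # electrons RMS added by each readout (invented value)

# Guessed mechanism: if every pixel pays the same read noise regardless
# of size, binning four small pixels stacks four doses of read noise
# against the single dose paid by one big pixel.
big = rng.poisson(signal, n_trials) + rng.normal(0, read_noise, n_trials)
binned = (rng.poisson(signal / 4, (n_trials, 4))
          + rng.normal(0, read_noise, (n_trials, 4))).sum(axis=1)

for name, x in (("one big pixel", big), ("4 small pixels, binned", binned)):
    print(f"{name}: SNR {x.mean() / x.std():.2f}")
```

At this dim signal level the big pixel wins (SNR about 5.7 vs 4.6 in my runs): shot noise alone would be a wash, but the binned small pixels pay the read-noise toll four times. Whether that's the real story for actual sensors, I'd love to hear.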
I'm obviously showing my ignorance here, at the risk of inviting the sensor-tech-savvy among us to bury me in information over my head...but hey, why not?