Ultimately, there are only so many photons.
Let's use a pair of imaginary sensors to show the point with easy math. Both sensors are perfect, and both have a perfect A/D circuit to read them. Both cover the same area: one is 10x10 pixels, the other is 40x40 pixels.
Both sensors are mounted in a camera and take the same picture of a white wall at the same exposure and aperture. Say 102,400 photons enter the lens; because our imaginary sensors are perfect, we get 102,400 electrons. The 10x10 sensor has 100 photosites with 1,024 electrons in each one, which gives us 10 bits of colour depth. The 40x40 sensor has 1,600 photosites with 64 electrons in each one, which gives us 6 bits of colour depth.
The 10x10 sensor has greater colour depth, but the 40x40 sensor has greater resolution.
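A quick sketch of that arithmetic, using only the made-up numbers from this example (102,400 photons and the two grid sizes):

```python
import math

# Toy photon budget from the white-wall example; not real sensor specs.
photons = 102_400

for side in (10, 40):
    photosites = side * side               # pixels covering the same sensor area
    electrons = photons // photosites      # perfect sensor: one photon -> one electron
    bit_depth = int(math.log2(electrons))  # 1,024 levels -> 10 bits, 64 levels -> 6 bits
    print(f"{side}x{side}: {electrons} electrons per photosite, {bit_depth} bits")
```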
However, you can bin the pixels of the 40x40 sensor to recreate the same image as the 10x10 sensor, ending up with the same lower-resolution image at the same colour depth, but you cannot go the other way. The 40x40 image contains more information than the 10x10 image. Think of it like this: 10x10 pixels at 10 bits of depth is 1,000 bits of information, while 40x40 pixels at 6 bits of depth is 9,600 bits of information.
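Here is a minimal sketch of that binning, assuming a hypothetical NumPy array of per-pixel electron counts from the white-wall example (a flat 64 electrons everywhere):

```python
import numpy as np

# Hypothetical 40x40 frame: 64 electrons in every photosite.
fine = np.full((40, 40), 64, dtype=np.int64)

# Sum each 4x4 block of pixels to recreate the 10x10 frame.
binned = fine.reshape(10, 4, 10, 4).sum(axis=(1, 3))

print(binned.shape)               # (10, 10)
print(binned[0, 0])               # 1024 electrons -> 10 bits, matching the 10x10 sensor
print(10 * 10 * 10, 40 * 40 * 6)  # 1,000 bits vs 9,600 bits of total information
```

Going the other direction is impossible: once the 10x10 frame is recorded, the fine spatial detail is simply not there to recover.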
In the real world it isn't so simple. A/D converters are not perfect, sensors do not have 100% quantum efficiency, and there are seams at the edges of the microlenses where some light is lost, but the conclusion is the same: each smaller pixel captures less information than a larger pixel, but because there are more of them, the total amount of information for the entire image is greater.