EOS Bodies - For Stills / Re: Full Frame and Bigger Pixels vs. APS-C and Smaller Pixels - The Reach War« on: August 10, 2014, 01:01:50 PM »
Great images and informative discussion. I have learned a lot. Very confusing to noobs. I remember someone on CR frequently talking about better resolution being related to "number of pixels on target." So with reach-limited subjects, you need either a longer focal length lens or more (i.e. smaller) pixels per unit area on the sensor to get better detail resolution. Did I say that correctly?
Yeah, that's correct. BTW, it's me who has always said "pixels on target". I read that a long time ago on BPN forums, from Roger Clark I think, and started experimenting with it. I think it's the best way to describe the problem...because it scales. It doesn't matter how big the pixels are, or how big the sensor is...the more pixels on target, the better the IQ. If you are only filling 10% of the frame, try to fill 50%. It doesn't matter if the frame is APS-C, FF, or something else...it's all relative.
It is not true as a general statement that the more pixels on target, the better. There have to be optimum sizes of pixels and optimal numbers on target, as shown by the following arguments. The signal to noise of a pixel increases with its area: the bigger the pixel, the greater the number of photons flowing through it and the greater the current generated, and the statistical variation in both becomes less important.
True. However, that does not falsify my claims about pixels on target. We don't look at pixels. We look at images. Noise is relative to area. If you take 6.25µm pixels and 4.3µm pixels, you can fit about 2.1 of the smaller pixels into every one of the larger pixels. Assuming the same technology (which is not actually the case with the 5D III and 7D...but humor me here), those 2.1 smaller pixels have the same amount of signal, and therefore the same amount of noise, as the single larger pixel. Noise is relative to area. If you increase the area of the sensor which your subject occupies, you reduce noise as a RELATIVE FACTOR.
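To make the "noise is relative to area" point concrete, here is a quick simulation (a sketch with a made-up photon flux, not measured 5D III / 7D data): shot noise is Poisson, so summing the ~2.1 smaller pixels covering the same area as one 6.25µm pixel yields the same signal and the same noise as the single larger pixel.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical illustration numbers, not real camera data:
flux = 1000.0                       # photons per µm² per exposure (assumed)
area_large = 6.25 ** 2              # one "large" pixel, µm²
area_small = 4.3 ** 2               # one "small" pixel, µm²
n_small = area_large / area_small   # ~2.1 small pixels per large pixel

trials = 100_000
# Shot noise is Poisson: simulate many exposures of each configuration.
large = rng.poisson(flux * area_large, trials)
# Sum of 2.1 small pixels covering the same area -> same total mean count.
small_sum = rng.poisson(flux * area_small * n_small, trials)

# Equal area collects equal signal, so mean and noise (std) match.
print(large.mean(), large.std())
print(small_sum.mean(), small_sum.std())
```

The two printed mean/std pairs come out essentially identical, which is the "noise is relative to area, not pixel size" argument in numbers.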
The dynamic range is also greater for large pixels that can accommodate a large number of electrons. A low-megapixel sensor should have very good signal to noise and DR, but poor resolution. Now, see what happens as we progress to the other extreme. As we decrease the size of the pixel, the resolution increases, but the statistical noise starts to increase as the number of photons hitting each pixel per unit time decreases.
Per-pixel noise is an absolute factor. You are absolutely right that larger pixels have less noise and higher dynamic range. However, ultimately, to maximize IQ, you don't want to achieve some arbitrary balance between pixel size and pixel count. You simply want to maximize the number of pixels on subject, regardless of their size. Because it really isn't about the pixels...it's about the area of the sensor your subject occupies.
In a reach-limited situation, the absolute area of the sensor occupied by your subject is fixed...it doesn't matter how large the sensor is. You will be gathering the same amount of light in total for your subject regardless of what sensor you're using, or how big its pixels are. Therefore, the only other critical factor to IQ is detail...smaller pixels are better, in that case, all else being equal.
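A quick bit of geometry shows why the reach-limited subject occupies a fixed absolute area (the focal length and angular size below are assumed purely for illustration): the size of the subject's image on the sensor depends only on the lens, not on the sensor behind it.

```python
import math

# Assumed example values, not from the thread:
focal_mm = 400.0            # lens focal length
subject_angle_deg = 0.5     # angular size of the distant subject

# Image size of the subject on the focal plane (thin-lens approximation).
image_size_mm = 2 * focal_mm * math.tan(math.radians(subject_angle_deg) / 2)
print(f"{image_size_mm:.2f} mm on ANY sensor behind this lens")  # ~3.49 mm
```

Whether that ~3.5 mm image lands on an APS-C or FF sensor, the subject covers the same absolute area; the larger sensor only adds empty frame around it.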
The electrical noise also increases until the inherent noise in the circuit becomes greater than that due to the fluctuation in the number of electrons generated by the photons. We all experience this as the noise caused by increasing the ISO setting. The dynamic range also decreases. Eventually, the pixel becomes so small that it loses all of its dynamic range because the well is so shallow it can hold only a few electrons.
Actually, electronic noise within the pixels themselves, ignoring all other sources of read noise (which tend to be downstream from the pixels), is due to dark current. Dark current noise is relative to pixel area and temperature...and dark current noise DROPS as pixel size drops. The amount of dark current that can flow through a photodiode is relative to its area, just like the charge capacity of a photodiode is relative to its area. So, technically speaking, electronic noise does not increase as pixel size decreases. Again, dark current noise is relative to the unit area...pixel size, ultimately, does not matter.
When it comes to read noise overall, that actually has far less to do with pixel size, and far more to do with the downstream pixel processing logic, how it's implemented, the frequency at which those circuits operate, etc. Most read noise comes from the ADC unit, especially when it runs at high frequencies. I've seen read noise in CCD cameras that use Kodak KAF sensors change from one iteration to the next. A camera using a KAF-8000 series sensor had as much as 40e- read noise a number of years ago. The same cameras today have ~7e- read noise. They are identical sensors...the only real difference is read noise. That's because read noise isn't a trait inherent to the sensor...it's related to all the logic that reads the sensor out and converts the analog signal to a digital signal. Canon could greatly reduce their read noise, without changing their sensor technology at all...because the majority of their noise comes from circuitry off-die in the DIGIC chips.
So, too large a pixel gives too little resolution; too small a pixel gives too much noise and too little dynamic range. You could have 20 billion uselessly small pixels on target where 20 million would be the optimal number. Because of the above reasoning, astrophotographers and astronomers match pixel size to their telescopes. For photographers, the optimal size for current sensor pixels is somewhere in the range of crop to FF pixel sizes.
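For what it's worth, the matching astrophotographers do boils down to a simple pixel-scale formula (this is the standard 206.265 arcsec/radian rule of thumb, not something stated in this thread; the seeing value below is an assumed example):

```python
def pixel_scale_arcsec(pixel_um: float, focal_mm: float) -> float:
    """Angular size of one pixel in arcseconds: 206.265 * pixel_µm / focal_mm."""
    return 206.265 * pixel_um / focal_mm

# Example: 4.3 µm pixels behind a 1000 mm focal length telescope.
scale = pixel_scale_arcsec(4.3, 1000.0)
print(f"{scale:.2f} arcsec/pixel")  # ~0.89 arcsec/pixel

# Common heuristic: sample the seeing disc with ~2 pixels (Nyquist-style).
seeing = 2.0  # arcsec FWHM, an assumed typical-sky value
ideal_pixel_um = (seeing / 2) * 1000.0 / 206.265
print(f"ideal pixel ~{ideal_pixel_um:.1f} µm")  # ~4.8 µm
```

The point being: "optimal pixel size" in astronomy is a function of the optics and the seeing, which is exactly the matching exercise described above.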
You're ignoring the fact that you can always downsample an image made with a higher resolution sensor to the same smaller dimensions as an image made with bigger pixels. The 7D and 5D III are the cameras I used because they are the cameras I have. I often use the term "all else being equal" in my posts, because it's a critical factor. The 7D and 5D III are NOT "all else being equal". They are a generation apart. The 7D pixels are technologically inferior to the 5D III pixels.
So, ASSUMING ALL ELSE BEING EQUAL, there is absolutely no reason to pick larger pixels over smaller pixels, assuming you're going to be framing your subject the same with identical sensor sizes. Say you're photographing a baboon's face, and you frame it so the face fills the frame with a nice amount of negative space. If you have a 10mp and a 40mp camera, you should ALWAYS pick the sensor with smaller pixels. You can always downsample the 40mp image by a factor of two, and you'll have the same amount of noise as the 10mp camera. Noise is relative to unit area. It doesn't matter if that unit area is one pixel in a 10mp camera, or four pixels in a 40mp camera...it's still the same unit area. Average those four smaller pixels together, and you reduce noise by a factor of two. Which is exactly the same thing as binning four pixels during readout, which is also exactly the same thing as simply using a bigger pixel.
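Here's a minimal numpy sketch of that downsampling claim (the dimensions, signal level, and sigma are made up, and Gaussian noise stands in for per-pixel noise): averaging 2x2 blocks of a noisy flat patch cuts the noise standard deviation by sqrt(4) = 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a flat patch from a hypothetical "40mp" sensor:
# uniform signal of 100 plus Gaussian noise of known sigma (assumed values).
h, w, sigma = 1000, 1000, 8.0
img40 = 100.0 + rng.normal(0.0, sigma, (h, w))

# 2x2 binning: average each block of four pixels -> the "10mp" image.
img10 = img40.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Averaging four independent samples cuts the noise std by sqrt(4) = 2.
print(img40.std())  # ~8.0
print(img10.std())  # ~4.0
```

That halving is the same arithmetic whether you do it in software, bin during readout, or build the bigger pixel in silicon, which is the equivalence claimed above.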
The caveat, here, is that with a 40mp sensor, you have the option of resolving more detail. You plain and simply don't have that option with the 10mp sensor. More pixels just delineate detail...and noise...more finely. Finer noise has a lower perceptual impact on our visual observation. If the baboon face is framed the same, then you're gathering the same amount of light from that baboon's face regardless of pixel size. Photon shot noise (the most significant source of noise in our photos) is intrinsic to the photonic wavefront entering the lens and reaching the sensor. Smaller pixels simply delineate that noise more finely.