Hey, I am going on a unicorn photo expedition in January, and I need that slightly-better-than-70D high-ISO noise performance.
Bigger pixels give more electron capacity per pixel (say, a 4 micron pixel has a 30,000-electron maximum capacity, and a 7 micron pixel a 100,000-electron maximum capacity). So, say you have a 14-bit ADC; that's roughly 16,000 levels, or about 2 electrons per level for the 4 micron pixel and 6 electrons per level for the 7 micron pixel. Say you have 30 electrons' worth of noise. Noise takes up the first 15 levels for the 4 micron pixel but only the first 5 levels for the 7 micron pixel. That's why bigger pixels, all other things being equal, result in less perceptible noise.
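Here's that back-of-the-envelope ADC arithmetic as a quick Python sketch. All the numbers are the illustrative values from the post, not measured sensor data:

```python
# Illustrative figures from the post, not real sensor measurements.
FULL_WELL = {"4 micron": 30_000, "7 micron": 100_000}  # e- at saturation
ADC_LEVELS = 2 ** 14          # 14-bit ADC: 16,384 levels
NOISE_ELECTRONS = 30          # assumed noise floor

for pixel, fwc in FULL_WELL.items():
    e_per_level = fwc / ADC_LEVELS
    noise_levels = NOISE_ELECTRONS / e_per_level
    # The post rounds 1.8 e-/level up to 2, hence its "15 levels" for
    # the 4 micron pixel; exact math gives ~16. Same idea either way.
    print(f"{pixel}: {e_per_level:.1f} e-/level, "
          f"noise occupies ~{noise_levels:.0f} levels")
```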
This is still wrong. Bigger pixels mean more charge per pixel...but it's still the same TOTAL CHARGE for the WHOLE SENSOR! As Lee Jay said, slicing up a pizza into smaller slices doesn't mean you have more pizza, or more pepperoni on that pizza. It's still the same amount of food.
Same for sensors. Take two APS-C sensors, one with 10µm pixels and one with 5µm pixels. Both sensors are 22.3x14.9mm in size, so the big-pixel sensor is 2230x1490 pixels and the small-pixel sensor is 4460x2980 pixels: one has four times as many pixels, and the other has pixels with four times the area. The 10µm pixels gather 100ke- of charge at FWC, the 5µm pixels 25ke-. The bigger pixels are better, right? They gather more light than smaller pixels. They mean less noise, right? Nope. Let's calculate the total charge for a fully saturated sensor:
(2230*1490) * 100000 = 332,270,000,000e-
(4460*2980) * 25000 = 332,270,000,000e-
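The same calculation in a few lines of Python, using the hypothetical sensor figures above:

```python
# Two hypothetical APS-C sensors, same 22.3x14.9mm area,
# 10 um vs 5 um pixels (figures from the post).
big   = {"w": 2230, "h": 1490, "fwc": 100_000}  # 10 um pixels, 100ke- FWC
small = {"w": 4460, "h": 2980, "fwc": 25_000}   # 5 um pixels, 25ke- FWC

def total_charge(sensor):
    """Pixel count times full-well capacity, in electrons."""
    return sensor["w"] * sensor["h"] * sensor["fwc"]

print(total_charge(big))    # 332,270,000,000 e-
print(total_charge(small))  # identical: 332,270,000,000 e-
```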
Hmm. Something MUST be wrong, because these two sensors gathered the same amount of light! If your subject fills the same absolute area of the sensor, then either sensor is going to gather the same total amount of light. The only difference is that one divides the subject into smaller buckets. Each bucket gets less light, but the subject as a whole is resolved at the sensor with the exact same amount of light in total.
Oh, but I purposely used pixels that had a nice, neat little ratio between them. It doesn't work that way in real life, right? Let's prove the point. Let's take the 5D III and 6D, both full frame sensors. Their total charge capacities are:
5D III: (5760px*3840px) * 67531e-/px = 1,493,677,670,400e-
6D: (5472px*3648px) * 76606e-/px = 1,529,197,940,736e-
The 5D III has 49% Q.E., the 6D has 50% Q.E. Dividing the above by 49% and 50% respectively gives us:
1,493,677,670,400/49 = 30,483,217,763.27
1,529,197,940,736/50 = 30,583,958,814.72
Dividing those numbers gives us:
30,483,217,763.27/30,583,958,814.72 ≈ 0.9967
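Or, doing the whole 5D III / 6D comparison in one go, using the same FWC and Q.E. figures quoted above:

```python
# FWC and Q.E. figures as quoted in the post.
cams = {
    "5D III": {"w": 5760, "h": 3840, "fwc": 67531, "qe": 0.49},
    "6D":     {"w": 5472, "h": 3648, "fwc": 76606, "qe": 0.50},
}

def total_photons(c):
    # Total charge divided by Q.E. backs out the incident light.
    return c["w"] * c["h"] * c["fwc"] / c["qe"]

ratio = total_photons(cams["5D III"]) / total_photons(cams["6D"])
print(f"ratio = {ratio:.4f}")  # ~0.9967, i.e. a ~0.3% difference
```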
The 5D III and 6D come within 0.3% of each other as far as total charge goes...well within the margin of error. Differences in technology, cherry-picking the best sensors (as in the 1D X/D4 lines), using better companion electronics (again as in the 1D X/D4), etc. can create larger discrepancies, but in general, differences in pixel size are largely meaningless until we're talking about very small pixels where fill factor becomes an issue. It's sensor area that matters first and foremost, then quantum efficiency...then pixel size/fill factor.
The 7D II could employ some new technology to improve Q.E. They could use better materials (e.g. black silicon), control current better, maybe even switch from a standard RGGB CFA to something like color splitting, etc., and maybe double Q.E. That would allow them to realize a REAL one-stop improvement in noise performance at high ISO. I think it's doubtful that's happened...if the 20.2mp sensor rumor is true. In all likelihood, Canon has made some minor evolutionary improvements: improved Q.E. a few percent, maybe found a way to recover some die area for photodiodes, improved the efficiency of their circuitry, etc. I don't expect the differences to be huge.
The 70D has 45% Q.E. The 7D II might have around 49% Q.E., and they may better utilize the sensor die area for photodiodes. We might see a boost from ~26ke- FWC to maybe ~30ke- FWC. That is not going to change things much...and accounting for the differences in quantum efficiency, the two sensors are still going to come within a fraction of a percent of each other as far as total light gathering capacity goes.
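For reference, here's how Q.E. ratios convert into stops of light gathering. The 45% figure for the 70D is from above; the 49% 7D II figure is speculative, as noted:

```python
import math

# Stops of improvement from a Q.E. change: log base 2 of the ratio.
def stops(qe_new, qe_old):
    return math.log2(qe_new / qe_old)

print(stops(0.90, 0.45))  # doubling Q.E. really is a full stop: 1.0
print(stops(0.49, 0.45))  # 70D -> hypothetical 7D II: ~0.12 stop
```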
Err... I think you forgot to consider the noise factor... if the noise for every pixel is the same, the larger pixel (more signal) will have a better signal-to-noise ratio... that means more pixels equal more noise, and since the total signal for both sensors is the same, the sensor with fewer pixels will have a better signal-to-noise ratio. Also, since smaller pixels hold less charge, the chance of blowing highlights is higher than with a larger-pixel sensor.
As a result, sensors with larger pixels have better dynamic range than sensors with smaller pixels, even if the total sensor size is the same.
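To put numbers on the per-pixel argument, here's a sketch assuming pure photon shot noise and the 10µm/100ke- vs 5µm/25ke- figures from earlier in the thread. Note it only compares single pixels, not the image as a whole, which is exactly the point under debate:

```python
import math

# Per-pixel SNR under photon shot noise alone.
def snr_db(signal_e):
    # Shot noise is sqrt(signal), so SNR = signal/sqrt(signal) = sqrt(signal).
    return 20 * math.log10(math.sqrt(signal_e))

print(snr_db(100_000))  # 10 um pixel at saturation: ~50 dB
print(snr_db(25_000))   # 5 um pixel at saturation: ~44 dB
```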
Have a nice day.