There have been strides in sensor technology, but pixel size still dominates how much noise you have. Smaller pixels will always have more noise per pixel; that's a simple matter of physics. We have improved read noise with better sensor technology, but read noise is only a small contribution to total noise (especially at high ISO)...photon shot noise is the primary source of noise in images. The larger pixels of the 1D IV will always win out against the smaller pixels of APS-C sensors. The only way the 7D II could do better is if it had larger pixels than the 1D IV, but that would make it something like a 10mp sensor...highly unlikely.
A smaller pixel generates less read noise, correct. For example, the 7D has 8e- read noise, vs. the 1D X's 35e- read noise. But read noise is a tiny, tiny contributor to overall noise. The lower FWC (full-well capacity) means TOTAL noise (including photon shot noise) is higher relative to signal, because your maximum signal (as dictated by that lower FWC) is lower.
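To put rough numbers on that, here's a quick sketch (the read-noise figures are the ones above; the signal levels are made up for illustration). You can see read noise only matters when the signal itself is tiny:

```python
import math

# Total noise adds in quadrature: photon shot noise is sqrt(signal) for a
# signal measured in electrons (e-), and read noise is a fixed floor.
def total_noise(signal_e, read_noise_e):
    shot_noise = math.sqrt(signal_e)
    return math.sqrt(shot_noise ** 2 + read_noise_e ** 2)

# Read noise from above (7D ~8e-, 1D X ~35e-); signal levels are made up to
# show deep shadows vs. a bright exposure near saturation.
for signal in (100, 10_000, 50_000):
    for read in (8, 35):
        n = total_noise(signal, read)
        print(f"signal={signal:>6}e-  read={read:>2}e-  "
              f"total noise={n:6.1f}e-  SNR={signal / n:6.1f}")
```

At 50,000e- the 8e- vs. 35e- difference barely moves the total; at 100e- it dominates, which is why read noise mostly matters in the shadows.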
The overall sensor area is what dictates total noise in the image, and in that respect it doesn't really matter what the pixel size is. Smaller sensors have more noise than larger sensors because their smaller total area gathers less total light.
Jrista, I'm a little confused by your two statements - which seem at odds with each other, unless I misunderstand them.
My understanding is noise is determined by the total light gathered by the system, and that is a function of the sensor's area and its quantum efficiency. That would mean changing the 7D sensor for one which is the same size but has a smaller number of much larger pixels (which otherwise performed the same) wouldn't help with noise, because you wouldn't change the total light gathered by the system. Larger pixels would presumably have a larger FWC, which might enable more subtle colour/brightness gradation (and perhaps increase dynamic range?), but wouldn't actually reduce noise.
Am I missing something?
That is essentially correct. Pixel size doesn't matter much because you can always downsample, which is effectively the same as binning or having larger pixels. Let's say you have a 32mp APS-C and an 8mp APS-C. Both sensors have a Q.E. of 50%. Neither sensor has an AA filter. These two sensors differ by a factor of four in pixel area...you can fit four of the 32mp-sized pixels into one 8mp-sized pixel. If you take the 32mp image and downsample it to 8mp (8000x4000 pixels downsampled to 4000x2000 pixels), the results are essentially the same. The per-pixel noise of the 32mp image is higher, however once downsampled, basic averaging effectively nullifies the increase in noise, and largely nullifies the increase in detail, resulting in nearly the same detail and effectively the same noise as the 8mp sensor. The detail will still be slightly higher, though, as you started out with a finer level of detail, and the multi-sampling process of downsampling means that while you are averaging out noise, you are also compounding the quality of detail in each pixel.
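Here's a minimal simulation of that (shot noise only, made-up photon counts; think of the arrays as crops of the 8000x4000 and 4000x2000 frames):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical flat exposure at 50% Q.E.: each large (8mp-style) pixel
# collects ~5000 electrons on average; each small (32mp-style) pixel covers
# a quarter of the area, so it collects a quarter of that.
large = rng.poisson(5000, size=(200, 200)).astype(float)
small = rng.poisson(1250, size=(400, 400)).astype(float)

# Downsample the small-pixel image 2x2 -> 1 by plain averaging (binning).
binned = small.reshape(200, 2, 200, 2).mean(axis=(1, 3))

snr = lambda img: img.mean() / img.std()
print(f"small pixels, per-pixel SNR: {snr(small):5.1f}")   # ~sqrt(1250) ~ 35
print(f"large pixels, per-pixel SNR: {snr(large):5.1f}")   # ~sqrt(5000) ~ 71
print(f"small, binned to 8mp grid  : {snr(binned):5.1f}")  # ~matches the large pixels
```

Averaging each 2x2 block halves the noise, which exactly cancels the 2x per-pixel SNR deficit of the quarter-area pixels.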
Now, let's say the 8mp camera has 40% Q.E. and the 32mp camera has 80% Q.E. Now the 32mp camera's per-pixel noise is only about 40% worse (a factor of about 1.4x) than the 8mp, rather than twice as bad. If you downsample the 32mp image to the same dimensions as the 8mp image, the downsampled 32mp image will have less noise and will show the same advantage in detail. It is highly unlikely we will ever see a consumer-grade sensor with 80% Q.E. I've only seen those levels in Grade 1 scientific sensors (the kinds of sensors you find in astrophotography cameras, or the stuff they ship up to Hubble). We may see sensors with 65% Q.E. or so, however that is only about a third of a stop improvement over the ~50% most current sensors have now.
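The per-pixel arithmetic behind that, as a sketch (the photon count is arbitrary):

```python
import math

photons = 10_000                 # photons hitting one 8mp-sized pixel (arbitrary)

# Per-pixel SNR for a shot-noise-limited signal is sqrt(electrons collected).
snr_8mp_40qe  = math.sqrt(photons * 0.40)      # 8mp pixel at 40% Q.E.
snr_32mp_40qe = math.sqrt(photons / 4 * 0.40)  # quarter-area pixel, same Q.E.
snr_32mp_80qe = math.sqrt(photons / 4 * 0.80)  # quarter-area pixel, 80% Q.E.

print(snr_8mp_40qe / snr_32mp_40qe)  # 2.0   -> twice as noisy per pixel
print(snr_8mp_40qe / snr_32mp_80qe)  # ~1.41 -> sqrt(2), only ~40% noisier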
Now, let's say we have two sensors of differing size. Let's say we have a 16mp FF sensor and an 8mp 24x18mm sensor (exactly half the area of the FF sensor, slightly larger than APS-C). Both cameras have exactly the same pixel size. If you frame your subject in one vertical half of the FF sensor with the camera oriented vertically, and crop out the other half, you will have identical results to the 8mp sensor. If you frame the same subject horizontally using the full area of the FF sensor, you are putting twice as much sensor area on the subject. You have gathered double the amount of light with the FF sensor compared to the smaller sensor...and it has nothing to do with pixel size. If you downsample the FF image to the same dimensions as the smaller sensor's image, you're going to trounce it in both noise levels and detail levels.
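A quick sanity check on the areas involved (nominal sensor dimensions):

```python
import math

ff_area   = 36 * 24        # full frame, mm^2 -> 864
half_area = 24 * 18        # the hypothetical 8mp sensor, mm^2 -> 432
apsc_area = 22.3 * 14.9    # Canon APS-C for reference, mm^2 -> ~332

print(ff_area / half_area)             # 2.0 -> twice the light at equal exposure
print(math.log2(ff_area / half_area))  # 1.0 -> exactly one stop more light
print(half_area / apsc_area)           # ~1.3 -> "slightly larger than APS-C"
```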
The total amount of light gathered is really what matters. Assuming the same sensor size, the actual pixel size does not really matter all that much. There are things that may result in improved performance of a sensor with one pixel size or another. Improved quantum efficiency is one. There are also caveats with pixel size. If you want more pixels, that also means more wiring. In FSI (front-side illuminated) sensors, the increased wiring with smaller pixels means there is even less total light-sensitive area than with larger pixels. Theoretically, assuming an identical fabrication process is used, our 8mp camera from above will actually have more total photodiode (light-sensitive) area than the 32mp sensor. If they both have the same Q.E., then the 8mp sensor will actually perform slightly better due to the slightly greater total photodiode area.

This would be the only way I think a 7D II could perform as well as or better (highly unlikely) than the 1D IV. By reducing pixel count significantly, one can increase the total amount of light-sensitive sensor area. I'm not exactly sure where the cutoff point would be...however you would have to pretty drastically reduce the wiring area of the 7D II. You would probably also need a process shrink (500nm to 180nm). Another way to do it would be to move to a BSI (back-side illuminated) design, which moves the wiring behind the photodiodes. (This all assumes that there is enough wiring in the 1D IV sensor that its total light-sensitive area is still not greater than the area of an APS-C sensor...if it is, then there wouldn't be any way the 7D II could perform better.)
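To make the wiring penalty concrete, here's a toy model (the 1.0um wiring width and the pixel pitches are made-up round numbers, not real Canon figures):

```python
# Toy FSI fill-factor model: pretend each pixel loses a fixed-width wiring
# border on two sides. Real layouts are far more complicated.
def photodiode_fraction(pitch_um, wiring_um=1.0):
    live = pitch_um - wiring_um            # light-sensitive width, um
    return (live / pitch_um) ** 2          # fraction of the pixel that is photodiode

for mp, pitch in ((8, 6.4), (32, 3.2)):    # rough pitches for our two APS-C sensors
    print(f"{mp:>2}mp @ {pitch}um pitch: ~{photodiode_fraction(pitch):.0%} photodiode")
```

With a fixed wiring border, the 8mp pitch keeps roughly 71% of the sensor light-sensitive vs. roughly 47% for the 32mp pitch...the wiring overhead doesn't shrink along with the pixel.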
In this respect, you are indeed correct about color fidelity and dynamic range...larger pixels do have an edge here. However, you are still going to find that greater total sensor area has a greater impact on those aspects of IQ than larger pixels do in the long run (for example, the D800 has phenomenal color fidelity, however its pixel size is only marginally larger than the 7D's, which has pretty terrible color fidelity in the grand scheme of things...the greater total light-gathering capacity of the D800, benefited by both higher Q.E. and being FF, is its real edge here).
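For dynamic range specifically, the usual engineering definition makes the pixel-size edge easy to see (the FWC and read-noise numbers below are hypothetical):

```python
import math

# Engineering dynamic range in stops: log2(full-well capacity / read noise).
def dr_stops(fwc_e, read_noise_e):
    return math.log2(fwc_e / read_noise_e)

# Hypothetical figures: a big pixel with a deep well vs. a small pixel with
# a quarter of the well depth, both at the same 8e- read noise.
print(f"large pixel: {dr_stops(60_000, 8):.1f} stops")   # ~12.9
print(f"small pixel: {dr_stops(15_000, 8):.1f} stops")   # ~10.9
```

Of course, downsampling claws some of that back at the image level, which is why total sensor area still wins in the long run.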
Other technology may be employed to increase the total light sensitivity of a sensor pixel. Current sensors are effectively two-dimensional...the only thing that really matters for total charge capacity is the area of the photodiode. Foveon-type sensors stack photodiodes, resulting in an increase in total charge capacity for each pixel. The same technique could theoretically be employed for monochrome and Bayer sensors. Blue pixels would benefit least, as silicon absorbs most of the bluer wavelengths before they penetrate deeply. Green and red pixels would benefit most, allowing for two or three, maybe even four layers of photodiodes. Such technology could be employed in higher-megapixel sensors to increase FWC and sensitivity. There is nothing that says the same techniques couldn't be employed with larger-pixel sensors, though.