Second, microlenses are used on ALL sensors nowadays. The advantage of microlenses is not given solely to APS-C sensors; therefore, there is no relative advantage at all.
That's not the point. The point is that with microlenses in place there is little difference between a sensor with 18 million physical pixels (60D) and one with 40 million physical pixels (70D). The light is directed toward one pixel or another, so pixel size does not have a significant impact on total image noise.
The 70D does not have 40 million physical pixels. It has 20.2 million physical pixels, whose photodiodes are split in half. There is a BIG difference there!! HUGE DIFFERENCE. From a light gathering standpoint, the 70D has 20.2mp vs. the 60D's 18mp. That is only a 2.2mp difference, not a 22.4mp difference. Please don't exaggerate this, because you're being exceptionally misleading about how the 70D sensor is designed. It is NOT 40.4 million pixels...it's 20.2 million pixels. PIXELS. Not independently read or binnable photodiodes, but pixels. There are ALWAYS two photodiodes binned and read per output pixel when it comes to doing an image read (in contrast to an AF read, which is entirely different and not valid in the context of discussing IQ.)
Microlenses only serve to increase the incident light on the photodiode. That does not change the photodiode's capacity.
True. But that's about DR, not image noise. People...like the guy who started this thread...honestly think that if Canon gave them an 8 MP sensor with modern circuitry, it would be some kind of super high ISO performer. It wouldn't do any better than a 70D on noise, though it should have better DR.
Sorry, but wrong. The more light you get onto the photodiode, the more charge you are going to accumulate in less time. That means you can get away with a lower gain for each ISO setting. That is SPECIFICALLY about noise, not DR. I mean image signal noise. Read noise is an entirely different beast, and does not really apply here.
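To put rough numbers on that, here is a toy Python sketch with made-up photon counts (not any real sensor's signal chain): photon arrival is Poisson, so SNR goes as the square root of the collected charge, and more light on the photodiode directly means a cleaner signal before any gain is applied.

```python
import math

# Toy shot-noise sketch: illustrative numbers only, not a real sensor.
photons_incident = 10_000            # photons hitting the pixel during the exposure

for efficiency in (0.30, 0.50):      # fraction of photons actually converted to charge
    electrons = photons_incident * efficiency
    shot_noise = math.sqrt(electrons)            # Poisson shot noise = sqrt(N)
    snr_db = 20 * math.log10(electrons / shot_noise)
    print(f"efficiency {efficiency:.0%}: {electrons:5.0f} e-, SNR {snr_db:.1f} dB")
```

More electrons per exposure means less amplification is needed to hit a given output level at a given ISO setting, which is exactly the noise argument, independent of read noise.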
I'd also point out that even with microlenses today, we aren't even close to 100% capture.
Source? Also: if it's not 100%, it seems to at least be sufficient to make pixel density irrelevant (again, 70D vs. 60D, or any modern high density sensor with microlenses vs. a lower density sensor before microlenses.)
The source would be dozens of patents. Go read ImageSensorsWorld or find where Chipworks breaks down sensors, then dig up the patents mentioned. You'll learn a lot about Q.E. and the percentage of total light incident on the photodiode. And that's really what matters here: the light incident on the photodiode itself. A "pixel" is more than just a photodiode...but only the photodiode is actually sensitive to light.
Here is one link: http://image-sensors-world.blogspot.com/2014/02/sharp-explains-ishccd-improvements.html
This is Sharp's improved microlens (the article also talks about deep photodiodes; however, that is for an IR camera, so the deeper photodiode doesn't apply the same way to visible-light photography.) Sony, Samsung, and others also have similar patents for improved microlenses, although I think Sharp's is a bit more advanced, as it's newer. (There is still going to be off-axis light incident on the microlens that gets lost, hence the reason Sony employs a double-layered microlens design, and I believe Canon may ultimately employ a double-layer approach in the future for their compact cameras as well.)
Even with improved microlenses, you're still not getting 100% focus nor 100% conversion. The silicon of the photodiode itself has a specific Q.E. These days, at room temperature, about the highest Q.E. you're going to get is around 65%, and most sensors are between 40-55% Q.E. So, even assuming you somehow focused 100% of the light incident on the microlens directly onto the photodiode (a physical impossibility; it is impossible to get 100% efficiency out of ANYTHING), you're still going to lose 40-60% of that light simply because not every photon that strikes the photodiode actually frees an electron. Some are simply absorbed and converted into heat, and a small amount will even reflect off the photodiode itself.
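Back-of-the-envelope, in Python, with deliberately made-up numbers chosen to sit inside the ranges above: even a very good microlens can't rescue you from the Q.E. of the silicon itself.

```python
# Back-of-the-envelope photon budget for one pixel. Numbers are made up,
# but chosen to sit inside the ranges discussed above.
photons_on_microlens = 10_000

optical_efficiency = 0.90    # assume the microlens delivers 90% to the photodiode
quantum_efficiency = 0.50    # mid-range Q.E.: half the arriving photons free an electron

electrons = photons_on_microlens * optical_efficiency * quantum_efficiency
print(f"{electrons:.0f} electrons from {photons_on_microlens} photons "
      f"({electrons / photons_on_microlens:.0%} end-to-end)")
# -> 4500 electrons: 45% end-to-end, even with a generously good microlens.
```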
Advancements in nano-scale silicon manufacture have given rise to things like nano-spikes and black silicon. Both of these work to produce a gradient transition between the well and the photodiode, somewhat like a nanocoating works on a lens: by eliminating any abrupt transitions in index of refraction, you dramatically reduce the chance of a photon reflecting in the first place. Such advancements, if they make their way into still photography sensors, might increase Q.E. into the 70% range or so at room temperature.
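The index-of-refraction point is easy to see with the normal-incidence Fresnel reflectance, R = ((n1 - n2) / (n1 + n2))^2. A quick Python sketch (n ≈ 4 is only a rough visible-light figure for silicon, and this ignores interference effects, so treat it as a trend, not a measurement):

```python
def fresnel_r(n1: float, n2: float) -> float:
    """Normal-incidence Fresnel reflectance at a single index step."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_si = 1.0, 4.0   # n ~ 4 is a rough visible-light figure for silicon

# Abrupt transition: one large index step.
print(f"abrupt air->Si: {fresnel_r(n_air, n_si):.1%} reflected")

# Graded transition approximated as ten small steps, which is roughly what
# nano-spikes / black silicon do physically.
steps = [n_air + i * (n_si - n_air) / 10 for i in range(11)]
transmitted = 1.0
for a, b in zip(steps, steps[1:]):
    transmitted *= 1 - fresnel_r(a, b)
print(f"graded (10 steps): {1 - transmitted:.1%} reflected")
# -> roughly 36% reflected for the abrupt step vs ~5% for the gradient.
```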
Third, the 70D is not a 40.4mp sensor. It is a 20.2mp sensor.
I would say at the hardware level you guys are talking about, it is a 40.4 MP sensor. The pixels are physically separated and basically half the normal size.
No, the photodiodes are split in half, but every pixel has two binned photodiodes. As I already said above, the pixels are what matter, because when you bin the charge in both photodiodes, the outcome is identical to having one single photodiode per pixel.
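You can sanity-check the statistics of that claim with a toy simulation (Python/NumPy; this only models photon shot noise and ignores per-photodiode read noise, since the binning happens in the charge domain before readout):

```python
import numpy as np

rng = np.random.default_rng(0)
mean_photons = 1000       # photons landing on one full pixel per exposure
n_pixels = 1_000_000

# One photodiode spanning the whole pixel:
single = rng.poisson(mean_photons, n_pixels)

# Two half-size photodiodes, each catching half the light, binned at readout:
binned = rng.poisson(mean_photons / 2, n_pixels) + rng.poisson(mean_photons / 2, n_pixels)

print(f"single: mean {single.mean():.1f}, std {single.std():.2f}")
print(f"binned: mean {binned.mean():.1f}, std {binned.std():.2f}")
# Both come out ~1000 +/- ~31.6: the sum of two independent Poisson(lambda/2)
# draws is exactly Poisson(lambda), so the binned pair behaves like one pixel.
```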
But who cares? Feel free to compare other sensors. Direct observations do not support the idea that a lower density sensor would automatically yield superior high ISO. And if this were the case, someone would be doing it.
Density is a matter of perspective here. You don't seem to have read the rest of my answer about wildlife photography. Assuming identical framing and identical aperture, the 5D III is effectively a DENSER sensor than the 7D or 70D: with identical framing and aperture, you end up with MORE pixels on your subject...AND those pixels are bigger. This does not contradict equivalence; as a matter of fact, it is entirely in line with it. The only difference is that equivalence assumes the aperture is stopped down and the ISO is increased in order to reach equilibrium.
My point is that in real life, we don't always do that. As a matter of fact, in real life, I would RARELY do that. In real life, the chances that I would be using a longer focal length with a faster aperture are very real, and highly probable, in which case no APS-C camera could ever possibly compare to any FF camera. Pixel size doesn't even matter here...all that matters is total sensor area, really. For any given aperture, at identical framing, the larger sensor is gathering more light in total.
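The arithmetic there is straightforward (Python, nominal sensor dimensions):

```python
import math

# Total light gathered at identical framing and identical f-number scales
# with sensor area. Nominal sensor dimensions, in mm.
ff_area   = 36.0 * 24.0      # full frame (5D III class)
apsc_area = 22.3 * 14.9      # Canon APS-C (7D / 70D class)

ratio = ff_area / apsc_area
print(f"FF gathers {ratio:.2f}x the total light ({math.log2(ratio):.2f} stops more)")
# -> about 2.6x, roughly 1.4 stops.
```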
Outside of that...it's all just theory, and the theory indicates that the FF sensor will always have the advantage at the same or a faster aperture than the APS-C sensor is used at.
Unless you happen to want more DoF and not less.
If you want more DOF, you can always have more DOF. But again, the FF camera is better, because you have more options there. You can have much thinner DOF with FF than with APS-C at any given focal length, without requiring the more complicated retrofocal designs that eat away at IQ to get those ultra-wide focal lengths and still have nice bokeh.
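The usual equivalence approximation makes the point: at matched framing, DoF tracks the f-number times the crop factor (a rough Python sketch; real DoF math has more terms):

```python
# DoF at matched framing tracks the "equivalent aperture" = f-number x crop factor
# (the standard equivalence approximation).
crop = 1.6   # Canon APS-C crop factor

for f_number in (1.4, 2.8):
    print(f"APS-C f/{f_number} ~ FF f/{f_number * crop:.1f} for DoF")
    print(f"FF f/{f_number} needs APS-C f/{f_number / crop:.2f} to match")
# FF f/1.4 would need APS-C f/0.88 glass to match: lenses that effectively don't exist.
```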
As I said, the one single use case where a cropped sensor is better is when you are literally reach-limited: you cannot get closer, and you only have one focal length. In that case, smaller pixels are better. Don't get me wrong here. I've argued that use case a lot! I'm usually on your side, because that advantage for cropped sensors isn't given enough credit. (People mostly only credit APS-C with being cheaper.) For a lot of people, especially people who ARE on a budget, the reach advantage of a high performance cropped sensor camera like the 7D (or 7D II, as the case may be) is well and truly valuable. I most certainly don't deny the value of having a cropped sensor. It is valuable, and it can allow a LOT of aspiring bird and wildlife photographers to frame tightly without having to get closer than their skill may allow. That was and is my argument for getting the 7D in the first place.
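Quick numbers on the reach-limited case (Python, nominal specs; this assumes the same lens at the same distance, so the subject covers the same physical area on both sensors):

```python
# Reach-limited: same lens, same distance, subject covers the same mm^2 on
# either sensor, so only pixel density matters. Nominal specs:
ff_mp,   ff_width_mm   = 22.3, 36.0   # 5D III class
apsc_mp, apsc_width_mm = 18.0, 22.3   # 7D class

# Cropping the FF frame down to the APS-C field of view keeps only:
ff_cropped_mp = ff_mp * (apsc_width_mm / ff_width_mm) ** 2
print(f"5D III cropped to APS-C framing: ~{ff_cropped_mp:.1f} MP "
      f"vs {apsc_mp:.0f} MP from the 7D")
# -> roughly 8.6 MP vs 18 MP: when you can't get closer, the denser
#    sensor puts far more pixels on the subject.
```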
I just have to point out that in every other circumstance, where you are not reach-limited, FF sensors will do better. Bigger pixels, more pixels, and the ability to get more total light AND more pixels on your subject. You are basically trying to use the theory of equivalence for the exact opposite of what it actually says, which is that at any given aperture and framing, a FF sensor is gathering more light in total...not the other way around.
Pixels on subject is really at the heart of the matter here. With a 5D III, I always have the option (although maybe not necessarily the ability in fringe cases) to get more pixels on my subject, and because the 5D III pixels are bigger...well, there is just no contest. No APS-C sensor could ever compare. Even assuming I did stop down...all that serves to do is reduce the IQ of the 5D III to the level of an APS-C sensor...it does nothing to lift the IQ of the APS-C sensor up.