photonius said:
Don't forget the 70D. This may be a 37.5 MP camera with dual pixels. Use different exposures on each half-site, and you get expanded 14-bit or 16-bit DR (like the Magic Lantern dual-ISO trick).
Canon could also bin the pixels normally, but for tele, if cropping is desired, the unbinned version could be selected (sort of like the Nokia PureView).
Another alternative is lens correction. Thanks to the high sampling, distortion, CA, etc. can be corrected with little loss before downsampling for storage.
Lots of things that can be done.
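For what it's worth, the dual-exposure idea above can be sketched in a few lines of Python. The 14-bit ceiling and 4:1 exposure ratio here are purely illustrative assumptions, not anything Canon has announced; the point is just that where the long exposure clips, you fall back to the scaled-up short exposure:

```python
import numpy as np

# Hypothetical sketch: each microlens covers two photodiode halves.
# Expose one half normally and the other at 1/4 the exposure, then
# merge the two readouts into one higher-dynamic-range value per site.

FULL_WELL = 2**14 - 1   # 14-bit ADC ceiling (assumed for illustration)
RATIO = 4               # exposure ratio between the two halves (assumed)

def merge_halves(bright_half, dark_half):
    """Merge two half-site exposures of the same scene into one HDR value.

    Where the bright (longer) exposure clips, fall back to the dark
    (shorter) exposure scaled up by the exposure ratio.
    """
    bright = np.asarray(bright_half, dtype=np.float64)
    dark = np.asarray(dark_half, dtype=np.float64)
    clipped = bright >= FULL_WELL
    return np.where(clipped, dark * RATIO, bright)

# Toy example: a highlight that clips the bright half but not the dark half.
bright = np.array([1000, 16383, 16383])   # last two values are clipped
dark   = np.array([ 250,  5000,  8000])   # 1/4 the exposure, unclipped
print(merge_halves(bright, dark))         # -> [ 1000. 20000. 32000.]
```

The merged values exceed the 14-bit ceiling, which is exactly where the extra stops of DR come from.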
You would never really be able to "unbin", as the two pixel halves sit under a single microlens and color filter. There wouldn't really be any point, since you would have two halves of green, two halves of red, two halves of blue. That would create a real oddity for digital interpolation, assuming you could get any benefit at all.
The term megapixel usually refers to output image pixels, not photodiode count. Keep in mind, there are usually more real "pixels" in a sensor than can be counted from the output image anyway, and have been for some time. For example, an 18 MP sensor usually has nearly 20 MP of actual pixels. It just doesn't seem logical for Canon to start counting their half pixels used for AF...
By my calculations, a 75 MP FF sensor would be 10600x7050 pixels in size, with 3.4 micron pixels. That is actually not all that bad. That pixel pitch is similar to a 24 MP APS-C sensor's (which is very interesting...it would make sense if Canon has already produced a prototype 24 MP 7D II sensor.)
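The arithmetic is easy to double-check, assuming a standard 36 x 24 mm full-frame sensor:

```python
# Sanity check of the ~75 MP full-frame figures above,
# assuming a 36 x 24 mm sensor and a 3.4 micron pixel pitch.

sensor_w_mm, sensor_h_mm = 36.0, 24.0
pitch_um = 3.4

cols = sensor_w_mm * 1000 / pitch_um   # ~10588 pixels wide
rows = sensor_h_mm * 1000 / pitch_um   # ~7059 pixels tall
megapixels = cols * rows / 1e6

print(round(cols), round(rows), round(megapixels, 1))  # -> 10588 7059 74.7
```

That lands within rounding of the 10600x7050 / 75 MP figures quoted.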
I really don't see how Canon could keep using a 500 nm FSI sensor design with 3.4 micron pixels. Given they have a patent for a BSI design for both APS-C and FF, I wonder if these two sensors use the same architecture.