The reason no one is using Foveon-style sensors is precisely that they are so terrible in low light. None of the layers "traps" all of the photons falling on it, or there wouldn't be anything left for the layer(s) underneath. There is nothing intrinsically different about a photon of "green" light and a photon of "blue" light except its frequency.
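The depth separation behind this can be sketched with Beer–Lambert attenuation: silicon absorbs shorter wavelengths within a shallower depth, so each stacked layer only captures the fraction of light absorbed between its top and bottom. A minimal sketch; the absorption depths and the 0.5 µm layer thickness below are rough illustrative values, not figures from any specific sensor:

```python
import math

# Approximate 1/e absorption depths of visible light in silicon
# (rough textbook values, NOT from any sensor datasheet).
ABSORPTION_DEPTH_UM = {"blue_450nm": 0.4, "green_550nm": 1.5, "red_650nm": 3.5}

def fraction_absorbed(top_um, bottom_um, absorption_depth_um):
    """Beer-Lambert: fraction of incident photons absorbed between two
    depths, given I(d) = I0 * exp(-d / L)."""
    return (math.exp(-top_um / absorption_depth_um)
            - math.exp(-bottom_um / absorption_depth_um))

# Fraction of "blue" photons that survive past a hypothetical 0.5 um
# top layer and remain available to the layers underneath -- i.e. no
# layer traps everything:
blue_leakage = math.exp(-0.5 / ABSORPTION_DEPTH_UM["blue_450nm"])  # about 0.29
```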
Multinational companies can't afford geniuses, know-it-alls and psychics. Those are all employed by the fact-manufacturing plants anyway.
Actually, the greatest issue with Foveon sensors is the pass-through behaviour of the bandpass filtering within the sensors themselves. Each layer SHOULD be sensitive ONLY to the specified frequencies (i.e. wavelengths) of its red, green or blue colour band and then pass nearly all others through to the layers below. A Foveon stacked photosite uses chemical dopants that allow the EM waves within the specific R, G and B colour bands to penetrate to different depths within the CMOS (silicon) layers.
Unfortunately, Sigma has manufacturing issues SPECIFICALLY with the red colour channel material, which cross-contaminates the upper green layer (green being the brightest colour to the human eye) and the bottom blue layer, such that low-light colour rendition is sloppy. They need to add indium or maybe copper, and then the specific RGB layers can be made to emit a greater electrical signal, making each layer "more sensitive". Since Foveon is a MATURE technology, the dopants used for specific EM-band "colour sensitivity" have stayed the same while NEWER materials are now available!
I would say Canon OR Sony should simply BUY Sigma and update the dopants to allow higher photon sensitivity in each RGB layer. Foveon technology, not requiring de-Bayering, is more colour accurate (aka more true-to-life). It's the closest to what we see in real life when it comes to colour rendition, BUT the actual photon-to-electrical-signal conversion process is flawed because of OUTDATED dopants that could be redone to increase luminance sensitivity (based upon what an engineer I have talked to has indicated!) by 25% or more, maybe even 40%!
If that change were made, I would say we are getting into Sony A7S II equivalencies at a photosite size of around 6 to 8 microns, which is SMALLER than the roughly 12 microns per photosite that the Sony uses! Not only is the photosite SMALLER, but the actual per-layer RGB sensitivity IS GREATER, so you are effectively doubling the overall sensitivity of the entire sensor. Ergo, an APS-C image from the reworked Foveon technology at the same resolution would be BRIGHTER with LESS NOISE than a normal Bayer sensor image at full-frame size! NOW THAT would be a treat to see, AND I understand Panasonic and Sony are actually pursuing those very strategies, using newer variable-type dopants in their own versions of stacked-RGB-layer image sensors!
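To put rough arithmetic on those figures (all hypothetical: the ~12 µm reference photosite, the 6 to 8 µm stacked photosite, and the 25 to 40% dopant gain are the numbers claimed above, not measured values), light gathered per photosite scales with its collection area times the conversion gain:

```python
REFERENCE_PITCH_UM = 12.0  # the ~12 um photosite claimed above (hypothetical)

def relative_light_per_photosite(pitch_um, dopant_gain):
    """Light collected per photosite relative to the 12 um reference:
    collection area scales with pitch squared, and the dopant gain
    multiplies each layer's conversion efficiency."""
    return (pitch_um / REFERENCE_PITCH_UM) ** 2 * (1.0 + dopant_gain)

# An 8 um photosite with a 40% gain still collects less per photosite
# than the 12 um reference ((8/12)^2 * 1.4 ~= 0.62); per unit of sensor
# AREA, though, the dopant gain applies in full.
```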
On a technical note, I DO SUSPECT that within each square area of the individual stacked RGB layers, the dopants are NOT being adequately and/or fully diffused throughout the width, height and depth of the layer. This means incoming photons of a given wavelength are NOT being passed through to the next layer, OR are NOT being absorbed properly into the current photosensitive layer as an accumulated electrical charge, and are instead converted into heat (i.e. into thermal rather than electrical energy)! A more precise doping mechanism may be needed for the individual red, green and blue photosensitive layers to increase overall charge accumulation!
Google is your friend here. Canon's highest-density sensor has a pixel size of 4.1 microns. Sony has an entire line of sensors with much smaller pixels, all the way down to 1.0 microns. How many 1.0-micron pixels could you put on a full-frame sensor? I'm not going to do the math, but it's a lot more than 100 million. I like Canon's system and I'm no Sony fan, but you are kidding yourself if you think Canon is ahead of Sony in sensor development.
edit: sorry, forgot about the sensor in the 80D. That one is 3.7 microns.
Also important to remember that it's a linear measurement, so sixteen 1-micron pixels fit into the same area as one 4-micron pixel. Fair to say that Sony is pretty good at small pixels.
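A quick check of the pixel-count arithmetic above (assuming a standard 36 x 24 mm full-frame sensor and square pixels at the quoted pitches):

```python
FULL_FRAME_MM = (36.0, 24.0)  # standard full-frame sensor dimensions

def full_frame_megapixels(pitch_um):
    """Megapixels that fit on a 36 x 24 mm sensor at a given square pixel pitch."""
    pixels_per_mm = 1000.0 / pitch_um
    w_mm, h_mm = FULL_FRAME_MM
    return (w_mm * pixels_per_mm) * (h_mm * pixels_per_mm) / 1e6

# 1.0 um pitch: 36000 x 24000 pixels = 864 MP ("a lot more than 100 million")
# 4.1 um pitch: roughly 51 MP
# Linear vs area: (4 / 1)^2 = 16 one-micron pixels per 4-micron pixel.
```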
Anything involving multiple images for dynamic range improvements will result in motion artifacts in some situations.
Canon could also just make it easier to access the extra stop of dynamic range from the dual-pixel sensors that is currently discarded.
And also, can a high resolution sensor not do everything a low res one can, if it has some good hardware binning implemented?
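Binning as mentioned above can be illustrated in software; a minimal 2 x 2 sum-binning sketch (real sensors bin in the analogue domain or at readout, which this does not model):

```python
import numpy as np

def bin_2x2(raw):
    """Sum each 2 x 2 block of photosites into one output pixel,
    trading resolution for per-pixel signal (even dimensions assumed)."""
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# e.g. a 6000 x 4000 (24 MP) frame binned this way yields a
# 3000 x 2000 (6 MP) frame with four photosites' signal per pixel.
```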
"With bird and wildlife photos?"

It's done with birds all the time. More importantly, my point is that to get maximum sharpness from a high-MP camera you need a sturdy tripod, a high shutter speed and also a strobe if possible.

The sadness is that all too many people think it is acceptable behaviour to make gratuitous comments like that rather than simply present a logical case.
Some cases do not allow for a strobe, so then, absent a tripod, your high-MP camera is not delivering the resolution you paid for.
Sorry you are sad.
You also seem to have forgotten many of the sensors in Canon's studio broadcast cameras and other types of cameras with sensors significantly smaller than APS-C.