« on: October 08, 2014, 10:48:11 PM »
Can someone explain to me why they're working on this rather than more megapixels? Rather, why is the focus on more layers? I understand it's to better represent colors. But what is wrong with colors? My cameras have always had nice, realistic, vivid colors as long as I use a good lens. I've never had a photograph where I had even the slightest hint of a thought that the color was not accurate. When I look at my pictures, it looks like it did when I was there. Granted, the dynamic range is not the same, but we're not taking pictures with our eyes, so that's expected. What is it about color that they need to squeeze that last 0.01% of color accuracy out of the camera?
It's more than just better color fidelity. Bayer sensors produce sparse data (sparse color data in particular; luminance data is more complete, but still not ideal) and need to be debayered: each photosite sits behind a single red, green, or blue filter, so two of the three color values at every pixel have to be interpolated from neighbors. Assuming Canon is able to create a layered sensor with photosite counts similar to today's Bayer sensors, say 20mp, the image from a layered sensor should be much more complete, more detailed, sharper. The only real drawback of current Foveon sensors is that they are very low resolution. For cameras of similar resolution, Foveon is better because it's sharper out of camera for the given file size.
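To make the sparse-sampling point concrete, here's a minimal Python/NumPy sketch of an RGGB mosaic plus naive bilinear debayering (the function names are mine, and real demosaicing algorithms like AHD are far more sophisticated, but the interpolation step is the same in spirit):

```python
import numpy as np
from scipy.ndimage import convolve

def bayer_mosaic(rgb):
    """Sample a full RGB image through an RGGB Bayer pattern: each
    plane keeps only the pixels that actually measured that color."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w, 3))
    mosaic[0::2, 0::2, 0] = rgb[0::2, 0::2, 0]  # R: 1 of every 4 pixels
    mosaic[0::2, 1::2, 1] = rgb[0::2, 1::2, 1]  # G
    mosaic[1::2, 0::2, 1] = rgb[1::2, 0::2, 1]  # G: 2 of every 4 pixels
    mosaic[1::2, 1::2, 2] = rgb[1::2, 1::2, 2]  # B: 1 of every 4 pixels
    return mosaic

def bilinear_demosaic(mosaic):
    """Fill in the missing color values by averaging measured
    neighbors -- the interpolation at the heart of debayering."""
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    out = np.empty_like(mosaic)
    out[..., 0] = convolve(mosaic[..., 0], k_rb)
    out[..., 1] = convolve(mosaic[..., 1], k_g)
    out[..., 2] = convolve(mosaic[..., 2], k_rb)
    return out

rgb = np.random.rand(8, 8, 3)             # stand-in for a real scene
rec = bilinear_demosaic(bayer_mosaic(rgb))
print(np.abs(rec - rgb).mean())           # interpolation error, never zero
```

Note that red and blue are measured at only one pixel in four, so three quarters of those planes are guesses. A layered sensor measures all three values at every photosite and skips this step entirely.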
Sparse color information, and the act of debayering, are a primary source of color noise. Canon weakened the color filters in their more recent sensors (excluding the 7D II...not sure about that one yet), which results in more color bleed between pixels of differing colors, making the color noise issue even worse. Luminance information is also biased: while it's higher resolution than the color information, different color channels have different sensitivities. When the color profile tone curves are applied to correct that discrepancy, they exacerbate noise (both luminance and color).
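As a toy illustration of that last point, here's a shot-noise-limited simulation with invented channel sensitivities and correction gains (none of these numbers are Canon's; they just show the mechanism):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up raw signal levels (in electrons) for a flat gray patch,
# assuming R and B channels are less sensitive than G, plus the
# made-up gains a color profile would apply to equalize them.
signal_e = {"R": 500.0, "G": 1000.0, "B": 400.0}
gain     = {"R": 2.0,   "G": 1.0,    "B": 2.5}

for ch in ("R", "G", "B"):
    # Shot noise is Poisson: std dev = sqrt(mean signal in electrons).
    raw = rng.poisson(signal_e[ch], size=100_000).astype(float)
    corrected = raw * gain[ch]
    # The gain scales noise along with signal, so the weaker channels
    # come out noisier even though the means now match.
    print(f"{ch}: mean={corrected.mean():7.1f}  std={corrected.std():5.1f}")
```

All three channels end up near the same mean, but the R and B noise is roughly 1.4-1.6x that of G after correction.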
When you gather a full complement of color information at every photosite, if done right, you should have far lower color noise (doubtful it can be eliminated, but certainly lowered), and since every photosite gathers full luminance information, you won't get that increase in luminance noise caused by amplifying each color channel differently.
There are a lot of benefits to moving to a layered sensor design. The difficulties lie in getting good sensitivity at each layer, and in handling the photodiode count. A 20mp layered sensor with three colors is 60 million photodiodes that need to be read out. That's roughly triple Canon's current highest pixel count...I don't think even DIGIC 6 can handle that at even moderately reasonable frame rates. Assuming 14-bit readout, a 20mp RGB layered sensor could do maybe 3.3fps with a pair of DIGIC 6 (extrapolating from the 10fps frame rate of the 20mp 7D II). At best, that's a slow studio camera.
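That 3.3fps number is just throughput scaling; a quick sanity check in Python, taking the 7D II's 20.2mp at 10fps as the dual-DIGIC-6 baseline (my assumption for layered readout, not a published spec):

```python
# Dual DIGIC 6 throughput implied by the 7D II: 20.2mp x 10fps.
dual_digic6_rate = 20.2e6 * 10       # ~202M photodiodes/s

layered_photodiodes = 20e6 * 3       # 20mp x 3 layers = 60M photodiodes
fps = dual_digic6_rate / layered_photodiodes
print(f"{fps:.1f} fps")              # ~3.4fps -- right at the estimate above
```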
If Canon intends to use this in the 1D X replacement, either they have something seriously powerful in DIGIC 7, or they are dramatically lowering the photosite count. If they released a 7mp RGB layered sensor with ~21 million photodiodes, they could get around 12fps with dual DIGIC 5 or 6. They would need twice the throughput of DIGIC 5/6 to do 12fps at 14mp, and they would need to process roughly 2GB/s (basically the equivalent of eight DIGIC 5/6) to do 12fps at 28mp.
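Extending the same back-of-envelope math to the 12fps scenarios (same assumed baseline as above; the exact DIGIC equivalence depends on which camera you take as the per-chip reference, so treat the multiples as ballpark):

```python
BITS = 14                              # assumed 14-bit readout
dual_digic_rate = 20.2e6 * 10          # ~202M photodiodes/s (7D II baseline)

for mp in (7, 14, 28):
    photodiodes = mp * 1e6 * 3         # three stacked photodiodes per pixel
    rate = photodiodes * 12            # photodiodes/s needed for 12fps
    gb_per_s = rate * BITS / 8 / 1e9   # raw readout data rate in GB/s
    print(f"{mp:2d}mp: {rate / 1e6:5.0f}M pd/s, {gb_per_s:.2f} GB/s, "
          f"{rate / dual_digic_rate:.1f}x dual DIGIC 5/6")
```

The 28mp case works out to ~1.8 GB/s, which is where the "roughly 2GB/s" figure comes from, and the 7mp and 14mp cases land in the same ballpark as the estimates above.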