I've been through every free Chipworks article they have ever published.
Hmm. Obviously not, because Chipworks has a free partial die photo of the 70D sensor:
Take a careful look and consider the geometry of a dual-photodiode pixel:
- you can have two rectangular photodiodes that form a square pixel
- or, you can have two square photodiodes that form a rectangular pixel
- finally, you can have two square photodiodes plus wasted space on the die that form a square pixel (the sketch after this list works through the arithmetic)
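To make that geometry concrete, here is a minimal sketch of the three layouts (the pixel pitch p is arbitrary; the numbers just illustrate the fill-factor argument):

```python
# Geometry of a dual-photodiode pixel, for an arbitrary pixel pitch p.
p = 1.0  # pixel pitch, arbitrary units

# (a) two rectangular photodiodes (p/2 x p each) tile a square p x p pixel
a_fill = 2 * (p / 2 * p) / (p * p)        # 1.0 -> no wasted area, square pixel

# (b) two square photodiodes (p/2 x p/2) tile a rectangular p x p/2 pixel
b_fill = 2 * (p / 2) ** 2 / (p * p / 2)   # 1.0 -> no waste, but the pixel isn't square

# (c) two square photodiodes inside a square p x p pixel waste half the area
c_fill = 2 * (p / 2) ** 2 / (p * p)       # 0.5 -> 50% wasted silicon

# Four p/2 squares in a 2x2 grid, however, tile a square pixel with no waste.
quad_fill = 4 * (p / 2) ** 2 / (p * p)    # 1.0

print(a_fill, b_fill, c_fill, quad_fill)  # 1.0 1.0 0.5 1.0
```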
The image you are referring to does not contain any of the dual pixels, assuming those are pixels at all. On the contrary, they look more like readout pins in a land grid array, which would be on the BOTTOM of the sensor, the opposite side from the actual pixels. And even if they are not readout pins, I would know a CMOS sensor pixel if I saw one...those are not even remotely close to what a CMOS pixel looks like. They don't even have microlenses or color filters; it's just wiring and bare silicon substrate. That image is from the outer periphery of the sensor die, which is usually riddled with power regulation transistors and other non-pixel logic. Canon's DPAF pixels only occupy the center 80% of the region of the die that actually contains pixels...so even if the image WAS of pixels (which it is not), they wouldn't be DPAF pixels...they would be standard single-photodiode pixels.
Now, as I said, take a careful look at the partial sensor die and tell me if you see:
a) any rectangular features in this photo
b) any apparently wasted space
A partial die photo is certainly not a definitive proof.
It isn't proof, because you are gravely mistaken about what that photo actually shows. There is even some kind of stamp on top of the electronics in the region of that photo that Chipworks has shared. You don't stamp the actual pixels...and such stamps are usually on the back side or the very outer periphery of the sensor, not the side with the pixels. This photo is either of peripheral logic on the top side of the sensor, or of circuitry or pinning on the bottom side.
It's a very good clue, though, that the 70D sensor is in fact using a quad photodiode design, not a dual one.
Again, just think of the geometry of a dual pixel design and make your own conclusions.
Again, you're completely misinterpreting what that image is.
As for the resolution of a non-Bayer filter: I should have been clearer.
The 70D sensor is a Bayer sensor, where each pixel has a monochromatic R/G/B color filter.
Thus, each of the four constituent photodiodes of that pixel lies under a single, common monochromatic filter, which happens to throw away 2/3 of the incoming light.
Now, imagine if each of the photodiodes had their own, individual color filters.
I don't need to imagine, as that is exactly what a sensor with split photodiodes WITHOUT DPAF or QPAF would be...each photodiode would have its own color filter...because each photodiode would be a pixel.
Thus, what you are proposing is the removal of DPAF technology, a factor-of-two reduction in pixel size, and higher resolution. That's it! There really, truly, honestly isn't anything special about giving each smaller photodiode its own filter. That just means you have a sensor with four times as many pixels, which is pretty much what each new generation of sensors gets anyway. (Well, not four times as many pixels, but a pixel size reduction and an increase in pixel count is a pretty consistent fact of just about every new still-photography camera release.)
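A quick back-of-the-envelope on that pixel-count claim (the sensor size is roughly APS-C and the pitch is illustrative, not actual 70D specs):

```python
# Halving the pixel pitch quadruples the pixel count. Illustrative numbers only.
sensor_w_mm, sensor_h_mm = 22.5, 15.0   # roughly APS-C; not exact 70D dimensions
pitch_um = 4.1                          # illustrative pixel pitch

def megapixels(pitch_um):
    cols = sensor_w_mm * 1000 / pitch_um
    rows = sensor_h_mm * 1000 / pitch_um
    return cols * rows / 1e6

print(megapixels(pitch_um))        # ~20 MP at the original pitch
print(megapixels(pitch_um / 2))    # ~80 MP: 2x per axis, 4x the pixel count
```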
You still have a single pixel with a single microlens.
If you do this, then you are going to have problems properly distributing light into each photodiode. The entire purpose of the microlens is to guide as much light as possible onto the photodiode. If you try to increase the pixel resolution below the microlens, the problem is that one of those four subpixels gets more light than the rest, because the microlens, just like any other lens, FOCUSES LIGHT. The focal point, where the majority of the light is concentrated, is rarely dead center underneath the microlens (the farther from the center of the sensor you go, the more off-center the focal point will be). So if you split the color filter and photodiode underneath the microlens, you'll greatly increase noise levels...one out of four subpixels will get most of the light, and the other subpixels will get significantly less. Your idea effectively trades noise for resolution.
Your counter might be: well, just use more layers of microlenses, one per photodiode. If you throw in more layers of microlenses, then you further screw with the AF capability of the subpixels, as you would be mucking with the phase of the light below the initial microlens. Muck with phase, and you can no longer "phase detect" (PD), or at least not detect it as well or as accurately. So again, as I said before, all you are proposing is a factor-of-two reduction in pixel size, or a factor-of-four increase in pixel count. In other words, a standard (non-AF-capable) sensor with higher resolution...and more noise.
Underneath, however, there are four individual color filters - one for each photodiode.
Here's the thing about the individual color filters: they don't have to be monochromatic R/G/B filters anymore.
Instead, you can use a combination of di/poly-chromatic filters, from which you can derive the overall pixel color.
And instead of deriving a single R/G/B color, as in a Bayer sensor, you derive all three primary colors.
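For what it's worth, "deriving" the pixel color here would amount to inverting a known mixing matrix. A minimal sketch with a made-up (hypothetical) transmission matrix for four di-chromatic filters; the numbers are not real filter data:

```python
import numpy as np

# Hypothetical transmissions of four di-chromatic filters. Rows = subpixels,
# columns = fraction of (R, G, B) each filter passes. NOT real filter data.
M = np.array([
    [0.9, 0.5, 0.0],   # yellow-ish: passes R plus some G
    [0.0, 0.5, 0.9],   # cyan-ish:   passes B plus some G
    [0.9, 0.0, 0.5],   # magenta-ish: passes R plus some B
    [0.3, 0.9, 0.3],   # green-heavy
])

true_rgb = np.array([0.2, 0.7, 0.4])   # the light actually hitting the pixel
readings = M @ true_rgb                # what the four subpixels would measure

# Least-squares inversion recovers per-pixel R, G and B from the 4 readings.
rgb, *_ = np.linalg.lstsq(M, readings, rcond=None)
print(rgb)   # ~[0.2, 0.7, 0.4]
```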
Look up the Micro Color Splitting sensor. Panasonic's design is vastly superior to any kind of di/poly-chromatic filter, because it simply doesn't filter. It splits the light, but directs all of it onto the photodiodes.
In summary, if you have a single, monochromatic filter for the entire pixel, you can only get one color per pixel (either R, G, or B).
But if you use individual di/poly-chromatic filters for each photodiode, you can derive all three primary colors per pixel (R+B+G).
Plus, you have a more sensitive/efficient pixel, as di/poly-chromatic filters by definition waste less light than a monochromatic filter.
And, by definition, MCS wastes zero light. Why invest time, money, and effort into a very complicated pixel design, one that is prone to being much noisier due to improper use of a microlens, when there are proven techniques that eliminate filtration entirely?
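The light budget behind that argument is simple arithmetic; a rough sketch with idealized transmission fractions (real filters are messier):

```python
# Idealized fraction of incoming light that reaches the silicon, per approach.
bayer_mono = 1 / 3    # a monochromatic R, G or B filter passes ~1/3 of the light
dichroic   = 2 / 3    # a di-chromatic filter passes ~2/3 (two of three primaries)
mcs        = 1.0      # Panasonic-style micro color splitting discards nothing

for name, t in [("Bayer", bayer_mono), ("di-chromatic", dichroic), ("MCS", mcs)]:
    print(f"{name}: {t:.0%} of the light reaches the photodiodes")
```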
Back to the topic of extra resolution:
The increase in resolution comes from the fact that you have all three primary colors per pixel vs. the single color per pixel in a Bayer sensor.
Admittedly, the resolution increase is not all that big - but it's still an increase.
What you're proposing is a significant increase in resolution. The fact that you don't understand even that demonstrates that you don't understand sensor technology all that well, which indicates that you're just speculating and dreaming. Nothing wrong with dreaming, but you should be aware that's what you're doing.
You're DOUBLING resolution in both the horizontal and the vertical by making each photodiode one quarter the size. The D800 clearly has a lot more resolution than the 1D X, and it's basically the same thing...twice the resolution.
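Using the published pixel dimensions of the two cameras, the scaling works out as follows (twice the pixel count is about 1.4x per axis, while quartering the photodiode area, as proposed, gives a true 2x per axis):

```python
d800  = (7360, 4912)   # Nikon D800 pixel dimensions
onedx = (5184, 3456)   # Canon 1D X pixel dimensions

count_ratio  = (d800[0] * d800[1]) / (onedx[0] * onedx[1])
linear_ratio = d800[0] / onedx[0]
print(count_ratio, linear_ratio)   # ~2.0x the pixels, ~1.4x per axis

# The quad proposal: photodiodes at 1/4 the area -> 4x the count -> 2x per axis.
print(4 ** 0.5)   # 2.0
```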
You are really just talking about a pixel-size reduction. Again...there isn't anything special here, and because you're proposing that one single microlens be used for multiple pixels, you're going to get an increase in noise due to what I described above. The increase in noise is going to be a severe drag on IQ, so again...you're talking about, at the very best, a net-neutral difference, and at worst you're going to get WORSE IQ with your sensor design because of the increased noise.
Think about all those things.
You seem to be dismissing the quad-photodiode tech without fully realizing its potential.
If you believe that Foveon is better than Bayer, just consider that a quad-photodiode design with individual non-Bayer color filters (one per photodiode) is a better solution than Foveon.
I fully understand what DUAL-photodiode technology is, how it works, why it's designed the way it's designed, and I also understand that it isn't some magical technology that will suddenly slingshot Canon ahead of the competition. You are dreaming, pure and simple, that somehow Canon has solved their IQ problems with an AF invention. It's just a dream, though. It's the same dream a lot of Canon users have, because they all want better IQ out of Canon sensors, but it's still just a dream. It's an ill-educated dream, I am sorry to say, and you're misinterpreting a lot of information (such as the Chipworks photo of the OUTER PERIPHERY of the 70D sensor...anyone who knows anything about die fabrication understands that the outer periphery of any CMOS die, whether sensor, CPU, or memory, is the domain of power regulation, control circuitry, wiring, and pin solder points, not core logic, memory cells, or pixels).
Canon does not have quad pixel technology. If they had already used it in the 70D, then they would have received patents for it years ago. I've read all of Canon's photography-related patent releases for the last three years. They have several for DPAF technology, some new ones since the 70D that have not been implemented anywhere. Their patents, being patents, MUST be extremely precise and explicit about the design (that's what patents are, specific details about specific implementations of a concept). Not one single patent Canon has ever filed for DPAF has ever detailed quad photodiodes. Neither would Canon have sold themselves short by announcing DUAL pixel technology if in reality they had QUAD pixel technology...if they had QPAF, they would have told the world. It would be big news.
Finally, Canon also already has patents for layered sensor technology that really, truly DOES have the potential to increase image quality. Given some of the things their patents discuss, such as applying something akin to the nano-coating technology from some of their lenses to the second and third photodiode layers, Canon has the potential to increase the total amount of light their red and green photodiodes are sensitive to by reducing the chance of reflection at those lower layers, thereby increasing Q.E. Canon's Foveon-like technology has the potential to be superior to Sigma's Foveon technology, and with Canon's R&D budget, they certainly have the power to bring the technology to market and keep improving it.
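As a rough illustration of why an anti-reflective nano-coating at the buried layers could raise Q.E., normal-incidence Fresnel reflectance is enough for a sketch; the refractive indices are ballpark values, and real sensor stacks are far more complicated:

```python
# Normal-incidence Fresnel reflectance between two media: R = ((n1-n2)/(n1+n2))^2
def fresnel_r(n1, n2):
    return ((n1 - n2) / (n1 + n2)) ** 2

n_oxide, n_si = 1.46, 3.9   # ballpark: SiO2 inter-layer dielectric, silicon

# Bare oxide-to-silicon step: roughly 20% of the light reflects away.
print(fresnel_r(n_oxide, n_si))

# An intermediate layer with n = sqrt(n1*n2), the ideal single-layer match,
# splits the step into two much weaker reflections (~12% total, ignoring
# the interference effects that a tuned quarter-wave coating exploits).
n_ar = (n_oxide * n_si) ** 0.5
print(fresnel_r(n_oxide, n_ar) + fresnel_r(n_ar, n_si))
```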
If you want to root for Canon, and really want better image quality (which has less to do with photodiode count and more to do with pixel design quality, quantum efficiency, etc.), then you should look into their layered sensor patents and root for them to actually make a DSLR that uses it. If Canon is indeed using nano-crystal technology to reduce reflection and increase the Q.E. of the photodiodes in their layered sensors, I think they really have something that could outdo Sigma's Foveon, and outdo it enough that Canon could produce a 30 or 40 megapixel layered sensor that not only has the benefit of higher color fidelity, but also higher native, non-Bayer spatial resolution. THAT is where a meaningful increase in IQ for Canon DSLRs will come from...not DPAF.