No, that is fundamentally incorrect. You start with a 20 MP sensor, which has 40 million PHOTODIODES.
Jrista, you are just assuming that Canon's dual-pixel tech is in fact a dual-photodiode tech.
My assumption is that it's already a quad-photodiode tech - and it's equally valid, as neither
one of us has info on the actual implementation.
Sorry, but I am NOT assuming. I've actually read Canon's own patents. Those patents describe a system where the photodiodes for each pixel have been divided in half. This stuff isn't a mystery. Patents are ESSENTIAL for the protection of intellectual property. Canon has been filing patents for DPAF for quite some time now, at least a couple of years, with the most recent ones filed near the end of last year.
My assertions are based on concrete fact as described by the DPAF engineers at Canon themselves. Your assumptions are just that: assumptions based on an extremely TINY image posted on the ChipWorks page of the BACKSIDE of some sensor, an image which you have gravely misinterpreted, and an image we can all only assume is even of a Canon sensor, let alone one with DPAF technology (although it is certainly not of a Canon sensor with QPAF technology...since such a sensor doesn't exist yet).
In general, before making any claims for photodiodes and pixels, consider the following:
A 'classic' pixel design has a photodiode plus three transistors (you can read about it on Wikipedia):
- a reset transistor for resetting the photodiode voltage
- a source-follower transistor for signal amplification
- a row select transistor
So, one definition of a pixel is a photodiode with three transistors.
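To make the 'classic' cell concrete, here is a toy software model of a 3T active pixel: one photodiode plus reset, source-follower, and row-select transistors. All names, voltages, and numbers here are illustrative inventions of mine, not any real sensor's design:

```python
class ThreeTransistorPixel:
    """Toy model of a 'classic' 3T active-pixel cell (illustrative only)."""

    V_RESET = 3.3  # hypothetical voltage the reset transistor pulls the node to

    def __init__(self):
        self.node_voltage = self.V_RESET

    def reset(self):
        """Reset transistor: restore the photodiode node to V_RESET."""
        self.node_voltage = self.V_RESET

    def expose(self, photons, sensitivity=0.001):
        """Incident light discharges the photodiode node."""
        self.node_voltage = max(0.0, self.node_voltage - photons * sensitivity)

    def read(self, row_selected, gain=1.0):
        """Row-select transistor gates the source-follower's buffered output."""
        if not row_selected:
            return None
        return self.node_voltage * gain


pixel = ThreeTransistorPixel()
pixel.reset()
pixel.expose(photons=1000)
signal = pixel.read(row_selected=True)
print(f"{signal:.2f} V")  # 2.30 V
```

One photodiode, three transistors: that's the whole cell in this basic definition.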
Sure. An extremely basic kind of "pixel" that you might find in an entry level course on image sensor design. Modern sensors often have a lot more logic than that per pixel. That logic usually involves some level of noise reduction, potentially charge bucketing for global shutter sensors, anti-blooming gates and shift registers in CCDs, extra logic to allow the selection of which photodiode to read in shared-pixel designs (which most smaller-pixel sensor designs are these days), etc.
The thing is, to improve fill factor and for other design considerations, modern sensors are using transistor sharing.
That is, a single set of the 'classic' transistors is shared between multiple photodiodes.
Transistor sharing is widely used in small sensors.
In the case of these sensors, though, each photodiode has its own microlens.
Thus, the photodiode is the pixel in these designs.
The photodiode is the light-sensitive part of
a pixel. A standard bayer pixel is comprised of a photodiode, at least one microlens layer (sometimes two), and a color filter, as well as the row/column activate wiring, amplifier, and readout transistors.
In a DPAF pixel, the photodiode has been split in half, with insulating material between the two halves. Each half has independent readout. The photodiode, despite being split, still exists below the color filter and microlenses. Therefore, there is still ONE pixel...with two photodiodes. Canon did not increase the pixel count...they increased the photodiode count.
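A toy model makes the distinction obvious: ONE pixel (one microlens, one color filter) sitting over TWO independently readable photodiode halves. This is purely illustrative, not a reproduction of Canon's actual readout circuit; the idea is just that summing the halves gives one image value, while comparing them gives a phase signal:

```python
class DualPixel:
    """Toy model of a DPAF cell: one pixel, two photodiode halves (illustrative only)."""

    def __init__(self):
        self.left_charge = 0.0
        self.right_charge = 0.0

    def expose(self, left_photons, right_photons):
        # Light from different sides of the lens pupil lands preferentially
        # on different halves of the split photodiode.
        self.left_charge += left_photons
        self.right_charge += right_photons

    def read_image(self):
        """For the image, the halves are combined: ONE pixel value."""
        return self.left_charge + self.right_charge

    def read_phase(self):
        """For AF, the halves are read separately; a mismatch indicates defocus."""
        return self.left_charge - self.right_charge


p = DualPixel()
p.expose(left_photons=600, right_photons=400)  # slightly defocused light
print(p.read_image())  # 1000.0 -> a single image pixel value
print(p.read_phase())  # 200.0  -> nonzero disparity drives the AF system
```

Splitting the charge collector adds a phase-detect capability; it does not create a second pixel.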
In short, depending on the implementation, a photodiode and a pixel could mean the same thing.
Your interpretation is wrong. ;P Sorry. Go read the darn patents, and stop making assumptions.
Canon's 'dual-pixel' tech is assumed to be based on a shared-transistor design.
That is, it is a multi-photodiode design.
But since in a shared-transistor design photodiodes are effectively equivalent to pixels (as explained),
Canon's tech could be called multi-pixel design as well.
But photodiodes and pixels are not effectively equivalent. A pixel is more complex than a photodiode. A photodiode is simply a PART of a pixel. You're conflating the two for the sake of your argument, but that does not mean your conflation is valid.
So, you can stop correcting people who use dual/quad-pixel terminology, as these could in fact be used interchangeably.
The line between a pixel and a photodiode is blurred in shared-pixel designs.
And the fact that the two photodiodes are read independently for auto-focus further
indicates that these could very well be independent pixels - if they didn't share the
same microlens and color filter.
You misunderstand shared-pixel designs. Shared pixels do not share the photodiode. Each pixel still has its own independent photodiode. What's shared in a shared-pixel design is the readout logic...transistors. Usually, the sharing is diagonal, although some prototypical designs share directly neighboring pixels. Green pixels usually share their readout logic
diagonally. Those two green pixels, however, each still have their OWN photodiode. The purpose of a shared pixel design is not to share the light-sensitive charge collector...that would be useless, since it would share each pixel's charge in one bucket, meaning you couldn't actually read them out independently.
The purpose of a shared pixel design is to save die space FOR the photodiode by reusing transistors and wiring for more than one pixel. The use of shared transistors to activate, amplify, and read the pixel has nothing to do with blurring the line between pixel and photodiode.

The pixel is a vertical stack of layers of silicon materials. The photodiode is (usually) at the bottom of a physical well...it's the bit of silicon that is actually sensitive to light and converts some ratio of incident photons to free electrons (charge). Above that is a layer of translucent silicon material, usually silicon dioxide. Above that is often a microlens, and above that is a color filter array. There are sometimes buffer materials in between these layers, on top of which you finally have the primary microlens. THAT is a "pixel". The photodiode is just one part of the whole pixel.

If you split the photodiode underneath all those other layers...you still have just one pixel. You have a pixel that is now capable of detecting phase, but it's still just one pixel, not two pixels. Regardless of what kind of readout logic it has...a pixel is a pixel, independent and atomic, and a photodiode is just a part of a pixel.
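The same toy-model style shows what transistor sharing actually shares. Here four pixels each keep their OWN photodiode charge, while ONE set of readout transistors serves all four through select gates. The group size and numbers are my own illustrative choices, not any specific manufacturer's layout:

```python
class SharedReadoutGroup:
    """Toy model of transistor sharing: N photodiodes, ONE readout chain (illustrative only)."""

    def __init__(self, n_pixels=4):
        # One independent charge bucket (photodiode) per pixel.
        self.charges = [0.0] * n_pixels

    def expose(self, photon_counts):
        for i, photons in enumerate(photon_counts):
            self.charges[i] += photons

    def read(self, index, gain=1.0):
        # The shared source-follower reads whichever photodiode the
        # transfer gate selects: one at a time, but each one separately.
        return self.charges[index] * gain


group = SharedReadoutGroup()
group.expose([100, 200, 300, 400])
values = [group.read(i) for i in range(4)]
print(values)  # [100.0, 200.0, 300.0, 400.0] -- four distinct pixel values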
Also, your claim that there are exactly TWO PHOTODIODES (and that's it!) is not based on fact.
We don't know for sure if Canon's design is a dual-pixel design (your assumption) or a quad-pixel design
You're assuming I am assuming. Your assumption is, once again, wrong. You are also assuming that "we" don't know anything "for sure" about Canon's sensor designs. Sorry, but again, your assumption there is WRONG. Canon has filed patents for all of their DPAF designs. Those patents are the basis for their technology...the technology that actually exists in the 70D, for example. I am not assuming. My assertions are based on actual fact as clearly and definitively defined by Canon engineers themselves.
You can go look up these patents for yourself. They aren't hard to find. Many of them have been posted right here on CR in the past. This stuff isn't some mysterious, mystical, magical sensor technology that Canon is keeping obfuscated. Obfuscation and secrecy is the worst form of protection for technology. By filing and receiving patents, Canon LEGALLY protects their work from theft by other manufacturers...they have no reason to hide or obfuscate anything.
Canon's marketing is selling it as 'dual-pixel' tech, likely because it's easier this way to communicate
the concept to the general public.
But we don't know for a fact what the actual implementation is.
We DO know what the actual implementation is. Not only that, we know EXACTLY what it is. See my prior comment.
So, your TWO PHOTODIODES claim is based on marketing materials, really.
If I were you, I wouldn't put too much weight into these.
Again, wild assumption, and a wrong one. You assume WAY too much. You might want to verify your facts first, before putting yourself out there like that. I have never based anything I've said about Canon sensor technology on marketing materials. I read patents, of which there are many thousands filed by Canon every year, and many thousands more filed by all the other entities involved in sensor research and design. I know EXACTLY what I am talking about, and it's based on actual sensor designs that have either been manufactured for commercial use, or have been prototyped and thoroughly demonstrated at one of the numerous ICS conferences around the world every year.
The only person who puts weight into something they shouldn't is you...putting a lot of weight into the validity of your assumptions.
My assumption for a quad-pixel design is based on simple geometry.
If there are just two photodiodes per pixel, these photodiodes need to be rectangular.
This would be uncommon - if not even a first in the industry.
But with a quad-pixel design, the photodiodes are square just like in any other sensor.
Considering the potential future advantages of a quad-pixel design (e.g. for a non-Bayer sensor),
I'd speculate that Canon would have invested in a quad-pixel design from the start - rather than
designing rectangular photodiodes that later would need to be made square anyway.
Just a speculation, of course - but based on some informed assumptions.
The photodiodes ARE rectangular! That's EXACTLY what they are! That's exactly how they are described in Canon's patents on the technology!
It's not a first in the industry...for decades, there have been sensors with non-square photodiodes, even non-square pixels. There have been hexagonal pixels (Fuji first released sensors with hexagonally shaped pixels with extra small "white" pixels filling in the diagonal spaces between them many years ago), triangular pixels (Sony has a prototype 50mp sensor with triangular pixels), even pixels with non-uniform sizes and layouts (some sensor designs, usually from Fuji, have had large rectangular white pixels along with a non-standard layout of smaller rectangular red, green, and blue pixels). I currently use a CCD camera for guiding my astrophotography that uses rectangular pixels, due to the use of an anti-bloom gate.

Again...you're making some wild assumptions that have absolutely no basis in fact. Your assumptions are FAR from informed, as well. I don't know where you think you're "informing" yourself, but you really need to go right to the source...patents. You seem to think that all this technology is kept secret and obfuscated and hidden away within the bowels of "Canon the over-protective corporation". That is, once again, an assumption.

Canon has decades of sensor technology filed legally as patents in countries around the world. Those patents are fully available, in complete detail, with abstracts, technical diagrams, and full-blown conceptual and functional dissection and breakdown, for review by anyone who wishes to spend the time looking them up. If patents weren't freely available, then they would be useless. Competitors have to be able to investigate what technology their rivals have already invented and patented, so they don't try inventing the same thing to patent themselves...that would be a patent violation. Potential licensees of patented technology need to know how the technology is implemented, so they may implement it themselves in their own products, with the added requirement of a royalty fee.
This technology is WELL KNOWN, because it has to be. "We" know EXACTLY how DPAF is designed...and it is not quad-pixel. It's, quite literally, dual-photodiode. There are now multiple patents that PROVE that FACT.