dual pixel tech going forward

3kramd5
On the 70D, Canon has two photodiodes per pixel across almost the entire sensor. The fact that they can get usable phase information from them suggests that they can read them independently.

So, could they swap out the Bayer filter and double the resolution rather than get sensor-level phase detection? Perhaps, being co-located, they couldn't use a traditional Bayer design, but could they, for example, have green AND either red or blue at every pixel?

If so, that could be a cost-effective way forward to producing 1DmkV and 1DmkVs cameras once DPAF is perfected to the point that it equals or betters SIR (secondary image registration) AF. The former could have a traditional Bayer filter with the second processor dedicated to amazing autofocus; the latter could have double the resolution and use a simpler last-gen SIR AF unit.

I am probably fundamentally misunderstanding the implications of having two photodiodes per pixel, though. More likely DPAF is their way into high-end mirrorless.
 
3kramd5
Good point.

I was thinking along the lines of how Bayer filters have twice as much green as either red or blue. So perhaps it's not so much an outright resolution increase (like 20MP becoming 40MP) as an information increase within the same dimensions.

This way (again assuming they can read/record them individually), they could have, as I mentioned, red or blue at each pixel, rather than one red and one blue per every four pixels. They still get the 2:1 ratio of green to red and of green to blue, but do so without dedicating individual pixels to green - they get green everywhere. It's like 2/3 of a Foveon.
 

Don Haines

neuroanatomist said:
Hope you like the pano look… ;)

The 'dual pixels' are all split vertically, so if they altered the microlenses and CFA to increase the actual resolution of the sensor, you'd end up with images having a 3:1 aspect ratio.

That's why the 7D2 is held up.... Quad Pixel technology so they can use vertical and horizontal phase for the AF system... :)
 
neuroanatomist said:
The 'dual pixels' are all split vertically, so if they altered the microlenses and CFA to increase the actual resolution of the sensor, you'd end up with images having a 3:1 aspect ratio.
I think that's not the right way to look at this. It'd be more like having two color channels per pixel in the raw file rather than only one as input to the demosaic.
 
Wouldn't you also end up having to deal with a significant drop-off in the number of photons hitting the photodiodes? After all, you're essentially turning one 'pixel' site into three sub-pixels, none of which covers the entire area of the 'pixel'. Not that I don't want them to try innovative new things like that, but I don't think it's practical except for maybe some specialized applications.
 
3kramd5
Drizzt321 said:
Wouldn't you also end up having to deal with a significant drop-off in the number of photons hitting the photodiodes? After all, you're essentially turning one 'pixel' site into three sub-pixels, none of which covers the entire area of the 'pixel'. Not that I don't want them to try innovative new things like that, but I don't think it's practical except for maybe some specialized applications.

I don't know if you'd lose any additional light. Right now, there is a color filter immediately covering two diodes. If you had two smaller color filters adjacent to one another, you wouldn't halve the light, though you might move it around. Rather than "all light hitting here is red", it would be "some of the light hitting here is red and some of it is green," and they would have varying intensities. I think. :p
 
3kramd5
caruser said:
neuroanatomist said:
The 'dual pixels' are all split vertically, so if they altered the microlenses and CFA to increase the actual resolution of the sensor, you'd end up with images having a 3:1 aspect ratio.
I think that's not the right way to look at this. It'd be more like having two color channels per pixel in the raw file rather than only one as input to the demosaic.

To be fair, in my initial post I did indeed mean spatially, so neuro's comment was pertinent.

That being said, correct: you wouldn't have, say, 10,368 × 3,456 (a 1Dx with twice as many horizontal pixels); you'd have 5,184 × 3,456, but each pair of pixels would be sufficient to yield RGB values, rather than each group of four, meaning the color information could be doubled, which could have a meaningful impact on the subsequent raster.
 

Bahrd

I think you need to take into account the side effect of the split pixels: they collect different (shifted) half-images in out-of-focus areas. See the following simple example, a POV-Ray-produced GIF presenting the left and right half-images (the scene consists of just two spheres, one front-focused and one back-focused):

[Animated GIF: NCN.gif, alternating left and right half-images]

It thus seems that you could take advantage of twice as many pixels in the in-focus regions, if you were able to precisely distinguish them from the out-of-focus ones...
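
A toy numpy sketch of that distinguishing step (entirely illustrative; the signals, window size, and threshold are all made up): where the left and right half-images locally agree, the region is in focus and both samples carry usable detail.

Code:
import numpy as np

def in_focus_mask(left, right, window=8, thresh=0.05):
    """True where the two half-images locally agree (i.e. are in focus)."""
    diff = np.abs(left - right)
    kernel = np.ones(window) / window        # box filter: test per region, not per pixel
    return np.convolve(diff, kernel, mode="same") < thresh

x = np.linspace(0, 8 * np.pi, 512)
left = np.sin(x)
right = left.copy()
right[:256] = np.roll(left, 5)[:256]         # left half of the frame: defocused (shifted)

mask = in_focus_mask(left, right)
print(mask[:256].mean(), mask[256:].mean())  # small vs. ~1.0 fraction flagged in focus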
 

Don Haines

pwp said:
Don Haines said:
That's why the 7D2 is held up.... Quad Pixel technology so they can use vertical and horizontal phase for the AF system... :)
Interesting...speculation or information?

-pw
A healthy mix of speculation and sarcasm... I have zero inside information. I also have a perfect record of predicting future camera bodies... wrong every time :)
 
3kramd5 said:
Drizzt321 said:
Wouldn't you also end up having to deal with a significant drop-off in the number of photons hitting the photodiodes? After all, you're essentially turning one 'pixel' site into three sub-pixels, none of which covers the entire area of the 'pixel'. Not that I don't want them to try innovative new things like that, but I don't think it's practical except for maybe some specialized applications.

I don't know if you'd lose any additional light. Right now, there is a color filter immediately covering two diodes. If you had two smaller color filters adjacent to one another, you wouldn't halve the light, though you might move it around. Rather than "all light hitting here is red", it would be "some of the light hitting here is red and some of it is green," and they would have varying intensities. I think. :p

Well, realistically you would. Since a photon can only hit one of the photodiodes, if you give a photodiode less surface area, there are fewer photons that can hit it. With the current design, you end up with two photodiodes that get the same color of light, which combined have nearly as much surface area as a single photodiode at the same pixel location.

It probably would also screw up the phase-detect AF, since now you have different colors of light being compared for phase, and you can't be sure you're getting the same _amount_ of light in the different colors... so it'd probably be really, really hard to do phase-detect accurately. Then again... I'm no scientist, so maybe it's not so bad and you can reliably correct for it in software.
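
A quick Poisson toy model of the surface-area point (numbers invented): halving the collecting area halves the expected photon count, so shot-noise SNR drops by a factor of sqrt(2).

Code:
import numpy as np

rng = np.random.default_rng(0)
full_mean = 10_000                            # hypothetical photons per exposure
full = rng.poisson(full_mean, 100_000)        # full-area photodiode
half = rng.poisson(full_mean / 2, 100_000)    # half-area photodiode

for name, n in (("full", full), ("half", half)):
    print(name, n.mean() / n.std())           # SNR: ~100 vs. ~70.7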
 
3kramd5
But don't you double your chances of getting the correct color?

Maybe my understanding is wrong, but doesn't the CFA filter out all but one color (frequency range) per pixel? So a red pixel in blue light won't receive any charge?

In the case of red light on red pixels, yah, you'll get half as much, but in the case of green light on red or blue pixels, you'll get a reading. So yah, on a per-pixel level maybe you'd affect the available signal, but it seems like it would average out across the entire array. But I'm not a scientist either :p

And yah, it would likely prevent sensor-level phase detection, which is why I suggested it could be for a "1DmkVs" model while the Bayer + dual-pixel AF could go to a sports 1DmkV. Two lines, identical hardware except the CFA; different firmware.
 
3kramd5 said:
On the 70D, Canon has two photodiodes per pixel across almost the entire sensor. The fact that they can get usable phase information from them suggests that they can read them independently.

So, could they swap out the Bayer filter and double the resolution rather than get sensor-level phase detection? Perhaps, being co-located, they couldn't use a traditional Bayer design, but could they, for example, have green AND either red or blue at every pixel?

If so, that could be a cost-effective way forward to producing 1DmkV and 1DmkVs cameras once DPAF is perfected to the point that it equals or betters SIR (secondary image registration) AF. The former could have a traditional Bayer filter with the second processor dedicated to amazing autofocus; the latter could have double the resolution and use a simpler last-gen SIR AF unit.

I am probably fundamentally misunderstanding the implications of having two photodiodes per pixel, though. More likely DPAF is their way into high-end mirrorless.

Having two photodiodes per pixel means the photodiode pair exists underneath the CFA and the microlens(es). That is actually the only way DPAF really works...to be able to detect a phase differential, you need to check the HALVES of each PIXEL. If you just shrink the pixel size and put different color filters over those smaller pixels...well, now you have smaller pixels (and an odd image ratio), and you no longer have DPAF. It's a tradeoff...resolution or a focus feature: which do you want/need? (Or, as the case may be, you get a cross between both: slightly smaller pixels (i.e. the 20MP 70D vs. the 18MP that came before) AND DPAF.)

I know everyone likes to speculate about all the wonderful things that DPAF might potentially bring to the table...but so long as it is Dual-Pixel Autofocus, that's all you're really going to get. There really isn't any magic bullet here, no trickery you can pull off by somehow using one half of the pixels at ISO 100 and the other half at ISO 800 for more dynamic range, etc. Pixel area is pixel area, and phase detect is phase detect. DPAF pixels serve one purpose when read out for AF, and another purpose when the halves are binned and read out for an image. Those are really the only two functions DPAF will ever serve, and while I'm sure the Magic Lantern guys will figure out something cool about the specific mechanism of DPAF's implementation...they will still only be able to work within the bounds of the sensor's design. The ML DR increases were ultimately thanks to an OFF-die downstream amplifier that allowed them to control the readout process, not really due to any specific nuance of Canon's actual sensor design.

Assuming Canon does not remove that downstream amp in favor of some kind of on-die parallel ADC and readout system, I honestly don't expect them to be able to do anything more radical with DPAF. They may find a way of doing creative focus things with AF, maybe add the ability to remember AF positions for video purposes, things like that...but the design of DPAF doesn't really mean Canon suddenly has some amazing wildcard on their hands that can give them a significant edge in the stills photography department.
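
The two readout modes can be modeled in a few lines (an assumed data layout, not Canon's firmware): the same stored charges serve AF when kept separate and imaging when binned.

Code:
import numpy as np

# Hypothetical 4x6-pixel sensor; each pixel stores a (left, right) charge pair.
charges = np.random.default_rng(1).uniform(0, 1000, size=(4, 6, 2))

def read_for_af(sensor):
    """AF mode: return the left and right half-images separately."""
    return sensor[..., 0], sensor[..., 1]

def read_for_image(sensor):
    """Imaging mode: bin (sum) the two charges into one pixel value."""
    return sensor.sum(axis=-1)

left, right = read_for_af(charges)
image = read_for_image(charges)
assert np.allclose(image, left + right)       # binning preserves the full charge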
 
3kramd5
jrista said:
3kramd5 said:
On the 70D, Canon has two photodiodes per pixel across almost the entire sensor. The fact that they can get usable phase information from them suggests that they can read them independently.

So, could they swap out the Bayer filter and double the resolution rather than get sensor-level phase detection? Perhaps, being co-located, they couldn't use a traditional Bayer design, but could they, for example, have green AND either red or blue at every pixel?

If so, that could be a cost-effective way forward to producing 1DmkV and 1DmkVs cameras once DPAF is perfected to the point that it equals or betters SIR (secondary image registration) AF. The former could have a traditional Bayer filter with the second processor dedicated to amazing autofocus; the latter could have double the resolution and use a simpler last-gen SIR AF unit.

I am probably fundamentally misunderstanding the implications of having two photodiodes per pixel, though. More likely DPAF is their way into high-end mirrorless.

Having two photodiodes per pixel means the photodiode pair exists underneath the CFA and the microlens(es). That is actually the only way DPAF really works...to be able to detect a phase differential, you need to check the HALVES of each PIXEL. If you just shrink the pixel size and put different color filters over those smaller pixels...well, now you have smaller pixels (and an odd image ratio), and you no longer have DPAF. It's a tradeoff...resolution or a focus feature: which do you want/need? (Or, as the case may be, you get a cross between both: slightly smaller pixels (i.e. the 20MP 70D vs. the 18MP that came before) AND DPAF.)

I know everyone likes to speculate about all the wonderful things that DPAF might potentially bring to the table...but so long as it is Dual-Pixel Autofocus, that's all you're really going to get. There really isn't any magic bullet here, no trickery you can pull off by somehow using one half of the pixels at ISO 100 and the other half at ISO 800 for more dynamic range, etc. Pixel area is pixel area, and phase detect is phase detect. DPAF pixels serve one purpose when read out for AF, and another purpose when the halves are binned and read out for an image. Those are really the only two functions DPAF will ever serve, and while I'm sure the Magic Lantern guys will figure out something cool about the specific mechanism of DPAF's implementation...they will still only be able to work within the bounds of the sensor's design. The ML DR increases were ultimately thanks to an OFF-die downstream amplifier that allowed them to control the readout process, not really due to any specific nuance of Canon's actual sensor design.

Assuming Canon does not remove that downstream amp in favor of some kind of on-die parallel ADC and readout system, I honestly don't expect them to be able to do anything more radical with DPAF. They may find a way of doing creative focus things with AF, maybe add the ability to remember AF positions for video purposes, things like that...but the design of DPAF doesn't really mean Canon suddenly has some amazing wildcard on their hands that can give them a significant edge in the stills photography department.

Right. I'm more curious about what else they can do with dual pixels in general than about DPAF itself. I am running on the assumption that using the pixel pairs for an IQ gain (be it resolution, color depth, better demosaicing, etc.) would come at the cost of losing sensor-level phase AF.

Maybe I'm grasping at straws; it just struck me as potentially a HUGE leap in pixel density for Canon SLRs.
 
3kramd5 said:
Right. I'm more curious about what else they can do with dual pixels in general than about DPAF itself. I am running on the assumption that using the pixel pairs for an IQ gain (be it resolution, color depth, better demosaicing, etc.) would come at the cost of losing sensor-level phase AF.

Maybe I'm grasping at straws; it just struck me as potentially a HUGE leap in pixel density for Canon SLRs.

I'm not really sure what kind of sensor design you're proposing. The name, technically speaking, is misleading. There are not actually "dual pixels" in Canon's design...given what a pixel actually is. There is a split photodiode within each single pixel. Since PIXELS are the elementary unit of an image, the fact that there are two photodiodes per pixel doesn't actually change anything from an imaging standpoint.

Here is a diagram of the DPAF FSI Sensor design:

[Image: Hx59hkb.jpg, diagram of the DPAF FSI sensor design]


The split photodiode is beneath the color filter and the microlens. This is an essential aspect of sensor-plane phase detection, as in order to detect phase, you have to have phase in the first place. The way a DPAF pixel works is that light from the left half of the lens (the left phase) is detected by the left side of the split photodiode, and the right phase is detected by the right side. The PDAF firmware in the camera determines whether there is a phase differential between these two detections, and if there is, it computes how much of an AF adjustment is necessary to eliminate the differential. This CANNOT BE DONE if the two halves of the photodiode are not contained within a SINGLE pixel.
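
A 1-D cartoon of that loop (the signals and the shift-to-lens-motion gain are invented for illustration):

Code:
import numpy as np

def phase_differential(left, right, max_shift=10):
    """Shift (in pixels) that best aligns the right signal to the left."""
    scores = [np.dot(left, np.roll(right, s))
              for s in range(-max_shift, max_shift + 1)]
    return int(np.argmax(scores)) - max_shift

def af_adjustment(shift_px, gain=0.8):
    """Map the measured phase shift to a (made-up) lens move."""
    return -gain * shift_px

x = np.linspace(0, 6 * np.pi, 200)
left = np.sin(x)
right = np.roll(left, 4)                 # defocus shows up as a 4-pixel shift

d = phase_differential(left, right)      # -4: the measured differential
print(d, af_adjustment(d))               # move the lens, re-measure, repeat until d is 0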

Canon's marketing moniker, DPAF or "Dual Pixel" AF, is misleading. It is not dual pixels; it's dual photodiodes PER pixel. The two photodiodes in each pixel are electronically binned during readout. Electronic binning isn't the same as pixel averaging...by binning the two charges, Canon is effectively able to maintain the same behavior with a split photodiode as they would have with a single photodiode: instead of ending up with half the full well capacity, they maintain the same kind of FWC as they would with a single photodiode.

If Canon decided to put different color filters over the two photodiode halves and shrink the microlenses, they would no longer be able to do sensor-plane phase-detection AF. They would simply have a higher-resolution sensor, although that sensor would have pixels with a 2:1 aspect ratio (each pixel half as wide as it is tall, so twice as many pixels horizontally), which would be a little odd to demosaic and might not produce the best quality results. Canon might as well just shrink the entire pixel size by a factor of two, drop four times as many pixels on the sensor, and just call it a day if they are going to do that.

DPAF, as it is currently designed (according to the diagram above), is pretty strictly an AF thing. As far as imaging goes, since the photodiode halves are binned per pixel, there is really no difference vs. a sensor that just has one photodiode per pixel. There isn't anything special or magical about DPAF that will give Canon the ability to do something no other manufacturer can. It won't improve resolution (since the photodiode is at the bottom of the pixel well, below the color filter and microlens), it won't improve dynamic range (I've discussed this at length elsewhere, but reading one half at one ISO and the other half at another ISO ultimately results in a net-zero gain...you can't really improve DR, you can't improve SNR, and it won't give ML anything better than they already have using Canon's downstream amplifier...actually, the downstream amp is better).

DPAF is just that...an autofocus feature. Nothing more.
 
jrista:
"It won't improve resolution (since the photodiode is at the bottom of the pixel well, below the color filter and microlens),"

I think that the whole structure below the filter is the photodiode: to discriminate both "phases" you need to discriminate the light that hits the two photodiodes of one pixel. So there is a chance to enhance resolution SLIGHTLY by reading out both photodiodes separately.


jrista:
"it won't improve dynamic range (I've discussed this at length elsewhere, but reading one half at one ISO and the other half at another ISO ultimately results in a net-zero gain"

If you could make one of the two photodiodes "less sensitive" by some procedure (I do not know how), you would have additional non-saturated information about brightness.

Both theoretically possible improvements need:
* the capability to read out both photodiodes independently (that is possible because it is necessary for DPAF, but it is questionable whether you can read the WHOLE sensor in this manner)
* the capability to play with the sensitivity curves of both photodiodes independently...

So basically you are right that, at the moment, the sensor will use the two-photodiodes-per-pixel design for AF only. And binning (adding both photodiode charges) will give a reasonable "photosite size".
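
For what it's worth, the "less sensitive half" idea can be sketched as a toy highlight-recovery merge (all numbers invented; nothing in the current design provides such a sensitivity split):

Code:
import numpy as np

FULL_WELL = 1000.0       # made-up full-well capacity (electrons)
ATTENUATION = 0.25       # made-up sensitivity ratio for the "less sensitive" half

def expose(scene, sensitivity=1.0):
    return np.clip(scene * sensitivity, 0, FULL_WELL)

scene = np.array([100.0, 900.0, 3000.0])    # the last value overflows the well
a = expose(scene)                           # normal half: clips at 1000
b = expose(scene, ATTENUATION)              # attenuated half: keeps headroom

# Merge: trust half A unless it clipped, then rescale half B instead.
merged = np.where(a < FULL_WELL, a, b / ATTENUATION)
print(merged)                               # [ 100.  900. 3000.]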
 
mb66energy said:
jrista:
"It won't improve resolution (since the photodiode is at the bottom of the pixel well, below the color filter and microlens),"

I think that the whole structure below the filter is the photodiode: to discriminate both "phases" you need to discriminate the light that hits the two photodiodes of one pixel. So there is a chance to enhance resolution SLIGHTLY by reading out both photodiodes separately.

Trust me, the entire structure below the filter is not the photodiode. The photodiode is a specially doped area at the bottom of what we call the "pixel well". The diode is doped, then the substrate is etched, then the first layer of wiring is added, then more silicon is added, then more wiring. Front-side illuminated sensors are designed exactly as I've depicted. The photodiode is very specifically the bit of properly doped silicon at the bottom of the well.

As for phase, it doesn't matter how deep the photodiode is; as I've said many times before, depth does not matter, only area. Phase is detected because the left half of the photodiode only receives light from the left half of the lens, and the right half only receives light from the right half of the lens. This is exactly how dedicated PDAF sensors work...the AF unit contains lenses that do exactly the same thing...split the light from the lens, sending light from one half to the AF strips on one side of the sensor, and sending light from the other half to the AF strips on the other side of the sensor.

mb66energy said:
jrista:
"it won't improve dynamic range (I've discussed this at length elsewhere, but reading one half at one ISO and the other half at another ISO ultimately results in a net-zero gain"

If you could make one of the two photodiodes "less sensitive" by some procedure (I do not know how), you would have additional non-saturated information about brightness.

In terms of the actual silicon, there is only one sensitivity. Quantum efficiency dictates how efficient the sensor is, and that is a fixed trait based on material purity, doping, dark current levels, temperature, etc. At room temperature (usually defined as 70°F or 72°F), the Q.E. of current Canon sensors is around 50% (+/- 2%).

The photodiodes are as sensitive as they are. The only thing that can change how sensitive they are is to design an entirely new sensor with the explicit goal of improving Q.E. (ISO has nothing to do with sensitivity; ISO is simply a means of controlling gain, the amount the signal is amplified during readout.)
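
A worked toy example of that distinction (all values invented): Q.E. fixes how many photons become electrons; ISO only scales the readout.

Code:
QE = 0.50                    # ~50% quantum efficiency, fixed by the silicon
photons = 2000               # photons striking the photodiode
electrons = photons * QE     # 1000 e- captured, the same at ANY ISO setting

def readout(electrons, iso, base_iso=100, volts_per_e=0.001):
    gain = iso / base_iso    # ISO only sets the amplifier gain
    return electrons * volts_per_e * gain

print(readout(electrons, 100))   # 1.0 (arbitrary units)
print(readout(electrons, 800))   # 8.0: same captured signal, more amplification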

mb66energy said:
Both theoretically possible improvements need:
* the capability to read out both photodiodes independently (that is possible because it is necessary for DPAF, but it is questionable whether you can read the WHOLE sensor in this manner)

This is already possible. If it were not, there would be no way for the AF feature to work. Both halves of the photodiode are indeed read independently, and the entire sensor (or to be more specific, the 80% of the sensor that actually has dual photodiodes) must indeed be read out at once for FP-PDAF to work. There is no need to "innovate" this "improvement"; it is essential and intrinsic to the design in the first place.

But again...this has nothing to do with imaging. It doesn't matter that the two photodiode halves of the entire sensor can all be read out at once. They are BELOW THE COLOR FILTER. I don't know how else to explain it, but since the photodiodes are below the CFA, it doesn't matter whether you read them as independent halves or binned...they are still just one color. You aren't gaining any improvement in resolution or anything like that by reading them independently. All you're doing is creating two pixels with half the signal range and therefore half the maximum brightness. You would still need to find some way of digitally binning them in post to achieve the proper brightness levels to produce a full pixel at the proper exposure.

Again...no magic bullet here. What you're saying is necessary is already possible and present. It's actually essential for DPAF's sensor-plane (focal-plane, or FP) PDAF function to work in the first place.

mb66energy said:
* the capability to play with the sensitivity curves of both photodiodes independently...

Again, sensitivity is an intrinsic and fixed trait of the silicon itself. ISO is simply a means of controlling gain, not sensitivity. There is no way to play with the sensitivity curves of photodiodes...period. It doesn't matter whether there are one, two, or more per pixel. Their sensitivity is fixed for a given sensor design.

mb66energy said:
So basically you are right that, at the moment, the sensor will use the two-photodiodes-per-pixel design for AF only. And binning (adding both photodiode charges) will give a reasonable "photosite size".

This will be right forever, so long as Canon desires to support AF via the image sensor. Even if they move to a quad design, for phase detection in both the vertical and the horizontal, the fundamental design characteristics will not change...the photodiodes will still have to be beneath the CFA and microlens layers for phase detection to work. This would also remain true even if Canon moved to a BSI design...all that would change is where the wiring is and how deep the pixel well is...the photodiodes would again remain below the CFA.
 
jrista said:
mb66energy said:
jrista:
"It won't improve resolution (since the photodiode is at the bottom of the pixel well, below the color filter and microlens),"

I think that the whole structure below the filter is the photodiode: to discriminate both "phases" you need to discriminate the light that hits the two photodiodes of one pixel. So there is a chance to enhance resolution SLIGHTLY by reading out both photodiodes separately.

Trust me, the entire structure below the filter is not the photodiode. The photodiode is a specially doped area at the bottom of what we call the "pixel well". The diode is doped, then the substrate is etched, then the first layer of wiring is added, then more silicon is added, then more wiring. Front-side illuminated sensors are designed exactly as I've depicted. The photodiode is very specifically the bit of properly doped silicon at the bottom of the well.

The "well" or better potential well of a photodiode is the part of the photodiode where the charge is stored during exposition. It is made of (doped) silicon which is intransparent. The image you provided seems to me a little bit strange: How could the light hit the photodiode at the bottom if the well is intransparent? Please send me the source of the image and hopefully I could find some enlightening information about it!

Thanks in advance - Michael
 
3kramd5
jrista said:
3kramd5 said:
Right. I'm more curious about what else they can do with dual pixels in general than about DPAF itself. I am running on the assumption that using the pixel pairs for an IQ gain (be it resolution, color depth, better demosaicing, etc.) would come at the cost of losing sensor-level phase AF.

Maybe I'm grasping at straws; it just struck me as potentially a HUGE leap in pixel density for Canon SLRs.

I'm not really sure what kind of sensor design you're proposing.

Not proposing, just wondering.

jrista said:
If Canon decided to put different color filters over the two photodiode halves and shrink the microlenses, they would no longer be able to do sensor-plane phase-detection AF.

Agreed. Giving up sensor-level phase detect is part and parcel of what I'm asking. Specifically, COULD they do what you wrote: put on different color filters, shrink the microlenses, and read out each photodiode individually rather than binning them?

jrista said:
which would be a little odd to demosaic and might not produce the best quality results.

Interesting. Why could it not produce results as good, if not better? I'm assuming an array like this:

Code:
RG-BG-RG-BG-RG-BG
BG-RG-BG-RG-BG-RG
RG-BG-RG-BG-RG-BG
BG-RG-BG-RG-BG-RG

Each pixel would read two colors: green plus either red or blue. That maintains the same color ratio as a Bayer-type CFA (2G/1R/1B); it just collocates each green with another color.
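
A toy demosaic sketch of that layout (my illustration only, with random data): every pixel records green plus one of red/blue, so a pixel and its horizontal neighbor yield a full RGB triple.

Code:
import numpy as np

H, W = 4, 6
rng = np.random.default_rng(2)
g = rng.uniform(0, 1, (H, W))        # green, measured at EVERY pixel
rb = rng.uniform(0, 1, (H, W))       # the co-located red-or-blue sample

# True where the second sample is red, per the RG/BG layout above
# (alternating along a row, offset every other row).
is_red = (np.indices((H, W)).sum(axis=0) % 2) == 0

def rgb_at(y, x):
    """Full RGB from one pixel plus its horizontal neighbor."""
    xn = x + 1 if x + 1 < W else x - 1       # the neighbor holds the other of R/B
    r = rb[y, x] if is_red[y, x] else rb[y, xn]
    b = rb[y, xn] if is_red[y, x] else rb[y, x]
    return r, g[y, x], b

print(rgb_at(0, 0))      # an RGB triple recovered from a two-pixel pair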


jrista said:
Canon might as well just shrink the entire pixel size by a factor of two, drop four times as many pixels on the sensor, and just call it a day if they are going to do that.

Sure, but if they could build one sensor and then have either high resolution OR sensor-level phase detect, depending on which CFA is fitted, I figure they could cut their development and production costs in a two-camera offering.

shrug.
 