dual pixel tech going forward

mb66energy said:
jrista said:
mb66energy said:
jrista:
"It won't improve resolution (since the photodiode is at the bottom of the pixel well, below the color filter and microlens),"

I think that the whole structure below the filter is the photodiode - to discriminate both "phases" you need to discriminate the light that hits the two photodiodes of one pixel. So there is a chance to enhance resolution SLIGHTLY by reading out both photodiodes separately.

Trust me, the entire structure below the filter is not the photodiode. The photodiode is a specially doped area at the bottom of what we call the "pixel well". The diode is doped, then the substrate is etched, then the first layer of wiring is added, then more silicon, then more wiring. Front-side illuminated sensors are designed exactly as I've depicted. The photodiode is very specifically the bit of properly doped silicon at the bottom of the well.

The "well" or better potential well of a photodiode is the part of the photodiode where the charge is stored during exposition. It is made of (doped) silicon which is intransparent. The image you provided seems to me a little bit strange: How could the light hit the photodiode at the bottom if the well is intransparent? Please send me the source of the image and hopefully I could find some enlightening information about it!

Thanks in advance - Michael

Silicon is naturally semitransparent to visible light, and particularly so in the IR range. The natural response curve for silicon tends to peak somewhere around the yellow-greens or orange-reds, and tapers slowly off into the infrared (out to around 1100nm). The entire structure I've drawn is only a dozen or so microns thick at most. Light can easily reach the bottom of the well or photochannel or whatever you want to call it. The photodiode is indeed at the bottom. Sometimes the entire substrate is doped, sometimes it's an additional layer attached to the readout wiring. Sometimes the well is filled with something. Here is an image of an actual Sony sensor:

[Image: cross-section of an actual Sony front-side illuminated sensor]


Here is a Foveon sensor, which clearly shows the three layers of photodiodes (cathodes) that penetrate deeper into the silicon substrate for each color (the deeper you go, the more the higher frequencies are filtered out, which is why the blue photodiode is at the top and the red one at the bottom), and there is no open well; it's all solid material:

[Image: cross-section of a Foveon sensor, showing the three stacked photodiode layers]


Here is an actual image of one of Canon's 180nm Cu lightpipe sensor designs. This particular design fills the pixel "well", as I called it, with a highly refractive material; the well itself is lined with a highly reflective material, and the photodiode is the darker material at the bottom, attached to the wiring:

[Image: cross-section of a Canon 180nm Cu lightpipe sensor design]


Regardless of the actual material in the well, which is usually some silicon-based compound, the photodiode is always at the bottom. Even in the case of backside illuminated sensors, the photodiode is still beneath the CFA, microlens layers, and all the various intermediate layers of silicon:

[Image: cross-section of a backside-illuminated sensor]


This image is from a very small sensor. Its overall thickness is much less than that of your average APS-C or FF sensor. The entire substrate is apparently photodiode cathodes; you can see the CFA, microlenses, and some wiring at the bottom. The readout wiring is at the top. The photodiode layer is in the middle.

Every sensor design requires light to penetrate silicon to reach the photodiode.
 
3kramd5 said:
jrista said:
which would be a little odd to demosaic and might not produce the best quality results.

Interesting. Why could it not produce results as good, if not better? I'm assuming an array like this:

Code:
RG-BG-RG-BG-RG-BG
BG-RG-BG-RG-BG-RG
RG-BG-RG-BG-RG-BG
BG-RG-BG-RG-BG-RG

Each pixel would read two colors: a green, plus either a red or a blue. That maintains the same color ratio as a Bayer-type CFA (2G/1R/1B); it just collocates each green with another color.

You have a pixel size ratio issue here. You have twice as many pixels horizontally as you do vertically. I think this was the first thing Neuro mentioned. To correct that, you would have to merge the two halves during demosaicing...in which case, why do it at all? You lose the improved resolution once you "bin", regardless of whether the binning is electronic or a digital algorithmic blend.
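
To make that "blend" concrete, here is a minimal sketch (hypothetical, not Canon's actual pipeline) of what merging the two halves back into square pixels would look like:

Code:
# Minimal sketch (hypothetical, not Canon's actual pipeline) of the "digital
# algorithmic blend": average the left/right half-photodiode samples of each
# pixel to get back to square pixel geometry.
import numpy as np
def merge_split_pixels(raw_split):
    """raw_split: 2D array of shape (rows, 2*cols), left/right halves interleaved."""
    left = raw_split[:, 0::2]    # left half-photodiode of each pixel
    right = raw_split[:, 1::2]   # right half-photodiode of each pixel
    return (left + right) / 2.0  # binned value, square pixels again
demo = np.arange(32, dtype=float).reshape(4, 8)   # 4 rows of 8 half-pixel samples
print(merge_split_pixels(demo).shape)             # (4, 4): back to 4x4 full pixels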

Regarding color fidelity, I don't know that there is any evidence that your particular design would improve it. There have been a LOT of attempts to use alternative CFA designs to improve color fidelity. Some may have worked: Sony added an emerald pixel (basically a blend of blue and green), and Kodak experimented with various arrangements that include white pixels. Fuji has used a whole range of alternative pixel designs, including a 6x6 pixel matrix with lots of green, some red, and some blue pixels, extra luminance pixels, and a variety of other layouts. Sony has even designed sensors with triangular and hexagonal pixels and alternative demosaicing algorithms to improve color fidelity and sharpness and reduce aliasing.

None of these other designs has ever PROVEN to offer better color fidelity than the simple, standard RGBG Bayer CFA. The D800 is an excellent example of how good a plain old Bayer can get...its color fidelity is second to none (and even bests most MFD sensors).

Anyway...DPAF isn't a magic bullet. It solved an AF problem, and solved it quite nicely, while concurrently offering the most flexibility by rendering the entire sensor (well, 80% of it) usable for AF purposes. To get more resolution, use more, smaller pixels. If you want better dynamic range, reduce downstream noise contributors (bus, downstream amps, ADC units). If you want better high ISO sensitivity, increase quantum efficiency. If you want improved light gathering capacity, make the photodiodes larger, increase the transparency of the CFA, employ microlenses (even in multiple layers), move to BSI, use color splitting instead of color filtration, etc. Sensor design is pretty straightforward. There isn't anything magical here, and you also have to realize that a LOT of ideas have already been tried, and most ideas, even if they ultimately get employed at one point or another, often end up failing in the end. The good, old, trusty, straightforward Bayer CFA has stood the test of time and withstood the onslaught of countless alternative layouts.
 
jrista said:
[...]

Every sensor design requires light to penetrate silicon to reach the photodiode.

Thanks for your extensive explanations, but I disagree on some important details.

Your last sentence is truly correct - you need to reach the pn-junction of the photodiode, which is "inside" the die structure.
But after checking a lot of images on the web I came to the following conclusion:

1 micron of silicon would (according to http://www.aphesa.com/downloads/download2.php?id=1, page 2) reduce the amount of light at 500 nm to 0.36^3 ≈ 0.05, i.e. 5% - a sensor with 1 micron of silicon between the front surface and the photodiode structure would effectively be red-sensitive only.
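
To spell out the arithmetic, here is a small sketch. It only assumes that absorption is exponential, so layer transmissions multiply; the 0.36 per-layer figure is the one behind the 0.36^3 above, taken from the aphesa data rather than re-derived:

Code:
# Sketch of the attenuation arithmetic only. The 0.36 figure is the per-layer
# transmission implied by the 0.36^3 above (from the aphesa data, not
# re-derived here); absorption is exponential, so layer transmissions multiply.
import math
def transmission(per_layer_t, n_layers):
    """Fraction of light left after n_layers, each transmitting per_layer_t."""
    return per_layer_t ** n_layers
print(transmission(0.36, 3))              # ~0.047, the ~5% figure quoted above
# Equivalent Beer-Lambert form T = exp(-alpha * d): the same attenuation over a
# 1 um path corresponds to an effective alpha of about 3.1 per micron.
alpha = -math.log(0.36 ** 3) / 1.0        # per micron, implied by the numbers above
print(math.exp(-alpha * 1.0))             # ~0.047 again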

Therefore the space between the chip surface and the photodiode is filled with oxides. If silicon is the base material, the oxide is usually silicon dioxide, which is the same as quartz and highly transparent. I have tried to depict that in the sketch "Simplified Imaging Sensor Design" attached here (transistors and x-/y-readout channels are omitted).

Regarding photodiode sensitivity: you can surely reduce the sensitivity of a photodiode in a system by
(1) using a filter
(2) running a current that continuously discharges the photodiode
(3) stopping integration independently during the exposure
For (1), think of a tiny LCD window in front of the second photodiode of one color pixel: blackening the LCD has the same effect as a gray filter (e.g. ND3). Both photodiodes read the same pixel at different sensitivities. The unchanged photodiode has full sensitivity; the filtered photodiode has 3 EV lower sensitivity. The LCD would be darkened during exposure but left open for DPAF.
For (2), think of a transistor on the second photodiode of a pixel acting as a variable resistor between something like 1000 MOhms and 100 kOhms - photodiode 1 of the pixel integrates its charge quickly, while photodiode 2 integrates more slowly because some charge is drained away by the transistor acting as a discharge resistor.
For (3), you also need a transistor, and you stop integration after e.g. 10% of the exposure time, before the full well capacity is reached.
All methods require replacing the information from the saturated photodiodes 1 with that from the non-saturated photodiodes 2 (with their slower integration rate). It is like an HDR shot combined from two images that were taken SIMULTANEOUSLY (except for (3)).
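
As a rough software sketch of that replacement step (the full well, clip point and 3 EV ratio are just illustrative assumptions):

Code:
# Rough sketch of the replacement step (all numbers are illustrative): two
# photodiodes per pixel exposed simultaneously, one at full sensitivity, one
# 3 EV slower; where the fast diode clips, substitute the slow diode's value
# scaled back up by the sensitivity ratio.
import numpy as np
FULL_WELL = 50_000.0              # e-, assumed full-well capacity
SENS_RATIO = 2.0 ** 3             # 3 EV sensitivity difference
def merge_dual_readout(fast, slow, clip=0.98 * FULL_WELL):
    fast = np.asarray(fast, dtype=float)   # electron counts, full-sensitivity halves
    slow = np.asarray(slow, dtype=float)   # electron counts, filtered halves
    merged = fast.copy()
    saturated = fast >= clip
    merged[saturated] = slow[saturated] * SENS_RATIO   # rescale the slow half
    return merged
# First pixel clipped (replaced by 9000 * 8 = 72000 e-), second pixel kept as-is:
print(merge_dual_readout([50_000, 20_000], [9_000, 2_500]))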

Enhancing resolution (perhaps) slightly (along the lines of 3kramd5's or caruser's description):
The typical pattern (for the DPAF sensor in its current configuration, AF and exposure) is:

Code:
rr  GG  rr  GG  rr  GG  rr  GG
GG  bb  GG  bb  GG  bb  GG  bb
rr  GG  rr  GG  rr  GG  rr  GG
GG  bb  GG  bb  GG  bb  GG  bb

Just re-sort this (after AF is done) into the following readout with 20 MPix but 2 colors per (virtual) pixel:
Code:
r  rG  Gr  rG  Gr  rG  Gr  rG  G
G  Gb  bG  Gb  bG  Gb  bG  Gb  b
r  rG  Gr  rG  Gr  rG  Gr  rG  G
G  Gb  bG  Gb  bG  Gb  bG  Gb  b
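
A tiny sketch of the re-sorting bookkeeping (just illustrative; the colour labels are the ones from the pattern above):

Code:
# Sketch of the re-sorting bookkeeping only: pair the right sub-photodiode of
# each physical pixel with the left sub-photodiode of its neighbour, so each
# interior "virtual" pixel carries two colour samples.
def virtual_pixels(subpixel_row):
    """subpixel_row: colour label per sub-photodiode, 2 per physical pixel."""
    groups = [subpixel_row[:1]]                                  # lone edge half
    groups += [subpixel_row[i:i + 2] for i in range(1, len(subpixel_row) - 1, 2)]
    groups.append(subpixel_row[-1:])                             # lone edge half
    return ["".join(g) for g in groups]
row = list("rrGGrrGGrrGGrrGG")           # first row of the pattern above
print(" ".join(virtual_pixels(row)))     # r rG Gr rG Gr rG Gr rG G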

You are right (and that was my feeling too) that this will not dramatically enhance resolution, but I see one special case where it might help a lot: monochromatic light sources, which will be used more and more as signs (street signs, logos, etc.) are lit by LEDs. I have observed that de-Bayering works badly with LED light, especially blue and red light, because the neighbouring green photosites aren't excited enough. I very often see artifacts in that case that vanish if you downsample the picture by a factor of 2 (linear).

My conclusion is that "dual photodiode per pixel" structures might have strong potential beyond the AF method they provide now. I don't know whether the current 70D sensor has this potential, but I think there is some headroom for real products.
 

Attachments

  • bildsensor_vereinfacht.jpg ("Simplified Imaging Sensor Design" sketch)
mb66energy said:
jrista said:
[...]

Every sensor design requires light to penetrate silicon to reach the photodiode.

Thanks for your extensive explanations, but I disagree on some important details.

Your last sentence is truly correct - you need to reach the pn-junction of the photodiode, which is "inside" the die structure.
But after checking a lot of images on the web I came to the following conclusion:

1 micron of silicon would (according to http://www.aphesa.com/downloads/download2.php?id=1, page 2) reduce the amount of light at 500 nm to 0.36^3 ≈ 0.05, i.e. 5% - a sensor with 1 micron of silicon between the front surface and the photodiode structure would effectively be red-sensitive only.

Therefore the space between the chip surface and the photodiode is filled with oxides. If silicon is the base material, the oxide is usually silicon dioxide, which is the same as quartz and highly transparent. I have tried to depict that in the sketch "Simplified Imaging Sensor Design" attached here (transistors and x-/y-readout channels are omitted).

Indeed. I did mention that it was a silicon-based compound, not pure silicon: "Regardless of the actual material in the well, which is usually some silicon-based compound, the photodiode is always at the bottom. "

I agree, though: SiO2, or something silicon-dioxide based, is usually the material used for the layers deposited above the substrate, but not always. It depends on the design and size of the pixel. As most small form factor designs have moved to BSI, I don't think there are many lightpipe designs out there; however, in sensors with pixels around 2µm and smaller, the channel to the photodiode is lined with SiN, then filled with a highly refractive material. In one paper (http://www.silecs.com/download/CMOS_image_sensor_with_high_refractive_index_lightpipe.pdf, a very interesting read if you're interested), they mentioned two other compounds used: Silecs XC400L and Silecs XC800, which are organosiloxane-based materials (partially organic silicates, so still generally SiO2 based, but the point is to make them refractive enough to bend light arriving at highly oblique angles from the microlens down a deep, narrow channel to the photodiode).

I have another paper bookmarked somewhere that covered different lightpipe materials, but with BSI having effectively taken over for small form factor sensors, I don't think it much matters.

mb66energy said:
Regarding photodiode sensitivity: you can surely reduce the sensitivity of a photodiode in a system by
(1) using a filter
(2) running a current that continuously discharges the photodiode
(3) stopping integration independently during the exposure
For (1), think of a tiny LCD window in front of the second photodiode of one color pixel: blackening the LCD has the same effect as a gray filter (e.g. ND3). Both photodiodes read the same pixel at different sensitivities. The unchanged photodiode has full sensitivity; the filtered photodiode has 3 EV lower sensitivity. The LCD would be darkened during exposure but left open for DPAF.
For (2), think of a transistor on the second photodiode of a pixel acting as a variable resistor between something like 1000 MOhms and 100 kOhms - photodiode 1 of the pixel integrates its charge quickly, while photodiode 2 integrates more slowly because some charge is drained away by the transistor acting as a discharge resistor.
For (3), you also need a transistor, and you stop integration after e.g. 10% of the exposure time, before the full well capacity is reached.
All methods require replacing the information from the saturated photodiodes 1 with that from the non-saturated photodiodes 2 (with their slower integration rate). It is like an HDR shot combined from two images that were taken SIMULTANEOUSLY (except for (3)).

I understand what you're getting at, but it isn't quite the same as doing HDR. With HDR, you're using the full photodiode area for each of multiple exposures. In what you have described, you're reducing your photodiode capacity by using one half for fast saturation and the other half for slow saturation. Total light sensitivity is determined by the area of the sensor that is sensitive to light...your approach effectively reduces sensitivity by 25% by reducing the saturation rate of half the sensor by one stop.

If your photodiode has 50% Q.E. and a capacity of 50,000e-, you have a photon influx rate of 15,000/sec, and you expose for five seconds, your photodiode ends up with a charge of 37,500e-. In your sensor design, assuming the same scenario, the number of photons striking the sensor is the same...you end up with a charge of 18,750e- in the fast-sat. half and 9,375e- in the slow-sat. half, for a total of 28,125e-. You gathered 75% of the charge that the full single photodiode did, and therefore require increased gain, which means increased noise.
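
Spelled out with the same assumed numbers:

Code:
# Same arithmetic as above, spelled out (assumed numbers from the example):
qe = 0.5                            # quantum efficiency
photons = 15_000 * 5                # photons arriving at the whole pixel in 5 s
single = photons * qe               # one full-area photodiode
print(single)                       # 37500.0 e-
fast_half = (photons / 2) * qe      # half the area at the full rate
slow_half = (photons / 2) * qe / 2  # the other half, one stop slower
print(fast_half + slow_half)        # 28125.0 e-, i.e. 75% of the single diode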

I thought about this fairly extensively a while back. I also ran through Don's idea of using ISO 100 for one half and ISO 800 for the other, but ultimately it's the same fundamental issue: sensitivity (true sensitivity, i.e. quantum efficiency) is a fixed trait of any given sensor. Aside from a high speed cyclic readout that constantly reads the charge off the sensor and stores it in high capacity accumulators for each pixel, there isn't any magic or clever trickery in standard sensor designs (regardless of how wild you get with materials) that can increase the amount of light gathered beyond what the base quantum efficiency dictates. The best ways to maximize sensitivity are to:

A) Minimize the amount of filtration that occurs before the light reaches the photodiode.
B) Maximize the quantum efficiency of the photodiode itself.

I think, or at least hope, that color filter arrays will ultimately become a thing of the past. Their name says it all: color FILTER. They filter light, meaning they eliminate some portion of the light that reached the sensor in the first place, before it reaches the photodiode. Panasonic designed a new type of sensor called a Micro Color Splitting array, which instead of using filters used tiny "deflectors" (SiN) to either deflect or pass light that made it through an initial layer of microlenses, taking advantage of diffraction. The SiN material, used on every other pixel, deflected red light to the neighboring photodiodes and passed "white minus red" light to the photodiode of the current pixel. The alternate "every other pixel" had no deflector, and passed all of the light without filtration. Here is the article:

http://image-sensors-world.blogspot.com/2013/02/panasonic-develops-micro-color-splitters.html

The ingenuity of this design results in only two "colors" of photodiode, instead of three: W+R and W-R, or White plus Red and White minus Red. I think, if I understand where you're going with the descriptions both above and below, that this is ultimately where you would end up if you took the idea to its extreme. Simply do away with filtration entirely, and pass as much light as you possibly can through the microlenses. Panasonic claims "100%" of the light reaches the photodiodes...I'm doubtful of that, as there are always losses in every system, but it's certainly a hell of a lot more light reaching the photodiodes than is currently possible with a standard Bayer CFA.
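
To make the "two colors are enough" point concrete, here is a toy sketch (my own illustration, not Panasonic's published reconstruction) of separating a red and a panchromatic component from a W+R / W-R pair:

Code:
# Toy sketch (my own illustration, not Panasonic's published reconstruction):
# a W+R sample and a neighbouring W-R sample are enough to separate a red
# component and a panchromatic (white) component.
def split_wr(w_plus_r, w_minus_r):
    red = (w_plus_r - w_minus_r) / 2.0
    white = (w_plus_r + w_minus_r) / 2.0
    return red, white
print(split_wr(1300.0, 900.0))   # (200.0, 1100.0): red component, panchromatic component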

I think Micro Color Splitting is probably the one truly viable alternative to your standard Bayer CFA, however the sad thing is it's Panasonic that owns the patent, and I highly doubt that Sony or Canon will be licensing the rights to use the design any time soon...so, once again, I suspect the trusty old standard Bayer CFA will continue to persist throughout the eons of time. :p


mb66energy said:
Enhancing resolution (perhaps) slightly (along the lines of 3kramd5's or caruser's description):
The typical pattern (for the DPAF sensor in its current configuration, AF and exposure) is:

Code:
rr  GG  rr  GG  rr  GG  rr  GG
GG  bb  GG  bb  GG  bb  GG  bb
rr  GG  rr  GG  rr  GG  rr  GG
GG  bb  GG  bb  GG  bb  GG  bb

Just re-sort this (after AF is done) into the following readout with 20 MPix but 2 colors per (virtual) pixel:
Code:
r  rG  Gr  rG  Gr  rG  Gr  rG  G
G  Gb  bG  Gb  bG  Gb  bG  Gb  b
r  rG  Gr  rG  Gr  rG  Gr  rG  G
G  Gb  bG  Gb  bG  Gb  bG  Gb  b

You are right (and that was my feeling too) that this will not dramatically enhance resolution, but I see one special case where it might help a lot: monochromatic light sources, which will be used more and more as signs (street signs, logos, etc.) are lit by LEDs. I have observed that de-Bayering works badly with LED light, especially blue and red light, because the neighbouring green photosites aren't excited enough. I very often see artifacts in that case that vanish if you downsample the picture by a factor of 2 (linear).

I understand the general goal, but I think Micro Color Splitting is the solution, rather than trying to use DPAF in a quirky way to increase the R/B color sensitivity. Also, LED lighting is actually better than sodium or mercury vapor lighting, or even CFL lighting. Even a blue LED with a yellow phosphor has a more continuous spectrum than any of those forms of lighting, albeit at a lower intensity level. However, progress in the last year or so with LED lighting has been pretty significant, and we're starting to see high-CRI LED bulbs of around 89-90 CRI, and specially designed LED bulbs are coming onto the market that I suspect will ultimately replace the 95-97 CRI CFL bulbs that have long been used in photography applications where clean, broad-spectrum light is essential.

Regardless of what kind of light sources we'll have in the future, though, I think that, assuming Panasonic can get more manufacturers using their MCS sensor design, or maybe if they sell the patent to Sony or Canon, standard Bayer CFA designs will ultimately disappear, as they simply filter out too much light. MCS preserves the most light possible, which is really what we need to improve total sensor efficiency. Combine MCS with "black silicon", which employs the "moth eye" effect at the silicon substrate level to nearly eliminate reflection, and we have ourselves one hell of a sensor. ;D

(Sadly, I highly doubt Canon will be using any of these technologies in their sensors any time soon...most of the patents for this kind of technology are held by other manufacturers: Panasonic, Sony, Aptina, Omnivision, SiOnyx, etc. There have been TONS of CIS innovations over the last few years, some with amazing implications (like black silicon)...the only thing Canon developed that barely made it onto the sensor-innovation radar is DPAF, and it was like someone dropped a pebble into the ocean; the DPAF innovation was pretty much ignored entirely...)
 
So I think we have some substantial "overlap" now - great.

The Silecs paper is an interesting read! It's funny what they put together at these dimensions, twenty million times over, to make an image sensor that really works well. But the improvements are gradual and ...

jrista said:
[...]

The alternate "every other pixel" had no deflector, and passed all of the light without filtration. Here is the article:

http://image-sensors-world.blogspot.com/2013/02/panasonic-develops-micro-color-splitters.html

The ingenuity of this design results in only two "colors" of photodiode, instead of three: W+R and W-R, or White plus Red and White minus Red.

[...]

... I agree that this is the "Königsweg", as we say in Germany - the "king's way": splitting the light instead of filtering out unwanted colors (throwing light away) at the cost of system efficiency.
I read about that technology, and I think they use interference filters which reflect one part of the spectrum and transmit the complementary part.
An alternative might be a sensor which uses a prism or an optical grating to separate the wavelengths, with three or four photodiodes to sense the colors.

These sensors are the counterpart of OLED displays, which, unlike LCD displays, omit filtering and produce light in the wanted colors directly. It is the way of the future.

Before I forget: I found an interesting paper about a dual-photodiode-per-pixel architecture used to increase the DR to 120 dB, called "Dynamic-Range Widening in a CMOS Image Sensor Through Exposure Control Over a Dual-Photodiode Pixel". They have a pixel split into an L-shaped photodiode covering 75% of the area and a 25% photodiode that completes the L-shape to a square (the link might not be available at the moment due to a website update):
http://www.researchgate.net/publication/224611868_Dynamic-Range_Widening_in_a_CMOS_Image_Sensor_Through_Exposure_Control_Over_a_Dual-Photodiode_Pixel/file/e0b495219bbfda2524.pdf
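
Out of curiosity, a back-of-the-envelope sketch of why such a split widens the DR (all values below are my own assumptions, not numbers from the paper):

Code:
# Back-of-the-envelope sketch (all values are my own assumptions, not taken
# from the paper): the small 25% photodiode sees a quarter of the light and
# can be given a shorter exposure, so it saturates at much higher luminance.
import math
full_well = 40_000.0      # e-, assumed per-diode full well
read_noise = 3.0          # e-, assumed noise floor of the large diode
area_ratio = 0.25         # small diode collects 25% of the pixel's light
exposure_ratio = 1 / 32   # assumed shorter exposure for the small diode
max_signal = full_well / (area_ratio * exposure_ratio)   # referred to full pixel/exposure
dr_db = 20 * math.log10(max_signal / read_noise)
print(round(dr_db, 1))    # ~124.6 dB with these made-up numbers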

Best - Michael
 
mb66energy said:
Before I forget: I found an interesting paper about a dual-photodiode-per-pixel architecture used to increase the DR to 120 dB, called "Dynamic-Range Widening in a CMOS Image Sensor Through Exposure Control Over a Dual-Photodiode Pixel". They have a pixel split into an L-shaped photodiode covering 75% of the area and a 25% photodiode that completes the L-shape to a square (the link might not be available at the moment due to a website update):
http://www.researchgate.net/publication/224611868_Dynamic-Range_Widening_in_a_CMOS_Image_Sensor_Through_Exposure_Control_Over_a_Dual-Photodiode_Pixel/file/e0b495219bbfda2524.pdf

Aye, I read about that. There are a few other patents for similar technology as well. They all use a different exposure time for the luminance pixels, though, and the way they achieve that is to extend the exposure time for the luminance pixels across frames. The majority of these sensors are used in video applications, which is the primary reason they can employ the technique: they can expose luma for two frames and blend that single luma value into the color for both. (I cannot actually access the paper you linked without an account, and they require an institutional email to sign up; based on the abstract, however, it sounds much the same as other patents that use exposure control for DR improvement.)
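
As a rough sketch of that idea (my own illustration, not any vendor's actual pipeline):

Code:
# Rough sketch of the idea (my own illustration, not any vendor's pipeline):
# luminance photosites integrate across two frame times, and that single
# long-exposure luma sample is normalized and reused for both frames' colour.
def blend_two_frames(luma_two_frames, chroma_frame_a, chroma_frame_b):
    luma_per_frame = luma_two_frames / 2.0     # refer luma back to one frame time
    return ({"Y": luma_per_frame, "C": chroma_frame_a},
            {"Y": luma_per_frame, "C": chroma_frame_b})
frame_a, frame_b = blend_two_frames(2400.0, (300.0, 280.0), (310.0, 275.0))
print(frame_a["Y"], frame_b["Y"])              # 1200.0 1200.0, shared luma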
 