Could "dual pixel" help DR (HDR)?

Status
Not open for further replies.
Could Canon (or Magic Lantern) use dual pixel sensors to shoot different exposures to make single-shot HDR to give extended dynamic range?

The different exposure would have to be done with ISO since shutter and aperture would obviously be set only one way in each shot.

I may not be understanding something fundamental here.
 
I don't think anyone has one to test. It's possible that there is access to control ISO on each half of a pixel, but without a camera and many hours of experimentation, I don't know how anyone would find out.
It doesn't seem likely that the ISO of each half of a pixel can be controlled independently, unless Canon has something in mind for higher-end cameras.
 
By itself it is unlikely. However, it is very likely that Dual Pixel necessitated a change in the readout scheme, and changes in the readout scheme could affect DR. So for the time being it is best to wait until the tests start coming out, after the camera is in production and in the hands of the review sites.
 
I'd imagine there would be some interesting effects in the bokeh, should this be done - presuming a hack along the lines of the 7D/5D3 with alternating lines, you'd get the left half of every pixel exposing for shadows and the right half of every pixel exposing for highlights. As each half sees a different phase, out of focus areas could take on a strange look - for example, an OOF bright light source could end up with one half of the bokeh ball being close to full white, and the other being mid grey (the shadow channel would be blown out).

If they could alternate between the left and right halves for the two exposures, it could alleviate this potential problem.
 
tcmatthews said:
By itself it is unlikely. However, it is very likely that Dual Pixel necessitated a change in the readout scheme, and changes in the readout scheme could affect DR. So for the time being it is best to wait until the tests start coming out, after the camera is in production and in the hands of the review sites.
Big changes in readout: there is a separate processor dedicated to the readout for autofocus, so I'm sure that many experimenters will try to hack it. I'd think that it is controlled in firmware, so it may be possible, unless Canon took drastic measures to make it difficult.
 
That will give you the weirdest bokeh of all time.

Each of those vertically split pixels takes in half the image from the lens, and the two halves have to differ for the phase AF to work. If they are also exposed differently, then the final image will have something like a dark left half-circle plus a bright right half-circle in the background blur, which would be totally bizarre.
 
BozillaNZ said:
That will give you the weirdest bokeh of all time.

Each of those vertically split pixels takes in half the image from the lens, and the two halves have to differ for the phase AF to work. If they are also exposed differently, then the final image will have something like a dark left half-circle plus a bright right half-circle in the background blur, which would be totally bizarre.

At best I'd expect just a tiny bit of dark fringing at the very edge of the bokeh where it only hits one fractional subpixel and not the other, resulting in approximately half the expected brightness. I suspect that such artifacts could easily be avoided with some sophisticated digital filtering, though—perhaps something like this:

  • Take one half-image and interpolate the image to calculate approximately what it would look like when shifted by half a subpixel.
  • Convert both images to floating point subpixel values (to add room for additional precision).
  • Do a subpixel-by-subpixel search to replace off-scale high values in the brighter image with scaled-up values from the darker version of the image.
  • Similarly replace any off-scale low subpixel values in the darker image.
  • Do a subpixel-by-subpixel search for in-scale values where the values differ by more than a plausible amount (the sum of the expected maximum noise amplitudes for the chosen ISO settings, presumably) and replace the values in the shifted image with the values from the unshifted image (after scaling the value). This fixes the crisp edge problem mentioned previously; you have to pick one image to be "truth", so you should pick the one with the least distortion—i.e. the unshifted image.
  • For each subpixel, compute the average of the two images to (at least in theory) lower the noise floor.
  • Convert back to a higher-bit-depth integer value.

And now you have higher dynamic range and lower noise without fringing (albeit at half the original resolution in one direction).

Or if you actually need that extra resolution, you could do the same thing without the averaging step, then repeat the process with the shift applied in the other direction (again without averaging), and you should get a plausible approximation of high dynamic range at full resolution, but with no noise reduction and a little imprecision in the location of high-energy (crisp) edges that fall outside the dynamic range of one set of subpixels or the other. Not sure how well it would work in practice, but it could be a fun experiment. :)
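For what it's worth, the clipping-replacement and averaging steps could be sketched roughly like this in NumPy. This is a toy sketch only: it skips the half-subpixel shift step, and `merge_halves`, `gain`, and all the threshold values are made up for illustration, not anything Canon or ML actually exposes:

```python
import numpy as np

def merge_halves(bright, dark, gain, hi=0.98, lo=0.02, tol=0.05):
    """Toy merge of two differently exposed sub-pixel images in [0, 1].
    'gain' is the exposure ratio between them (e.g. 4.0 for two stops)."""
    b = bright.astype(np.float64)
    d = dark.astype(np.float64) * gain        # scale the dark image up to match
    b = np.where(bright >= hi, d, b)          # blown highlights -> dark data
    d = np.where(dark <= lo, b, d)            # noise-floor shadows -> bright data
    mismatch = np.abs(b - d) > tol * gain     # implausibly large disagreement
    d = np.where(mismatch, b, d)              # trust the "unshifted" image
    return 0.5 * (b + d)                      # average to lower the noise floor
```

A blown bright-side value of 1.0 paired with a dark-side value of 0.5 at gain 4 comes back as 2.0, i.e. highlight detail beyond the original clipping point.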

Where I'd expect fascinating differences would be in lens flare.
 
People concerned about "weird bokeh" need not be. Just think about a (non-X-Trans) Bayer array: there's only red and blue on alternate lines. Does the bokeh turn out with weird colours? No, it's all interpolated and corrected in software. (Unless you're the type who likes taking pics at ISO 3200 of the inside of your lens cap and pushing 5 stops in PP.)

Think of using 'dual pixel' the same way Magic Lantern just hacked the 5D3/7D to use alternate ISOs on different lines with the dual-readout thingummy. Assuming you could hack the firmware to allow readout from each individual half-pixel at a time, I can see this working exactly as the ML hack does right now.
That's assuming it's possible in hardware; maybe there's only one readout transistor for each double half-pixel, in which case this whole discussion is moot.

It might turn into a RAW-processing nightmare, but if ML can hack it to produce nice in-camera JPEGs then it might just be popular.

Also, read this. It is definitely possible to get extended DR using funky high/low sampling. The main reason the Fuji wasn't successful was (AFAIK) that not many available programs could decode its RAWs.
 
dr croubie said:
People concerned about "weird bokeh" need not be. Just think about a (non-X-Trans) Bayer array: there's only red and blue on alternate lines. Does the bokeh turn out with weird colours? No, it's all interpolated and corrected in software.
The main difference is that everything that has gone before (be it monochrome sensors, Bayer, X-Trans, Foveon) has had each and every photosite capturing the same angle of view. This dual pixel AF sensor is different: each photosite (sub-pixel) is only able to see half of the phase.

Imagine for a moment that you could capture an image independently from the left and the right hand phase of this dual pixel sensor (which Canon will obviously not let you do natively). If you've got everything in focus, the two halves will look identical, but if something is out of focus, it will appear in a different place in the frame in the two images (think of a split-image focusing screen).

Instead of a full-sized square photodiode behind the microlens, there are two photodiodes. According to the Canon marketing material, they are rectangular in shape, together forming a square the size and shape of a normal photodiode, with a vertical division. This allows one side to see one phase, and the other side the other phase. Theoretically, if you have a perfect point source of light (bright, about the size of one pixel were it in focus) out of focus, the two phases together will see a perfect circle of blur. But on its own, one side would theoretically see just a semicircle of blur with a vertical cut-off, and the other side would see the other half.
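The semicircle claim is easy to check with toy geometry (a NumPy sketch only; nothing here reflects real sensor data):

```python
import numpy as np

# An out-of-focus point seen through the whole aperture renders as a disc;
# each sub-pixel phase sees (roughly) one half of the aperture, so on its
# own it records a semicircle with a vertical cut-off.
y, x = np.mgrid[-50:51, -50:51]
disc = (x**2 + y**2 <= 40**2).astype(float)  # full bokeh disc
left_phase = disc * (x < 0)                  # semicircle one phase would see
right_phase = disc * (x >= 0)                # the complementary semicircle
# the two phases together reconstruct the full disc
assert np.array_equal(left_phase + right_phase, disc)
```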

This is not going on over the width of just one sub-pixel; in extreme cases of blur (think 85L wide open at close focusing distances) it could cover half the frame. Unlike with a Bayer sensor, where a simple AA filter blurs across neighbouring pixels, there is no way to reassemble this data meaningfully if each half is exposed differently and one half is blown out.

The only way I can picture it working is to stagger it: one dual pixel having its left half underexposed and its right half overexposed, and the next dual pixel the opposite way around (left overexposed, right underexposed). That way, post-processing could look at the left-phase values in pairs and check first whether the overexposed channel has blown out (if so, use the neighbour's underexposed left-phase value), and then do the same for the underexposed channel if it drops to the noise floor. Without doing that, the bokeh would look odd.
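That staggered scheme could be sketched along these lines (a toy NumPy sketch for one row of one phase; the function name, thresholds, and layout are all invented for illustration):

```python
import numpy as np

def destagger_left_phase(row, gain, hi=0.98, lo=0.02):
    """'row' holds one row of left-phase sub-pixel values in [0, 1],
    exposed alternately high, low, high, low...  'gain' is the exposure
    ratio.  Returns the row on a common (high-exposure) scale."""
    out = row.astype(np.float64)
    out[1::2] *= gain                        # scale underexposed samples up
    hi_part, lo_part = out[0::2], out[1::2]  # views into 'out'
    n = min(hi_part.size, lo_part.size)
    blown = row[0::2][:n] >= hi              # overexposed sample clipped?
    hi_part[:n][blown] = lo_part[:n][blown]  # use the underexposed neighbour
    dim = row[1::2][:n] <= lo                # underexposed sample at the floor?
    lo_part[:n][dim] = hi_part[:n][dim]      # use the overexposed neighbour
    return out
```

A clipped overexposed sample inherits its underexposed neighbour's scaled value, and vice versa, at the cost of halved resolution wherever the fallback kicks in.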
 