Patent: Expanded dynamic range using DPAF sensors

SecureGSM

2 x 5D IV
Feb 26, 2017
2,360
1,231
Unfortunately no, they don't. Have you actually used this DPRSplit app? On a couple of occasions I was able to recover some highlights.
But in general it doesn't really work as expected: merged files have no additional DR, and it often creates an awful greenish colour cast in the highlights.
Good to have as a last resort, but I never rely on it, and I wouldn't recommend that anybody rely on it.
What software have you used for blending the two subframes into a single one? If you don't have that greenish colour cast in either subframe - and you shouldn't - the resulting frame should be clean.
On another note, it has been proven and confirmed that there is about 0.7 to 1 full stop of DR improvement that we gain in the end.

Now, I am keen to understand what advantage quad pixel RAW technology may provide in terms of dynamic range broadening.
 
What software have you used for blending the two subframes into a single one? If you don't have that greenish colour cast in either subframe - and you shouldn't - the resulting frame should be clean.
On another note, it has been proven and confirmed that there is about 0.7 to 1 full stop of DR improvement that we gain in the end.

Now, I am keen to understand what advantage quad pixel RAW technology may provide in terms of dynamic range broadening.

I use Lightroom for HDR merge, but the colour cast is evident before the merge; check the file produced by DPRSplit on the left against the normal CR2 on the right. The cast is actually greenish-blueish, but anyway.

1565833326687.png

(some LR sliders were tweaked on the left and on the right; I didn't bother to reset them - the point is that the cast is always there and you can't recover it).

As above, a couple of times DPRSplit worked OK for me, but I didn't bother to figure out what conditions make it produce the colour cast. It's just unreliable and can't be used as part of a normal workflow - e.g. I wouldn't recommend deliberately overexposing and relying on DPRSplit for recovery.

On another note, it has been proven and confirmed that there is about 0.7 to 1 full stop of DR improvement that we gain in the end.

I highly doubt it was 'proven' and 'confirmed'. It might be working in certain conditions. For me, it just spoils the images.
 
Mar 25, 2011
16,848
1,835
I did try DPRSplit long ago; it required DNG files and a lot of hassle, but it did provide more DR.

Here are three files: a JPEG created from the original raw (notice the detail at the top of the white post), then a Helicon merge image and a Photomatix merge image.

I also tried Lightroom merge, but the result was so poor I must have deleted it.

Notice that the different programs do produce different results; you could tweak them to look the same, but I chose not to.

Orig Ver2.jpg
helicon output.jpg
photomatix realistic output.jpg
 
Mar 25, 2011
16,848
1,835
I say that because people assume that it will work with DPAF and I understand DPAF.

I don't know Japanese, but I won't be surprised if the patent says nothing about DPAF at all.
Follow the link. Click on English in the upper right corner. It does go thru the DPAF process in fine detail.


Prior to the part where different amplification is applied to one of the half-pixels, both halves operate in normal mode with no differential amplification, and DPAF is performed as usual. Then the additional amplification is applied, the output is averaged and sent to the A/D converter. A second method involves the A/D converter as part of the averaging; it's done as a full frame or as individual pixel columns. Lots of things happen in a very short period, most of them the normal things that happen for every digital photo, but the patent steps through it all.

They have a single transistor switch that switches from averaged output to separate output, so it's very fast. But as I said, lots of things are happening to make it all work smoothly.
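As a rough sketch of what such a dual-gain readout buys you - my own toy model in Python, where the 12-bit saturation level, the 2x extra gain and the merge rule are all assumptions, not the patent's circuit:

```python
# Toy model of dual-gain half-pixel readout and merging. This is NOT Canon's
# implementation: the saturation level, the extra gain and the merge rule
# are assumptions for illustration only.

SAT = 4095          # assumed 12-bit saturation level
EXTRA_GAIN = 2.0    # assumed +1 stop of extra amplification on one half

def read_half(photons, gain=1.0):
    """Simulate reading one half-pixel: amplify, then clip at saturation."""
    return min(photons * gain, SAT)

def merge(base, amplified, gain=EXTRA_GAIN, sat=SAT):
    """Merge the two readings into one value on the base-gain scale."""
    if amplified >= sat:
        return base            # amplified half clipped: trust the base half
    return amplified / gain    # otherwise the amplified half is less noisy

# Bright region: the amplified half clips, the base half keeps the detail.
print(merge(read_half(3000), read_half(3000, EXTRA_GAIN)))  # 3000
# Dark region: the amplified reading is used, scaled back to base gain.
print(merge(read_half(50), read_half(50, EXTRA_GAIN)))      # 50.0
```

Under these assumptions the shadows gain about one stop of signal over the read noise, which is in the same ballpark as the 0.7-1 stop reported for DPRSplit above.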

This is the description / chart for DPAF; you need to see the entire patent to see which parts of the circuit they refer to. There is a second figure showing how the part works with a different ISO for each half of the pixel. Then there are alternate methods...

1) AF read mode
Referring to FIG. 3, the reading method for acquiring phase difference information from the subject (referred to as the "AF reading mode" or "AF reading") will be described. In AF reading, the amplification gains of the first read circuit 211-1 and the second read circuit 211-2 are set equal. At time t1, SEL1 and SEL2 are set high, and the pixels of the corresponding row are put into the selected state. At time t2, RES1 and RES2 are set low to end the pixel reset. Thereafter, sampling of the noise level is performed until time t3. At time t3, TX1 and TX2 are set high, and charge transfer from the photoelectric conversion units is started. At time t4, TX1 and TX2 are set low, and charge transfer is terminated. Thereafter, sampling of the signal level is performed until time t5. At time t5, RES1 and RES2 are set high, and the pixels are reset. At time t6, SEL1 and SEL2 are set low, and the pixel selection is cancelled. From time t1 to time t6, ADD is always low, and the signals corresponding to the photoelectric conversion units 202-1 and 202-2 are individually amplified and input independently to the signal output circuit 106. By performing the AF reading on the two photoelectric conversion units 202-1 and 202-2 simultaneously, the phase difference information of the subject can be obtained. When the two photoelectric conversion units 202-1 and 202-2 are arranged on the left and right sides within a pixel, horizontal phase difference information is acquired.

When the two photoelectric conversion units 202-1 and 202-2 are arranged vertically within a pixel, vertical phase difference information is acquired.
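The noise-then-signal sampling in that sequence is ordinary correlated double sampling (CDS). A toy restatement of the per-half-pixel arithmetic - my own sketch with made-up numbers, not taken from the patent:

```python
# Toy correlated double sampling (CDS) for one half-pixel, following the
# order in the AF read: reset, sample the noise level (t2..t3), transfer
# charge (t3..t4), sample the signal level (t4..t5), then subtract.
# All numeric values are invented for illustration.

def cds_read(reset_noise, photo_signal):
    """Return the noise-corrected value for one half-pixel."""
    noise_sample = reset_noise                  # sampled between t2 and t3
    signal_sample = reset_noise + photo_signal  # sampled between t4 and t5
    return signal_sample - noise_sample         # reset noise cancels out

# With ADD held low, each half is read and amplified independently;
# AF then compares the two outputs across the row to find the phase shift.
left = cds_read(reset_noise=12, photo_signal=340)   # e.g. unit 202-1
right = cds_read(reset_noise=12, photo_signal=310)  # e.g. unit 202-2
print(left, right)  # 340 310
```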




JPA 501129491_i_000005.jpg
 

SecureGSM

2 x 5D IV
Feb 26, 2017
2,360
1,231
As I mentioned above, quad pixel RAW tech may theoretically provide up to 3 additional stops of DR. I do not care how large the resulting file might be: 4 frames taken simultaneously, moving objects or not. This may end up being a strong value proposition for landscape, architectural and outdoor sports shooting, where we are forced to work under whatever lighting conditions are available at the time. Having up to 18 stops of DR on demand... just wow.
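Back-of-the-envelope arithmetic behind the "up to 3 stops" figure, assuming (and this is purely my guess) that the four subpixels could be read at gains of 1x, 2x, 4x and 8x:

```python
import math

# Assumed per-subpixel gains for a quad-pixel sensel. The DR extension is
# roughly log2 of the largest gain ratio; the gains here are a guess, not
# anything stated in the patent.
gains = [1, 2, 4, 8]
extra_stops = math.log2(max(gains) / min(gains))

base_dr = 15  # assumed base sensor DR in stops
print(extra_stops, base_dr + extra_stops)  # 3.0 18.0
```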
 

koenkooi

CR Pro
Feb 25, 2015
3,575
4,110
The Netherlands
As I mentioned above, quad pixel RAW tech may theoretically provide up to 3 additional stops of DR. I do not care how large the resulting file might be: 4 frames taken simultaneously, moving objects or not. This may end up being a strong value proposition for landscape, architectural and outdoor sports shooting, where we are forced to work under whatever lighting conditions are available at the time. Having up to 18 stops of DR on demand... just wow.

I do wonder, if this becomes available in a real product, how Canon will present it to the user. Will it be a single, regular CR3 file with 18 stops of DR, or something like the current DPRAW CR3s? If it's the latter, I hope third parties like Adobe will add support for it. But I fear I will still be using DPP to generate TIFFs that I import into LR for a long, long time.
 
I do wonder, if this becomes available in a real product, how Canon will present it to the user. Will it be a single, regular CR3 file with 18 stops of DR, or something like the current DPRAW CR3s? If it's the latter, I hope third parties like Adobe will add support for it. But I fear I will still be using DPP to generate TIFFs that I import into LR for a long, long time.
I highly doubt it will ever become a real product. Realistically, what we're going to get is a new-tech sensor, probably (hopefully) better than the 5DIV's, but not exceptional - just good enough.
 
Apr 25, 2011
2,510
1,885
This is basically ML's dual ISO, just done on the subpixels of a DPAF sensor, while ML does it on neighbouring full pixels.
While it is fabulous that Canon implements something ML invented years ago, how on earth can this be patentable? Canon's inventive step is minimal.
Do you realize that in order for ML to use this functionality, first it needs to be implemented in hardware by... guess whom?
 
Apr 25, 2011
2,510
1,885
Follow the link. Click on English in the upper right corner. It does go thru the DPAF process in fine detail.
Ah, it auto-translates when you select "Text" instead of "PDF"?

Still, I couldn't find anything that addresses the concern that amplifying the subpixel illuminated by one half of the lens exit pupil and merging it with the subpixel illuminated by the other half is a bad idea. Bad translation, an omission, or not a concern in practice?
 
Mar 25, 2011
16,848
1,835
Ah, it auto-translates when you select "Text" instead of "PDF"?

Still, I couldn't find anything that addresses the concern that amplifying the subpixel illuminated by one half of the lens exit pupil and merging it with the subpixel illuminated by the other half is a bad idea. Bad translation, an omission, or not a concern in practice?
You can't prove a negative. Please explain why it's a bad idea. Both halves of the pixel are under the same microlens - is that what you are referring to? It's the same as averaging two adjacent pixels, as in the pixel binning that has been done for many years; why is it suddenly a problem?
 
Apr 25, 2011
2,510
1,885
You can't prove a negative. Please explain why it's a bad idea. Both halves of the pixel are under the same microlens - is that what you are referring to? It's the same as averaging two adjacent pixels, as in the pixel binning that has been done for many years; why is it suddenly a problem?
Imagine the case with an ideal separation by phase shift: left half-pixel is almost saturated, right half-pixel is almost black. Now, imagine you decided to amplify the left half-pixel. The result will be full saturation and loss of the actual amplitude information about that defocused highlight.
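That failure mode is easy to put in numbers - toy values, assuming a 12-bit readout that clips at 4095 and a hypothetical 2x or 4x extra gain:

```python
SAT = 4095  # assumed 12-bit saturation level

def amplify(value, gain, sat=SAT):
    """Amplify a half-pixel reading; anything past saturation is lost."""
    return min(value * gain, sat)

left = 4000   # almost saturated: the half that sees the highlight
right = 40    # almost black: the half fed by the other pupil half

# Amplifying the already-bright half clips it: 2x and 4x both read back
# as 4095, so the true amplitude of the highlight can't be recovered.
print(amplify(left, 2), amplify(left, 4))   # 4095 4095
# The dark half amplifies safely.
print(amplify(right, 2))                    # 80
```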
 
Imagine the case with an ideal separation by phase shift: left half-pixel is almost saturated, right half-pixel is almost black.

I can't imagine a situation where the left half-pixel is almost saturated and the right half-pixel is almost black. And that's just a single pixel... what about the other pixels around the same area? Same thing, with one half almost saturated and the other half almost black? Do you realize how nearly impossible such a scenario would be?
 
Apr 25, 2011
2,510
1,885
I can't imagine a situation where the left half-pixel is almost saturated and the right half-pixel is almost black. And that's just a single pixel... what about the other pixels around the same area? Same thing, with one half almost saturated and the other half almost black? Do you realize how nearly impossible such a scenario would be?
It's when we have close-to-perfect phase separation on a defocused point light source at night, for example.
 
I can't imagine a situation where the left half-pixel is almost saturated and the right half-pixel is almost black. And that's just a single pixel... what about the other pixels around the same area? Same thing, with one half almost saturated and the other half almost black? Do you realize how nearly impossible such a scenario would be?
It happens in almost every image: sharp edges of high-contrast objects, such as the line between the sky and buildings.
 
Not for nothing, but what is that schematic from? Does anyone remember their high school analog electronics? That is about as basic as a differential amplifier gets - like page 4 in any Electronics 101-type book.

Also - isn't this what A1ex of Magic Lantern / CHDK fame did with Canons back in '13 with the dual ISO hack? https://www.magiclantern.fm/forum/index.php?topic=7139.0 Without low-level access to the op-amp circuits, he was toggling ISO alternately high and low... he didn't have per-pixel access, but as I understand it the sensor data is scanned line by line, so that was the granularity he got... and now a Canon engineer who does in fact have low-level access implements it properly? Nice.
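Line-interleaved dual ISO, as I understand ML's approach, can be sketched like this - a toy model with assumed values and a 2x gain, not ML's actual code:

```python
# Toy model of ML-style dual ISO: alternate sensor rows are read at base
# and doubled ISO; clipped high-ISO samples fall back on the low-ISO row.
# The 2x gain and the pixel values are assumptions for illustration.
SAT = 4095
GAIN = 2.0

def merge_rows(low_row, high_row):
    """Merge a base-ISO row with the adjacent doubled-ISO row."""
    return [lo if hi >= SAT else hi / GAIN
            for lo, hi in zip(low_row, high_row)]

low_iso  = [4000, 100, 2000]          # row n, base ISO
high_iso = [4095, 200, 4000]          # row n+1, ISO doubled
print(merge_rows(low_iso, high_iso))  # [4000, 100.0, 2000.0]
```

The trade-off in the hack is halved vertical resolution in the merged regions, which is exactly what doing it on DPAF subpixels instead of rows would avoid.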

I've always wondered if they allowed the hacking so they could get ideas and implement the ones that were "oh duh! why didn't we think of that!"... I guess it's too much to hope that a1ex learned Japanese and moved to Ohta Ku.
 

Sharlin

CR Pro
Dec 26, 2015
1,415
1,433
Turku, Finland
While it is fabulous that Canon implements something ML invented years ago, how on earth can this be patentable? Canon's inventive step is minimal.

It may have escaped you, but Canon did not try to patent the single sentence "Expand dynamic range by using different gains on each DPAF subpixel" or something. The patent is about a detailed design for electronics that implements that idea on Canon's real-world sensor hardware, and does it fast enough to be useful. That's the patentable part.
 
It happens in almost every image: sharp edges of high-contrast objects, such as the line between the sky and buildings.
An alignment such that the left half-pixel is sky and the right half-pixel is the building, and vice versa... on one row of pixels?
 
I've been predicting this feature for months, and it seems the gods of Canon have heard me.
Finally Canon pulled this off, and if I'm right we are going to see it in the next cameras. I'm already excited to see Sony fanboys shut up about their great 1-stop-higher dynamic range.
It would be even greater if Canon put this into the EOS R with a firmware update. However, I'm realistic and don't get my hopes up - but I'm still crossing my fingers.
If they pulled this stunt with the EOS R, it would take away one of Sony's unique selling points, so they'd have just 2 left (full frame 4K and IBIS).
 