Patent: Quad pixel AF sensor

Sporgon

5% of gear used 95% of the time
CR Pro
Nov 11, 2012
4,719
1,537
Yorkshire, England
I'm curious as to why Canon would use all vertical-sensing DPAF on the sensor when in their higher-end DSLRs the AF points that aren't cross type (or dual cross type) are horizontal sensing.

Just thinking on this: I thought DPAF pixels were split vertically, into left and right halves (with the camera horizontal), so they would be horizontal sensing, not vertical?
 
Last edited:
Upvote 0

Sibir Lupus

EOS M6 Mark II + EOS M200
Feb 4, 2015
167
129
40
Given that they discussed QPAF in the original DPAF patent back around 2012, it's been a long time coming. They also discussed asymmetric DPAF and using it for HDR. One does not really need QPAF for cross-type points: they could easily make groups of pixels with the DPAF split in the perpendicular direction, so AF points could consist of just two orientations of DPAF. That would be easier to implement and have greater sensitivity than QPAF.

My guess is that the delay has to do with processing power catching up with QPAF tech, as I'm sure it needs at least twice as many calculations as DPAF II. We'll either see a cranked-up DIGIC X in the R1, or possibly dual DIGIC X to handle QPAF.
 
Upvote 0

jam05

R5, C70
Mar 12, 2019
916
584
I'm curious as to why Canon would use all vertical-sensing DPAF on the sensor when in their higher-end DSLRs the AF points that aren't cross type (or dual cross type) are horizontal sensing.

Just thinking on this: I thought DPAF pixels were split vertically, into left and right halves (with the camera horizontal), so they would be horizontal sensing, not vertical?
Even the cross types are arranged in columns and have vertical-sensing and horizontal-sensing components. Now rotate the camera: that pixel is still in its original alignment.
 
Last edited:
Upvote 0
Mar 25, 2011
16,848
1,835
Making a quad pixel sensor isn't so difficult, but the software to operate it must be a nightmare. If it ever comes to market, the software is going to be fairly simple at first. I expect that they have been working on software for years in research, but in the real world I'd bet there are all kinds of strange issues. I've seen Canon patents in the past for "n" numbers of subpixels, since they were dealing with the electronics portion and patents aim to cover every possible permutation.

When dual pixel came out, Canon said that it was the software that was the problem with bringing it to market, and they brought in experts from their professional video division to help figure it out. Even then, it was difficult. Presumably, there are now engineers with a much greater understanding of how quad pixel software might work to autofocus vertically and horizontally from the same pixel. I think diagonal sensing will be a future development, if it ever comes. Other possibilities, like independent gain for each subpixel, may be future developments too; the complications of processing something like that will also require a lot of testing. They have it working for dual pixel sensors, and the processing power needed may restrict it to video cameras right now, but it's coming.
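For a sense of what "autofocus vertically and horizontally from the same pixel" could mean at the subpixel level, here is a minimal sketch (my own assumption about the subpixel arithmetic; Canon hasn't published theirs):

```python
import numpy as np

# Toy model of a quad pixel serving both AF orientations (my own assumption
# about the subpixel arithmetic, not Canon's published method). Each pixel has
# four subpixels: top-left, top-right, bottom-left, bottom-right.
tl, tr, bl, br = np.random.default_rng(2).random((4, 32, 32))  # stand-in readouts

# Summing each vertical pair reproduces a left/right dual pixel, which senses
# vertical structures; summing each horizontal pair gives a top/bottom split,
# which senses horizontal structures.
left, right = tl + bl, tr + br
top, bottom = tl + tr, bl + br

# The normal image readout is unchanged: all four subpixels summed per pixel.
image = tl + tr + bl + br
print(image.shape)  # (32, 32): one value per full pixel
```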
 
  • Like
Reactions: 1 user
Upvote 0
Quad gain output would not surprise me in the RF-mount C700 replacement.
I would not expect it in a flagship hybrid mirrorless, but I could see it in a flagship cinema camera.

Well, Canon is always getting bagged on for not having enough dynamic range... If they went with quad gain in a flagship like the R1 and got a very usable 15-16+ stops, that would put a lot of heat on Sony and Nikon.
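If quad gain extends what today's dual-gain sensors do, the extra stops would come from merging a highlight-safe low-gain readout with a shadow-friendly high-gain readout. A minimal sketch of that merge idea (my own illustration with normalized signals, not a Canon design):

```python
import numpy as np

# Toy merge of two gain readouts of the same exposure (my own illustration of
# the dual/quad-gain idea, not a Canon design). In a real sensor the high-gain
# path has lower effective read noise in the shadows (not modeled here), while
# the low-gain path preserves highlights; merging the two extends dynamic range.
FULL_SCALE = 1.0                    # normalized clipping level of the readout
HIGH_GAIN, LOW_GAIN = 16.0, 1.0

def merge_gains(signal):
    high = np.clip(signal * HIGH_GAIN, 0.0, FULL_SCALE)
    low  = np.clip(signal * LOW_GAIN,  0.0, FULL_SCALE)
    # Prefer the high-gain sample wherever it hasn't clipped; else use low gain.
    return np.where(high < FULL_SCALE, high / HIGH_GAIN, low / LOW_GAIN)

signal = np.array([0.001, 0.01, 0.1, 0.9])  # deep shadow ... near clipping
print(merge_gains(signal))                  # both ends of the range survive
```

Four gains would extend the same fallback chain further, at the cost of more readout and processing per pixel.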
 
Upvote 0
I'm curious as to why Canon would use all vertical-sensing DPAF on the sensor when in their higher-end DSLRs the AF points that aren't cross type (or dual cross type) are horizontal sensing.

Just thinking on this: I thought DPAF pixels were split vertically, into left and right halves (with the camera horizontal), so they would be horizontal sensing, not vertical?
If, as in Canon's current DPAF cameras, the pixels are split horizontally (meaning the line that divides each pixel into two halves is vertical, so the halves sit side by side as a left and a right half in landscape orientation), then the arrangement can focus on vertical structures. Horizontal structures appear identical in the left and right halves and thus cannot be used for focusing, because focusing relies on the structure appearing different in the two pixel halves.
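A quick sketch of that orientation dependence (a toy model I made up, not Canon's optics), treating defocus as the left-half and right-half images sliding in opposite directions along the split axis:

```python
import numpy as np

# Toy model (not Canon's optics): with a horizontally split pixel, defocus
# shifts the left-half and right-half images in opposite directions along x.
def dpaf_signals(scene_row, defocus_px):
    left  = np.roll(scene_row,  defocus_px)   # view through left side of lens
    right = np.roll(scene_row, -defocus_px)   # view through right side of lens
    return left, right

vertical_edge   = np.r_[np.zeros(25), np.ones(25)]  # intensity step along x
horizontal_edge = np.ones(50)   # one row of a step along y is featureless

for name, row in [("vertical edge", vertical_edge), ("horizontal edge", horizontal_edge)]:
    left, right = dpaf_signals(row, defocus_px=3)
    print(name, "usable for AF:", not np.allclose(left, right))
# vertical edge   -> True  (left/right differ, so the shift is measurable)
# horizontal edge -> False (left/right identical, nothing to measure)
```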
 
Upvote 0

Sporgon

5% of gear used 95% of the time
CR Pro
Nov 11, 2012
4,719
1,537
Yorkshire, England
If, as in Canon's current DPAF cameras, the pixels are split horizontally (meaning the line that divides each pixel into two halves is vertical, so the halves sit side by side as a left and a right half in landscape orientation), then the arrangement can focus on vertical structures. Horizontal structures appear identical in the left and right halves and thus cannot be used for focusing, because focusing relies on the structure appearing different in the two pixel halves.
Ah, thanks for that. I'd just assumed that they worked like a split-image focus finder, but I now see it's more like a rangefinder. (y)
So in fact it's the same orientation as the non-x-type points on DSLRs.
 
Upvote 0

AlanF

Desperately seeking birds
CR Pro
Aug 16, 2012
12,355
22,534
Ah, thanks for that. I'd just assumed that they worked like a split-image focus finder, but I now see it's more like a rangefinder. (y)
So in fact it's the same orientation as the non-x-type points on DSLRs.
Just make sure your subject is not a vertical line 1 pixel wide on the sensor.
 
  • Like
Reactions: 1 user
Upvote 0

usern4cr

R5
CR Pro
Sep 2, 2018
1,376
2,308
Kentucky, USA
Since we're on the subject of quad-pixel or dual-pixel focus, I was wondering if anyone knows the details of how a phase-detect (not contrast-detect) pixel actually works. Everywhere I look on the internet, they don't really explain it. I do understand that there are two sensor areas next to each other, with some sort of micro lens in front of each area that "somehow" splits the light differently into the two areas. But how can they make one of the areas focus at a nearer distance relative to the farther distance of the other, so that they can decide which direction to move the focal distance to reach correct focus? Do they have a concave lens above one and a convex lens above the other? And if so, why would this technique be sensitive to vertical lines and not to horizontal lines (which is what I'd expect for a contrast-detection sensor, but I'm interested in phase detect)?

This is the kind of detail I'm interested in, if anyone knows.
 
Upvote 0

AlanF

Desperately seeking birds
CR Pro
Aug 16, 2012
12,355
22,534
Since we're on the subject of quad-pixel or dual-pixel focus, I was wondering if anyone knows the details of how a phase-detect (not contrast-detect) pixel actually works. Everywhere I look on the internet, they don't really explain it. I do understand that there are two sensor areas next to each other, with some sort of micro lens in front of each area that "somehow" splits the light differently into the two areas. But how can they make one of the areas focus at a nearer distance relative to the farther distance of the other, so that they can decide which direction to move the focal distance to reach correct focus? Do they have a concave lens above one and a convex lens above the other? And if so, why would this technique be sensitive to vertical lines and not to horizontal lines (which is what I'd expect for a contrast-detection sensor, but I'm interested in phase detect)?

This is the kind of detail I'm interested in, if anyone knows.
This is the best explanation: Marc Levoy's applet at http://graphics.stanford.edu/courses/cs178/applets/autofocusPD.html - just rejig it so the focussing sensors are on the image sensor for mirrorless.
 
  • Like
Reactions: 2 users
Upvote 0

usern4cr

R5
CR Pro
Sep 2, 2018
1,376
2,308
Kentucky, USA
This is the best explanation: Marc Levoy's applet at http://graphics.stanford.edu/courses/cs178/applets/autofocusPD.html - just rejig it so the focussing sensors are on the image sensor for mirrorless.
Thanks, AlanF, for the link. The example shows how to focus on a bright dot on a black background with two lenses and two arrays of sensors that are not on the final image sensor itself, but I really would like to see a diagram showing how they put the system together on the image sensor itself. It's difficult to imagine each pixel of the 45MP array containing such a system.
 
Upvote 0

Joules

doom
CR Pro
Jul 16, 2017
1,801
2,247
Hamburg, Germany
Dual Pixel AF = DSLR Lines AF
Quad Pixel AF = DSLR Crosses AF
Octa Pixel AF = DSLR Double-Crosses AF
Wouldn't Quad Pixel AF already be a bit more precise than just a cross, as you'd get two pairs of horizontal contrasts and two of vertical (although each along the same line, so different from a double cross) in each pixel with QPAF?
 
Upvote 0

AlanF

Desperately seeking birds
CR Pro
Aug 16, 2012
12,355
22,534
What's been really impressive is the way Canon has been able to get the current DPAF to focus so fast and accurately. Some of us were worried that the processing of the dual pixels would be so processor-intensive that it could not compete with embedded phase detect, but the R5 is up there with Sony's speed, and with a more flexible system that doesn't require sacrificing pixels to accommodate phase detect.
 
  • Like
Reactions: 2 users
Upvote 0
Mar 25, 2011
16,848
1,835
What's been really impressive is the way Canon has been able to get the current DPAF to focus so fast and accurately. Some of us were worried that the processing of the dual pixels would be so processor-intensive that it could not compete with embedded phase detect, but the R5 is up there with Sony's speed, and with a more flexible system that doesn't require sacrificing pixels to accommodate phase detect.
A bigger AF area has been a development as well. I recall patents proposing various tweaks to sensor design that would allow more accurate AF at the edges and corners; I wonder if any of those have been implemented. A global shutter type sensor has been rumored, and perhaps quad pixel as well, for an R1.

As I understand it, a global shutter CMOS sensor will have memory associated with each photosite, and values will be saved in those memory sites all at once, then read out to the camera's processor and memory in the standard parallel/sequential fashion. It may require backlit sensor technology to do that. Canon has a ton of patents for doing it; it's just a matter of cost to get enough good sensors out of the process. Canon is extremely price-conscious; they squeeze every penny.
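As a toy illustration of that sample-all-at-once, read-out-later idea (my own simplification, not a Canon design), compare the two readout schemes on a moving subject:

```python
import numpy as np

# Toy model of global vs. rolling shutter readout (my own simplification, not
# a Canon design). scene(t) is the sensor-plane image at time t: a bright
# vertical bar moving one column per time step across a 6x6 frame.
def scene(t):
    img = np.zeros((6, 6))
    img[:, t % 6] = 1.0
    return img

def global_shutter():
    # Every photosite samples into its local storage at t=0; the stored values
    # don't change while they are read out sequentially afterwards.
    return scene(0)

def rolling_shutter():
    # Each row is sampled at the moment it is read out, so the moving bar
    # lands in a different column on every row (the familiar skew artifact).
    return np.stack([scene(t)[t] for t in range(6)])

print(global_shutter())   # straight bar in column 0
print(rolling_shutter())  # bar marches diagonally down the frame
```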
 
  • Like
Reactions: 1 user
Upvote 0
Ah, thanks for that. I'd just assumed that they worked like a split-image focus finder, but I now see it's more like a rangefinder. (y)
and @usern4cr:
Actually, dual pixel autofocus works pretty much like an optical split-image rangefinder. In a split-image rangefinder with which you can focus on vertical structures, there is a horizontal line splitting the top and bottom images in your focus area. Your image is in focus if the top and bottom images match up, i.e. the vertical structure you are trying to focus on has no horizontal offset between the top and bottom images but instead runs in one line from top to bottom. If your image is out of focus, the top image is shifted either to the left or to the right relative to the bottom image, corresponding to focus too close or too far away (the actual direction is specific to the particular implementation in the camera). This is how, in a manual focus camera, you can determine the direction in which focus needs to change: by the direction of misalignment between the top and bottom images. Such rangefinders are realized by including a prism in the focusing screen such that the top half shows only rays from the left side of the lens and the bottom half shows only rays from the right side (or vice versa, depending on the implementation).

Dual pixel autofocus works exactly the same way: one subset of subpixels receives light from the left side of the lens, the other subset from the right side, by way of appropriate microlenses placed on top of each pixel. So if you take a horizontal row of pixels (say, 50 pixels) in the image area in which you want to achieve focus, you compare the image you get from the 50 left-sensitive subpixels (in our example this 'image' is 1 pixel high and 50 pixels wide) to the image from the 50 right-sensitive subpixels. If the two images are shifted to the left or right with respect to each other, your image is out of focus, and the direction of the shift determines the direction of the necessary focus change. The left- and right-sensitive subpixels have essentially the same function as the top and bottom halves of the optical split-image focusing screen; the focusing screen merely rearranges the light from left and right into top and bottom, in order to give human-interpretable optical focusing information while still showing a complete image. For DPAF this is of course not necessary: the outputs from the left- and right-sensitive lines of subpixels are compared to each other, and the relative shift between the images is determined for autofocus.

Note (an apparently common misconception) that a single pixel in DPAF is NOT enough to perform autofocus. You always need a number of adjacent pixels (in current Canon implementations, horizontally adjacent) in order to compute the shift between the left- and right-sensitive images.
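To make the "compare the two 50-pixel images and measure their shift" step concrete, here is a minimal sketch in Python (my own toy model, not Canon's actual algorithm):

```python
import numpy as np

# Minimal sketch (a toy model, not Canon's algorithm): given the 50-sample
# signals from the left- and right-sensitive halves of one row of dual pixels,
# find the relative shift that best aligns them.
def dpaf_disparity(left, right, max_shift=10):
    errors = {}
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = left[s:], right[:len(right) - s]
        else:
            a, b = left[:s], right[-s:]
        errors[s] = np.mean((a - b) ** 2)   # squared-difference match score
    return min(errors, key=errors.get)      # sign -> direction, size -> amount

rng = np.random.default_rng(0)
scene = rng.random(70)              # arbitrary 1-D structure along the row
left  = scene[7:57]                 # 50 samples from the left-sensitive halves
right = scene[13:63]                # same structure, shifted by 6 samples
print(dpaf_disparity(left, right))  # -> 6: out of focus, and which way to drive
```

A structure-free signal scores the same at every shift, which is exactly why a single dual pixel, or a featureless subject, gives the system nothing to work with.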

So in fact it's the same orientation as the non-x-type points on DSLRs.
In fact (at least in the 5D4, for example) it appears to be the other way round: the manual says that the non-x-type autofocus points are sensitive to horizontal structures. But this could also be due to technical constraints on where a particular type of autofocus sensor can be placed, given everything else a DSLR needs (mirror box, viewfinder prism, etc.), and not so much because it is more advantageous for real-world focusing, where I believe accurate focusing on vertical structures might be more important.
 
Last edited:
  • Like
Reactions: 1 user
Upvote 0

usern4cr

R5
CR Pro
Sep 2, 2018
1,376
2,308
Kentucky, USA
and @usern4cr:
Actually, dual pixel autofocus works pretty much like an optical split-image rangefinder. In a split-image rangefinder with which you can focus on vertical structures, there is a horizontal line splitting the top and bottom images in your focus area. Your image is in focus if the top and bottom images match up, i.e. the vertical structure you are trying to focus on has no horizontal offset between the top and bottom images but instead runs in one line from top to bottom. If your image is out of focus, the top image is shifted either to the left or to the right relative to the bottom image, corresponding to focus too close or too far away (the actual direction is specific to the particular implementation in the camera). This is how, in a manual focus camera, you can determine the direction in which focus needs to change: by the direction of misalignment between the top and bottom images. Such rangefinders are realized by including a prism in the focusing screen such that the top half shows only rays from the left side of the lens and the bottom half shows only rays from the right side (or vice versa, depending on the implementation).

Dual pixel autofocus works exactly the same way: one subset of subpixels receives light from the left side of the lens, the other subset from the right side, by way of appropriate microlenses placed on top of each pixel. So if you take a horizontal row of pixels (say, 50 pixels) in the image area in which you want to achieve focus, you compare the image you get from the 50 left-sensitive subpixels (in our example this 'image' is 1 pixel high and 50 pixels wide) to the image from the 50 right-sensitive subpixels. If the two images are shifted to the left or right with respect to each other, your image is out of focus, and the direction of the shift determines the direction of the necessary focus change. The left- and right-sensitive subpixels have essentially the same function as the top and bottom halves of the optical split-image focusing screen; the focusing screen merely rearranges the light from left and right into top and bottom, in order to give human-interpretable optical focusing information while still showing a complete image. For DPAF this is of course not necessary: the outputs from the left- and right-sensitive lines of subpixels are compared to each other, and the relative shift between the images is determined for autofocus.

Note (an apparently common misconception) that a single pixel in DPAF is NOT enough to perform autofocus. You always need a number of adjacent pixels (in current Canon implementations, horizontally adjacent) in order to compute the shift between the left- and right-sensitive images.


In fact (at least in the 5D4, for example) it appears to be the other way round: the manual says that the non-x-type autofocus points are sensitive to horizontal structures. But this could also be due to technical constraints on where a particular type of autofocus sensor can be placed, given everything else a DSLR needs (mirror box, viewfinder prism, etc.), and not so much because it is more advantageous for real-world focusing, where I believe accurate focusing on vertical structures might be more important.
I'm still trying to understand this. So you're saying you have a horizontal row of 50 pixels, each pixel with 2 subpixels: one subpixel sensitive to rays from the left of the main lens, the other sensitive to rays from the right of the main lens. I'm wondering what type of micro lens above each of these 2 subpixels can be so selective? Wouldn't each of the micro lenses have to reject at least half of the light, and how would they make such an optical structure so completely effective at splitting the light when the subpixels are right next to each other in the sensor? I think the physical construction of the two subpixel micro lenses, and how they split the light from the left and right sides of the main lens, is the crux of what I need in order to really understand it.

I also assume that you could shift your computational logic by 1 (or more) whole pixels to the left or right to get another 50-pixel window (25 to the left and 25 to the right of a given point) and thus a new AF value, correct? That is, the 2 subpixels of a single pixel could be used by up to 50 different sets of AF logic, if they wanted to design that many AF points, correct?
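That sliding-window picture can be checked with a small sketch (again a toy model reusing the shift-matching idea from earlier in the thread, not Canon's implementation). Windows one pixel apart share 49 of their 50 pixels, so each dual pixel can indeed feed up to 50 overlapping AF computations:

```python
import numpy as np

# Toy sketch of overlapping AF windows (my assumption about the layout, not
# Canon's design): slide a 50-pixel window along one row of left/right signals
# and compute one disparity estimate per window position.
def window_disparity(left, right, start, window=50, max_shift=8):
    l = left[start:start + window]
    r = right[start:start + window]
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        # Compare the central part of l, offset by s, against the centre of r.
        err = np.mean((l[max_shift + s:window - max_shift + s]
                       - r[max_shift:window - max_shift]) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best

rng = np.random.default_rng(1)
scene = rng.random(200)
left, right = scene[0:120], scene[4:124]   # a constant true shift of 4 samples
print([window_disparity(left, right, s) for s in (0, 1, 30)])  # -> [4, 4, 4]
```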
 
Upvote 0