Canon News has uncovered a patent application for a quad pixel autofocus image sensor.
Canon News explains Japan Patent Application 2019041178:
This patent application from Canon deals with a quad pixel autofocus sensor. Right now Canon is using dual pixel autofocus sensors, but if you have ever tried to use an EOS R or an EOS M in landscape orientation to focus on a horizontal line, you'll quickly realize that the phase-detect pixels only work in one direction and have little sensitivity in the direction offset 90 degrees from it.
This patent application indicates that Canon has split each pixel into four pieces, and also offsets the microlenses progressively as you go further out from the center.
The sensor described in this document appears to be a 20.7MP sensor with 83 million focus detection points! The pixel size appears to be 4 micrometers, which at 5575×3725 pixels works out to approximately 22mm across the width, in other words an APS-C sized sensor.
It's possible that Canon is looking at decreasing pixel density and moving to quad pixel autofocus in the future. Let's call it QPAF.
Canon uses a 180nm process for its APS-C sensors, which can incorporate copper wiring. This is probably fine for a 20MP image sensor. Splitting the pixels further would cost some efficiency, and may lead to Canon dropping the pixel count on APS-C sensors. This would only matter if we actually do see QPAF sensors in the future.
It would be interesting if this is new sensor technology that we'll see in a future camera!
This could be a breakthrough, but also a holdup in a pro R camera, say a Nikon D850/Z7 or Sony A7R III competitor. The amount of information coming off a sensor like this is a torrent. Could it be four times what comes off a dual pixel sensor? How to process it all? How to exploit it to maximum advantage? It could well be worth waiting for. Or not.
Not necessarily. X pixels in DPAF translates into 2X values in the raw file. X pixels in QPAF would translate into 4X values in the raw file. So for the same resolution you'd get a raw file twice as large; the question is whether Canon would make a QPAF sensor with the same resolution.
Apparently it would allow two stops of latitude when processing overexposed pixels, and reconstructing 3D info from the sub-pixel phase info. I wouldn't bet on either catching on. IIRC, it was noted that the former was problematic with DPAF, as the left sub-pixels have a different perspective than the right sub-pixels; QPAF could work around this (average left-top with right-bottom and left-bottom with right-top), so it might have better success.
I'm gonna keep an eye out for the improved AF performance.
I am with uri.raz: 2 times the data.
IMO QPAF resolves one problem with DPAF: DPAF can only detect vertical structures. Try focusing on distant venetian blinds and sometimes it struggles. Rotate the camera to portrait mode and AF acquisition is VERY fast. That is an example of the "problem" described in the article.
I see QPAF as a solution to that problem: use a second DIGIC processor and secondary readout lanes orthogonal to the existing ones, and you have no speed penalty (at least theoretically) while improving AF for horizontal structures.
With QPAF the whole sensor is a cross-type sensor, resulting in smaller AF points (no need to include a larger sensor region to find some vertical structures), which might find enough detail even in lower light.
Maybe DPAF and its maybe-successor are much more important for the future of very reliable AF than I thought before: I am not sure if it is possible for other sensor designs, e.g. Sony's, to implement some QPAF analogue with their phase-detect pixels (but maybe they already have).
It was a great idea to give imaging pixels both functions, recording the image and providing AF information. It seems to me that this makes a more homogeneous sensor compared to designs with different pixel types!
It always baffled me how DPAF sensors actually performed better in this regard than sensors with "whole" pixels.
AFAIK, the pixels don't cover 100% of the sensor area, and the microlenses concentrate the light into the active area. Thus, as long as the four sub-pixels have the same total area as the pixel from which they were split, everything should be fine.
Why would it? As long as the total light-sensitive area stays the same, nothing changes. 2*n/2 equals n.
Not correct. The outputs from the dual pixels are combined into one pixel value before saving to the card as a CR2/CR3, so the files are no larger. Try it with any dual pixel camera. It will be the same for quad pixel cameras.
The 5D Mark IV and the EOS R have an option to output the separate values using Dual Pixel RAW, and that doubles the file size, but few use it. I'm not certain why Canon bothered to put it in the EOS R; perhaps there are future plans to make better use of it.
The way you're describing it assumes they'll use Quad Bayer for the sensor, like Sony Semiconductor is using on its newer products. The A7S III rumors show a Quad Bayer sensor, and the GH5S is also using a variation of a Quad Bayer sensor. Below is an idea of how the concept works on small sensors that are already in the field.
Updated: Huawei Mate 20 Pro camera review
Yeah. I think people underestimate the DPP factor in the lack of uptake of Dual Pixel features. That is precisely and only what has kept me from exploiting the focus-after-exposure feature. Even though its effects are slight, I'd still go out of my way to get the extra millimeter of sharpness if I could do it in Lightroom or, really, anything else. That software is truly dreadful.