Quote from: arco iris
…what is the probability that Canon could do anything better than the five major sensor manufacturers?
Just caught this guy out of the corner of my eye. Wasn't prepared for BIF at that moment. Some motion blur but not too bad.
840mm, f/5.6, 1/500s, ISO 1250
Right. I'm more curious about what else they can do with dual pixels in general than DPAF itself. I am running on the assumption that using the pixel pairs for an IQ improvement (be it resolution, color depth, better demosaicing, etc.) would come at the cost of losing sensor-level phase AF.
Maybe I'm grasping at straws, it just struck me as potentially a HUGE leap in pixel density for Canon SLRs.
I'm not really sure what kind of sensor design you're proposing.
If Canon decided to put different color filters over the two photodiode halves and shrink the microlenses to match, they would no longer be able to do sensor-plane phase-detection AF. The result would also be a little odd to demosaic and might not produce the best quality results.
Canon might as well just halve the pixel pitch, drop four times as many pixels onto the sensor, and just call it a day if they are going to do that.
On the 70D, Canon has two photodiodes per pixel across almost the entire sensor. The fact that they can get usable phase information from them suggests that they can read them independently.
So, could they change the Bayer filter out and double resolution rather than get sensor-level phase detection? Perhaps being co-located they couldn't use a traditional Bayer design, but could they, for example, have green AND either red or blue at every pixel?
If so, that could be a cost-effective way forward to producing 1D mkV and 1Ds mkV cameras once DPAF is perfected to the point that it equals or betters SIR AF. The former could have a traditional Bayer filter with the second processor dedicated to amazing autofocus; the latter could have double the resolution and use a simpler last-gen SIR AF unit.
I am probably fundamentally misunderstanding the implications of having two photo-diodes per pixel, though. More likely DPAF is their way into high end mirrorless.
Having two photodiodes per pixel means the photodiode pair exists underneath the CFA and the microlens(es). That is actually the only way DPAF really works...to be able to detect a phase differential, you need to compare the HALVES of each PIXEL. If you just shrink the pixel size and put different color filters over those smaller pixels...well, now you have smaller pixels (and an odd image ratio), and you no longer have DPAF. It's a tradeoff...resolution or a focus feature, which do you want/need? (Or, as the case may be, you get a cross between both: slightly smaller pixels (i.e. 20mp 70D vs. the 18mp that came before) AND DPAF.)
I know everyone likes to speculate about all the wonderful things that DPAF might potentially bring to the table...but so long as it is Dual-Pixel Autofocus, that's all you're really going to get. There really isn't any magic bullet here, no trickery that you can pull off by somehow using one half of the pixels at ISO 100 and the other half at ISO 800 for more dynamic range, etc. Pixel area is pixel area, and phase detect is phase detect. DPAF pixels serve one purpose when read out for AF, and another purpose when the halves are binned and read out for an image. Those are really the only two functions DPAF will ever serve, and while I'm sure the Magic Lantern guys will figure out something cool about the specific mechanism of DPAF's implementation...they will still only be able to work within the bounds of the sensor's design. The ML DR increase was ultimately thanks to an OFF-die downstream amplifier that allowed them to control the readout process, not really due to any specific nuance of Canon's actual sensor design.
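The two readout modes described above (compare the halves for AF, bin them for the image) can be sketched in a toy 1-D example. This is my own illustration under stated assumptions, not Canon's actual pipeline: a defocused subject projects slightly shifted copies of the scene onto the left and right photodiode halves, and correlating the two half-images recovers that shift.

```python
import numpy as np

# Toy sketch of the two DPAF readout modes (illustration only, not
# Canon's actual pipeline). Defocus projects slightly shifted copies
# of the scene onto the left and right photodiode halves of a row.
rng = np.random.default_rng(0)
scene = rng.random(200) - 0.5          # zero-mean stand-in for scene detail

defocus = 3                            # hypothetical shift, in pixels
right = scene[:100]                    # what the right halves record
left = scene[defocus:defocus + 100]    # left halves see a shifted copy

# AF readout: correlate the half-images to find the shift (the "phase").
lags = range(-10, 11)
scores = [np.dot(right[10:-10], np.roll(left, lag)[10:-10]) for lag in lags]
shift = list(lags)[int(np.argmax(scores))]   # recovers the defocus amount

# Image readout: bin the halves; the phase information is discarded.
image_row = left + right
```

The recovered `shift` is what would drive the focus motor; the binned `image_row` is all that survives into the final photograph, which is the point being made above.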
Assuming Canon does not remove that downstream amp in favor of some kind of on-die parallel ADC and readout system, I honestly don't expect them to be able to do anything more radical with DPAF. They may find a way of doing creative focus things with AF, maybe add the ability to remember AF positions for video purposes, things like that...but the design of DPAF doesn't really mean Canon suddenly has some amazing wildcard on their hands that can give them a significant edge in the stills photography department.
The 'dual pixels' are all split vertically, so if they altered the microlenses and CFA to increase the actual resolution of the sensor, you'd end up with images having a 3:1 aspect ratio.
I think that's not the right way to look at this. It'd be more like having two color channels per pixel in the raw file rather than only one as input to the demosaic.
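A toy 1-D sketch of that "two color channels per raw pixel" idea (an assumed layout for illustration, not any real sensor): if every site measured green plus alternately red or blue, the demosaic would only have to interpolate one missing channel per site instead of two.

```python
import numpy as np

# Hypothetical layout (illustration only): every site records green
# plus alternately red or blue, so only ONE channel per site is
# missing, versus two missing channels per site in a Bayer mosaic.
n = 8
x = np.arange(n)
true_red = 0.1 * x             # smooth stand-in for the red channel
true_blue = 1.0 - 0.1 * x      # ...and the blue channel

green = np.full(n, 0.5)                         # measured at every site
red = np.interp(x, x[0::2], true_red[0::2])     # red measured on even sites
blue = np.interp(x, x[1::2], true_blue[1::2])   # blue measured on odd sites
# Away from the edges, linear interpolation recovers the smooth
# channels exactly, with half the guessing a Bayer demosaic needs.
```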
Wouldn't you also end up having to deal with a significant drop-off in the number of photons hitting the photodiodes? After all, you're essentially turning one 'pixel' site into two sub-pixels, neither of which covers the entire area of the 'pixel'. Not that I don't want them to try innovative new things like that, but I don't think it's practical except for maybe some specialized applications.
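The light-loss concern is straightforward to put numbers on (the photon count below is made up for illustration): photon capture scales with photodiode area, and shot noise goes as the square root of the photon count, so each half-area sub-pixel pays a sqrt(2) SNR penalty before any binning.

```python
import math

# Back-of-the-envelope sketch of the light-loss concern; the photon
# count is a made-up illustrative number, not a measured figure.
photons_full = 10_000             # photons a full site might collect
photons_half = photons_full // 2  # half the area -> half the photons

# Shot-noise-limited SNR scales as sqrt(N), so halving the light
# costs a factor of sqrt(2) in SNR at the sub-pixel level.
snr_full = math.sqrt(photons_full)
snr_half = math.sqrt(photons_half)
penalty = snr_full / snr_half     # sqrt(2), ~1.41x worse per sub-pixel
```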
Regarding AF point-linked spot metering for the 5DIII, while the AF systems are nearly the same as you state, the metering systems are vastly different. Here are the 61 AF points superimposed on the 5DIII's 63 zone iFCL metering grid:
The resolution of the 5DIII's metering sensor simply may not be high enough to support spot metering with the AF points, whereas the 100,000 pixel metering sensor of the 1D X can do so. Even when the 1D X's metering sensor reverts to zone metering (in very dim light or for flash exposure metering), it's divided into 252 zones - 4 times the density of the 5DIII's metering sensor.
I can't say for sure that those technical limitations are absolute, but you might consider the possibility that there are technical reasons for those features being available on the 1D X but not on the 5DIII. After all, they did add f/8 AF to the 5DIII.