Any possible answer as to whether those partially masked-off pixels used for PD-AF could be spread further out from the center of the frame? Could they not also be placed near the very border of the sensor?
If yes ... why did Canon limit FPPD-AF to such a narrow area in the center? Processing power of the AF-CPU? Or Canon "marketing differentiation" ... so they can offer "new, improved 30% sensor coverage" in a 70D then 50% in a 7D II and eventually 90% in the 1D X Mk. II? And another 3 years later they unlock another 10% via a firmware update?
I would assume PD pixels will have the same kind of spread limitations as a dedicated AF sensor. For one, the farther out toward the edge of the frame you go, the less you can trust the incoming image. You may have a superbly stellar lens with very little in the way of corner softness, edge CA, or other aberrations, but as you near the edge of the frame you still experience vignetting, even on the best of lenses, by as little as a stop or as much as 3 1/2 to 4 stops. Phase detection requires a certain amount of light, and has to make a certain number of assumptions about the characteristics of the light it's using to judge focus. The peripheral regions of a lens' image circle are less viable for AF purposes, as you're largely stuck with the lowest common denominator when making assumptions about IQ in those regions.
Additionally, because those pixels are partially masked off, they are working with less light, just like an AF sensor. The average AF "point" involves sensitive CMOS strips of pixels, arrayed in a very specific way, that receive light from a specially built lens that is part of the AF unit (usually under the mirror). That special lens splits the light into two beams per line-type point, four per cross-type, and eight per double-cross point (the latter only exist in Canon's 61pt AF system). It may be that standard AF sensors generally have less light to work with than FPPD-AF systems, but I can't be sure...it would depend on how much of each pixel has to be masked off, and whether there is anything else special about those particular pixels that restricts light further. It may also be that PD pixels generally have more light to work with.
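To make the principle concrete, here is a minimal sketch of what phase detection is doing with those two masked-pixel signals. This is only an illustration of the general idea, not Canon's actual algorithm: left-masked and right-masked pixels see the scene through opposite halves of the lens pupil, defocus displaces the two signals relative to each other, and the shift that best re-aligns them indicates how far (and in which direction) to drive the lens.

```python
import numpy as np

def estimate_phase_shift(left, right, max_shift=20):
    """Return the shift (in pixels) that best aligns the two pixel strips,
    using mean absolute difference as the match metric."""
    shifts = range(-max_shift, max_shift + 1)
    # For each candidate shift, slide one strip against the other and score
    # the mismatch; the best-scoring shift is the phase difference.
    return min(shifts, key=lambda s: np.abs(left - np.roll(right, -s)).mean())

# Synthetic example: a soft edge feature displaced by 6 pixels between the
# two strips, as a defocused subject would produce.
x = np.arange(200)
scene = np.exp(-((x - 100) / 10.0) ** 2)
left, right = scene, np.roll(scene, 6)
print(estimate_phase_shift(left, right))  # → 6
```

The sign of the recovered shift tells the camera which way the subject is out of focus, and the magnitude maps (through lens-specific calibration) to how far to move the focus group, which is why phase detection can jump straight to focus rather than hunt.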
I would generally expect AF point spread with FPPD-AF to be similar to that of dedicated AF sensors, with possibly less restriction if they are not as light-limited. Even then, that would only mean you could have more f/8-sensitive AF points within a similar spread as a dedicated AF sensor, as you still have vignetting and aberrations to deal with in the periphery. The new 61pt AF system from Canon has a point spread that covers 53% of the frame. That seemed to be quite a feat, and they had to drop f/8 AF support to achieve it (which really confuses me, as you generally only do f/8 AF with the center point(s), which wouldn't be subject to the vignetting and aberrations near the periphery of the point spread area). FPPD-AF, if it is more light-sensitive, might reach 60%. I wouldn't expect anything extreme though...a fully effective full-frame point spread might not be something we see right away.
To speculate more, I don't see why it couldn't be possible to use the lens profiles of chipped Canon lenses to dynamically tune the AF system. If you could tie lens profiles into the AF system, it would know what amount of vignetting and what types of optical aberrations (and to what degree) it has to deal with. If you are using a top-shelf lens like the EF 600mm f/4 L II, you're probably working with near-perfection and minimal vignetting. On the other hand, if you are working with a kit EF-S 18-55mm, you probably have a moderate amount of vignetting and some pretty major CA in the periphery. No reason the AF system couldn't dynamically reconfigure the available AF points and point spread per lens with such knowledge.
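The gating idea above could be sketched like this. Everything here is a hypothetical illustration: the cos^4 vignetting model, the field-angle figure, and the stop-loss cutoffs are my own assumed stand-ins for whatever a real lens profile would supply, not values from any Canon firmware.

```python
import numpy as np

def vignetting_loss_stops(x, y, max_field_angle=0.6):
    """Light loss in stops at normalized frame coords (center (0, 0),
    corner (1, 1)), using the cos^4 natural-vignetting law. The field
    angle of 0.6 rad at the corner is an assumed example value."""
    r = min(np.hypot(x, y) / np.sqrt(2), 1.0)  # 0 at center, 1 at corner
    theta = r * max_field_angle                # field angle in radians
    return -4 * np.log2(np.cos(theta))

def usable_af_points(points, max_loss_stops=1.0):
    """Keep only the candidate AF points whose vignetting loss for this
    lens stays under the light budget the AF system can tolerate."""
    return [(x, y) for x, y in points
            if vignetting_loss_stops(x, y) <= max_loss_stops]

# A 5x5 grid of candidate points across the frame: a lens with a tight
# light budget keeps only the central points, while a generous budget
# (a bright, evenly illuminating lens) keeps the whole spread.
grid = [(x / 2, y / 2) for x in range(-2, 3) for y in range(-2, 3)]
print(len(usable_af_points(grid, max_loss_stops=0.5)))
print(len(usable_af_points(grid, max_loss_stops=2.0)))
```

Swap in a per-lens profile (falloff curve, aberration map) for the toy model and you get exactly the dynamic reconfiguration described above: the same sensor offers a wide point spread on a clean lens and a conservative central cluster on a kit zoom.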