That will give you the weirdest bokeh of all time.
Those vertically split pixels each take light from one half of the lens; the two half-images have to differ slightly for phase-detect AF to work. If they were also exposed differently, the final image would have something like a dark left half-circle plus a bright right half-circle in the background blur, which would look totally bizarre.
At best I'd expect just a tiny bit of dark fringing at the very edge of the bokeh, where the light hits only one subpixel of a pair and not the other, resulting in approximately half the expected brightness. I suspect such artifacts could easily be avoided with some sophisticated digital filtering, though. Perhaps something like this:
- Take one half-image and interpolate it to estimate approximately what it would look like when shifted by half a subpixel.
- Convert both images to floating point subpixel values (to add room for additional precision).
- Do a subpixel-by-subpixel search to replace off-scale high values in the brighter image with scaled-up values from the darker version of the image.
- Similarly replace any off-scale low subpixel values in the darker image.
- Do a subpixel-by-subpixel search for in-scale values that differ by more than a plausible amount (presumably the sum of the expected maximum noise amplitudes for the chosen ISO settings), and replace the values in the shifted image with the values from the unshifted image (after scaling). This fixes the crisp-edge problem mentioned previously: you have to pick one image to be "truth", so you should pick the one with the least distortion, i.e. the unshifted image.
- For each subpixel, compute the average of the two images to (at least in theory) lower the noise floor.
- Convert back to a higher-bit-depth integer value.
And now you have higher dynamic range and lower noise without fringing (albeit at half the original resolution in one direction).
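The steps above could be sketched roughly like this in NumPy. Everything here is a guess at plausible parameters: `gain` (the exposure ratio between the two subpixel sets), `sat`/`floor` (clipping thresholds), and `noise` (the disagreement budget) are all made up, and the half-subpixel shift is crudely approximated by averaging horizontal neighbors rather than doing proper interpolation:

```python
import numpy as np

def merge_split_pixels(bright, dark, gain=4.0, sat=250, floor=5, noise=6.0):
    """Merge two differently exposed half-images from split subpixels.

    `bright` and `dark` are uint8 half-resolution images, where `dark`
    was exposed 1/gain as long as `bright`.  All thresholds are
    hypothetical placeholders.
    """
    # Approximate the half-subpixel shift by averaging each value in
    # `dark` with its horizontal neighbor (a stand-in for real
    # interpolation).
    shifted = (dark.astype(np.float64) + np.roll(dark, -1, axis=1)) / 2.0

    # Convert both images to floating point on a common scale.
    hi = bright.astype(np.float64)
    lo = shifted * gain  # scale the darker image up to match

    # Replace off-scale-high values in the brighter image with
    # scaled-up values from the darker image.
    clipped_hi = bright >= sat
    hi = np.where(clipped_hi, lo, hi)

    # Similarly replace off-scale-low values in the darker image.
    clipped_lo = shifted <= floor
    lo = np.where(clipped_lo, hi, lo)

    # Where both are in range but disagree by more than the noise
    # budget, trust the unshifted image.
    disagree = ~clipped_hi & ~clipped_lo & (np.abs(hi - lo) > noise * gain)
    lo = np.where(disagree, hi, lo)

    # Average the two images to (at least in theory) lower the noise
    # floor, then convert back to a higher-bit-depth integer.
    merged = (hi + lo) / 2.0
    return np.clip(merged * 16.0, 0, 65535).astype(np.uint16)
```

For a fully saturated bright half-image and a dark half-image reading 60, the merge falls back to the scaled-up dark values (240), which land at 3840 in the 16-bit output.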
Or if you actually need that extra resolution, you could do the same thing without the averaging step, then repeat the process in the opposite direction (again without averaging). You should get a plausible approximation of high dynamic range at full resolution, but with no noise reduction and a little imprecision in the location of high-energy (crisp) edges that fall outside the dynamic range of one set of subpixels or the other. Not sure how well it would work in practice, but it could be a fun experiment. :)
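The full-resolution variant might look something like the sketch below: repair the clipped values in each half using the (scaled) other half, skip the averaging, and re-interleave the subpixel columns. The `repair_half` helper, the `gain` value, and the clipping thresholds are all hypothetical:

```python
import numpy as np

def repair_half(reference, replacement, sat_hi, sat_lo):
    """Replace clipped values in `reference` with values from
    `replacement` (already on the same scale), without averaging."""
    ref = reference.astype(np.float64)
    out = np.where(ref >= sat_hi, replacement, ref)
    return np.where(ref <= sat_lo, replacement, out)

def merge_full_res(bright, dark, gain=4.0):
    """Hypothetical full-resolution merge: fix each half-image using
    the other, then interleave the subpixel columns."""
    hi = bright.astype(np.float64)
    lo = dark.astype(np.float64) * gain

    # Fix blown highlights in the bright half (no low clip to worry
    # about there), and crushed shadows in the dark half.
    fixed_bright = repair_half(bright, lo, sat_hi=250, sat_lo=-1)
    fixed_dark = repair_half(dark, hi / gain, sat_hi=256, sat_lo=5) * gain

    # Re-interleave the two subpixel sets to full horizontal resolution.
    full = np.empty((bright.shape[0], bright.shape[1] * 2))
    full[:, 0::2] = fixed_bright
    full[:, 1::2] = fixed_dark
    return np.clip(full * 16.0, 0, 65535).astype(np.uint16)
```

This skips the shift/disagreement handling from the averaged version, which is exactly where the edge-location imprecision mentioned above would creep in.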
Where I'd expect fascinating differences would be in lens flare.