For some number of pixels, this is going to reduce (perhaps halve) the amount of light received by the photodiode.
If these are the green pixels, as suggested by some diagrams, then it may make little or no overall difference, as there are already twice as many green receptors as red or blue ones.
Additionally, this means some pixels will not record the same level of light as others. Raw converters will need some new fancy footwork to properly evaluate a pixel that does not, and never will, have the same luminosity as its neighbours.
This could adversely affect noise, simply because there is less signal available.
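As a rough sketch of what that "fancy footwork" might look like, a raw converter could scale attenuated pixels back up to match their full-light neighbours. Everything here is an assumption for illustration (the function name, the 50% attenuation figure, and the idea that a simple gain correction is used at all):

```python
import numpy as np

# Hypothetical illustration: compensating for pixels whose photodiode
# receives roughly half the light of its neighbours.
def normalize_masked_pixels(raw, mask, attenuation=0.5):
    """Scale attenuated pixels to match full-light neighbours.

    raw         -- 2D array of raw sensor values
    mask        -- boolean array, True where the pixel is attenuated
    attenuation -- assumed fraction of light the masked pixels receive
    """
    out = raw.astype(np.float64)
    # Dividing by the attenuation restores mean brightness, but it also
    # amplifies the noise in those pixels -- the SNR penalty speculated
    # about above, since there is genuinely less signal to work with.
    out[mask] /= attenuation
    return out

raw = np.array([[100.0, 50.0],
                [100.0, 50.0]])
mask = np.array([[False, True],
                 [False, True]])
print(normalize_masked_pixels(raw, mask))
```

The correction evens out brightness but cannot recover the lost signal, which is why such pixels would still be noisier.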
Will be interesting to see the outcome!
Each of those 20.2 million active pixels that make up the picture (whether red, green or blue) is divided into two photodiodes: one for the left phase and one for the right. Both sit behind a single microlens, positioned next to each other (hopefully without any appreciable gap, as that might cause a strange bokeh effect) to receive the two phases. Combined, they theoretically cover much the same area as a conventional photodiode and should give the same light-gathering capability. It's no more than pixel binning to recreate a normal image from this sensor, with normal light-gathering capabilities.
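That binning step is simple enough to sketch. Assume (purely for illustration, nothing here reflects Canon's actual readout) the sensor delivers the left and right sub-pixels interleaved along each row; summing each pair recreates the conventional pixel with the light of both halves:

```python
import numpy as np

# Hypothetical sketch of dual-pixel binning: each sensor site is read out
# as two sub-pixels (left phase, right phase) interleaved along the row.
def bin_dual_pixels(subpixels):
    """subpixels: shape (H, 2*W), left/right values interleaved per site."""
    left = subpixels[:, 0::2]   # left-phase photodiodes
    right = subpixels[:, 1::2]  # right-phase photodiodes
    # Together the pair covers roughly the area of one conventional
    # photodiode, so summing restores normal light gathering.
    return left + right

# Two sites per row; each pair sums back to a full-brightness pixel.
dual = np.array([[30, 34, 28, 36],
                 [31, 33, 29, 35]])
print(bin_dual_pixels(dual))
# -> [[64 64]
#     [64 64]]
```

The same left/right pairs would of course also be compared against each other for phase-detect autofocus before being binned for the image.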
So in theory this dual-pixel configuration should have no detrimental effect on SNR compared with a conventional 20.2MP APS-C sensor. Let's hope they've used a new fabrication process to manufacture it, bringing along the much anticipated (and reported) improvements in SNR. I'm guessing that what is in effect a 40.4MP APS-C sensor would be next to impossible to make with the old fabrication process.