If you really think it is impossible, then you should post the mathematical analysis that shows it to be so. No, theory bends to observation, never the other way around.
Well, if I have a noisy image (i.e. one with poor DR), it will be usable only at smaller print sizes than a less noisy one. That is a real-world example of down-sampling, and if what you claim were true, the small print would show exactly the same noise and DR as the larger one. Fortunately, that is not what we see in the real world!
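You can see the effect with a toy simulation (a sketch, not a model of any particular sensor; the noise level and averaging factor are made-up numbers): averaging groups of pixels shrinks the noise by the square root of the group size, which is exactly why the smaller print looks cleaner.

```python
import random
import statistics

random.seed(0)

# Simulate a flat grey patch: true level 0.5, Gaussian read noise sigma = 0.1.
full = [0.5 + random.gauss(0, 0.1) for _ in range(100_000)]

# "Downsample" by averaging non-overlapping groups of 4 pixels
# (the 1-D analogue of 2x2 binning).
small = [sum(full[i:i + 4]) / 4 for i in range(0, len(full), 4)]

print(statistics.stdev(full))   # ~0.1
print(statistics.stdev(small))  # ~0.05 -> noise halved, DR up ~1 stop
```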
As to the continued assertion that downsampling cannot increase bit depth, consider a hypothetical one-bit sensor. I have two pixels, each of which can only take a value of 0 or 1. If I downsample by a factor of two, averaging pairs of pixels, I now have one pixel which can take values of 0, 1/2 or 1, and I now need more than one bit to store that pixel. I have traded off resolution for improved dynamic range, and I have more dynamic range than the original data (which you keep claiming is impossible).
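If you don't believe me, enumerate it. This snippet just brute-forces every possible pair of one-bit pixels and counts the distinct averages:

```python
from fractions import Fraction
from itertools import product

# Every possible pair of 1-bit pixels, and the average of each pair.
averages = sorted({Fraction(a + b, 2) for a, b in product([0, 1], repeat=2)})

print(averages)       # [0, 1/2, 1]
print(len(averages))  # 3 distinct levels -> needs 2 bits, not 1
```

Three output levels from one-bit inputs: the averaged pixel simply cannot be stored in one bit any more.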
It does not matter what the data is in this case - it is just a basic property of the maths.
If you do not think that this argument extends to a 14 bit file, I suggest writing out all the possible pixel values before and after downsampling from 22MP to 8MP and then working out how many bits of DR you have in the result...
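To save you the pencil work, here is the counting argument in a few lines. For simplicity I take an exact 2x2 average (k = 4 pixels per output pixel) as a stand-in for the 22MP-to-8MP case, which is not an integer factor; the principle is the same:

```python
import math

b, k = 14, 4  # 14-bit pixels, averaged in 2x2 blocks (illustrative factor)

# Sums of k pixels range over 0 .. k*(2**b - 1), so the averages can take
# k*(2**b - 1) + 1 distinct values.
levels = k * (2**b - 1) + 1
bits = math.ceil(math.log2(levels))

print(levels)  # 65533 distinct levels
print(bits)    # 16 bits needed -> 2 more than the 14-bit input
```

Each doubling of the averaging factor adds roughly one bit, which is why DXO's "print" DR figures can legitimately exceed the sensor's per-pixel bit depth.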
So I say again, if you continue to make the claim that DXO are "obviously wrong" because it is "impossible" to get pixel values with more than 14 bits of data after downsampling a 14 bit RAW file, explain why...
BTW, downsampling a 1 bit image is not an artificial example. Early monochrome printing relied on this technique. If you stand up close you see a noisy mess of dots. If you stand further away (making the image smaller - i.e. downsampling), you start to perceive the image as having graduated tones rather than just patches of plain white or plain black.
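You can simulate that halftone effect too (a crude sketch: I use randomised thresholding as a stand-in for a real halftone screen, and the 0.3 grey level is arbitrary):

```python
import random

random.seed(1)

# A mid-grey tone of 0.3, rendered as 1-bit "dots" by randomised thresholding.
tone = 0.3
dots = [1 if random.random() < tone else 0 for _ in range(100_000)]

# Up close, every pixel really is pure black or pure white.
assert set(dots) == {0, 1}

# Viewed from far away (i.e. averaged over a region), the grey tone returns.
print(sum(dots) / len(dots))  # ~0.3
```

Pure black-or-white dots in, a faithful mid-grey out: exactly the resolution-for-tonality trade the printers were exploiting.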