Right now the 14-bit data per color channel in the RAW is scaled to cover the dynamic range of the camera's sensor, whatever that might be. If the scaling were changed, then the image from the camera's sensor could actually contain much more data.
Yes, I realize that ADCs are typically matched (loosely) to the DR of the sensor, considering full-well capacity & noise. What I'm saying is that for the current sensors in question, your statement is not correct because of the read noise of the system: a sensor seeing a lower-DR scene (one still within the DR of the camera) will not capture more information, even if the endpoints of that lower range were mapped to the endpoints of the ADC. If the read noise were lower (say 1 electron instead of more than 20 electrons), then your statement would be correct.
But as of ~2008, Martinec convincingly shows that quantization error is largely absent because the read noise spans several ADC steps (~6 ADU for the 5D Mark II/III!). Therefore, any more 'accurate' representation of the signal, as you are suggesting, would only represent the noise (fluctuations) within that signal more accurately, with no tangible benefit to the actual image data.
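Martinec's dithering point is easy to demonstrate with a toy simulation. This sketch uses made-up numbers: a true level of 100.4 ADU (deliberately between integer codes) and 6 ADU of Gaussian read noise, roughly the 5D figure quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 100.4          # true level in ADU, sitting between two ADC codes
n = 200_000                  # number of simulated reads to average

def mean_after_quantization(read_noise_adu):
    # add Gaussian read noise, then quantize to integer ADU (the ADC step)
    samples = np.round(true_signal + rng.normal(0, read_noise_adu, n))
    return samples.mean()

well_dithered = mean_after_quantization(6.0)    # ~6 ADU of noise, like the 5D II/III
under_dithered = mean_after_quantization(0.01)  # essentially no noise

print(well_dithered)   # close to 100.4: noise dithers out the quantization error
print(under_dithered)  # pinned to 100.0: the fractional 0.4 ADU is lost
```

With realistic read noise, averaging recovers the sub-LSB level just fine; finer quantization would only have resolved the noise itself.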
I think what helpful means is that IF the DR of the ADC were MATCHED to the DR of the scene, then more info could be had from a low-DR scene, even with a lower-resolution ADC; he's just not describing his idea clearly.
You could certainly get more info from a low-DR scene and a low-DR ADC if their low and high levels were matched up, but they aren't and they won't be. That kind of fiddling is ridiculous in a practical ADC system used for imaging this way. Not saying it can't be done; it just isn't at this point. Doing so would require a lot more post-processing scaling for each image, and you would still have to shift the result up or down to place the relative data you acquired into an absolute frame of reference. That's why cameras work the way they do now.
Full scale of the ADC needs to be set to the full-well capacity of the sensor, with everything below scaled linearly until base noise becomes a problem. We don't want to putz around setting the ADC's max to 2/3 of full well for this shot just because nothing in the scene is that bright. If you were going to do that, you'd do better scaling the analog signal from the sensor so its peak value matches the ADC max. But that's still not a good system, because we don't want relative data; we need absolute data to simplify creating the final image.
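For concreteness, here's the arithmetic behind mapping full well to ADC full scale, with assumed illustrative numbers (60,000 e- full well, 14-bit ADC; neither is a measured spec for any particular camera):

```python
full_well_e = 60_000        # assumed full-well capacity in electrons (illustrative)
adc_bits = 14
adc_codes = 2 ** adc_bits   # 16384 output codes

# gain at base ISO: how many electrons one ADU represents
gain_e_per_adu = full_well_e / adc_codes
print(gain_e_per_adu)       # 3.662109375 e-/ADU

# a read noise of ~20 e- then spans several ADC codes on its own:
read_noise_e = 20
print(read_noise_e / gain_e_per_adu)   # ~5.46 ADU of read noise
```

So with the full scale pinned to full well, the bottom few codes are already noise, which is exactly why rescaling per shot buys nothing.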
14 bits of ADC is a good match for SoNikon's latest sensors at low ISO
14 bits of ADC leaves 2 or more LSBs of noise on Canon's, even at low ISO
16 bits might be used on some specialty or analytical cameras with cooled sensors to get super-low read noise.
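The stops-vs-bits bookkeeping behind those three lines can be sketched as follows (the electron counts are illustrative, not measured specs for any of these cameras):

```python
import math

def dr_stops(full_well_e, read_noise_e):
    # engineering dynamic range: full well over read noise, in stops (log2)
    return math.log2(full_well_e / read_noise_e)

# illustrative numbers only:
# a low-read-noise sensor genuinely uses ~14 bits...
print(dr_stops(80_000, 3))    # ~14.7 stops
# ...while a sensor with ~25 e- read noise wastes the bottom bits
print(dr_stops(60_000, 25))   # ~11.2 stops
```

A 14-bit ADC only pays off when the read noise is low enough that the sensor's own DR approaches 14 stops.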
Compress all that down with the non-linear gamma we use and we get a useful 8-bit JPEG sort of image.
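That compression step can be sketched with the standard sRGB transfer curve applied to normalized 14-bit values (a simplification: real raw converters also demosaic, white-balance, and tone-map before this step):

```python
def srgb_encode(linear):
    # standard sRGB transfer function on a 0..1 linear value
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

raw_max = 2 ** 14 - 1
for raw in (16, 256, 4096, 16383):   # shadows to highlights, in raw ADU
    v8 = round(srgb_encode(raw / raw_max) * 255)
    print(raw, v8)   # shadows get stretched, highlights compressed; 16383 -> 255
```

The non-linearity spends the 8 output bits where our eyes want them, which is why an 8-bit JPEG of 14-bit data is usable at all.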
Helpful, if you and the rest of the academics really want to improve SNR and DR performance, you'd devise a non-linear ADC and processing system that better utilizes those bits by spreading them over a log function instead of a linear one. Then a 14-bit log ADC could equal 14 stops of DR.
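A toy version of such a log encoder, assuming a 14-bit code space spread evenly over 14 stops (all numbers illustrative, and real log ADCs would do this in the analog domain, not digitally after the fact):

```python
import math

bits = 14
stops = 14                   # dynamic range the log curve is designed to cover
codes = 2 ** bits

def log_encode(linear, full_scale):
    # clamp to [full_scale / 2**stops, full_scale], then spread log2 over the codes
    # so every stop (factor of 2) gets the same number of codes
    floor = full_scale / 2 ** stops
    x = min(max(linear, floor), full_scale)
    return round((math.log2(x / floor) / stops) * (codes - 1))

full_scale = 60_000          # assumed full-well, electrons (illustrative)
print(log_encode(full_scale, full_scale))        # 16383: top code
print(log_encode(full_scale / 2, full_scale))    # one stop down: ~1170 codes lower
print(2 ** bits / stops)                         # ~1170.3 codes per stop
```

Compare that with a linear ADC, where the top stop alone consumes half of all 16384 codes and the bottom stop gets one.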