Think about it. Say read noise is 35e- and the maximum signal strength is 68,000e-. If you're doing 16-bit conversion, then your gain is 1.0376e-/ADU. That's almost unity gain...at ISO 100! Unity gain is what you want. With 14-bit conversion, your gain is 4.15e-/ADU. So with 14-bit, your read noise spans 8-9 tonal levels; with 16-bit, it spans 33-34 tonal levels. That's just the bottom of the signal, though. With 16-bit you still have 65,502 levels for all the signal detail above the noise floor. Any gradients in the image at tones 35 through 65535 are going to be smoother with 16-bit conversion than with 14-bit conversion.
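To make the arithmetic above explicit, here's a quick sketch using the same assumed numbers (35e- read noise, 68,000e- full well; these are illustrative, not tied to any particular sensor):

```python
# Gain (e-/ADU) and read noise in tonal levels for 14-bit vs 16-bit conversion.
# Assumed values: 35 e- read noise, 68,000 e- full-well capacity.
read_noise_e = 35.0
full_well_e = 68_000.0

for bits in (14, 16):
    levels = 2 ** bits              # number of ADU codes: 16384 or 65536
    gain = full_well_e / levels     # e- per ADU
    noise_adu = read_noise_e / gain # how many tonal levels the read noise spans
    print(f"{bits}-bit: gain = {gain:.4f} e-/ADU, "
          f"read noise spans ~{noise_adu:.1f} levels")
```

This reproduces the figures in the paragraph: about 4.15e-/ADU and ~8.4 levels at 14-bit, about 1.038e-/ADU and ~33.7 levels at 16-bit.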

The noise floor does not only 'exist' at the lower end of the signal range; it exists throughout the range, right up to the clip point.

Let's do a math exercise:

Here's a stream of signal:

0 10 100 1000 10000 100000

If my sensor has a noise floor of 3 bits (values 0~7), even if I sample it at full precision, I get this:

**3 12 107 1004 10002 100005** — my lowest 3 bits are drowned in noise, even in the highlights.

If I sample it at 1/8 precision (chopping off the lower 3 bits), I get:

0 1 13 125 1250 12500

Then I recreate the signal by multiplying the sampled values by 8:

0 8 104 1000 10000 100000

And add 3 bits of noise from a random number generator:

**5 10 109 1001 10007 100004**

The results of high- and low-precision sampling only fluctuate within the noise floor, so they are essentially the same.

The conclusion? If you sample with more precision than your SNR supports, you are just sampling the noise more precisely, which is still noise; it's the same as sampling less precisely and adding noise in post.
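The exercise above can be run numerically. This is a toy model (3-bit uniform noise, not a real sensor simulation), but it shows that full-precision sampling and coarse sampling plus re-added noise agree to within the noise floor:

```python
# Toy version of the exercise: sample a signal with a 3-bit noise floor
# at full precision, then at 1/8 precision (bottom 3 bits chopped off),
# rebuild, and re-dither. Both results stay within the noise floor.
import random

random.seed(0)
signal = [0, 10, 100, 1000, 10_000, 100_000]

def noisy(x):
    return x + random.randrange(8)   # 3-bit noise floor: adds 0..7

full_precision = [noisy(x) for x in signal]

# Coarse sampling: chop off the bottom 3 bits.
coarse = [noisy(x) >> 3 for x in signal]

# Rebuild by multiplying by 8, then add 3 bits of random noise back.
rebuilt = [(c << 3) + random.randrange(8) for c in coarse]

# The two versions differ by less than two noise-floor widths (|diff| <= 14).
for f, r in zip(full_precision, rebuilt):
    assert abs(f - r) < 16
```

The bound holds in general: the worst-case difference is the sum of two 0..7 noise draws plus the 0..7 truncation error, which algebraically stays within ±14.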

Applying this to your example: with a 35e- noise floor, your tonal range is not 65535 − 35 = 65500 levels, but rather 65535/35 ≈ 1872 distinguishable levels (~10.9 stops), because the bottom 5 (!) bits are unstable noise.
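That last calculation, spelled out (same assumed numbers as above: 35e- noise floor against a 16-bit full scale):

```python
# Distinguishable tonal levels = full scale / noise floor, not full scale - noise.
import math

full_scale = 65535
noise_floor = 35

levels = full_scale / noise_floor   # ~1872 distinguishable levels
stops = math.log2(levels)           # ~10.9 stops of usable dynamic range
bits_lost = math.log2(noise_floor)  # ~5.1 bits of unstable noise at the bottom

print(f"{levels:.0f} levels, {stops:.1f} stops, ~{bits_lost:.1f} noisy bits")
```

Division is the right operation here because the noise floor sets the *spacing* of distinguishable levels across the whole range, not just a chunk subtracted from the bottom.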