What you can do in software doesn't matter. Dynamic range benefits what you do in-camera. It doesn't matter if you can use clever software algorithms to massage the 13.2 stops of DR in an original image and fabricate artificial data to extract 14.0, 14.4, or 16 stops of "digital DR" (which is not the same thing as hardware sensor DR). I'll try to demonstrate again; maybe someone will get it this time.
"I am composing a landscape scene on-scene, in-camera. I meter the brightest and darkest parts of my scene, and its 14.4 stops exactly! HA! I GOT 'DIS! I compose my scene with the D800's live view, and fiddle with my exposure trying to get the histogram to fit entirely between the extreme left edge and the extreme right edge. Yet, for the life of me, I CAN'T. Either my histogram rides up the right edge a bit (the highlights), or it rides up the left edge a bit (the shadows). This is really annoying. DXO said this stupid camera could capture 14.4 stops of DR!! Why can't I capture this entire scene in a single shot?!?!?!!!1!!11 I didn't bring any ND filters because this is the uberawesomedonkeyshitcameraoftheyearpureawesomeness!!!!!"
The twit trying to capture a landscape with 14.4 stops of DR in a single shot CANNOT, because the sensor is only capable of 13.2 stops of DR! Our twit of a landscape photographer is trying to capture 1.2 stops more (about 2.3x as much light) than his camera can record in a single shot, and it simply isn't capable of doing so. He could take two shots, offset +/- 2 EV, and combine them in post with HDR, but there is no other way his camera is going to capture 14.4 stops of DR.
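The arithmetic behind that is just logarithms base 2: one stop is a doubling of light. A quick sketch (the luminance readings here are hypothetical, picked to give exactly a 14.4-stop scene):

```python
import math

def stops_of_dr(brightest_lum, darkest_lum):
    """Dynamic range in stops is log2 of the brightest/darkest luminance ratio."""
    return math.log2(brightest_lum / darkest_lum)

# Hypothetical metered values for a 14.4-stop scene (2**14.4 : 1 ratio).
scene_dr = stops_of_dr(2 ** 14.4, 1.0)   # ~14.4 stops
sensor_dr = 13.2                         # what the hardware actually captures

shortfall = scene_dr - sensor_dr         # ~1.2 stops the sensor cannot hold
light_ratio = 2 ** shortfall             # ~2.3x more light than fits in one exposure

print(f"Scene: {scene_dr:.1f} stops, shortfall: {shortfall:.1f} stops "
      f"({light_ratio:.1f}x the light)")
```

That's why the histogram always spills off one edge or the other: the scene's luminance ratio is roughly 2.3 times wider than anything a 13.2-stop sensor can swallow in one exposure.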
THAT ^^^^^ UP THERE ^^^^^ IS MY POINT about the D800. It is not a 14.4 stop camera. It is a 13.2 stop camera. You can move levels around in post to your heart's content, dither and expand the LEVELS YOU HAVE. But if you don't capture certain shadow or highlight detail TO START WITH...you CAN'T CREATE IT LATER. All you're doing is averaging and dithering the 13.2 stops you actually captured to SIMULATE more DR.

Ironically, that doesn't really do anyone any good, since computer screens are, at most, capable of about 10 stops of DR (assuming you have a super-awesome 10-bit RGB LED display), and usually only capable of about 8 stops of DR (if you have a nice high end 8-bit display). For those of you unlucky enough to have an average $100 LCD screen, you're probably stuck with only 6 stops of DR. Print is even more limited. An average fine art or canvas print might have 5 or 6 stops. A print on a high dMax gloss paper might have as much as 7 stops of DR.
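To put those stop counts in more familiar marketing terms, each stop doubles the contrast ratio, so the gap between a 13.2-stop capture and a 6-stop print is enormous. A rough sketch (using the per-medium stop figures from the paragraph above, which are themselves ballpark numbers):

```python
def contrast_ratio(stops):
    """A medium with N stops of DR spans a 2**N : 1 luminance ratio."""
    return 2 ** stops

# Ballpark DR figures per medium, as estimated in the post above.
media = [
    ("13.2-stop sensor capture", 13.2),
    ("10-bit high-end display",  10.0),
    ("8-bit display",             8.0),
    ("cheap LCD",                 6.0),
    ("canvas / fine art print",   5.0),
]

for name, stops in media:
    print(f"{name}: ~{contrast_ratio(stops):,.0f}:1")
```

Even the best display here spans roughly a 1,000:1 ratio, while the sensor's capture spans nearly 10,000:1, so something has to give on output no matter what.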
There is little benefit to "digital DR" that is higher than the sensor's native DR. You're not gaining any information you didn't start out with; you're simply redistributing the information you have in a different way by, say, downscaling with a clever algorithm to maximize shadow DR. But if you didn't record shadow detail higher than pure black to start with, no amount of software wizardry will make that black detail anything other than black. And even if you do redistribute detail within the shadows, midtones, or highlights...if your image has 14 stops of DR you can't actually SEE IT. Not on a screen. Not in print. You have to compress it, merge those many stops into fewer stops, and thus LOSE detail, to view it on a computer screen or in print.
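A toy illustration of that crushing effect (deliberately simplified: a straight linear encode to 8 bits, ignoring gamma curves and tone mapping, which redistribute but cannot create levels):

```python
def linear_8bit_level(stops_below_white):
    """8-bit output level for a luminance N stops below peak white,
    using a naive linear encode with no tone curve."""
    return round(255 / 2 ** stops_below_white)

# Distinct shadow details deep in a wide-DR scene all land on the same level:
print(linear_8bit_level(7))   # barely above black
print(linear_8bit_level(10))  # crushed to pure black
print(linear_8bit_level(13))  # also pure black -- the distinction is gone
```

With this naive encode, anything much more than about 8 stops below white collapses to level 0: two shadow details that were genuinely different in the capture become the same pure black on output. Real pipelines apply gamma and tone curves to spread the available levels more usefully, but they are still squeezing many captured stops into fewer displayable ones.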
Again, and I agree with Zim, I have learned a lot from this discussion about hardware capabilities, screen capabilities, as well as print capabilities. I am waiting for my Pixma Pro 1 printer, and because of this latest information from you, the first thing I'll check is how many stops it can actually reproduce. So my uninformed question yielded even more information.