Neuro is a very knowledgeable guy, and you're probably pretty bright too. And comparing noise at different resolutions makes no sense at all.
We're not comparing noise, though. IMAGE quality...not NOISE quality...IMAGE quality. An image constitutes far more than noise.
You're really on the wrong side of your own argument for two (or more) reasons. One is that the "IMAGE" does not consist of a single pixel. The other is that NOISE is what defines the baseline (lower end of) DxO's dynamic range (both the screen and print scores).
So it's nonsense to pretend that you can have a discussion about the relative merits of screen vs print DR and ignore noise.
Trying to say one can objectively "see" more DR in a D800 image scaled down to 8mp vs. at its native 36.3mp is naive.
You're on the wrong side of this argument too. Suppose you have to assign a "score" to the downsampled image and the original image. Do you think both images should get the same score? If so, you should normalize. If not, you shouldn't normalize.
The overall impression of dynamic range won't change (after all, you were going to view the two images at the same size anyway), but the per pixel dynamic range changes considerably.
If you have a computer analyze the information contained in an image, and it tells you "Well, yes sir, your 8mp image has an additional 1.2 stops of DR!", that may make you feel better,
but it doesn't change how you OBSERVE the quality of the image on your screen.
Well that's the thing -- if you don't normalize, you will find that sensors with lower resolution have more "dynamic range" as defined by saturation point / black point, even though when you view the two images at the same size (not 100% crops) on your screen, they appear to have comparable dynamic range.
As it's been explained, the choice of 8mpx as a "target" is arbitrary -- the point is to get everyone on the same playing field. The difference in dynamic range scores for two sensors does not change when you normalize to a different resolution. I also made the point that even though quantization puts a limit on your ability to get a darker blackpoint, it doesn't stop you making improvements at 5db, 10db, etc, so even if you hit the quantization limit, you do get an increase in usable dynamic range.
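The normalization being described can be sketched numerically. This is my reading of a DxO-style "Print" normalization, not their actual code, and the second sensor's 12.0-stop/22.3mpx figures are purely illustrative:

```python
import math

# Sketch of DxO-style "Print" normalization: downsampling N megapixels to
# an 8mpx reference reduces per-pixel noise by sqrt(N/8), which adds
# 0.5 * log2(N/8) stops to the per-pixel DR score.
def print_dr(screen_dr_stops, megapixels, target_mp=8.0):
    return screen_dr_stops + 0.5 * math.log2(megapixels / target_mp)

d800 = print_dr(13.2, 36.3)  # ~14.3 stops, near DxO's published 14.4
# The *difference* between two sensors is the same for any target choice:
gap_8  = print_dr(13.2, 36.3, 8.0)  - print_dr(12.0, 22.3, 8.0)
gap_24 = print_dr(13.2, 36.3, 24.0) - print_dr(12.0, 22.3, 24.0)
print(round(d800, 2), round(gap_8 - gap_24, 9))
```

The target terms cancel out of the difference, which is the "same playing field" point: 8mpx is arbitrary, and picking another target shifts every score by the same amount.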
You just hit the nail on the head, though. The "actual measurement graphs" (Screen Statistics) are indeed quite objective, and that has always been my point. I'll happily use the DXO Screen DR measurement to compare the hardware capabilities of sensors. The argument against my point is that DXO's measurements are useless as a mechanism for comparing sensors because they were not taken from normalized images. I believe that notion is fundamentally wrong.
I think you're having trouble understanding some really fundamental concepts here, such as "objective", "hardware", and "measurement".
Let me pose a question -- suppose, hypothetically, you have a 40mpx sensor and a 10mpx sensor. The 10mpx sensor has a higher dynamic range per pixel. You could reasonably ask the question: if I downsampled (traded resolution for dynamic range), would I have more dynamic range in the 40mpx image?
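That trade can be simulated in a few lines. This is a toy model that assumes the deep-shadow noise is independent Gaussian read noise per pixel, which real sensors only approximate:

```python
import numpy as np

# Toy model: averaging each 2x2 block (a 4:1 downsample, e.g. 40mpx -> 10mpx)
# of independent unit-std "read noise" should roughly halve its std dev,
# i.e. buy about 0.5 * log2(4) = 1 stop of per-pixel dynamic range.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(2000, 2000))

# Downsample 4:1 by averaging each 2x2 block.
down = noise.reshape(1000, 2, 1000, 2).mean(axis=(1, 3))

gain_stops = float(np.log2(noise.std() / down.std()))
print(round(gain_stops, 2))  # ~1.0
```

Under this model, a 4:1 downsample buys about one stop of per-pixel DR; whether that averaging behavior also holds for patterned electronic noise is part of what's in dispute here.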
You are asking a different question than I am asking, which may be where the problem lies. I am not interested in how much dynamic range the "image" has, at native size or downscaled, where image in this case is a digitized two-dimensional matrix of RGB pixels. Images are virtual constructs, and they can be manipulated in near-infinite ways with software, trading detail for DR or the other way around, removing noise with deconvolution, etc.
The question I am asking is, what is the "sensor" capable of? At what point is shadow detail completely overpowered by the electronic noise in the circuit, and at what point do my whites start clipping? In the physical hardware? The sensor itself isn't scalable...you can't halve or double its resolution or pixel pitch...it is a fixed construct. If you pointed the D800, the physical device, at a test scene containing something meaningful...a person, a landscape, whatever...with 14.4 stops of DR, then according to DXO's Screen DR results it would fail to capture all of the DR in that scene. The sensor is capable of 13.2 stops, so 1.2 stops worth of DR are going to be lost somewhere: either entirely to noise, or entirely to clipped highlights, or in some ratio to both.
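For what it's worth, the per-pixel "hardware" capability being described is often computed as log2(full-well capacity / read noise). A minimal sketch with assumed, illustrative electron counts (not official D800 specs):

```python
import math

# "Engineering" per-pixel dynamic range: ratio of the largest recordable
# signal (full-well capacity) to the read-noise floor, in stops.
full_well_e = 45000.0   # assumed full-well capacity, electrons
read_noise_e = 3.0      # assumed read noise, electrons RMS

dr_stops = math.log2(full_well_e / read_noise_e)
print(round(dr_stops, 1))  # ~13.9 for these assumed numbers
```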
Let's assume, for the sake of discussion, that you take a photo anyway. Let's say you expose to preserve the highlights, right up to the limit (so the brightest swatch in your test scene sits exactly at maximum saturation). You've lost a lot of dynamic range, but not simply to "noise"...that is too general a concept, as there are a variety of types of noise. So let's be specific: you've lost a lot of dynamic range to the "electronic noise" present in the sensor's readout circuit, which interferes with shadow detail. When you actually expose, and the ADC converts the charge reading at each pixel into a number, you permanently lose some of the information that might otherwise be recoverable from below the read noise, and the rest of that near-floor information is at best diminished. To be exact, about 0.4 stops (the 14.4 stops of our scene minus the 14.0-stop limit imposed by the ADC) worth of DR can be lost forever when the ADC digitizes the image, and another 0.8 stops (the 14.0-stop clipping limit minus the 13.2 stops of DR the sensor is actually capable of) will effectively be indiscernible from electronic noise...each deep-shadow pixel will digitize as either pure read noise, a signal too low to distinguish from read noise, or a signal just barely strong enough (even if only to a minuscule degree) to stand apart from the electronic noise.
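The stop accounting in that paragraph, spelled out as arithmetic:

```python
# Scene DR, the ceiling the 14-bit ADC can represent, and the sensor's
# measured capability, all in stops (figures from the discussion above).
scene_dr = 14.4
adc_limit = 14.0
sensor_dr = 13.2

lost_at_adc = scene_dr - adc_limit          # 0.4 stops truncated at digitization
lost_to_read_noise = adc_limit - sensor_dr  # 0.8 stops buried in electronic noise
print(round(lost_at_adc, 1), round(lost_to_read_noise, 1))  # 0.4 0.8
```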
You probably won't lose ALL of that 1.2 stops to noise, but you'll lose most of it, especially if your electronic noise is as low as it is in the D800. The remainder, where you still have a very low signal that might be just electronic noise or might be actual signal information...well, you could never really know for sure which it was (at least in the case of a signal floor of 0db...in a Canon sensor that uses a bias offset, you probably could discern a fair bit of image detail that sits below the bias offset, effectively within the range of its FPN and HVBN). Even if you convert the output RAW to TIFF and scale that TIFF down to a quarter of its original size...you still aren't gaining back that information; it was digitized (i.e. hard-coded, permanently registered, whatever you want to call it) as either useful information representing your scene or non-useful information that might be noise or might be scene detail. (As a matter of fact, you're discarding a lot more information than you originally lost to noise if you scale an image down that much, and while your black point might move closer to zero, the pixels that constitute the lowest decibels of your signal won't be any more meaningful than they were before.)
Perhaps the argument just hasn't been made properly. It may be that dtaylor put it into better words than I, although I am pretty sure I've used the same terms and concepts he has in the past. Let me try to state it in different terms that might be more meaningful.
Dynamic Range as a simple score is generally meaningless. DR that might be gained in the process of downscaling an image, at least to me, still feels rather meaningless...I understand what you (elflord) are saying when you state "you can still gain at 5db, 10db, etc."...but that applies to the general noise caused by the random physical nature of light, at all levels, in all cameras. (Read noise, however, exists only in the shadows and exhibits differently than photon noise, so the simplistic averaging rules that work for photon noise may not apply to less random forms of electronic noise.)
As a tool to gauge how much detail you WILL NOT LOSE to ELECTRONIC NOISE (vs. photon shot noise) if you expose a scene of known dynamic range with a sensor of known dynamic range, assuming you expose to maximize the retention of detail from the deep shadows to the brightest highlights...I believe such a DR measurement is very meaningful. That is what I refer to as Hardware DR, or what DXO calls Screen DR. It may be more accurately termed ADC DR, since it is really the ADC that imposes the limiting factor. Hardware DR tells you that even though your computer screen can only display 8 stops worth of DR when rendering your photos, if you exposed properly, you could push about 5.2 additional stops worth of useful detail-retaining information (assuming a D800), and "recover" information that otherwise might simply look like pure black or pure white on your screen.
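That headroom claim reduces to simple arithmetic:

```python
# ~8 stops displayable on screen vs. ~13.2 stops captured (D800 Screen DR).
sensor_dr = 13.2
display_dr = 8.0
headroom = sensor_dr - display_dr
print(round(headroom, 1))  # 5.2 stops available to push or pull in post
```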
Either way...once your otherwise fluid and easily redistributable image signal on the sensor hits the ADC, the non-discrete or effectively "analog" signal is quantized, potentially "recoverable" detail (i.e. recoverable had you changed exposure via shutter speed or aperture) well below the noise floor is permanently lost as the electronic noise in each pixel is permanently recorded, and no amount of post-processing will recover what you lost (although if what you call a gain in DR is simply making black pixels blacker and/or white pixels whiter, regardless of whether doing so actually increases the amount of meaningful information those pixels contain...I guess that's something...)
Well, either you understand that, or you don't. Either way, these conversations (in multiple threads) have gotten well out of hand, and I don't want to keep contributing to that. So I'm out.