Nope, it's an oversimplified example to illustrate a simple point: when you downscale an image you gain additional information per pixel in the downscaled image (assuming you use a sensible downscaling algorithm). Each pixel in the downscaled image uses information from multiple pixels in the original. In practice it is trivial to observe: take a somewhat noisy image and downscale it. Which is more noisy, and thus has less dynamic range: a pixel in the original image or a pixel in the downscaled image? Obviously you don't gain editing latitude for the image as a whole (you lose it), but each pixel individually gains DR.
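If you want to see it without even opening an editor, here's a minimal sketch (numpy only; the flat gray scene, sizes, and noise level are made-up illustration values, not real sensor data):

```python
# A minimal sketch of the downscaling argument, using a synthetic
# grayscale "image" (sizes and noise level are arbitrary choices).
import numpy as np

rng = np.random.default_rng(0)

# A flat mid-gray scene plus Gaussian read noise.
true_level = 0.5
noisy = true_level + rng.normal(0.0, 0.05, size=(512, 512))

# 2x2 box downscale: each output pixel averages four input pixels.
down = noisy.reshape(256, 2, 256, 2).mean(axis=(1, 3))

# Averaging 4 samples cuts the noise standard deviation by sqrt(4) = 2,
# i.e. each downscaled pixel individually has ~6 dB better SNR.
print("per-pixel noise, original:  ", noisy.std())   # ~0.050
print("per-pixel noise, downscaled:", down.std())    # ~0.025
```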
Gains DR to a point. There's a ceiling, and that ceiling is the maximum DR at capture. Sorry, but your suggestion that combining four 14-bit pixels at capture yields 16 bits of real data is ludicrous. But you're probably making dilbert happy as you waft the stench of DxO's BS (aka Biased Scores) around the forums. Nothing to be proud of, IMO.
It sounds like you're dismissing it out of hand when you really should be saying something else.
I'm sure you can remember back when (are you old enough?) early digital scientific instrumentation often made use of (noise) dithering to improve dynamic range and reduce noise.
I don't care to get into all the math, so I don't know just HOW much effect it can have, but it does work.
In the case of a camera sensor, averaging 4 pixels is gonna buy some extra DR at the expense of spatial resolution.
Read about it here, for those interested:
EDIT: adding quote from above referenced article to keep things interesting:
Digital Averaging Increases Resolution and Reduces Noise
The effects of input-referred noise can be reduced by digital averaging. Consider a 16-bit ADC which has 15 noise-free bits at a sampling rate of 100 kSPS. Averaging two measurements of an unchanging signal for each output sample reduces the effective sampling rate to 50 kSPS—and increases the SNR by 3 dB and the number of noise-free bits to 15.5. Averaging four measurements per output sample reduces the sampling rate to 25 kSPS—and increases the SNR by 6 dB and the number of noise-free bits to 16.
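If anyone wants to sanity-check that arithmetic, a quick simulation does it (numpy only; the signal level and noise sigma here are made-up illustration values, not the article's actual ADC):

```python
# Numerical check of the quoted ADC arithmetic: averaging N samples of
# an unchanging signal drops random noise by sqrt(N), which is 3 dB of
# SNR (half a "noise-free bit") per doubling.
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0                       # input-referred noise, in LSBs
samples = rng.normal(100.0, sigma, size=1_000_000)

for n in (1, 2, 4):
    avg = samples.reshape(-1, n).mean(axis=1)
    gain_db = 20 * np.log10(sigma / avg.std())
    print(f"avg of {n}: noise = {avg.std():.3f} LSB, "
          f"SNR gain = {gain_db:.1f} dB (~{gain_db / 6.02:.2f} bits)")
# Expected: ~0 dB, ~3 dB (+0.5 bit), ~6 dB (+1 bit), matching the quote.
```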
Yes, I've played with digital averaging and dithering (both before and – with much less consternation – after MATLAB). My $300 EOS M does multishot noise reduction. Woot.
Dismissing out of hand? No. Simply defining boundaries beyond which the logic breaks down. I notice your pasted example indicates signal processing to achieve 16-bits of real data from a 16-bit ADC. Would you care to provide an example demonstrating signal processing which delivers more than the bit depth of real data initially acquired (as in msm's suggestion of combining 14-bit pixels to achieve data with true 16-bit depth)?
Canon's only got 12 bits of data on a 14-bit conversion.
So you're telling me those last 2 bits of noise are a FEATURE by implementing noise dithering?
If only the noise were purely random, it could qualify.
They are getting closer to that.
In fact, it's entirely likely that Canon did go to a 14b ADC back then so that noise dither would improve tonal transitions; something I remember being stated in a promo for the 40D in its day. Although "noise" itself was not mentioned: they don't want people getting negative impressions or customer confusion from that word!
Back in the 80s, I used to get annoyed by articles from math geeks telling us designers how to get an extra half or full bit (or more?) of effective resolution from a 12-bit ADC by dithering and processing. Those were the early days of digital processing.
I never paid much attention because, for the instruments I was building, I could get ADCs with enough precision and resolution without having to resort to any DSP for a result.
If I needed 16 bits, I'd buy 18- or 20-bit converters. I didn't have to worry about cost cutting for consumer-scale production.
If noise was an issue, filtering and averaging were easily done with simple software routines the end users preferred to have control of anyway.
So, yes, it's possible to get more bits of data than you have available bits of an ADC.
The process is similar whether you have a 16b ADC with 2 LSB worth of noise or a 14b ADC you dither and oversample.
So, I think there are already examples showing that the effective resolution of a 14b ADC can potentially exceed 14 bits; there's a sketch of the trick after this post.
Again, plenty of astute signal processing math geeks here can explain how that works.
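For instance, here's a rough sketch of the dither-and-oversample trick (numpy only, assuming an idealized mid-tread quantizer and Gaussian dither; the 14-bit depth and the signal value are arbitrary illustration choices):

```python
# A quantizer stuck between codes returns the same value forever, but
# adding ~1 LSB of noise before quantizing and averaging many reads
# recovers the fractional value, i.e. real data below one ADC bit.
import numpy as np

rng = np.random.default_rng(2)

def adc(x, bits=14):
    """Ideal mid-tread quantizer: round to the nearest integer code."""
    return np.clip(np.round(x), 0, 2**bits - 1)

true_signal = 1000.37             # sits between codes 1000 and 1001

# Without dither: every conversion lands on the same code.
print("no dither:", adc(np.full(4096, true_signal)).mean())   # 1000.0

# With ~1 LSB of Gaussian dither, the code toggles between 1000 and
# 1001 in proportion to the fraction, and the average converges on it.
dithered = adc(true_signal + rng.normal(0.0, 1.0, size=4096))
print("dithered: ", dithered.mean())                          # ~1000.37
```

The single undithered conversion can never express that .37 fraction no matter how many times you repeat it; the dithered average can, which is the whole point.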
Jrista's astrophotography examples, pulling nebula detail from stacks of dozens or hundreds of images, are a good example of how to extract more bits of info from limited ADCs and noise, which equates to a terrific amount of DR and effective bits of conversion.
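A toy version of that stacking idea, for the curious (numpy only; the "nebula" is just a faint constant offset, and the frame count and noise level are invented for illustration):

```python
# Averaging many aligned frames pulls a faint feature out of noise
# that swamps any single frame.
import numpy as np

rng = np.random.default_rng(3)

sky = 10.0                         # background level
nebula = 0.2                       # faint detail, far below the noise
noise_sigma = 2.0                  # per-frame read/shot noise

frames = sky + nebula + rng.normal(0.0, noise_sigma, size=(400, 64, 64))

single = frames[0]
stack = frames.mean(axis=0)        # 400 frames -> noise / sqrt(400)

print("single frame: signal", nebula, "vs noise", single.std())  # ~2.0
print("stacked:      signal", nebula, "vs noise", stack.std())   # ~0.1
```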