Dynamic range is the number of stops between the saturation point and the black point. The saturation point is where highlights get "burned out"; downsampling doesn't change it. The black point is the point at which the SNR is 0 dB (that is, the signal-to-noise ratio is 1). Downsampling reduces noise, so the SNR at what used to be the black point goes up (to 5 dB, for example), and the new black point after downsampling is some way below that.
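As a quick numeric sanity check (my sketch, not from the post above), here is how averaging a 2x2 block lifts the SNR at the old black point, assuming uncorrelated noise:

```python
import math

# At the old black point the signal equals the noise: SNR = 1, i.e. 0 dB.
signal, noise = 1.0, 1.0
snr_db_before = 20 * math.log10(signal / noise)  # 0.0 dB

# Averaging an N-pixel block keeps the signal but, for uncorrelated noise,
# cuts the noise by a factor of sqrt(N).
n = 4
snr_db_after = 20 * math.log10(signal / (noise / math.sqrt(n)))

print(snr_db_before, round(snr_db_after, 1))
```

With a 2x2 block the gain is 20*log10(2), about 6 dB; the "5 dB" above is just an illustrative figure.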

I didn't quite catch this. Could you expand your answer on my example? Here's how I understand downsampling:

Assume that we have 4 neighboring pixels that in reality represent 4 black squares, but in our RAW file (due to noise) they have the following brightness levels:

1 0
0 3

So, when I downsample I get 1 pixel with brightness level 1 (for example, if I downsample using the formula (1+0+0+3)/4).

It means that before downsampling we were looking at an image with an average noise level of 1, and the same is true after downsampling.

So what am I doing wrong compared to all of you who see the noise level reduced after downsampling?

P.S. Sorry for asking silly questions, I'd just like to understand.

There are a couple of issues with the above example. First, the right measure of error is RMS (root mean square), not the arithmetic mean: if you take two measurements, +1 and -1, of a variable whose true value is 0, the "error" is certainly not 0.

In your example, the RMS error is sqrt( (3^2 + 1^2 + 0^2 + 0^2)/4 ) = sqrt(2.5) ≈ 1.6.

After you downsample, you get an RMS error of 1 -- that is, sqrt( (1-0)^2 / 1 ).
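The two RMS figures can be checked in a few lines (a sketch; the pixel values are the ones from the question, with a true value of 0):

```python
import math

# 2x2 block of "black" pixels (true value 0) from the example
pixels = [1, 0, 0, 3]

# RMS error before downsampling
rms_before = math.sqrt(sum(p ** 2 for p in pixels) / len(pixels))

# Downsample by averaging, then take the RMS error of the single pixel left
downsampled = sum(pixels) / len(pixels)
rms_after = math.sqrt((downsampled - 0) ** 2 / 1)

print(round(rms_before, 2), rms_after)  # 1.58 1.0
```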

Second, the example is a bit unusual because you generally don't expect the error terms to all have the same sign; when you average the signal, the positive and negative errors cancel each other out.

Suppose (for example) a "true" value of 10 and readings 6, 3, 9, 17 (which I just randomly sampled from a normal distribution with mean 10 and standard deviation 5). Averaging these gives 8.75 (an error of 1.25), whereas before we had an RMS error of about 5 --

sqrt[ ((10-6)^2 + (10-3)^2 + (10-9)^2 + (10-17)^2)/4 ] = sqrt(28.75) ≈ 5.4.
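The same arithmetic for those four readings, spelled out (exact values, where the post rounds to whole numbers):

```python
import math

true_value = 10
readings = [6, 3, 9, 17]

# RMS error of the individual readings
rms_before = math.sqrt(sum((true_value - r) ** 2 for r in readings) / len(readings))

# Error of the single averaged value
average = sum(readings) / len(readings)
error_after = abs(true_value - average)

print(round(rms_before, 2), average, error_after)  # 5.36 8.75 1.25
```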

In general, the expected value of our error drops by a factor of sqrt(N) -- so in my above example, the expected error in the beginning is the standard deviation of our distribution, 5. After we average four pixels, our expected error is 2.5 (5/sqrt(4)).
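A quick Monte Carlo check of the sqrt(N) rule (a sketch with an assumed seed and trial count, not part of the argument above):

```python
import math
import random

random.seed(1)
true_value, sigma, block, trials = 10.0, 5.0, 4, 100_000

# Repeatedly average a block of noisy readings and record the squared error
sq_errors = []
for _ in range(trials):
    readings = [random.gauss(true_value, sigma) for _ in range(block)]
    average = sum(readings) / block
    sq_errors.append((average - true_value) ** 2)

rms_error = math.sqrt(sum(sq_errors) / trials)
print(round(rms_error, 2))  # close to sigma / sqrt(4) = 2.5
```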