neuroanatomist said:
I get the normalize to compare idea. But if you take a 'sensor' with 5 stops of DR (Velvia was being discussed), and take a picture of a scene with 15 stops of DR, you've lost 10 stops of DR at capture - you have parts of the scene with saturated data, and parts with zero data outside of noise/gain - clipped highlights and blocked shadows. Same idea if you have a sensor with 13 stops of DR and a 15 stop scene - you've lost 2 stops of information. Now, when you downsample that 130 MP drum scan or that 36 MP image file to 22 MP, do you get the data from the blocked shadows and clipped highlights back?
If you buy a stalk of celery at the grocery store and cut it into 5 pieces, you can later cut it into 20 pieces...but that won't get you the celery root or the leafy greens for your stock.
Actually, the data is not completely lost like it is in your celery example.
Consider a set of data divided into three parts. The first part has a (normalized) value of 1. The second part is 11 stops below that, but not exactly zero. The third part has a value of exactly zero. Put some random noise (e.g. photon noise or read noise) on top of that. Then "digitize" the result to 10 stops: in other words, truncate the result so you have a 10-bit number for each pixel.
If you average over enough pixels, you will find that you can distinguish the low-signal part from the part that is exactly zero, because their mean values will be different. The data is not in any one pixel; statistically, it is spread across many pixels.
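Here is a minimal sketch of that thought experiment in Python (the pixel count, noise level, and random seed are my own assumptions, chosen just for illustration):

[code]
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # pixels per part (assumed; more pixels = cleaner statistics)

# Three parts of the (normalized) signal: full scale, 11 stops down, exactly zero
parts = {
    "full scale":    np.full(n, 1.0),
    "11 stops down": np.full(n, 2.0 ** -11),  # ~0.00049, below one 10-bit step
    "exactly zero":  np.zeros(n),
}

sigma = 1.0 / 1023  # assumed noise std of about one 10-bit step
for name, signal in parts.items():
    noisy = signal + rng.normal(0.0, sigma, n)     # add random noise
    dn = np.clip(np.floor(noisy * 1023), 0, 1023)  # truncate to 10-bit values
    print(f"{name:>14}: mean = {dn.mean():.3f} DN")
[/code]

Any single pixel in the two dark parts is just a 0, 1, or maybe 2, yet the printed means come out clearly different - which is exactly the statistical distinction described above.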
When you downsample, you are averaging, and that averaging helps recover that statistical information. If the noise is random and obeys ergodicity, your signal-to-noise ratio improves as the square root of the number of pixels averaged during downsampling [see signal averaging]. Average 4 pixels, and you gain a factor of two in signal-to-noise, i.e. one bit of extra information.
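A quick way to see that square-root behaviour is to average a noisy flat patch in 2x2 blocks and compare the noise before and after (again a sketch; the patch size, signal level, and Gaussian noise model are assumptions for illustration):

[code]
import numpy as np

rng = np.random.default_rng(1)

# A flat mid-grey patch with random Gaussian noise (assumed for simplicity)
h = w = 1024
signal, sigma = 0.5, 0.01
img = signal + rng.normal(0.0, sigma, (h, w))

# Downsample by averaging each 2x2 block into one output pixel (4 pixels averaged)
small = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(f"SNR before: {signal / img.std():.1f}")
print(f"SNR after : {signal / small.std():.1f}")  # ~2x higher, i.e. ~1 extra bit
[/code]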
Keep in mind that the total information in the image does not change. In this case, you are trading bit depth for spatial resolution. If you know your 16 megapixel image has only one bright and one dark region, your knowledge of those bright and dark values can be much more accurate than if you require each pixel in your original image to be a potentially different value. It is the space-bandwidth product that matters. [For a very technical discussion, the following paywall-restricted paper is one example.]
In short, if you take a 16 megapixel image with 10 bits per pixel, you can downsample that to a 4 megapixel image with approximately* 11 bits of information per pixel.
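The bookkeeping behind that number is just the square-root rule above written in bits (a rough estimate, subject to the caveats in the footnote):

[code]
import math

# Averaging N input pixels per output pixel improves SNR by sqrt(N),
# which is worth 0.5 * log2(N) extra bits of information.
megapixels_in, megapixels_out, bits_in = 16, 4, 10
n_averaged = megapixels_in / megapixels_out   # 4 input pixels per output pixel
extra_bits = 0.5 * math.log2(n_averaged)      # 1.0 bit
print(f"approximately {bits_in + extra_bits:.0f} bits per output pixel")
[/code]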
[edit]
Just so there is no confusion later, the extra bit in the above example will come from improved shadows. You can't do much about clipped highlights, but there is more information in those noisy shadows than you might think.
[/edit]
---
* It is an approximation because the exact gain depends on the noise and signal properties. In most cases of interest to this group, it is a reasonable approximation. It also assumes your downsampling algorithm uses more than 11 bits to represent each intermediate value.