sarangiman said:
Quote from: helpful on May 04, 2012, 05:39:39 PM
A really high DR better than the 5D3's doesn't really help. As I have explained in previous posts, a lower-DR camera actually stores more data and detail from a scene than a camera with high DR. Ideally the dynamic range would match the scene's DR. Canon's DR probably fits more scenes better than Nikon's. If the dynamic range is higher than the dynamic range intrinsic to the scene, then it actually makes the picture worse (fewer fine variations in the detail of recorded luminosity).
This is false. Increasing the sensor DR will either give you more headroom before clipping or less noise in the shadows. In both cases, you gain information (for some scenes there might be no data to record there, but you still lose nothing).
In a low dynamic range image (like a frame filled with nothing but green grass), the histogram of a high-DR camera like the 5D3 or the D800 shows nothing but a thin peak of recorded data. This means lots of detail is being lost because not all 14 bits are being used.
This is false. Most cameras are noise-limited, not quantizer-level limited. This means that once the signal reaches the ADC, there are (at most) 14 bits of information from the saturation level down to the noise floor.
Well done clearing up that misinformation, hjulenissen
There are some mistakes in your comments, or perhaps you only read my first sentence, which was not clear outside its original context.
* You don't gain information when DR is increased; you exchange one type of information for another. Unless more bits of data are recorded, increasing DR involves a tradeoff.
I did make one mistake. At the end of the sentence "A really high DR better than the 5D3 doesn't really help," I should have quoted what I was referring to: the case under discussion, where the scene does not contain more stops of dynamic range than the 5D3 already provides.
Increasing the DR of the recorded image does lose data for a scene that does not span that large a dynamic range. This is a mathematical fact; it follows from the pigeonhole principle. A RAW file that spans 14 stops of DR cannot contain as much information in each stop as a RAW file that spans only 12 stops: both files contain 14 bits per color channel, and those extra two stops cannot be stored without losing data somewhere. The data is lost because slightly different colors or intensities are "rounded off" to the same value in order to achieve the higher DR. The sketch below makes this concrete.
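To illustrate, here is a minimal sketch assuming an idealized, noise-free, linear 14-bit encoding (the bit depth and stop counts come from this discussion; everything else is my own toy model). It counts how many distinct code values land in each stop below saturation:

```python
# Minimal sketch: distinct codes available per stop in an idealized,
# noise-free, linear 14-bit encoding. Stop 1 is the brightest stop
# below saturation.
BITS = 14
LEVELS = 2 ** BITS  # 16384 distinct code values

for stop in range(1, 15):
    upper = LEVELS / 2 ** (stop - 1)  # code at the top of this stop
    lower = LEVELS / 2 ** stop        # code at the bottom of this stop
    print(f"stop {stop:2d}: ~{upper - lower:7.0f} codes")
```

Stop 1 gets 8192 codes while stop 14 gets a single code, so mapping a wider scene range into the same 14 bits pushes more of the scene into the code-starved deep stops.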
I would like to point out that you actually touched on the truth but didn't grasp it. At the end of your remark, you said, "for some scenes there might be no data to record there, still you lose nothing." If there is "no data to record there," then nothing is recorded there. You may think, "Well, a pixel IS recorded," and so there is no loss. But something is lost: that pixel could have been recorded with a gamma value closer to one (depending on whether encoding or decoding is meant, it could be less than one or greater than one) and stored more tonality and detail. The sketch below shows how the gamma choice reallocates tonal precision.
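As a rough illustration of how the gamma choice redistributes tonality (a sketch under my own assumptions: a 14-bit encoder, a 14-stop signal, and a 1/2.2 gamma chosen only because it is familiar), this counts distinct output codes per stop for a linear versus a gamma-encoded signal:

```python
import numpy as np

BITS, STOPS = 14, 14
LEVELS = 2 ** BITS

# Normalized scene intensities spanning STOPS stops below saturation.
x = np.logspace(-STOPS, 0, 2_000_000, base=2.0)
stops = np.floor(-np.log2(x)).astype(int)  # 0 = brightest stop

for name, encoded in [("linear     ", x), ("gamma 1/2.2", x ** (1 / 2.2))]:
    codes = np.round(encoded * (LEVELS - 1)).astype(np.int64)
    per_stop = [np.unique(codes[stops == s]).size for s in range(STOPS)]
    print(name, per_stop)
```

The linear encoding spends roughly half of all codes on the single brightest stop, while the gamma curve moves codes into the shadows; that reallocation is the tonality being discussed here.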
I am not speaking from mathematical logic alone, but from my knowledge, training, and education. I have a Ph.D. in mathematics, and my field is inverse problems and mathematical imaging. I have multiple published articles on the subject, and the latest is pending publication. These are in top journals, the cheapest of which costs almost £1,500 for a yearly subscription.
I am also not just a theoretical textbook person. I just got back from spending the entire day photographing.
My purpose is to be helpful through my knowledge, for free and anonymously. You can take it or leave it.
* The last sentence is almost unbelievably painful to read:
"Most cameras are noise-limited, not quantizer level limited. This means that once the signal reach the ADC, there is (at most) 14 bits of information from the saturation level and down to the noise floor."
That sentence amounts to worshiping RAW and declaring 14 bits the be-all and end-all, the most data that could possibly be contained in an image. If that were true, it wouldn't even be possible to change the ISO level on the camera, because the camera would already be recording everything that could be recorded. ISO applies analog gain ahead of the ADC, so the same 14 bits capture a different slice of the scene depending on that setting; the sketch below illustrates this.
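A minimal sketch of the ISO point, using made-up numbers (the 60,000-electron full-well capacity, the gain values, and the digitize helper are all my own illustrative assumptions):

```python
import numpy as np

BITS = 14
FULL_WELL = 60000.0  # hypothetical full-well capacity, in electrons

def digitize(signal_e, gain):
    """Quantize an electron count to a 14-bit code after analog gain."""
    code = np.round(signal_e * gain * (2 ** BITS - 1) / FULL_WELL)
    return np.clip(code, 0, 2 ** BITS - 1).astype(int)

scene = np.array([10.0, 100.0, 1000.0, 10000.0, 60000.0])  # electrons
for gain in (1.0, 4.0, 16.0):  # e.g., base ISO, +2 stops, +4 stops
    print(f"gain {gain:4.0f}x:", digitize(scene, gain))
```

Higher gain resolves the dim tones with more codes but clips the bright ones: the 14-bit window moves with the settings, so the 14 bits recorded at any one setting were never "everything."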
There is a compromise in everything. Isaac Newton made the mistake (trusting third-party observations over his own vision) of thinking there was no diffraction, and hence he wrongly believed that an infinitely small aperture would be perfect. Likewise, a high dynamic range does not come without trade-offs, unless a higher-quality image format is introduced to store the additional stops of data. Personally, I am an advocate of 48-bit color (3x 16-bit RGB) plus a 16-bit logarithmic luminosity channel, like Sony's new RGB+W encoding except with more bits, so that each pixel fits the current processing standard of 64-bit "words" (i.e., chunks of data). A sketch of such a packing follows.
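To make the proposal concrete, here is a sketch of one possible packing (the pack_rgbl helper, the 16-stop luminance range, and the field order are my own hypothetical choices, not any existing standard):

```python
import math

def pack_rgbl(r, g, b, luminance):
    """Pack 16-bit R, G, B plus a 16-bit log2-encoded luminance
    channel into a single 64-bit word (hypothetical layout)."""
    # Map luminance logarithmically onto 16 bits, covering 0..16 stops.
    stops = max(0.0, math.log2(max(luminance, 1e-9)))
    lum16 = min(65535, round(stops / 16 * 65535))
    return (r << 48) | (g << 32) | (b << 16) | lum16

word = pack_rgbl(r=40000, g=52000, b=31000, luminance=1200.0)
print(hex(word))  # one 64-bit word per pixel
```

The log mapping spreads the luminance channel's 16 bits evenly across stops rather than concentrating them near saturation, which is what would make it useful for extending DR alongside the RGB channels.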
We can think of these encodings all day long, but no one has control of the market, and no one knows what will be successful.