neuroanatomist said: As a working pro stated earlier, shadows are important for the art... and shadows are supposed to be dark.
This is a loaded topic. You could argue that *because* today's monitors/output devices have fairly small DR (~10 stops for the better monitors), you shouldn't try to pack more than 10 stops' worth of DR into an image. This presupposes you want to maintain a linear relationship between tones - the same relationship they had in the real world. So is that what you want to do, or do you want to maintain the global DR your eye-brain system experienced in the real world for a particular scene? Or something in between?
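To make that concrete, here's a minimal sketch (Python/NumPy) of the two extremes - keep the real-world linear relationship and let the deepest tones fall to display black, or rescale everything in EV space so all of the captured DR survives at reduced global contrast. The 14-stop capture and 10-stop display figures are just illustrative assumptions.

```python
# Minimal sketch, not a rendering pipeline: scene-linear values normalized so
# 1.0 = scene white. SCENE_STOPS and DISPLAY_STOPS are illustrative assumptions.
import numpy as np

SCENE_STOPS = 14.0    # DR captured (assumed)
DISPLAY_STOPS = 10.0  # DR the output device can show (per the figure above)

def keep_linear(x):
    """Preserve real-world tone relationships; anything more than
    DISPLAY_STOPS below white just clips to display black."""
    return np.clip(x, 2.0 ** -DISPLAY_STOPS, 1.0)

def compress_to_display(x):
    """Rescale all SCENE_STOPS into DISPLAY_STOPS in log2 (EV) space:
    nothing is lost, but global contrast drops."""
    ev = np.log2(np.clip(x, 2.0 ** -SCENE_STOPS, 1.0))   # 0 .. -14 EV
    return 2.0 ** (ev * (DISPLAY_STOPS / SCENE_STOPS))    # 0 .. -10 EV

tones = 2.0 ** -np.array([0.0, 3.0, 8.0, 12.0, 14.0])    # EV below white
print("linear    :", keep_linear(tones))
print("compressed:", compress_to_display(tones))
```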
It all depends on what you want to emphasize. If you want to emphasize light vs. shadow, you might even decrease the DR of what you captured by darkening darks and brightening brights past the relationship they had in the real world. Or you might do that for some scene elements while retaining more global DR.
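That opposite move is just as simple in principle: a gain greater than 1 around some pivot in EV space pushes darks down and brights up past their real-world relationship. A minimal sketch, with the gain and pivot as arbitrary illustrative choices:

```python
# Minimal sketch of increasing contrast past the real-world relationship.
# gain and pivot_ev are arbitrary illustrative choices, not recommendations.
import numpy as np

def expand_contrast(x, gain=1.5, pivot_ev=-4.0):
    """Scale each tone's distance from the pivot (in EV) by `gain`:
    shadows get darker and highlights brighter than they really were."""
    ev = np.log2(np.clip(x, 1e-6, 1.0))
    return np.clip(2.0 ** (pivot_ev + (ev - pivot_ev) * gain), 0.0, 1.0)

tones = 2.0 ** -np.array([1.0, 3.0, 5.0, 7.0])  # EV below white
print(expand_contrast(tones))
```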
The point is that when your sensor introduces little to no noise over your image data, you have the freedom to do whatever you want. You even have the option of one day going back to your high DR Raw files so you can reprocess them for that new HDR display that actually displays 18 EV of DR (the motion picture industry, for example, is very interested in HDR displays). An HDR display that doesn't just give you blacker blacks, but actually gives you brighter whites as well. Ever wonder why Velvia looked so beautiful on a lightbox? B/c it had 11-13 EV of output DR (though only ~5-6 EV of scene DR), with a white point on your typical lightbox 5-6x brighter than your digital LCD monitor with its brightness maxed out. It actually expanded contrast - possibly getting closer to maintaining the absolute brightness differences between objects in the real world in the process... but I digress.
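For what it's worth, the back-of-the-envelope math behind that last claim, using the rough figures above (recollections, not measurements):

```python
# Rough arithmetic only - the EV figures and brightness ratio are the
# approximate numbers quoted above, not measurements.
import math

scene_dr_ev     = 5.5   # ~5-6 EV of scene DR the slide could hold
output_dr_ev    = 12.0  # ~11-13 EV of output DR on the lightbox
lightbox_vs_lcd = 5.0   # lightbox white point ~5-6x a maxed-out LCD

print("contrast expansion slope:", round(output_dr_ev / scene_dr_ev, 1))         # ~2.2x in log space
print("extra white-point headroom (EV):", round(math.log2(lightbox_vs_lcd), 1))  # ~2.3 EV brighter whites
```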
So when you say that shadows are supposed to be dark... that can really open up a whole can of worms. For example, if you shoot a sunset and expose to preserve the orange/red tones in the sky, the cityscape buildings in the shot might be completely black when you view it on your monitor - but they were perfectly visible when your eyes saw the sunset. These 'shadows' weren't exactly 'shadows' in the real world, yet they're 'shadows' now on your monitor b/c your entire monitor's brightness scale is much smaller than, and on the lower end of, the brightness range we experience in the real world. So are they really shadows, or are they just shadows b/c of your exposure decision & your imaging hardware? You might decide they're not 'shadows' at all - that they should be darker midtones, say. Low read noise will let you pull those 'shadows' up to darker midtones, then assign some other, even darker tones to 'shadows' so that your final image still has good output DR/contrast.
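A minimal sketch of that re-assignment: deep tones get lifted toward dark midtones (which a low-read-noise file tolerates), and a new, lower black point keeps the output looking contrasty. The knee, slope, and black point are arbitrary illustrative numbers.

```python
# Minimal sketch: lift the 'false shadows' toward dark midtones, then
# re-anchor the black point so the final image still has genuinely dark
# shadows. Knee/slope/black point are arbitrary illustrative numbers.
import numpy as np

def lift_shadows(x, knee_ev=-6.0, shadow_slope=0.5, black_ev=-10.0):
    """Piecewise tone curve in log2 (EV) space: identity above the knee,
    gentler slope below it (raising deep tones), then a hard black point."""
    ev = np.log2(np.clip(x, 1e-6, 1.0))
    ev_out = np.where(ev >= knee_ev, ev, knee_ev + (ev - knee_ev) * shadow_slope)
    ev_out = np.maximum(ev_out, black_ev)  # the new 'shadows' start here
    return 2.0 ** ev_out

tones = 2.0 ** -np.array([2.0, 6.0, 9.0, 12.0, 16.0])  # EV below white
print(lift_shadows(tones))
```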
Look at Ryan Dyar's or Marc Adamus' landscapes for gorgeous examples of capturing a ton of global DR while still having shadows/dark tones in the image that give the impression of high contrast despite low global contrast (i.e., high DR). That sounds counterintuitive, but they pull it off beautifully. And they effectively have to 'tone-map' for our LDR output devices & prints. I can only imagine how much more stunning they'd be on higher DR monitors - and of course that'd likely require reprocessing entirely from the Raw (b/c the tone-mapping would have to be different). Or not - maybe our eye-brain system does enough 'filling in the gaps' that we'd only really appreciate HDR display of such content side-by-side with an LDR monitor.
Anyway, my point here is that how you define 'shadows' itself is flexible. I can tell you one thing shadows *aren't* supposed to have: FPN & read noise.