jrista said:
9VIII said:
neuroanatomist said:
9VIII said:
Go to your live view settings; you get to choose between exposure simulation and stills display. In exposure simulation it will only show you what the exposure will look like; in stills display it adjusts for lighting just like your eyes do.
I know, but which setting increases the DR of the sensor to the ~20 stops my eye can see through an OVF?
You only see 10-14 stops at any given time. That 20 stops of dynamic range is a post-processed HDR image combining multiple exposures.
This is what you're misunderstanding. The mechanics of how we see 20 stops don't matter...we SEE 20 stops! We see what our brains tell us we see, not what our retinas sense. As a human being, I am not individually seeing 14-stop frames from my eyes...I am SEEING the nearly 20-stop, HDR-like post-processed image that my brain produces.
You can't break human vision down into mechanical steps and claim that, because our eyes, which take an "exposure" every 1/500th of a second, are only capable of discerning about 14 stops of dynamic range in each of those 1/500th-second frames, our VISION is limited to 14 stops. Vision in the human brain isn't really even HDR. It is more like a rolling exposure stack: fresh full-detail frames flow in while stale, old frames fade. It's like an astrophotography calibration, stacking, and stretching process all rolled into a biological process that occurs hundreds of times per second. We see ~20 stops because we see what ends up in our visual cortex, and that is AFTER all the processing.

Our total dynamic range is over 24 stops, because our retinal sensitivity adjusts over a period of time as we move from dim environments to bright environments. Our eyes can become dark adjusted, but then they are overly sensitive to brighter light, therefore "clipping" it. Our eyes can become bright adjusted, yet that limits our ability to see the same kind of detail in the dark as we did when we were dark adjusted. When dark adjusted, our momentary dynamic range is closer to 10 stops (in large part because our cones don't deliver sensory impulses until they have accumulated enough photons in a given time slice, so we lose a good portion of our total sensitivity per retinal area). When bright adjusted, our momentary dynamic range is closer to 20 stops, as both our rods and our cones are working at full capacity.
But the simple fact of the matter is...we don't see each of the 500 "frames" per second our eyes deliver to our brains...we SEE the HDR image our brains generate in our visual cortex.
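The arithmetic behind this claim can be sketched numerically. This is a toy model of my own, not anything from the thread, and the specific numbers (a 14-stop sensor, a 6-stop bracket) are illustrative assumptions: a single clipped frame covers only its own range, but stacking frames taken at different sensitivities covers the union of their ranges.

```python
# Toy sketch (assumption, not a model of vision): how stacking bracketed
# "exposures" extends dynamic range beyond any single frame's ~14 stops.

SENSOR_STOPS = 14                 # per-frame dynamic range (assumed)
FULL_WELL = 2 ** SENSOR_STOPS     # brightest level a frame can record
NOISE_FLOOR = 1.0                 # darkest usable level in a frame

def capture(scene_luminance, shift_stops):
    """Record one frame: scale the scene by the exposure shift, then
    clip anything outside the sensor's half-open usable range."""
    signal = scene_luminance * 2 ** shift_stops
    if signal >= FULL_WELL or signal < NOISE_FLOOR:
        return None               # clipped highlight or lost in noise
    return signal

def covered_stops(shifts):
    """Count integer scene stops recoverable by at least one frame."""
    count = 0
    for st in range(-10, 30):     # scan candidate scene brightness stops
        lum = 2.0 ** st
        if any(capture(lum, s) is not None for s in shifts):
            count += 1
    return count

print(covered_stops([0]))         # one frame: 14 stops
print(covered_stops([0, 6]))      # two frames 6 stops apart: 20 stops
```

The point of the sketch is only that a merged stack sees a wider range than any single frame in it, which is the mechanism being argued about.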
I read an article a while ago stating that our perception of motion is actually purely analogue: even if you have a display running at 1,000 Hz or higher, you're still going to see a choppy image as long as it's moving fast enough.
Display refresh rate seems to be another one of those things where you just have to pick a point of diminishing returns; in theory the number can never be high enough.
Unfortunately I haven't been able to find the article again, but all a person would have to do is wave around a light source pulsing at 1,000 Hz to see it first hand.
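A quick back-of-envelope (my own numbers, not the article's) shows why the waved-light test works: a light flashing at f Hz, swept across the visual field at w degrees per second, leaves flashes spaced w/f degrees apart, and once that spacing exceeds what the eye resolves, the motion breaks into discrete dots.

```python
# Back-of-envelope for the waved-light test. The sweep speed is an
# illustrative assumption; only the spacing formula w / f matters.
f_hz = 1000.0          # flash rate of the light source
sweep_deg_s = 500.0    # assumed angular speed of a fast hand wave
spacing_deg = sweep_deg_s / f_hz
print(f"flash spacing: {spacing_deg:.2f} degrees")  # 0.50 degrees
```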
jrista said:
When bright adjusted, our momentary dynamic range is closer to 20 stops, as both our rods and our cones are working at full capacity.

Do you have a source for that information?
After a few minutes on Wikipedia, the best quote I can find is (again) from one of those blog articles (the website makes it clear that it is not associated with Cambridge, but the author did get a PhD in chemical engineering there, which may or may not be as relevant as a PhD in psychology in this context).
http://www.cambridgeincolour.com/tutorials/dynamic-range.htm
"...because our eye's sensitivity and dynamic range actually change depending on brightness and contrast. Most estimate anywhere from 10-14 f-stops."
Back to the subject at hand.
If you sit there and stare at a light bulb, your eyes aren't going to pick up the detail in some dark space behind it unless you specifically stare into the dark space. The only difference is that you use your eyes by instinct, whereas pointing the camera at a different spot is a more intentional action.
In a lower dynamic range scenario I have to agree that you will naturally do all of that subconsciously, which still doesn't tell me why all this is such a big deal for getting the right exposure or composition in a picture.
As far as I can tell, this is one of the best examples of people rejecting new technology simply because of a lack of familiarity (the "dynamic range" argument specifically).
I don't actually want to promote the EVF as the be-all and end-all of viewfinders, but from a logical standpoint I cannot wrap my head around why so many people get so hung up about it.
You don't need pinpoint accuracy to tell if you're cutting the head off a bird, and while it would be nice to see an exact representation of the final image, as far as avoiding over- or under-exposure goes I don't see where the EVF fails, or that you're doing any less guessing with an OVF. I'm not going to say that either one is better or worse, all things considered.
If everyone would just say they prefer the "feel" of it, that would be great, but I guess the defensiveness comes from seeing an equally large group needlessly adopting new technology when there is nothing wrong with the old.
Except with manual focus lenses. My 5D2's OVF really is useless with a manual focus 85mm f/1.4. Yes, I know there is a different focusing screen you can install, and I'm kicking myself now for forgetting to get one with my last B&H order, but oddly enough that isn't even an option on many bodies.
neuroanatomist said:
9VIII said:
You only see 10-14 stops at any given time. That 20 stops of dynamic range is a post-processed HDR image combining multiple exposures.
LOL. Sorry, but have you heard the expression 'teaching your grandmother to suck eggs'? Undergrad and doctoral degrees in Neuroscience, close to a decade of teaching it, and over two decades of research in the field.
While psychophysical experiments have demonstrated the capability to perceive disruptive images (e.g. a black frame in a video sequence) with dwell times as short as 12-14 ms, normal vision in effect 'sums' a rolling period of ~100 ms, i.e. when you 'see' an image, it's a 7-shot stack. So...human vision IS, among other things, "...post-processed HDR video combining multiple exposures." We really do see that 20-stop range, due to a combination of slower optical mechanisms (the sphincter and dilator pupillae muscles of the iris) and much faster physiological mechanisms (ganglion cell tuning and attention processing in the visual cortex).
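The "7-shot stack" arithmetic can be sketched as a toy calculation. This is an illustrative assumption of my own (Gaussian per-frame noise), not physiology: averaging N noisy frames cuts random noise by the square root of N, which lowers the usable floor by log2(sqrt(N)) stops, roughly 1.4 extra stops for N = 7.

```python
import math
import random

# Toy model (assumption, not physiology): each ~14 ms "frame" carries
# Gaussian noise; a rolling ~100 ms window averages N = 7 of them.
random.seed(0)

N = 7                  # frames in the rolling window (~100 ms / ~14 ms)
signal = 0.5           # a dim level below the single-frame noise floor
sigma = 1.0            # per-frame noise (defines that floor)

frames = [signal + random.gauss(0, sigma) for _ in range(N)]
stacked = sum(frames) / N    # averaging keeps the signal, shrinks noise

# Noise after averaging N frames falls to sigma / sqrt(N), so the usable
# floor drops by log2(sqrt(N)) stops:
extra_stops = math.log2(math.sqrt(N))
print(f"extra usable range from a {N}-frame stack: {extra_stops:.1f} stops")
```

Stacking alone buys only a stop or two; in the argument above it is the combination with the slower adaptation mechanisms that gets the total to ~20 stops.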
Since we know your iris isn't closing and opening in fractions of a second like a camera iris, all the dynamic range we care about in this context comes from the receptors.
Neuro, I don't doubt you, but it's only prudent to ask for a source other than your own words.