Why isn't Canon working on DSLRs with higher dynamic range?

zlatko said:
DanG_UE said:
It seems that many people are interested in film in part because it captures light with a similar range to the human eye. Between that, the wide utilization of RAW, and just the general issue of needing HDR or some other technique to balance many scenes, why isn't Canon focusing some efforts on creating sensors able to capture at least closer to the 24 stops the human eye can see?

Film never had the dynamic range that you imagine. Film was very far from the human eye — and that was part of its charm. In any event, we don't know what Canon is working on.

Exactly. Where did this come from? Talk about seeing the past through rose-tinted spectacles!

So much rubbish about what the 'eye can see'. That's like saying your camera lens gives better DR than another. The brain sees, so you could say we see in a form of HDR, or multiple exposures, as someone pointed out, rather than one single exposure.

If you produced a picture with the same contrast as we 'see', it would be very flat and unappealing. Even the old artists added contrast in their paintings, often giving very dark, heavy shadows.

The Sony sensor is very good, but if you expose the Canon optimally the difference is generally academic in the vast majority of circumstances. However, if you have no understanding of exposure, the Exmor is better.
 
Upvote 0
LetTheRightLensIn said:
And yet that is not true at all, despite some claims. Just look at some scenes and take in both the bright and the dark parts at once; together they completely exceed what current sensors can capture.
I find plenty of scenes where an extra 2-3 stops would help a ton. These scenes can also be mapped pretty well onto current displays.

That's what I find. When I render a high-DR scene to a print, especially one displayed in subdued lighting, I have to compress the heck out of it, mostly by lifting the darker areas so that there's something to actually see rather than shadows too dark to bother looking at. That's not the case for every scene or print, of course, but being able to lift those dark areas a lot without FPN interfering is a really nice feature of the Sony sensors.
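To make the "lift the darker areas" step concrete, here is a minimal numpy sketch of a shadow-lift compression curve. The pivot value, lift amount, and gradient test image are illustrative assumptions, not anyone's actual print workflow.

```python
import numpy as np

def lift_shadows(linear, lift_stops=2.0, pivot=0.18):
    """Compress scene dynamic range for print by lifting values below a
    mid-grey pivot while leaving highlights mostly untouched.

    linear      -- image as linear-light floats in [0, 1]
    lift_stops  -- how many stops to brighten the deepest shadows
    pivot       -- linear value treated as mid-grey (18% here, an assumption)
    """
    gain = 2.0 ** lift_stops                      # maximum gain, applied near black
    # Blend factor: 1 in the deepest shadows, 0 at and above the pivot.
    w = np.clip(1.0 - linear / pivot, 0.0, 1.0)
    lifted = linear * (1.0 + (gain - 1.0) * w)    # stronger lift in darker areas
    return np.clip(lifted, 0.0, 1.0)

# Example: a gradient from deep shadow to highlight
img = np.linspace(0.0, 1.0, 11)
print(np.round(lift_shadows(img), 3))
```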


Sporgon said:
Exactly. Where did this come from? Talk about seeing the past through rose-tinted spectacles!

So much rubbish about what the 'eye can see'. That's like saying your camera lens gives better DR than another. The brain sees, so you could say we see in a form of HDR, or multiple exposures, as someone pointed out, rather than one single exposure.

If you produced a picture with the same contrast as we 'see', it would be very flat and unappealing. Even the old artists added contrast in their paintings, often giving very dark, heavy shadows.

The Sony sensor is very good, but if you expose the Canon optimally the difference is generally academic in the vast majority of circumstances. However, if you have no understanding of exposure, the Exmor is better.

I disagree with you, heartily! :)
When you look at a scene, you tend to look around it, and the rapid adjustments your eye makes allow you to see and interpret a wide natural DR.
If you don't map that effect into a large print, at least to some extent, then it's like staring at the brightest part and never really seeing the detail in the darker areas. So if your eyeballs don't move, go ahead and shoot and print that way.
I produce images for people with articulated eyeballs. :)
 
Upvote 0
Aglet said:
Sporgon said:
Exactly. Where did this come from? Talk about seeing the past through rose-tinted spectacles!

So much rubbish about what the 'eye can see'. That's like saying your camera lens gives better DR than another. The brain sees, so you could say we see in a form of HDR, or multiple exposures, as someone pointed out, rather than one single exposure.

If you produced a picture with the same contrast as we 'see', it would be very flat and unappealing. Even the old artists added contrast in their paintings, often giving very dark, heavy shadows.

The Sony sensor is very good, but if you expose the Canon optimally the difference is generally academic in the vast majority of circumstances. However, if you have no understanding of exposure, the Exmor is better.

I disagree with you, heartily! :)
When you look at a scene, you tend to look around it, and the rapid adjustments your eye makes allow you to see and interpret a wide natural DR.
If you don't map that effect into a large print, at least to some extent, then it's like staring at the brightest part and never really seeing the detail in the darker areas. So if your eyeballs don't move, go ahead and shoot and print that way.
I produce images for people with articulated eyeballs. :)

This is correct in one sense. The eye is constantly processing, and has a refresh rate of at least 500 frames per second at normal lighting levels (under low light it can be considerably slower, and under very bright light quite a bit faster). That high refresh rate, more so than the movement of the eye, is what's responsible for our high momentary DR. We can see a lot more than 14 stops of DR in any given second, but that's because our brains have effectively HDR-tonemapped ~500 individual frames. :P
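As a rough illustration of the idea that many fast, noisy "frames" combine into one cleaner, wider-ranging view, here is a toy Python sketch that merges simulated frames in linear space. The frame count, noise level, and scene values are made-up numbers chosen only to echo the figures quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

scene = np.array([0.001, 0.01, 0.1, 1.0])   # linear radiance, ~10 stops end to end
read_noise = 0.01                            # per-frame noise floor (arbitrary units)
n_frames = 500                               # roughly the "frame rate" figure quoted above

# One noisy frame: the darkest patch is buried in the noise floor.
one_frame = scene + rng.normal(0.0, read_noise, scene.shape)

# Averaging N frames cuts random noise by ~sqrt(N), pulling shadows out of the noise.
stack = scene + rng.normal(0.0, read_noise, (n_frames,) + scene.shape)
merged = stack.mean(axis=0)

print("single frame :", np.round(one_frame, 4))
print("500-frame avg:", np.round(merged, 4))
print("noise improvement ~", round(np.sqrt(n_frames), 1), "x")
```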

When it comes to print, you're not entirely correct. I've done plenty of printing. You have to be VERY careful when tweaking shadows: lift enough detail that they don't look completely blocked, but not so much that you lose the contrast. The amazing thing about our vision is that while we see a huge dynamic range, what we see is still RICH with contrast. In photography, when we lift shadows, we're compressing the original dynamic range of the image into a LOWER-contrast outcome. With a D800, while technically you can lift to your heart's content, doing so is not necessarily the best thing if your goal is to reproduce what your eyes saw. It's a balancing act between lifting the shadows enough to bring out some detail and not so much that you wash out the contrast.

Canon cameras certainly do have more banding noise. However, just because they have banding noise does not mean you have to print it. After lifting, you can run your print copies through one of the many denoising tools that now have debanding features. I use Topaz DeNoise 5 and Nik Dfine 2 myself. Both can do wonders when it comes to removing banding. Topaz DeNoise 5 in particular is a Canon user's best friend, as its debanding is second to none, and it has a dynamic range recovery feature. You can also use standard Photoshop layer masks to protect the highlight and midtone regions of your images and only deband/denoise the shadows, avoiding any softening of higher-frequency detail in regions that don't need noise reduction at all.
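A rough numpy sketch of the masked-debanding idea described above: estimate a per-row offset (horizontal banding) from dark pixels only and subtract it only in the shadows, leaving midtones and highlights untouched. This is a simplification for illustration, not how Topaz DeNoise or Dfine actually work internally; the shadow threshold is an invented parameter.

```python
import numpy as np

def deband_shadows(img, shadow_thresh=0.05):
    """Subtract per-row banding offsets, but only in shadow regions.

    img           -- 2D array of linear values in [0, 1]
    shadow_thresh -- pixels below this value are treated as shadows (assumed value)
    """
    shadow_mask = img < shadow_thresh

    # Estimate each row's offset from its shadow pixels only, so brighter
    # image content doesn't bias the estimate; rows with no shadow pixels get 0.
    masked = np.ma.masked_array(img, mask=~shadow_mask)
    row_offset = np.ma.median(masked, axis=1).filled(0.0)
    row_offset -= np.median(row_offset)          # keep overall shadow brightness unchanged

    # Apply the correction in shadow regions only.
    corrected = img - row_offset[:, None] * shadow_mask
    return np.clip(corrected, 0.0, 1.0)
```

A layer mask in Photoshop plays the same role as `shadow_mask` here: it restricts the correction to the regions that actually need it.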

This is a little extra work, but you CAN recover a LOT of dynamic range from Canon RAW images. Canon uses a bias offset rather than clipping the data to the black point in-camera. As such, even though Canon's read noise floor is higher at low ISO than Nikon's or Sony's, there are still a couple of stops of recoverable detail interwoven WITHIN that noise. Once you deband...it's all there. You can easily get another stop and a half with debanding, and if you're more meticulous and use masking properly, you can gain at least two stops. That largely negates the DR advantage that Nikon and Sony cameras have. You won't have quite the same spatial resolution in the shadows as an Exmor-based camera, but our eyes don't pick up high-frequency detail in the shadows all that well anyway, so at least personally, I haven't found it to be a significant issue.
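The arithmetic behind "a stop and a half from debanding" can be sketched directly: engineering dynamic range is log2(clipping level / noise floor), so every halving of the effective noise floor buys one stop. The numbers below are illustrative, not measurements of any particular body.

```python
import math

def dr_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops: log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

full_well = 60000.0          # electrons at clipping (illustrative)
read_noise_raw = 30.0        # low-ISO noise floor with banding included (illustrative)
read_noise_debanded = 11.0   # after debanding/NR knocks the floor down (illustrative)

print(f"before: {dr_stops(full_well, read_noise_raw):.1f} stops")
print(f"after : {dr_stops(full_well, read_noise_debanded):.1f} stops")
# Roughly 1.5 stops gained: log2(30 / 11) is about 1.45
```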

There are benefits to having more DR in-camera, not the least of which is a simplified workflow: you don't have to bother with debanding, and you have better spatial resolution in the shadows. That said, if you set aside Canon's downstream noise contributors, their SENSORS are still actually quite good...the fact that you can reduce the read noise and recover another stop or two of usable image detail means their sensors are just as capable as their competitors'. Their real problem is eliminating the downstream noise contributors: the secondary amplifier, the ADC, and even the simple act of shipping an analog signal across an electronic bus. Canon could solve most of those problems by moving to an on-die column-parallel ADC (CP-ADC) sensor design, similar to Exmor. They have the technology to do just that, as well...they have a CP-ADC patent. They also have a number of other patents that can reduce dark current, adjust the readout frequency to produce lower-noise images (at a slower frame rate, say), or support higher frame rates (for action photography). Canon has the patents to build a better, lower-noise, higher dynamic range camera. It's really just a question of whether they will put those patents to work soon, later...or maybe not at all. (I'm pretty sure they have had the CP-ADC patent at least since they released the 120 MP, 9.5 fps APS-H prototype sensor...which was years ago now.)
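A small sketch of the "good sensor, noisy downstream chain" argument: independent noise sources add in quadrature, so when the off-die amplifier, ADC, and analog bus contribute most of the noise, moving conversion on-die dominates the improvement. All figures here are invented for illustration.

```python
import math

def total_noise(*sigmas):
    """Independent noise sources add in quadrature."""
    return math.sqrt(sum(s * s for s in sigmas))

pixel      = 5.0    # on-sensor read noise, electrons (illustrative)
downstream = 28.0   # secondary amp + off-die ADC + analog bus, electrons (illustrative)

off_die_chain = total_noise(pixel, downstream)   # ~28.4 e-: dominated by the downstream chain
on_die_adc    = total_noise(pixel, 3.0)          # small residual if conversion moves on-die

full_well = 60000.0
print(f"off-die chain: {math.log2(full_well / off_die_chain):.1f} stops")
print(f"on-die ADC   : {math.log2(full_well / on_die_adc):.1f} stops")
```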
 
Upvote 0
Clever wording, jrista. Since that conversation we had back in January or February... today I actually happened to have the afternoon off and, knowing where this conversation was headed, read the relevant chapters in "Principles of Neural Science" (an absolutely fantastic book, not difficult to read at all).
The only direct reference it makes to dynamic range is 10 stops.
 
Upvote 0
Sporgon said:
The Sony sensor is very good, but if you exposure the Canon optimally the difference is generally academic in the vast majority of circumstances. However if you have no understanding of exposure the exmor is better.

If the scene has less dynamic range than the sensor is capable of recording, you don't need more dynamic range; that's true in a lot of situations. However, filter manufacturers sell loads of 1-3 stop GND filters, so a couple more stops of dynamic range is useful to a lot of people as well, even people who understand exposure - or should I say, especially people who understand exposure.
 
Upvote 0
9VIII said:
Clever wording, jrista. Since that conversation we had back in January or February... today I actually happened to have the afternoon off and, knowing where this conversation was headed, read the relevant chapters in "Principles of Neural Science" (an absolutely fantastic book, not difficult to read at all).
The only direct reference it makes to dynamic range is 10 stops.

Here is a little test, for anyone who is interested. This is how my eyes work...maybe it isn't the same for everyone else. On a fairly bright day, with some clouds in the sky, find a scene where you can see the clouds, as well as the deep shadows underneath a tree. Pine trees are ideal. In my case, I can see the bark of the tree and the dried pine needles under the tree very well, while simultaneously being able to see detail in the clouds.

Make sure you bring a camera along. Meter the deepest shadows under the tree using aperture priority mode, with the ISO set to 100. Then meter the brightest part of the clouds. Compute the difference from the shutter speeds (which should be the only setting that changes as you meter). In my experience, that is a dynamic range of 16-17 stops at least, if not more. My eyes have had no trouble seeing bright white cloud detail simultaneously with seeing detail in the depths of the shadows under a large pine tree. I do mean simultaneously...you want to stand back far enough that you can see both generally within the central region of your vision, and be able to scan both the shadows and the highlights of the scene without having to move your eyes much. The sun should be behind you somewhere; otherwise you're looking at significantly more dynamic range, and your eyes WON'T be able to handle it.

Whatever 9VIII's books may say, this is a real-world test. Compare what your eyes SEE with what your camera meters. You'll be surprised how much dynamic range there is in such a simple scene, and the fact that your eyes can pick it all up in a moment...well, to me, that means our vision covers more than 10 stops of dynamic range "at once", and more than even a D800. The thing about a neuroscience book, whatever it may say, is that it can only be an estimate. They cannot actually measure the dynamic range of human vision; at best they can measure the basic neural response of the human EYE, which is not the same thing as vision. The eye is the biological device that supports vision, but vision is more than the eye.
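For anyone trying the metering test, a tiny helper for the stop arithmetic: with aperture and ISO fixed, the scene's dynamic range in stops is the log2 ratio of the shutter speed metered on the deepest shadow to the one metered on the brightest highlight. The example speeds are hypothetical.

```python
import math

def scene_dr_stops(shutter_shadow_s, shutter_highlight_s):
    """Scene dynamic range in stops from two spot-meter readings taken at the
    same aperture and ISO: stops = log2(shadow shutter time / highlight shutter time)."""
    return math.log2(shutter_shadow_s / shutter_highlight_s)

# e.g. shadows under the tree metered at 8 s, brightest cloud at 1/8000 s
print(round(scene_dr_stops(8, 1 / 8000), 1), "stops")   # ~16 stops
```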
 
Upvote 0
The funny thing is that Neuro was using that book as his reference for saying 20 stops, but I guess you have to go by some of the data rather than just the author's words.

I wholeheartedly agree though, practical testing trumps textbooks. I have the same argument with people over and over concerning resolution (we don't have nearly enough).
 
Upvote 0
jrista said:
Here is a little test, for anyone who is interested. This is how my eyes work...maybe it isn't the same for everyone else. On a fairly bright day, with some clouds in the sky, find a scene where you can see the clouds, as well as the deep shadows underneath a tree. Pine trees are ideal. In my case, I can see the bark of the tree and the dried pine needles under the tree very well, while simultaneously being able to see detail in the clouds.

Could you post a picture of this scene? I'm having difficulty imagining how I can simultaneously (without moving my eyes) see into the dark depths of a stand of trees, while simultaneously seeing clouds. The closest I can imagine is a brightly lit flower nearer to me than a stand of trees, but both along the same line-of-sight.
 
Upvote 0
Orangutan said:
jrista said:
Here is a little test, for anyone who is interested. This is how my eyes work...maybe it isn't the same for everyone else. On a fairly bright day, with some clouds in the sky, find a scene where you can see the clouds, as well as the deep shadows underneath a tree. Pine trees are ideal. In my case, I can see the bark of the tree and the dried pine needles under the tree very well, while simultaneously being able to see detail in the clouds.

Could you post a picture of this scene? I'm having difficulty imagining how I can simultaneously (without moving my eyes) see into the dark depths of a stand of trees, while simultaneously seeing clouds. The closest I can imagine is a brightly lit flower nearer to me than a stand of trees, but both along the same line-of-sight.

You move your eyes, just not a lot. The point is that the scene should generally be static...you shouldn't be looking in one direction for the shadows, then turning around 180 degrees for the highlights. The point is that, while our eyeballs themselves, our retinas and the neurochemical process that resolves a "frame", may only be capable of 5-6 stops of dynamic range, our "vision", the biochemical process in our brains that gives us sight, is working with FAR more information than what our eyes process at any given moment. It's got hundreds if not thousands of "frames" that it's processing, one after the other, in a kind of circular buffer. It's gathering far more color and contrast information from all the frames in total than each frame has in and of itself. The microscopic but constant movements are what give us our high-resolution vision...it's like superresolution, so we get HDR and superresolution at the same time, all in the span of a second or two.

My point is that if I look at the deep shadows under a large tree, then a moment later flick my eyes to a cloud, then a moment later flick back, I can see detail in both. There is no delay, there is no adjustment period. My visual perception is that I can see detail in extremely bright highlights SIMULTANEOUSLY with detail in extremely dark shade. My "vision" is better than what my eyeballs themselves are capable of (which, really, last I checked, was only about 5-6 stops of dynamic range, and actually less color fidelity and resolution than what we actually "see" in our brains). Our brains are doing a degree of processing that far outpaces any specific "device". Our vision is the combination of a device and a high-powered HDR/superresolution-crunching computer, doing this amazing thing all at once.
 
Upvote 0
Individual rods and cones have relatively poor dynamic range. However, the combination of both and the fact that sensitivity can vary from one place to another over the surface of the retina means that the combined DR of the entire imaging surface is quite good.
 
Upvote 0
jrista said:
Orangutan said:
jrista said:
Here is a little test, for anyone who is interested. This is how my eyes work...maybe it isn't the same for everyone else. On a fairly bright day, with some clouds in the sky, find a scene where you can see the clouds, as well as the deep shadows underneath a tree. Pine trees are ideal. In my case, I can see the bark of the tree and the dried pine needles under the tree very well, while simultaneously being able to see detail in the clouds.

Could you post a picture of this scene? I'm having difficulty imagining how I can simultaneously (without moving my eyes) see into the dark depths of a stand of trees, while simultaneously seeing clouds. The closest I can imagine is a brightly lit flower nearer to me than a stand of trees, but both along the same line-of-sight.

You move your eyes, just not a lot. The point is the scene should generally be static...you shouldn't be looking in one direction for the shadows, then turning around 180 degrees for the highlights. The point is that, while our eyeballs themselves, our retinas and the neurochemical process that resolves a "frame", may only be capable of 5-6 stops of dynamic range, our "vision", the biochemical process in our brains that gives us sight, is working with FAR more information than what our eyes at any given moment process.

Yes, that I'd believe. I think it's fair to say it's our brains that actually "see," -- our eyes just feed some raw info to the brain.
 
Upvote 0
Lee Jay said:
Individual rods and cones have relatively poor dynamic range. However, the combination of both and the fact that sensitivity can vary from one place to another over the surface of the retina means that the combined DR of the entire imaging surface is quite good.

The resolution of 'the entire imaging surface' of the retina...sucks. The fovea centralis has the highest acuity, outside of that small, central area the acuity drops precipitously. An analogy might be an 18 MP FF sensor (24x36mm) where the central 3x3mm area delivers 9 MP of the final image, with the other 9 MP coming from the remaining 99% of the sensor. So, the fovea basically has a 100-fold higher resolution than the rest of the retina. There are no rods in the fovea, only cones.

In bright light, rhodopsin (the visual pigment in rods) is fully isomerized, meaning rods are fully saturated in bright light, and it takes several seconds for the photoactivatable form of rhodopsin to begin to be regenerated (and many minutes for full regeneration). At light levels where rhodopsin is transducing photon input, cone opsins are not receiving sufficient light to signal. So, your statement that the combination of rods and cones leads to a higher DR is practically incorrect, since the functional activation of the rod vs. cone systems occurs at very different light levels – the two aren't active simultaneously. 'Variation (in DR) over the surface of the retina' is also not practically useful, since it's the fovea that delivers the high-acuity information.

Feel free to debate the point, but bear in mind that I taught neuroscience to medical and graduate students for 8 years, and prior to that I studied the isomerization of 11-cis to all-trans retinal using time-resolved resonance Raman spectroscopy (the isomerization takes ~6 femtoseconds, if you were curious...a 'shutter speed' of 1/166,000,000,000,000 s). 8)
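A quick check of the pixel-density arithmetic in the sensor analogy above: if half of an 18 MP full-frame sensor's pixels sat in a central 3 x 3 mm patch, that patch would have roughly 100 times the pixel density of the rest of the sensor.

```python
central_mp, peripheral_mp = 9.0, 9.0          # megapixels, per the analogy
central_area = 3 * 3                          # mm^2, the "fovea" patch
full_frame_area = 24 * 36                     # mm^2
peripheral_area = full_frame_area - central_area

density_ratio = (central_mp / central_area) / (peripheral_mp / peripheral_area)
print(round(density_ratio))   # ~95, i.e. roughly a 100-fold difference
```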
 
Upvote 0
neuroanatomist said:
Feel free to debate the point, but bear in mind that I taught neuroscience to medical and graduate students for 8 years, and prior to that I studied the isomerization of 11-cis to all-trans retinal using time-resolved resonance Raman spectroscopy (the isomerization takes ~6 femtoseconds, if you were curious...a 'shutter speed' of 1/166,000,000,000,000 s). 8)

[Image attachment: 140225-nuke-it.jpg]
 
Upvote 0
neuroanatomist said:
Lee Jay said:
Individual rods and cones have relatively poor dynamic range. However, the combination of both and the fact that sensitivity can vary from one place to another over the surface of the retina means that the combined DR of the entire imaging surface is quite good.

The resolution of 'the entire imaging surface' of the retina...sucks. The fovea centralis has the highest acuity, outside of that small, central area the acuity drops precipitously. An analogy might be an 18 MP FF sensor (24x36mm) where the central 3x3mm area delivers 9 MP of the final image, with the other 9 MP coming from the remaining 99% of the sensor. So, the fovea basically has a 100-fold higher resolution than the rest of the retina. There are no rods in the fovea, only cones.

In bright light, rhodopsin (the visual pigment in rods) is fully isomerized, meaning rods are fully saturated in bright light, and it takes several seconds for the photoactivatable form of rhodopsin to begin to be regenerated (and many minutes for full regeneration). At light levels where rhodopsin is transducing photon input, cone opsins are not receiving sufficient light to signal. So, your statement that the combination of rods and cones leads to a higher DR is practically incorrect, since the functional activation of the rod vs. cone systems occurs at very different light levels – the two aren't active simultaneously. 'Variation (in DR) over the surface of the retina' is also not practically useful, since it's the fovea that delivers the high-acuity information.

Feel free to debate the point, but bear in mind that I taught neuroscience to medical and graduate students for 8 years, and prior to that I studied the isomerization of 11-cis to all-trans retinal using time-resolved resonance Raman spectroscopy (the isomerization takes ~6 femtoseconds, if you were curious...a 'shutter speed' of 1/166,000,000,000,000 s). 8)

I took neural anatomy from a very bright guy (Marvin Lutches at the University of Colorado), and we did some tests to show that both the variation in sensitivity across the retina and lateral inhibition (I suppose most here would call it on-sensor sharpening) were real and detectable while staring with one eye at one target.

The test I did on myself was to stare out a window, with one eye, while trying to read a sign in the foreground in my near-central vision (a couple degrees out - right next to the window from my perspective). The exposure difference between the window and the sign was 12 stops, and the sign was dark brown. I could easily distinguish details in the clouds while being able to read the sign at the same time.
 
Upvote 0
Lee Jay said:
I took neural anatomy from a very bright guy (Marvin Lutches at the University of Colorado), and we did some tests to show that both the variation in sensitivity across the retina and lateral inhibition (I suppose most here would call it on-sensor sharpening) were real and detectable while staring with one eye at one target.

The test I did on myself was to stare out a window, with one eye, while trying to read a sign in the foreground in my near-central vision (a couple degrees out - right next to the window from my perspective). The exposure difference between the window and the sign was 12 stops, and the sign was dark brown. I could easily distinguish details in the clouds while being able to read the sign at the same time.

I wasn't arguing against the relatively good DR of the human visual system, merely pointing out the fallacies in your explanation of the physiology underlying that DR.

Google doesn't turn up much about Professor Marvin Lutches beyond a page on dragonfly flight. I studied neuroanatomy under, and subsequently taught for, Marian Diamond. I even had the opportunity to analyze the dendritic spines of Albert Einstein's brain in her lab. Good times...
 
Upvote 0
Even though you don't think you're moving your eyes when looking at clouds and tree bark, you more than likely are, just a small amount, and at speeds that are hard to notice. You're so used to this process that it's not something you can be sure you would notice, since you normally don't. Your brain combines all these partial images together to give you what you perceive as "the world." It is also very possible that you aren't truly seeing the bark. You've seen it many times before, even if you don't realize it. Your brain will often fill in information that it isn't actually "seeing" at that very moment because it either knows it is there, or believes it is there. This is one of the reasons why eyewitness accounts of sudden and traumatic crimes can be notoriously inaccurate. As an example, a person may honestly believe that a mugger has a gun in one hand, held down at his side near his (the robber's) pocket, when in reality what is there is a dark pattern on his jacket pocket, either from the jacket's colour/style or from a shadow. The witness isn't lying; he or she was just so afraid for their life that they imagined a gun was there.

Probably the only real, reliable way to conduct an experiment like looking at a very dark and a very bright thing at the same time, and knowing that you didn't look at each separately, would be to have a special camera (or cameras) closely monitoring your head, eyeballs, and pupils for any movement. It would also have to be an artificial or set-up scene, with some symbol or marker in the dark area that you would have to identify without any movement. I honestly don't think "not moving" at all is possible without medical intervention, such as somehow disabling a person's ability to move at all: body, neck, head, even eyeballs.

Not exactly a fun experiment. :)
 
Upvote 0
Darkmatter said:
Even though you don't think you're moving your eyes when looking at clouds and tree bark, you more than likely are, just a small amount, and at speeds that are hard to notice. You're so used to this process that it's not something you can be sure you would notice, since you normally don't. Your brain combines all these partial images together to give you what you perceive as "the world." It is also very possible that you aren't truly seeing the bark. You've seen it many times before, even if you don't realize it. Your brain will often fill in information that it isn't actually "seeing" at that very moment because it either knows it is there, or believes it is there. This is one of the reasons why eyewitness accounts of sudden and traumatic crimes can be notoriously inaccurate. As an example, a person may honestly believe that a mugger has a gun in one hand, held down at his side near his (the robber's) pocket, when in reality what is there is a dark pattern on his jacket pocket, either from the jacket's colour/style or from a shadow. The witness isn't lying; he or she was just so afraid for their life that they imagined a gun was there.

You're talking about a different kind of eye movement, but you are correct, our eyes are always adjusting.

Regarding eyewitness accounts...the reason they are unreliable is that people are unobservant. There are some individuals who are exceptionally observant and can recall a scene, such as a crime, in extensive detail. But how observant an individual is, like many human traits, falls on a bell curve. The vast majority of people are barely observant of anything not directly happening to them, and even for things happening to them, they still aren't particularly aware of all the details. (Especially in this day and age...the age of endless chaos, immeasurably hectic schedules, and the ubiquity of distractions...such as our phones.)

I believe the brain only fills in information if it isn't readily accessible. I do believe that, for the most part, when we see something, the entirety of what we see is recorded. HOW it's recorded is what affects our memories. Someone who is attuned to their senses is evaluating and re-evaluating the information passing into their memory more intensely than the average individual. The interesting thing about memory is that it isn't just the act of exercising it that strengthens it...it's the act of evaluating and associating memories that TRULY strengthens them. Observant individuals are more likely to process the information their senses pick up in a more complex manner, evaluating the information as it comes in, associating that new information with old information, and creating a lot more associations between memories. A lot of what gets associated may be accidental...but in a person who has a highly structured memory, with a strong and diverse set of associations, one single observation can create a powerful memory that is associated with dozens of other powerful memories. The more associations, the more of your total memory the brain will access, and therefore strengthen and enhance, compared with having fewer associations.

I actually took a course on memory when I first started college, some decade and a half ago. The original intent of the course was to help students study and remember what they study. The course delved into the mechanics and biology of sensory input processing and memory. Memory is a multi-stage process: not just short term and long term, but immediate term (the things you remember super vividly because they just happened, though this memory fades within seconds), short term, mid term, and long term (and even that is probably not really accurate; it's still mostly a generalization). Immediate-term memory is an interesting thing...you only have a few "slots" for this kind of memory, maybe 9-12 at most. As information goes in, old information in this part of your memory must go out. Ever have a situation where you were thinking about something, were distracted for just a moment, and the thing you were thinking about before is just...gone? You can't, for the life of you, remember what it was. It's not necessarily really gone...amazingly, our brains retain pretty much everything that goes into them; it's just that the memories we create are not ASSOCIATED with other things that help us recall them. The thing you were thinking about was simply pushed out of your immediate-term memory by the distraction, which filled up those slots.

Your brain, to some degree, will automatically move information from one stage to the next; however, without active intervention on your part, how those memories are created and what they are associated with may not be ideal, and may not be conducive to your ability to remember them later on. The most critical stage for you to take an active role in creating memories is when new information enters your immediate-term memory. You have to actively think about it, and actively associate it with other useful memories. Associations of like kind, and associations of context, can greatly improve your ability to recall a memory from longer-term storage. The more associations you create when forming new memories, the stronger those memories will be, and the longer they are likely to last (assuming they continue to be exercised). The strongest memories are those we exercise the most, and which have the strongest associations to other strong memories. Some of the weakest memories we have form when we're simply not in a state of mind to take any control over their creation...such as, say, when a thug walks in and puts a gun in someone's face. Fear and fight-or-flight responses, kicked into gear by a strong surge of hormones, can completely mess with our ability to actively think about what's going on; instead, we react (largely out of pure self-preservation, in which case any memories we do form in such situations are unlikely to be about anyone but ourselves, and if they are about something else, they aren't likely to be very reliable memories).

It was one of the best courses I ever took. Combined with the fact that I'm hypersensitive to most sensory input (particularly sound, but sight and touch as well...smell not so much, but I had some problems with my nose years ago), the knowledge of how to actively work sensory input and properly use my memory has been one of the most beneficial things to come out of my time in college.

If you WANT to have a rich memory, it's really a matter of using it correctly. It's an active process as much as a passive one, if you choose to be an observant individual. If not...well, your recallable memory will be more like Swiss cheese than a finely crafted neural network of memories and associations, and yes...the brain will try to fill in the gaps. Interestingly, when your brain fills in the gaps, it isn't working with nothing. As I mentioned before, we remember most of what goes in...it's just that most of the information is dumped randomly, unassociated or only weakly associated with other random things. The brain knows that the information is there; it just doesn't have a good record of where. I don't think that really has to do with the way our eyes function...it has to do with how the brain processes and stores incoming information. Our eyes are simply a source of information, not the tool that processes and stores it.

Darkmatter said:
Probably the only real, reliable way to conduct an experiment like looking at a very dark and a very bright thing at the same time and knowing that you didn't look at each separately would be to have a special camera/s closely monitoring your head, eyeballs, and pupils for any movement. It would also have to be an artificial or set up scene so that there was some symbol or something in the dark area that you would have to be able to identify without any movement. I honestly don't think "not moving" at all is possible without medical intervention such as somehow disabling a persons ability to move at all; body, neck, head, even eyeballs.

Not exactly a fun experiment. :)

You do move your eyes, a little. You can't really do the experiment without moving back and forth. The point is not to move your eyes a lot. If you look in one direction at a tree, then in another direction at a cloud, you're not actually working within the same "scene". In my case, I was crouched down, with the tree in front of me and the cloud just to the right behind it. Your whole field of view takes in the entire scene at once. You can move your eyes just enough to center your 2° foveal spot on either the shadows or the cloud highlights, but don't move around, don't move your head, don't look in a different direction, as that would break the requirements of the test. So long as "the scene" doesn't change...so long as the whole original scene you picked stays within your field of view, and all you do is change what you point your foveal spot at, you'll be fine.

To be clear, a LOT is going on every second that you do this test. Your eyes are sucking in a ton of information in tiny fractions of a second, and shipping it to the visual cortex. Your visual cortex is processing all that information to increase resolution, color fidelity, dynamic range, etc. So long as you pick the right test scene, one which has brightly lit clouds and a deep shaded area, you should be fine. In my experience, when using my in-camera meter, the difference between the slowest and fastest shutter speed is about 16 stops or so. I wouldn't say that such a scene actually had a full 24 stops in it...that would be pretty extreme (that would be another eight DOUBLINGS of the kind of tonal range I am currently talking about...so probably not a chance). But I do believe it is more dynamic range than any camera I've ever used was capable of handling in a single frame.
 
Upvote 0
As a film shooter, I've read DanG_UE's comments with interest. But, I wouldn't be so quick to write off current digital sensors. While sensors can certainly be improved, when I go looking for it, I'm always surprised by the amount of detail that is hidden in highlights and shadows.

Rather than discussing the DR of the eye (as fascinating as that is), a more interesting experiment would be for the OP (or anyone) to upload a RAW image from a current Canon DSLR, shot at ISO 100, where lack of DR is perceived to be a problem - i.e., an image that appears to have both blown highlights and blocked-up shadows, with the exposure set somewhere near the middle (is that a correct understanding of the problem?). We could then all take turns seeing whether these images can be fixed, or whether there is a major problem with the current state of sensors. Getting a few people's input would also highlight which programs and post-processing techniques work best for this.
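For anyone who wants to try that experiment, here is one possible way to push a single ISO 100 raw hard in post and inspect the shadows, using the rawpy wrapper around LibRaw. The file name and the +3 EV push are hypothetical example values, not a prescribed workflow.

```python
import numpy as np
import rawpy  # pip install rawpy (Python wrapper around LibRaw)

# Hypothetical test file posted to the thread
with rawpy.imread("iso100_high_dr_test.CR2") as raw:
    # Demosaic without auto-brightening, pushing exposure by 8x (~+3 stops)
    # so blocked-up shadows become visible for inspection.
    pushed = raw.postprocess(
        no_auto_bright=True,
        use_camera_wb=True,
        output_bps=16,
        exp_shift=8.0,                # linear gain; 8x is roughly +3 stops
        exp_preserve_highlights=1.0,  # roll highlights off instead of clipping them
    )

# Rough check of how much of the frame is still crushed to black after the push
dark_fraction = np.mean(pushed.max(axis=2) < 256)  # below ~1/256 of the 16-bit range
print(f"pixels still near black after +3 EV: {dark_fraction:.1%}")
```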
 
Upvote 0
dilbert said:
jrista said:

I'm still waiting for your reply to my request for a reference (you know, a URL) to something that supports your claim of the Sony a7s only having a 14-bit ADC...

B&H Photo's product page, under specifications:

http://www.bhphotovideo.com/c/product/1044728-REG/sony_ilce7s_b_alpha_a7s_mirrorless_digital.html

It took me about 3 seconds to find that. I just searched for "Sony A7s Bit Depth", and that was one of the first five links (the rest, for some reason, were all about the A77...)

It states:
 

Attachments

  • Sony A7s Bit Depth.jpg (32.4 KB)
Upvote 0