Better Dynamic Range in a Camera

Status
Not open for further replies.

Hector1970

Canon Rumors Premium
Mar 22, 2012
I own a Canon 5D Mark III, which is a great camera. I also still have my 500D, a very good camera that I still enjoy using. The 5D has better high-ISO performance and focusing, and lots more buttons. Its bigger sensor does help improve image quality and does nice things with depth of field. The ISO improvement is really great.
One thing that maybe is better (but not obviously better) is the dynamic range of the sensor.
I can't see a whole lot of improvement over a 500D (I may not be looking properly).
What would be a great leap forward would be a sensor that captures the light in a scene with all its range, the way an eye would.
I've seen it claimed in places that the human eye has a dynamic range of around 20 stops, with others saying 6.5.
I've also seen it said that the dynamic range of a camera (9-11 stops) exceeds the dynamic range of print (6.5 stops).

I am not a technical expert in this area, and I know the eye is quite a complex mechanism and the brain also tinkers with the image, but:
When I look at a scene with a lot of different light I can take it all in and see a picture in my mind.
When I take a photograph of it, certain areas are lighter or darker than I saw.
I then have to edit my image to something approximating what I saw (or thought I saw).

How hard is it to make a sensor with a higher dynamic range?
Can it be translated into something you see straight off on a screen or a print?

I'm sure a company that could produce a camera whose images look more like how a human sees a scene would really be onto a big winner.
What would it take to do that?
Is it better dynamic range ability or is it something else that's required?

Kind Regards
Fergal
http://www.flickr.com/photos/fergalocallaghan/
 
Very, very hard... almost impossible for now, but I think that won't be the case in the near future. Sony sensors are now among the best, if not the best, when it comes to good-light DR. Right now, the grass is greener for landscape photographers using the Nikon D800 compared to the Canon 5D3. If you want more DR, go for Nikon. For me, I'll stick with Canon. I love Canon's lenses and ergonomics more than Nikon's. The grass is always greener on the other side. :)
 
Upvote 0
There are various techniques for making the dynamic range "fit". Adding light to the shadows using a flash is one way. You can also shoot in RAW and recover some highlight detail in post. Grad ND filters, etc. Personally I like that the dynamic range doesn't fit, as it makes for interesting contrast and deep shadows. Remember, black is meant to be black. And we don't always need to see what's in the shadows. You can expose for the highlights, or overexpose slightly and pull back detail. Unless you are shooting directly into the sun you can usually come to some compromise.
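
To make the "expose for the highlights, then pull detail back" idea concrete, here's a minimal Python sketch (the pixel values and the simple gamma curve are purely illustrative -- real raw converters use far more sophisticated tone curves):

```python
import numpy as np

# Hypothetical linear scene data, normalized so 1.0 = sensor clipping.
# Exposing for the highlights keeps the bright values below 1.0;
# the shadows then sit very low and need lifting in post.
scene = np.array([0.002, 0.01, 0.05, 0.25, 0.9])

# A crude shadow lift: a gamma-style curve applied in linear space.
# gamma < 1 brightens shadows far more than it brightens highlights.
gamma = 0.5
lifted = scene ** gamma

for before, after in zip(scene, lifted):
    print(f"linear {before:.3f} -> displayed {after:.3f}")
```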

About the eyes. I don't know the exact number of stops that we can "see". From what I understand, the brain and eye work together and are constantly adjusting to the light, so that it appears as if we can see the entire dynamic range at any given moment. But imagine you are in a dark room and then suddenly go outdoors on a bright day. Suddenly the dynamic range drops and you can't see jack. So I guess the process is complex and adapts to various situations.

Knowing the limitations of the camera and working around them is part of the fun of photography! Each situation poses new challenges, and if we could all fit the dynamic range in camera, life would look rather dull and flat. I'd rather have the high contrast, which looks more natural!
 
Upvote 0
OK. I guess your style is different, and that's cool. I've just learned to live with it. Just wanted to share some thoughts on the topic.

Question - don't you find, though, that most HDR images look very unreal? Why is that? Even the best ones, the ones aiming for a photorealistic look, have a certain look to them. Maybe high-DR cameras would just look fake. Maybe the Japanese engineers already tried it and decided it sucked??

Just puttin it out there!

???
 
Upvote 0
Improving DR to 20 stops, in a camera that would sell to the public, isn't there yet.

DR and resolution are somewhat mutually exclusive, although recent advances in per-pixel performance are making measurable improvements. E.g., the Toshiba-made sensor in the D5200 has 24MP vs. the 16MP in the D5100, and yet the D5200 shows the same per-pixel DR as the D5100.

If the resolution had not increased by 50%, the DR could possibly have increased by 50% instead.
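
For a rough sense of that trade-off: the usual back-of-the-envelope rule is that averaging N pixels together improves shadow SNR by a factor of sqrt(N), which is worth about 0.5 * log2(N) stops of normalized DR. A quick sketch (the rule of thumb assumes noise averages down like shot noise, which is only approximately true of read noise):

```python
import math

def dr_gain_stops(binning_factor: float) -> float:
    # Averaging N pixels improves shadow SNR by sqrt(N), which is
    # worth 0.5 * log2(N) stops of normalized (per-image) DR.
    return 0.5 * math.log2(binning_factor)

# Downsampling 24 MP to 16 MP averages 1.5 pixels into one:
print(f"{dr_gain_stops(24 / 16):.2f} stops")  # ~0.29 stops
# Full 2x2 binning:
print(f"{dr_gain_stops(4):.2f} stops")        # 1.00 stop
```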

Since this type of sensor is already way ahead in the DR spec, the manufacturer has the option of increasing resolution while maintaining class-leading DR. And more MP printed on the box is a great marketing tool.

And yes, for artistic reasons, it's always better to have more of everything captured so that you can later choose how you want to present it by altering it to your tastes.

So for now, if you want to capture the most DR, use the best tool for the job and then you still have the option of exposure bracketing and stacking those images in software, whenever that's a viable option.
I personally don't care to stack bracketed exposures. I find I can get a photo-realistic HDR-toned image from a single shot on a very clean camera (like most current Nikon or Pentax bodies) with lots of DR. Just expose to maintain the highlight levels you want to keep, and then re-curve everything below that to your tastes, if that's your thing.
Some photographers really do not seem to like presenting images like this so it can be a point of strong contention.
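
For anyone wondering what "stacking bracketed exposures" actually does under the hood, here's a toy merge in linear space (a bare-bones sketch with made-up pixel values; real HDR merge tools also handle alignment, deghosting, and proper camera response curves):

```python
import numpy as np

# Three hypothetical bracketed frames of the same three pixels,
# normalized to [0, 1], shot at -2 / 0 / +2 EV.
frames = [np.array([0.01, 0.10, 0.24]),   # -2 EV: protects highlights
          np.array([0.04, 0.40, 0.96]),   #  0 EV
          np.array([0.16, 1.00, 1.00])]   # +2 EV: clips, clean shadows
evs = [-2, 0, 2]

# Scale each frame back to a common exposure, then average,
# masking out clipped samples so blown highlights don't pollute it.
num = np.zeros(3)
den = np.zeros(3)
for frame, ev in zip(frames, evs):
    valid = frame < 0.99                     # ignore clipped pixels
    num += np.where(valid, frame / 2.0 ** ev, 0.0)
    den += valid.astype(float)

merged = num / den
print(merged)   # linear radiance estimate per pixel: [0.04 0.4 0.96]
```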
 
Upvote 0
Our eyes cannot accommodate the entire wide DR a sunlit outdoor scene may provide, but they can, within limits, adapt to the local brightness of the area we may look at, in which case we can then perceive a somewhat wider DR.

When capturing this scene and preparing it for print or display, the limitations of the presentation medium require compressing the DR and performing localized contrast enhancement to provide the viewer with a semblance of the perception they may have if they were viewing the actual scene.

How much of this gets done depends on a number of factors. The size of the final print or display, the content of the image, the type of scene, and the artist's intent, are amongst the primary factors.
E.g., if the bright landscape included a small cave entrance in the distance, we'd never be able to adapt our eyes to see what's inside that cave, even if the camera could, so making those deep shadows more visible would be unnatural. But if we were much closer, looking right at the cave entrance, possibly while shielding our eyes from the ambient glare, we might be able to make out some of the shadow detail. Therefore it's not unreasonable to tone such an image to display that way.
It's all very subjective but HDR and tone-mapping and other effects CAN be applied judiciously to create a pleasing and still somewhat photo-realistic image.
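
As a concrete (if very simplistic) illustration of the global half of that DR compression, the classic Reinhard operator maps unbounded linear luminance into [0, 1); a minimal sketch with illustrative values (real tone mappers layer the localized contrast enhancement on top of something like this):

```python
import numpy as np

def reinhard(lum: np.ndarray) -> np.ndarray:
    # Classic global Reinhard tone mapping: L / (1 + L).
    # Compresses an unbounded linear range into [0, 1), squeezing
    # highlights hard while leaving deep shadows nearly linear.
    return lum / (1.0 + lum)

# Scene luminances spanning roughly 13 stops (illustrative values).
scene = np.array([0.001, 0.01, 0.1, 1.0, 10.0])
print(reinhard(scene))
# -> [0.000999 0.0099 0.0909 0.5 0.909]
```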

See my samples, near the middle of this page, for a somewhat photo-realistic treatment:
www.canonrumors.com/forum/index.php?topic=8105.msg161888#msg161888

http://www.canonrumors.com/forum/index.php?action=dlattach;topic=8105.0;attach=23647;image

http://www.canonrumors.com/forum/index.php?action=dlattach;topic=8105.0;attach=23646;image

This next one isn't the best example, but it shows another as-shot to tone-mapped comparison. It's overdone for emphasis; posted near the bottom of this page:
www.canonrumors.com/forum/index.php?topic=8065.msg154889#msg154889

http://www.canonrumors.com/forum/index.php?action=dlattach;topic=8065.0;attach=22981;image

http://www.canonrumors.com/forum/index.php?action=dlattach;topic=8065.0;attach=22980;image
 
Upvote 0
Our eyes can cover a wide dynamic range, but not all of it is in color. Rods and cones... In good light we see colors with reasonably high resolution, and as it gets darker we lose color vision and see low-res black and white. Cameras now have a wider COLOR dynamic range than our eyes.
 
Upvote 0
For scenes with good light, there isn't a camera on the market that has insufficient dynamic range. In most situations where you might wish your camera had more dynamic range, the proper solution is almost always to fix the light. Generally, fixing the light for landscape photography means waiting for the magic hour. With most other types of photography, it means properly using flash and / or other light modifiers.

Sometimes, but rarely, the whole point of the exercise is to capture some sort of setting with extremes of light. Such light is not attractive in and of itself, and it is the harshness of the light which one is capturing. In those cases, the scene as you look at it will have lots of areas of reduced contrast, including shadows and highlights where you cannot discern details. Somebody mentioned the cave entrance at noon; in the real world, it would be a black hole, and the puffy clouds overhead would be too painful to look at to see detail. If you're shooting that scene, it's presumably because you want to capture the feeling of looking into a bottomless abyss while in the harsh light of day, and you'd expect the print to have a similar lack of detail in the cave and clouds. If, instead, you really do want to capture the detail in both, you should wait until sunset when the last rays of the Sun gently light up the inside of the cave, at which time that light will be well balanced with the colorful clouds. Or you should be painting the scene as you imagine it rather than trying to photograph it as it isn't.

The most challenging HDR shot I've ever done is here:

http://www.canonrumors.com/forum/index.php?topic=12617.0

And that really is very much what the scene looked like as you stood there. Yes, I could have gone all painterly and created something surreal and tonemapped that showed details you couldn't see, but then it wouldn't look like what the scene actually looked like. You really could only just barely pick out the details in the shadowed parts of the Canyon, and, through glasses, the Sun still looked small and bright with a hole in it, but with lots of glare surrounding it. And the layers of the Canyon receding into the distance really did almost blend into the horizon, and the foreground really was that bright and contrasty.

If I wanted a painterly photograph of the Canyon, I wouldn't have shot into the Sun during an annular eclipse; I'd have camped out during thunderstorm season and hoped / waited for a day with a good sunset -- and I'd have been on the North Rim, not the South. I'd still have bracketed the exposure and still might have wound up blending a couple of them together, but the final image would largely have looked like the straight-out-of-the-camera middle exposure. If I did blend exposures, it'd most likely just be a simple pair separated by only a couple stops with a gradient blend, simulating a graduated neutral density filter.
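
That kind of gradient blend is about the simplest compositing operation there is. A toy version (hypothetical arrays, assuming the two frames are already aligned and in linear space):

```python
import numpy as np

# Two hypothetical aligned exposures, height x width, linear [0, 1]:
# 'dark' protects the sky, 'bright' opens up the foreground.
h, w = 6, 4
dark = np.full((h, w), 0.2)
bright = np.full((h, w), 0.7)

# A vertical gradient mask: 0 at the top (take the dark frame for
# sky), ramping to 1 at the bottom (take the bright frame for land)
# -- exactly what a soft-edged GND filter does optically.
mask = np.linspace(0.0, 1.0, h).reshape(h, 1)

blend = dark * (1.0 - mask) + bright * mask
print(blend[:, 0])  # one column: smooth dark-to-bright transition
```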

About the only time I can think of when more single-shot dynamic range is of any significance is digital fill flash for event photography in bad light. But, even then, either you're doing journalistic-style photography and you should be representing the scene as it is, or you're doing dynamic portrait photography (such as for a wedding) and you're being paid to make the light what it needs to be. In other words, either you should be letting the whites blow and crushing the blacks, or we're right back to fixing the light.

Really, when it comes right down to it, an extra two stops of dynamic range is nothing more than a single file that's the same as a +/- 1 stop bracket already composited together. Big whoop. Saves me the least important step in the creative process.

Cheers,

b&
 
Upvote 0
Maybe I'm sorry I asked this question. :)
The answers are great, but they're making me even more confused.
I suppose I'm missing what it takes to get a photograph to have all the dynamic range I see with my eyes.
Is it impossible to translate it to a photograph?
Does it take a combination of photos to get a photo to resemble that?
It continues to astound me the technical knowledge people on this site have.
I read a lot about photography but feel like a beginner compared to some of the experts here.
It's brilliant to have this source of knowledge.
 
Upvote 0
Hector1970 said:
I suppose I'm missing what it takes to get a photograph to have all the dynamic range I see with my eyes.
Is it impossible to translate it to a photograph?

"It depends."

If the scene itself has a large enough dynamic range -- such as the eclipse over the Grand Canyon I linked to in my previous post -- then, yes, it's basically impossible to translate it into a photograph. There's only so much difference between paper white and the darkest ink you can lay down, and that's nowhere near the difference between the Sun and the shadows at the bottom of the Grand Canyon. Not even remotely close. And, though there is often a greater difference between maximum white and maximum black on a computer display than there is in a print, it's not all that much more.
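
To put rough numbers on that: a medium's usable range in stops is just log2 of its contrast ratio. A quick sketch (the contrast ratios are ballpark figures, not measurements; they vary widely by paper, ink, and display technology):

```python
import math

def stops(contrast_ratio: float) -> float:
    # Dynamic range in stops = log2(brightest / darkest).
    return math.log2(contrast_ratio)

# Illustrative contrast ratios for common presentation media:
print(f"matte print   ~100:1 -> {stops(100):.1f} stops")
print(f"glossy print  ~250:1 -> {stops(250):.1f} stops")
print(f"LCD display  ~1000:1 -> {stops(1000):.1f} stops")
# A sunlit scene with deep shadow can exceed 2**20 : 1 -> 20 stops.
```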

So, what you have to do is, ideally, reduce the dynamic range of the scene itself. With landscapes, that happens naturally during the "golden hour." With other types of photography, you do this by adding light (with flash or reflectors) to the dark parts and removing light (with shades and scrims) from the bright parts. Doing so isn't merely a skill, it's what photography is actually all about. The camera parts are secondary; controlling the light is what your job is.

In scenes with a greater dynamic range than you can display and / or capture with a single exposure, you're left with compromises.

At the capture end, you can often use a graduated neutral density filter, but if your subject is relatively static, you can do much better with multiple exposures. You can mimic a GND with a layer mask and the gradient tool, or do better with a layer mask and a large, soft brush, or you can go the tonemapped HDR route (which I personally find distasteful but some like). Or you can do a linear HDR development with Photoshop and load the resulting 32-bit TIFF back into Camera RAW and use its tools (fill light, highlight recovery, etc.) to tame the dynamic range rather effectively.

In cases when you have a single capture with more dynamic range than you can print, you're basically doing the same thing. You can develop twice (or more) and mask the two developments together as you would multiple exposures, or you can use any of the (sadly) popular single-image HDR tonemapping tools, or you can again play with the sliders in ACR.

Mostly, though, what you really need to do is experiment. A lot....

Cheers,

b&
 
Upvote 0
Hector1970 said:
.. what it takes to get a photograph to have all the dynamic range I see with my eyes.
Is it impossible to translate it to a photograph?
Does it take a combination of photos to get a photo to resemble that?
Current print and display methods cannot replicate the wide dynamic range of many scenes, especially sunlit ones. So it is essentially correct to say that it IS impossible to translate 20 stops' worth of DR into a photograph... UNLESS you compress the photographed image to fit within the range of the final presentation medium. That can look flat and dull if not done correctly. It can look artificial and "painterly" even when it is done correctly, although that's usually when it's a bit overdone. This is a very subjective method and worth experimenting with to see if it fits your tastes.
However, such methods are about the only options when you can't stick around to wait for golden hour outdoors and you aren't carrying a bunch of lighting equipment with you. Otherwise, controlling the light is the main method photographers have used for over a century to lighten the dark areas and otherwise reduce the DR of a scene BEFORE capturing it with a camera. Properly lit images can look far more appealing than attempts to replicate the effect in post-processing. But controlled lighting is not always an option, and then software plus a good-quality raw file are the next best option.
Adobe's Lightroom is a great way to experiment with your raw files. It can lighten shadows and apply digital gradient filters very easily and spares you from becoming a Photoshop pro if you don't have the time or money to invest in it.

There's at least one section here devoted to HDR, though not necessarily the photo-realistic kind. Look around and explore the subject a bit; there are quite a few ways to approach this, and some may be more suitable or appealing to you than others.

http://www.canonrumors.com/forum/index.php?board=68.0
 
Upvote 0
hjulenissen said:
If we are able to _reproduce_ those 24 stops, my guess is that it will look amazing.

Quite the opposite -- it would be painful.

Monitors today are already at the practical limits of dynamic range. That is, some of them you can make too bright to comfortably look at in a typical indoor environment.

If I were to accurately reproduce the dynamic range in that shot of the Canyon, you'd go just as blind trying to look at the picture of the Sun as you would looking at the real Sun. I simulated that by making the Sun and the surrounding sky low contrast near paper white, difficult to make out -- because it was. In reality, though, there was far more dynamic separation between the ring of fire of the eclipsed Sun and the immediately adjacent sky than there would be with a Klieg light projecting from a coal mine.

Cheers,

b&
 
Upvote 0
hjulenissen said:
Is this what you feel at the edge of the grand canyon on a partially cloudy day? "Ouch, too much dynamic range"?

Yes, exactly. Why else do you think people squint, shade their eyes with their hands, or wear sunglasses or hats?

Where I am sitting right now, I can watch a bright snowy landscape out of my windows, or peek at my interior in the shade.

And I'll bet that you're squinting at the snow, and, after a bit of squinting, you can't see a bloody thing around you inside. Or else the glass is tinted.

There are certainly times when the point of art is to make people uncomfortable, but those are exceptions. And, even then, the discomfort is almost always toned down or somehow simulated or made distant. Very few people would be interested in a sunset photograph of a beautiful scene that made them squint and shade their eyes, as much as they might enjoy being in the actual setting where the photograph was made, painfully bright light and all.

Cheers,

b&
 
Upvote 0
Wow, the answers are getting even better and more detailed.
When I look at a scene I often can see the bright and the dark and not lose detail in the bright or the dark.
When I take a photo of the scene, depending on where it's metering, it might make the photo too bright or too dark (losing detail in either case). It might also (if I meter correctly) do a kind of average where I lose some detail in the bright and some detail in the dark.
The dark bit I might dodge and the light bit I might burn (or use a lightroom filter).
Could a sensor capture the dark but retain all the detail (i.e., it's not black unless it really is black) and the bright light but also retain all the detail (it's bright but not completely white)?
Then there would be no need to dodge or burn or use filters.

I suppose in the end a photographer is trying to capture what they see (or at least what they think they see).
The photo is often under or overexposed or both.
In their chase after more expensive equipment (bodies, lenses, filters, software) they are trying to combat these limitations.
In the morning or evening light the dynamic range is less and they can capture the scene with their sensor.
By midday this possibility may be gone.
Could it be possible to capture this with camera and lens alone, via a magic sensor that captures the scene as an eye would?
By the answers so far it doesn't seem to be so easy to do.
Maybe the eye fools us because it is processing the information all the time to show us the whole scene.
 
Upvote 0
Hector1970 said:
When I look at a scene I often can see the bright and the dark and not lose detail in the bright or the dark.
When I take a photo of the scene, depending on where it's metering, it might make the photo too bright or too dark (losing detail in either case). It might also (if I meter correctly) do a kind of average where I lose some detail in the bright and some detail in the dark.
The dark bit I might dodge and the light bit I might burn (or use a lightroom filter).
Could a sensor capture the dark but retain all the detail (i.e., it's not black unless it really is black) and the bright light but also retain all the detail (it's bright but not completely white)?

Let's put some numbers to the question to help things along.

We'll start with a carefully controlled environment -- a viewing booth with a standard D50 illuminant. That would be a full-spectrum, daylight-balanced light source that's reasonably bright at the print. A light trap (a hollow, black-lined box with a small hole at the top) would be 0. A piece of Teflon thread tape would be 100 (or close enough as makes no difference). A piece of high-quality fine art paper would be just barely discernibly darker, around 98 or 99. The darkest you could print on that paper would range from 10 or so to 20 or so, depending on the actual paper and printer.

That's basically what the L* value represents in the Lab color space -- except, of course, the definition is more precise than what I just gave.

Unlike RGB color spaces, the Lab space is open-ended. Most real-world scenes contain elements with considerably more than an L* value of 100.
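
For reference, here's a sketch of the CIE 1976 L* definition being alluded to: it maps luminance Y relative to reference white (1.0 = diffuse white) to L*, and nothing stops you from feeding it values above 1.0 -- which is how scene elements brighter than diffuse white come out above L* = 100 (Y of about 2.93 lands on the L* = 150 used below; the sample values are illustrative):

```python
def luminance_to_lstar(y: float) -> float:
    # CIE 1976 L*: y is luminance relative to reference white (Y/Yn).
    # Values of y above 1.0 yield L* above 100 -- "brighter than
    # diffuse white," like the raised print in the booth example.
    if y > 216 / 24389:                  # ~0.008856, the CIE junction
        return 116.0 * y ** (1.0 / 3.0) - 16.0
    return (24389.0 / 27.0) * y          # linear segment near black

for y in (0.01, 0.18, 1.0, 2.93):
    print(f"Y = {y:4.2f} -> L* = {luminance_to_lstar(y):6.1f}")
# -> 9.0, 49.5 (middle gray), 100.0, 150.0
```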

For example, right next to this standard light booth, let's place another one with the print raised so it's closer to the light source. It's the same piece of paper as the one in the standard booth, but it's now more brightly lit and so, to the observer, has an L* of more than 100. How much more depends on how much closer the paper is to the source, but let's pick an arbitrary number and say it has an L* of 150.

You're standing there with your high dynamic range camera and you take a picture of this very scene. Not a problem -- the camera captures it just fine.

But now what're you going to do with that file?

If you want to make a print, the stuff in the standard viewing booth is no trouble; with the proper workflow, within certain limits, your print can look identical. The picture of the bare paper gets no ink in your print, either, and the other parts get the same ink as in the original.

But how are you going to make the image of the paper in the non-standard booth be brighter than the paper in your print? That is, with a paper that has an appearance of L*=99 in standard viewing conditions, how are you going to make an image of something that has an appearance of L*=150, and still show the image in standard viewing conditions?

There are a few possibilities.

You could compress the captured dynamic range. The image of the paper in the brighter booth would get printed as L*=99 (you have no other choice), and everything else gets similarly scaled by 2/3. The image of the paper in the standard booth gets printed as L*=66, which is roughly Zone VI.

You could be more gentle in your compression. You could print the paper in the brighter booth at L*=99, print the paper in the standard booth at L*=90, and thus make the standard booth look very close to as it was but everything in the brighter booth would look very washed out.

You could go even further and simply clip everything in the brighter booth that happened to wind up brighter than L*=99.

You could do some masking and render each booth to a normalized version, such that the white paper in the standard booth got rendered as L*=99 and the white paper in the bright booth also got rendered as L*=99...but what're you going to do in the space between the booths? And you're now left with the two booths looking the same when they were significantly different to your eye.
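
Those print-side options -- hard scaling, a gentler knee, and clipping -- are easy to compare side by side. A toy sketch in L* terms (the knee breakpoint at 90 is an arbitrary choice for illustration; the scene values echo the two-booth example):

```python
import numpy as np

# Toy L* values from the two-booth scene: deep shadow, midtone,
# paper in the standard booth (99), paper in the bright booth (150).
scene = np.array([10.0, 50.0, 99.0, 150.0])
PAPER_WHITE = 99.0

# Option 1: scale everything so the brightest value prints as paper.
hard_scale = scene * (PAPER_WHITE / scene.max())

# Option 2: leave values below a knee alone, compress the rest.
KNEE = 90.0
knee = np.where(scene <= KNEE, scene,
                KNEE + (scene - KNEE) * (PAPER_WHITE - KNEE)
                / (scene.max() - KNEE))

# Option 3: just clip -- the two sheets of paper become identical.
clipped = np.minimum(scene, PAPER_WHITE)

print("hard scale:", hard_scale.round(1))  # [ 6.6 33.  65.3 99. ]
print("knee:      ", knee.round(1))        # [10.  50.  91.4 99. ]
print("clip:      ", clipped)              # [ 10.  50.  99.  99.]
```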

One obvious solution would be to not make a print of the scene but rather to show the scene on an illuminated display. Let's say that this display can go from L*=5 to L*=200. Now, there's no problem showing the full scene that originally went as high as L*=150. But now you might have a different problem...the viewer's eyes might be adapted to thinking that L*=200 is "full bright," and the L*=150 parts of the scene in the original give the appearance of being less bright than they actually were. You then might be tempted to scale everything the opposite direction, to make L*=150 map to L*=200...but now either you also have to raise the blacks and make them washed out, or you wind up stretching the dynamic range overall and creating more contrast than there originally was.

Now, instead of simply two viewing booths with slightly different light levels, imagine you're also including the lightbulb itself in your scene. That may well be L*=1000; how are you going to reproduce that?

So, sorry, hate to break it to all y'all, but we're never going to stop having problems with dynamic range in photography.

Indeed, it's the exact same problem artists everywhere have always faced...and that's why artists are always talking about the light, why there exist such things as good light and bad light.

It's all about the light. It always has been, and it always will be.

Cheers,

b&
 
Upvote 0