EOS-1D X Mark II Claims of 15 Stops of DR [CR3]

Diltiazem, that's all logical and reasonable and reason to want more DR, especially if you're regularly in those situations. I always shoot raw and often do adjust exposures to the extent that DPP allows and I agree it can enhance the quality of a shot. The bald eagles in Haida Gwaii taught me quite a bit with the white head presenting challenges. Thanks for that.

Jack
 
Jack Douglas said:
Lee Jay said:
heptagon said:
An image which is exposed "correctly" such that nothing has to be pushed or pulled does not need high DR. Do you know any display device that can display more than 10 stops of DR?

All of them, including prints.

It's not hard to tone map 25 stops of DR into a 6 stop capable display system. If you go to the Hubble Site, you'll see a lot of that sort of thing going on. Many of those images (but not all) have absolutely huge DR, all mapped into an 8-bit JPEG.

I don't have a firm handle on this so may be displaying ignorance. When the human eye views a scene where there is bright sun somewhat in the field of view and deep shadows as well, it does not perceive all the detail in the highest brights and the lowest darks. To me, looking at some HDR photos that were bragged up, I felt they just looked "unnatural". Is that what we're after, being able to distinguish detail in the darkest and brightest areas even though it doesn't represent reality?
This is just an innocent question. Isn't mapping just another way of saying compression?

Jack

Yes.

Example, this is about what the Orion Nebula looks like with a normal contrast curve:
[Image: hqdefault.jpg]


This is the Hubble image:

[Image: 1024px-Orion_Nebula_-_Hubble_2006_mosaic_18000.jpg]


That bright area in the middle is way, way, way brighter than the dust that surrounds it. The Hubble Site folks have massively compressed probably 20+ stops of DR into an 8 bit JPEG. And it still looks fine. Great, even. Not all HDRs are so good. Many look very unnatural. Go look in the sunrises and sunsets thread for a few that are pretty bad.
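Lee Jay's point (huge DR compressed into an 8-bit JPEG and still looking fine) can be sketched numerically. A minimal Python illustration with hypothetical values, showing why a log-style curve preserves shadow separation where naive linear scaling crushes it:

```python
import numpy as np

# Hypothetical scene spanning ~20 stops of linear radiance (1 to 2**20),
# sampled at 5-stop intervals.
radiance = np.logspace(0, 20, num=5, base=2.0)

# Naive linear scaling to 8 bits: everything below the top few stops
# crushes to black.
linear_8bit = np.round(radiance / radiance.max() * 255).astype(int)

# Log tone mapping: each stop gets an equal share of the 0-255 range.
stops = np.log2(radiance)
log_8bit = np.round(stops / stops.max() * 255).astype(int)

print(linear_8bit)  # most values land at 0: the shadows are gone
print(log_8bit)     # stops spread evenly across the output range
```

Real tone-mapping operators are locally adaptive rather than a single global curve, but the compression idea is the same.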
 
Jack Douglas said:
Lee Jay, I had already concluded that an example like you give is good reason for more DR. I guess it was the bad examples that kind of turned me off. Especially when the directional effect of the light source/s is lost - then the photo becomes visually confusing.

Jack

Actually, bad examples are way more common than the good ones.
 
Diltiazem said:
Jack Douglas said:
Lee Jay, I had already concluded that an example like you give is good reason for more DR. I guess it was the bad examples that kind of turned me off. Especially when the directional effect of the light source/s is lost - then the photo becomes visually confusing.

Jack

Actually, bad examples are way more common than the good ones.

And very few of the good ones require a high DR sensor, since a quick 3-shot HDR can crush a single shot from the best Sony sensors as far as DR goes.
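For anyone curious what "a quick 3-shot HDR" does under the hood, here's a hedged sketch (the frame values and EV offsets are made up, and real merge tools weight pixels more gracefully): each pixel is taken from whichever frame exposed it usably, rescaled back to a common exposure.

```python
import numpy as np

def merge_bracket(exposures, evs, clip=0.98, floor=0.02):
    """Merge bracketed frames (values normalized to 0-1) into one linear
    radiance estimate. Each frame is scaled back to a common exposure by
    its EV offset, and clipped/noisy pixels are excluded via weights."""
    exposures = np.asarray(exposures, dtype=float)
    scales = 2.0 ** -np.asarray(evs, dtype=float)[:, None]  # undo the EV shift
    radiance = exposures * scales
    weights = ((exposures > floor) & (exposures < clip)).astype(float)
    weights[:, weights.sum(axis=0) == 0] = 1.0  # fall back if all frames clip
    return (radiance * weights).sum(axis=0) / weights.sum(axis=0)

# Three frames at -2, 0, +2 EV of a scene with a deep shadow and a bright sky.
# The shadow pixel is usable only at +2 EV; the sky pixel only at -2 EV.
frames = [
    [0.005, 0.24],   # -2 EV: shadow buried, sky well exposed
    [0.02,  0.99],   #  0 EV: sky clipped
    [0.08,  1.00],   # +2 EV: shadow readable, sky clipped
]
print(merge_bracket(frames, evs=[-2, 0, 2]))
```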
 
jrista said:
rrcphoto said:
jrista said:
I don't think it is as simple as just moving the ADC units onto the sensor die. That allows increased parallelism and a lower ADC operating frequency. But that was not all that Sony Exmor or the CP-ADC Toshiba sensors did. They also improved other things, such as moving the clock generator to a remote area of the sensor to prevent it from introducing noise into the ADC units, adding logic to tune each ADC unit to its column and eliminate banding, improving the signaling that drives all the circuitry, etc. Canon would have to do more than just move the ADC units onto the sensor die...but given recent patents, I think they have the technology to do what's necessary.

This gets lost in the noise. I don't think Canon is going to catch up to 2-3 generations of Exmor sensors in one fell swoop. I suspect this will take at least one more generation.

While they have the patents, it's a lot to do in one generation, and those patents are all pretty new.

I agree that it probably won't happen in one fell swoop, however I disagree that it COULDN'T happen in one fell swoop. It happened like that before. ;)

The patents were recently granted, but if you look at the dates on most of them, they are not new by any means; most are two years old or older (I think one was even from 2011).

Canon has a bad habit of sitting on lucrative technology. I've never understood it, but they have had patents for some pretty kick-ass technology since 2008...they just haven't employed it. At least, not in their DSLRs...some of the technology did find its way into their smaller form factor sensors for P&S cameras, and those were the most advanced sensors Canon ever manufactured, using a 180nm process, copper interconnects, etc. There is some kind of lethargy in Canon's larger sensor division that keeps them from moving forward at anything faster than a snail's pace.

Canon fans often ridicule Nikon, which has its own business problems. However, that has never kept Nikon from pushing the technology envelope on every front at all times. The D500 is, in my humble opinion, a phenomenal camera. It's a refinement of technology on every front, not just the ergonomics or just the AF or just the sensor...it packs a ton of high-end functionality at every level. I hear all the arguments about Canon needing to continue delivering the excellent customer service and reliability they always have...but if a company that is struggling even more than Canon, and has a much smaller budget for R&D, can push the technology envelope ever farther, why can't Canon? Canon has BILLIONS to play with...

But isn't that the point? If the executives feel they're doing fine financially already, why would they bother innovating so much? I'm not defending that, by the way, but it seems like a reasonable reading of the situation. And would also explain why smaller/less profitable companies innovate more or faster (if indeed they do) - they feel they need to, in order to sell more.
 
Lee Jay said:
Diltiazem said:
Actually, bad examples are way more common than the good ones.

And very few of the good ones require a high DR sensor, since a quick 3-shot HDR can crush a single shot from the best Sony sensors as far as DR goes.

On that note, is there an optimum stop distance between the shots for a 3-shot HDR? And if there is, is it the same or different for a 5-shot HDR?
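The thread doesn't settle this, but one back-of-envelope way to frame it (my assumption, not an answer from anyone here) is that total coverage grows with the EV spread of the bracket, minus whatever overlap you want for clean blending:

```python
def bracket_coverage(sensor_dr_stops, shots, spacing_stops):
    """Rough total dynamic range covered by an exposure bracket:
    the sensor's own range plus the EV spread across the shots.
    Ignores the overlap needed for clean blending, so it's an upper bound."""
    return sensor_dr_stops + (shots - 1) * spacing_stops

# With a hypothetical 11-stop sensor:
print(bracket_coverage(11, 3, 2))  # 3 shots, 2 EV apart -> 15 stops
print(bracket_coverage(11, 5, 2))  # 5 shots, 2 EV apart -> 19 stops
```

By this reasoning the "optimum" spacing trades coverage against overlap quality, and a 5-shot bracket at the same spacing simply extends the range further.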
 
Lee Jay said:
Diltiazem said:
Jack Douglas said:
Lee Jay, I had already concluded that an example like you give is good reason for more DR. I guess it was the bad examples that kind of turned me off. Especially when the directional effect of the light source/s is lost - then the photo becomes visually confusing.

Jack

Actually, bad examples are way more common than the good ones.

And very few of the good ones require a high DR sensor, since a quick 3-shot HDR can crush a single shot from the best Sony sensors as far as DR goes.

Yes, that is one of the reasons why this feature didn't have any significant impact on the market. I often laugh when people lift 5 stops of shadows to show how good it is compared to Canon. In those examples one can clearly see how bad Sony/Nikon images look when you push shadows that much (although Canon looks worse).
 
Lee Jay said:
Diltiazem said:
Jack Douglas said:
Lee Jay, I had already concluded that an example like you give is good reason for more DR. I guess it was the bad examples that kind of turned me off. Especially when the directional effect of the light source/s is lost - then the photo becomes visually confusing.

Jack

Actually, bad examples are way more common than the good ones.


And very few of the good ones require a high DR sensor, since a quick 3-shot HDR can crush a single shot from the best Sony sensors as far as DR goes.


I am not a big fan of HDR tone-mapped images and don't like the hassle of bracket processing. Sony sensors are not any better at preserving highlights than Canon, but they are really good at lifting underexposed images. That's a plus when shooting because you don't have to try to center your exposure or pick which end you want to preserve in a high-DR scene. You just have to make sure you don't overexpose the highlights. It's easy because you have a live histogram in the viewfinder, and you can use zebras if you want. It would be nice to just center and shoot, but that would take 19 or 20 stops, I think.
 
Lee Jay said:
That bright area in the middle is way, way, way brighter than the dust that surrounds it. The Hubble Site folks have massively compressed probably 20+ stops of DR into an 8 bit JPEG. And it still looks fine. Great, even. Not all HDRs are so good. Many look very unnatural. Go look in the sunrises and sunsets thread for a few that are pretty bad.

Sure it looks great, but in fairness none of us has a baseline for what it "should" look like (i.e., to the naked human eye), as opposed, say, to looking around outside. Heavily lifted shadows are often very obvious in photos of common scenes; they look neither like natural light nor like an artificially lit scene.
 
neuroanatomist said:
Jack Douglas said:
To me, looking at some HDR photos that were bragged up, I felt they just looked "unnatural".

shad·ow /ˈSHadō/ noun
1. a dark area or shape produced by a body coming between rays of light and a surface.

Your brain sort of expects shadows to be...dark. When they're not dark, and there's no obvious reason for that, it naturally looks unnatural. Naturally.

Dark does not mean noisy, though. Neither does dark mean black, or devoid of detail. By definition dark means "approaching black"...but not actually black. When it comes to whether DR matters...it's not the darkness that matters, it's how much noise is in that darkness, and how much that noise costs you useful detail.

It's also the fact that RAW images, in their natural linear form, do not represent the world as we see it, and with a default camera curve they tend to compress both the highlights and the shadows too much. Our eyes see far more detail in shadows and highlights than a RAW rendered with the default settings of any RAW editor. Yes, shadows should be dark...but not everything that is dark in a camera RAW image before it is processed is necessarily supposed to be dark, or for that matter, an actual shadow.

When you're storing more information than you can display (and unless you're particularly lucky and have a 10-bit display with a video card that can actually render 10-bit, and you're using software that also supports 10-bit display, you're working with 8 bits, and what you see on screen is about 8 stops of dynamic range...anything more will be crushed to black or clipped to white), you're bound, by default, to render some of that information incorrectly. Hence the reason we need the ability to lift shadows. Same reason we need the ability to recover highlights: when something starts to blow out to nearly pure white in a RAW, it's not representative of reality...in reality, things don't suddenly clip to hard white with harsh edges. Why should shadows be any different?
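A quick sketch of the crushed-to-black behavior described above, assuming a naive 8-bit linear pipeline (real displays apply gamma, which changes the exact numbers but not the point):

```python
import numpy as np

# A hypothetical tone placed 1 stop apart, from the clipping point downward.
stops_below_white = np.arange(0, 12)   # 0 = just at clipping
linear = 0.5 ** stops_below_white      # linear light, white = 1.0

# Quantize linearly to 8 bits, as a naive display pipeline would.
codes = np.round(linear * 255).astype(int)
for s, c in zip(stops_below_white, codes):
    print(f"{s:2d} stops down -> code {c}")
```

Everything more than about 8 stops below white rounds to code 0: those tones exist in the capture but render as identical black.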

Canon users are used to having to keep shadows very dark (and personally I would say unnaturally dark in many cases), because to do otherwise means dealing with more noise. They don't push more because they either can't, or are simply unwilling to have that much noise in their image...especially if it has unnatural patterns in it. Once you work with data that has more dynamic range for a while, the NEED to keep shadows so crisply dark fades, and you begin to appreciate the flexibility that having lower read noise offers. You don't always have to lift your shadows. You won't always need to. However having the freedom to is extremely useful when you do need it.
 
candc said:
I am not a big fan of hdr tone mapped type images and dont like the hassle of bracket processing.

I'm not usually either, but in Lightroom it's now just three buttons - control-shift-m and you have a new almost raw file without that nasty HDR tone mapping that many such tools produce. Then you can develop it like any other image, just with six stops cleaner shadows.
 
Lee Jay said:
Jack Douglas said:
Lee Jay said:
heptagon said:
An image which is exposed "correctly" such that nothing has to be pushed or pulled does not need high DR. Do you know any display device that can display more than 10 stops of DR?

All of them, including prints.

It's not hard to tone map 25 stops of DR into a 6 stop capable display system. If you go to the Hubble Site, you'll see a lot of that sort of thing going on. Many of those images (but not all) have absolutely huge DR, all mapped into an 8-bit JPEG.

I don't have a firm handle on this so may be displaying ignorance. When the human eye views a scene where there is bright sun somewhat in the field of view and deep shadows as well, it does not perceive all the detail in the highest brights and the lowest darks. To me, looking at some HDR photos that were bragged up, I felt they just looked "unnatural". Is that what we're after, being able to distinguish detail in the darkest and brightest areas even though it doesn't represent reality?
This is just an innocent question. Isn't mapping just another way of saying compression?

Jack

Yes.

Example, this is about what the Orion Nebula looks like with a normal contrast curve:
[Image: hqdefault.jpg]


This is the Hubble image:

[Image: 1024px-Orion_Nebula_-_Hubble_2006_mosaic_18000.jpg]


That bright area in the middle is way, way, way brighter than the dust that surrounds it. The Hubble Site folks have massively compressed probably 20+ stops of DR into an 8 bit JPEG. And it still looks fine. Great, even. Not all HDRs are so good. Many look very unnatural. Go look in the sunrises and sunsets thread for a few that are pretty bad.

Jack, to add to what Lee Jay has said, this is my own Orion Sword image (and for all the effort I put into it, I think I can say it's one of the best you'll find, outside of the Hubble image, of course ;)):



http://www.astrobin.com/full/142576/F/

This is an HDR composite. I took a number of sets of data, including 210s, 120s, 60s, 30s, 15s, 10s and 5s sets of subs. (I had intended to take 240s subs...I accidentally took 210s, hence the break in the linearity of the exposure times.) To actually capture the central stars (the Trapezium, or "Trap" as we call it), I had to keep dropping my exposure time down to a mere 5 seconds per sub. Even then...note how much brighter the core of the Orion Nebula is in my image? The Trap itself is still overexposed; I wasn't able to accurately reproduce all of the major stars in there (there are four major stars in the Trap, and a number of other smaller ones) at 5s. I could have gone down to 2.5s, maybe even 1.25s subs, to better resolve those stars without the heavy clipping in them. Down to 5s, I expanded my dynamic range to 17 stops, and if I'd gone down to 1.25s that would have expanded it to 19 stops. Throw in the integration of 40x210s subs, which reduces noise by over a factor of 6x, and you have even more dynamic range. So, this was a truly high dynamic range image...probably 18-20 stops from the Trap to the darkest background sky.
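The stop arithmetic in that paragraph can be checked roughly (my rounding; the exposure-span figure comes out near the ~17 stops quoted, starting from the 11-stop single-exposure figure given below):

```python
import math

sensor_dr = 11.0                  # stops, single 5D III exposure (as stated)
longest, shortest = 210.0, 5.0    # seconds

# Shortening exposure shifts the whole capture window down in brightness:
extra = math.log2(longest / shortest)
print(f"exposure span: {extra:.1f} stops -> total ~{sensor_dr + extra:.1f}")

# Averaging N frames cuts random noise by sqrt(N), extending the floor:
n_subs = 40
noise_gain = math.sqrt(n_subs)
print(f"stacking {n_subs} subs: noise /{noise_gain:.1f}, "
      f"~{math.log2(noise_gain):.1f} more stops")
```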

I manually linear-fit each integration to each other, starting with the 10s to the 5s, then the 15s to the fit 10s, etc. Once all the integrations were fit, I combined each subsequent set into a combined image using an exponential transfer curve formula in PixelMath in PixInsight to produce a single high dynamic range image.
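As a rough illustration only (this is not jrista's actual PixelMath; the steepness constant and overlap range are arbitrary choices of mine), the linear-fit-then-blend step might look like this:

```python
import numpy as np

def fit_and_blend(short_exp, long_exp, lo=0.1, hi=0.8):
    """Scale/offset the short exposure so its unclipped midtones match the
    long exposure (least squares over the overlap range), then blend the
    short exposure in where the long one approaches clipping."""
    mask = (long_exp > lo) & (long_exp < hi)
    a, b = np.polyfit(short_exp[mask], long_exp[mask], 1)
    fitted = a * short_exp + b
    # Exponential-style transfer: weight favors the short exposure near white.
    k = 8.0  # steepness, an arbitrary choice here
    w = np.clip((np.exp(k * long_exp) - 1) / (np.exp(k) - 1), 0, 1)
    return (1 - w) * long_exp + w * fitted

# Synthetic example: a 4x shorter exposure of the same ramp, where the
# long exposure clips at white.
short = np.linspace(0.0, 0.3, 50)
long_ = np.clip(4.0 * short, 0.0, 1.0)
hdr = fit_and_blend(short, long_)
print(hdr[-1])  # highlight recovered beyond the long exposure's clip point
```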

Now, if you look at Orion Nebula with a pair of binoculars or a decently large telescope, you will only see the large central nebula, including the bluish rim...but you won't see all the rest of the outer dust. You would need an extremely large telescope to start seeing some of that outer dust, and a gargantuan telescope to see it all (albeit more faintly than I have depicted here for sure.)

But...what is reality, here? Because we cannot normally see the outer dust around the Orion Nebula, is it improper and incorrect and lying to reveal it? This is one of my favorite regions of the night sky. Has been since I was 7 or 8 as a kid. The Orion Nebula was the first nebula I think I ever saw through a telescope. I've wanted to see it FOR REAL for my entire life. I've wanted to know what's really out there in space in the constellation of Orion my whole life. This image represents what's really there. It isn't what we see when we look through a telescope...what we see when we look through a telescope is a pale soft gray shadow of the reality of the object up there in space, 1350 light years away. I didn't want to just reproduce what I could see every night during winter by pointing a telescope up at the sky...I wanted to reproduce everything, all of it, top to bottom, brightest to faintest, as much of the intriguing structure and detail of the region as I could.

And damn if it wasn't a HELL of a LOT of work. I reprocessed this data three times. I spent two solid weekends the third time, and I developed some of my own processing techniques in the end to bring out all the structures and all the nuances of color that I could. It would have been a lot easier to do if my camera had more dynamic range. The most tedious part of the process was linear fitting everything, identifying the blending regions, and tuning the exponential transfer curve formula for each deeper integration to blend them properly. Took a lot of time. Lot of meticulous and tedious measurement. The 5D III has 11 stops of DR. If I'd had 13.8 stops, it would have required fewer integrations to capture the entire dynamic range of the whole object. If I'd had 15 stops, I'd have required even fewer integrations...maybe only two or three.

Reality is more than what we see. Sometimes we don't see all of what really is there. Sometimes the goal isn't to exactly reproduce reality (and other times, it's to reproduce it more accurately.) Sometimes having a piece of technology that is more capable than our humble little eyes can let you reveal something that people rarely see. Sometimes it can let you realize a childhood dream.
 
sanj said:
Jack Douglas said:
Lee Jay said:
heptagon said:
An image which is exposed "correctly" such that nothing has to be pushed or pulled does not need high DR. Do you know any display device that can display more than 10 stops of DR?

All of them, including prints.

It's not hard to tone map 25 stops of DR into a 6 stop capable display system. If you go to the Hubble Site, you'll see a lot of that sort of thing going on. Many of those images (but not all) have absolutely huge DR, all mapped into an 8-bit JPEG.

I don't have a firm handle on this so may be displaying ignorance. When the human eye views a scene where there is bright sun somewhat in the field of view and deep shadows as well, it does not perceive all the detail in the highest brights and the lowest darks. To me, looking at some HDR photos that were bragged up, I felt they just looked "unnatural". Is that what we're after, being able to distinguish detail in the darkest and brightest areas even though it doesn't represent reality?
This is just an innocent question. Isn't mapping just another way of saying compression?

Jack

Jack, my understanding is that our eyes look at one thing at a time. At a sunset, when we look at the bright clouds our eyes adjust to show detail, and when we look at the darkness under the trees our eyes adjust again. We can do this very rapidly and in effect have a very high DR built in.

I am not an expert, but I hope what I say makes sense. As an analogy to an image sensor, our eyes can do selective ISO: given a scene with bright and dark portions, the eye increases ISO on the dark portion so we can see the desired details, and at the same time decreases ISO on the bright areas to see detail there.

Try taking a photo of a similar scene: in the image, if you can see detail in the dark portion, the highlights will be overexposed, and when you can see detail in the highlights, there's no detail in the dark areas. But when you see the actual scene with your eyes, you see both.
 
My introduction to high dynamic range occurred one November evening as I was leaving work - the moon was just over the blast furnace's flare stack with its blue flame visible in the night sky (carbon monoxide flames are invisible in daylight). Add the little lights around the furnace, the wispy clouds, touch of snow on the ore piles, and it was beautiful. I rushed back to my office for the best camera available. Not even close to capturing the scene. So, I went home and learned some things.
Yes, you need to capture the data. A high dynamic range sensor can sometimes get it all in one shot, but there are many scenes that need more than fifteen stops (I feel, but have no proof, that the moon frequently features in scenes with more than 15 stops of information). Therefore, bracketing is still needed.
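The bracketing arithmetic implied here, as a hedged sketch (assumes each frame contributes its full sensor range and ignores the overlap needed for blending):

```python
import math

def shots_needed(scene_stops, sensor_stops, spacing_stops=2):
    """How many bracketed frames (spaced a fixed EV apart) are needed to
    span a scene, assuming each frame contributes its full sensor range."""
    if scene_stops <= sensor_stops:
        return 1
    extra = scene_stops - sensor_stops
    return 1 + math.ceil(extra / spacing_stops)

print(shots_needed(20, 15))  # even a 15-stop sensor needs a bracket here
print(shots_needed(14, 15))  # within one frame's range
```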
All of this focus on sensor capability misses the other steps in the process. Jrista has just told us that he's had to develop techniques in post to process his beauties. We also have to be mindful of the display(s). But, I can sit in my living room at noon while reading a book and glance over the top of the page to identify the bird in the tree. One shot, all of the information compressed into the display in my head. I think that sensor and firmware system has been available for a million years or so.
Maybe some of this SENSOR DR!!!!! energy should be directed to software coupled with displays. After the shot, why can't it be as easy as just looking? Shouldn't it be hard to make it look unnatural?
BTW, the "high dynamic range" of our vision is accomplished, in part, by in-retina COMPRESSION. Dynamic, pixel-level neutral density filtering could be the thing we've been waiting for.
 
I can honestly say I understand the issues reasonably well now - thanks. Obviously, some folk could always do with more DR no matter how much Canon dished out. I'm closer to the group that says they shoot most often at ISOs where DR is similar across the brands. But hey, if Canon can give more I wouldn't complain unless it was at the expense of something more important to me personally.

I really want better IQ with lower noise in the ISO 2000 to 6400 range.

Jack
 
Lee Jay said:
heptagon said:
An image which is exposed "correctly" such that nothing has to be pushed or pulled does not need high DR. Do you know any display device that can display more than 10 stops of DR?

All of them, including prints.

It's not hard to tone map 25 stops of DR into a 6 stop capable display system. If you go to the Hubble Site, you'll see a lot of that sort of thing going on. Many of those images (but not all) have absolutely huge DR, all mapped into an 8-bit JPEG.

How is tone mapping 25 stops of DR into an 8-bit JPEG different from pushing and pulling like crazy?


Your answer to my question is like:

Q: Do you know a stair with 1000 steps?

A: With 6 decimal digits you can represent 1 million different numbers.
 