Who said Canon cameras suck?!?

Fishnose said:
Nishi Drew said:
D800 complaints? Sure,

NO, the D800 has bloody marvellous AF. Fast and accurate. This rumor is a leftover of the QC problem they had with left focus alignment in the beginning. It's gone now. Get over it.
NO, it does very well indeed with high ISOs. Not compared to a 1Dx of course, but that's not a reasonable comparison, is it.
NO, the big files don't slow things down (unless you have a shitty computer or you're a sports photographer)

As to the OP's question - 'Who said Canon cameras suck?' - Well it sure wasn't me. They're excellent cameras. Get over it.

That's the hyperbole talking. To bring this down to earth, here's a quote from a wedding forum, a Nikon owner advising another Nikon owner:

"As an owner of the D800, it never comes out at weddings, I stick to the D3s. As Brady said, it is just far too slow of a shooter. Plus, the files sizes are too much of a hassle to drag around and edit. A 16 bit one layer tiff is 289.2 mb per file. I would be looking at a D700, D3 or D3s."
 
Upvote 0
Chuck Alaimo said:
Nishi Drew said:
D800 complaints? Sure,

NO, the D800 has bloody marvellous AF. Fast and accurate. This rumor is a leftover of the QC problem they had with left focus alignment in the beginning. It's gone now. Get over it.
NO, it does very well indeed with high ISOs. Not compared to a 1Dx of course, but that's not a reasonable comparison, is it.
NO, the big files don't slow things down (unless you have a shitty computer or you're a sports photographer)

As to the OP's question - 'Who said Canon cameras suck?' - Well it sure wasn't me. They're excellent cameras. Get over it.

That's the hyperbole talking. To bring this down to earth, here's a quote from a wedding forum, a Nikon owner advising another Nikon owner:

"As an owner of the D800, it never comes out at weddings, I stick to the D3s. As Brady said, it is just far too slow of a shooter. Plus, the files sizes are too much of a hassle to drag around and edit. A 16 bit one layer tiff is 289.2 mb per file. I would be looking at a D700, D3 or D3s."

I don't think file size is a problem for today's computer systems. For weddings you need to shoot many photos in low light. I doubt the D800 can handle low light as well as the 5D3. I believe many wedding photographers would rather use the 5D3, not the D800. However, if you are not taking photos in low light, the D800 does give more of an advantage than the 5D3.
 
Upvote 0
Mikael Risedal said:
NO ONE says that Canon sucks, but the read noise, pattern noise and banding should not be there if Canon had a modern sensor tech line.
I do not understand why you are so upset. Face the truth and stop denying it: Canon's sensors are not up to date, and Canon produces sensors on old 180nm process machines while others use 110nm or less and put column-wise ADCs at the sensor edge.

It's called a patent, and it's owned by Sony ;D

It's kind of hard to work around. Ask a Samsung executive. And when you do, make sure you're holding an iPhone.
 
Upvote 0
Re: Who said Canon sensors suck?!?

neuroanatomist said:
LetTheRightLensIn said:
Anyway, I have sooooooo many shots to edit and am growing tired of this thread, so I will go

Liar, liar, pants on fire.



Am I serious this time? Hmmmm...better have a fire extinguisher handy and look down - right now - just in case.

Dang it! This time I thought you were joking and now I'm suffering second degree burns. :(
I'm always wrong!
 
Upvote 0
studio1972 said:

NO, the big files don't slow things down (unless you have a shitty computer or you're a sports photographer)

That's just silly. As somebody who processes thousands of images per week, I can assure you image size makes a big difference, and if you don't need more than about 20MP, double that is a waste.

The D600 is already out. That might help Nikon users who have a lot of photos to process. :)
 
Upvote 0
I for one find this discussion both entertaining and useful in acquiring knowledge that I did not previously have.

I have a question: Am I understanding the statements in this thread correctly if I say that you cannot increase the DR in your image because it is a hardware limit? The reason for my question is that I have been looking at videos on YouTube for good tips on black and white conversion in Photoshop, and in several of these videos they claim that you can increase the DR by using layers and tweaking Levels and Curves....
 
Upvote 0
Quasimodo said:
I for one find this discussion both entertaining and useful in acquiring knowledge that I did not previously have.

I have a question: Am I understanding the statements in this thread correctly if I say that you cannot increase the DR in your image because it is a hardware limit? The reason for my question is that I have been looking at videos on YouTube for good tips on black and white conversion in Photoshop, and in several of these videos they claim that you can increase the DR by using layers and tweaking Levels and Curves....

If you use raw - no, you get what you get.
If you use JPEG - yes, because JPEG by definition has a smaller DR than raw, and you have to squeeze more information into the smaller DR of the JPEG file.
JPEG has 8 bits per channel: 2^8 = 256 gradations of each color, and 256*256*256 = 16.7 million color combinations. Raw has 12, 14 or even 16 bits per channel. Of course not all of the DR of a raw file is used (it depends on the scene), but it is much bigger anyway.
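For anyone who wants to check that arithmetic, here is a tiny Python sketch. The bit depths are just the generic JPEG/raw values mentioned above, not tied to any particular camera:

[code]
# Levels per channel and total RGB combinations for common bit depths.
for bits in (8, 12, 14, 16):
    levels = 2 ** bits            # gradations per channel
    colors = levels ** 3          # combinations across R, G and B
    print(f"{bits:2d}-bit: {levels:>6d} levels/channel, {colors:.3e} colors")

# 8-bit gives 256 levels/channel and ~1.677e+07 (16.7 million) colors,
# which is the 256*256*256 figure quoted above.
[/code]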
 
Upvote 0
Re: Who said Canon sensors suck?!?

neuroanatomist said:
Same thing with a 14.4 DR from a 14-bit ADC. WTF, that's impossible.

I don't see why that's impossible -- range is determined by bounds, not cardinality. If I have 2 bits, I can represent levels as high as 6 and as low as 0 by mapping 0->0, 1->2, 2->4, 3->6. That's a linear map, and the "dynamic range" (log2(6/1)) is about 2.6 stops. It does "miss" the levels 1, 3 and 5, but then if I were to throw in an extra bit I would still miss the levels 0.5, 1.5, etc.

Is there something about the physical equipment that mandates that toggling the lowest-order bit of the ADC changes the measured output level by exactly "1" and not, for example, 0.9 or 1.1?

To put this another way -- if I use a sensor with the same sensitivity characteristics but I use a 13-bit ADC, can I still represent the lowest and highest output level of the sensor? If I do this, isn't the dynamic range of the sensor unchanged? (It might affect other performance characteristics but, it seems to me, not DR.)
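To make that concrete, here is a small Python sketch of the same idea. The toy full-scale level of 6 and the bit depths are just the numbers from the example above, not any real ADC:

[code]
import numpy as np

def quantize(signal, bits, full_scale):
    """Linearly map [0, full_scale] onto 2**bits codes and back to levels."""
    step = full_scale / (2 ** bits - 1)
    return np.round(np.asarray(signal) / step) * step

signal = np.array([0.0, 1.0, 2.5, 6.0])  # toy sensor levels, max level 6

for bits in (2, 3, 13, 14):
    q = quantize(signal, bits, full_scale=6.0)
    print(f"{bits:2d} bits -> {q}  span = {q.max() - q.min():.1f}")

# Every quantizer reproduces both 0 and 6, so the representable *range*
# is the same; extra bits only shrink the step between adjacent codes.
[/code]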
 
Upvote 0
Re: Who said Canon sensors suck?!?

jrista said:
You sample multiple inputs, average their values, and produce a better output pixel (in the destination space) that contains rich information. Even with downscaling though, it doesn't take a particularly intelligent mind to realize you can't generate more than TWICE THE LUMINANCE RANGE (1.2 stops worth) in a downsampled image from a source image that only contains 13.2 stops to start with.

Why not? How much precisely can you generate? If you have a little over 4x as many pixels (as the destination), what multiple does that reduce noise by? [edit: if your measure of noise is standard error, and your noise is Gaussian, I'd expect it to be inversely proportional to sqrt(N), so in this case I'd expect an extra stop or so for 4x as many megapixels, given those assumptions]
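A quick numerical check of that sqrt(N) expectation. This is only an illustration under the stated assumptions (pure Gaussian noise, plain averaging of 4 source pixels per output pixel, no photon statistics or resampling kernel), not DxO's actual normalization:

[code]
import numpy as np

rng = np.random.default_rng(0)

n_src = 4                      # source pixels averaged per output pixel
noise = rng.normal(0.0, 1.0, size=(1_000_000, n_src))  # unit read noise

averaged = noise.mean(axis=1)
gain = noise.std() / averaged.std()

print(f"noise reduced by ~{gain:.2f}x "
      f"(sqrt({n_src}) = {np.sqrt(n_src):.2f}), "
      f"i.e. about {np.log2(gain):.2f} stop of extra headroom")
[/code]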

btw, I hope you're not suggesting that DxO's primary results should be reported on a per-pixel basis (what would that do to the SNR measurements of high-megapixel cameras? Would the 5DIII get better results than a 5DC on a per-pixel basis, for example?)
 
Upvote 0
What you can do in software doesn't matter. Dynamic range benefits what you do in-camera. It doesn't matter if you can use clever software algorithms to massage the 13.2 stops of DR in an original image to fabricate artificial data to extract 14.0, 14.4, or 16 stops of "digital DR" (which is not the same thing as hardware sensor DR). I'll try to demonstrate again; maybe someone will get it this time.

"I am composing a landscape scene on-scene, in-camera. I meter the brightest and darkest parts of my scene, and it's 14.4 stops exactly! HA! I GOT 'DIS! I compose my scene with the D800's live view, and fiddle with my exposure trying to get the histogram to fit entirely between the extreme left edge and the extreme right edge. Yet, for the life of me, I CAN'T. Either my histogram rides up the right edge a bit (the highlights), or it rides up the left edge a bit (the shadows). This is really annoying. DXO said this stupid camera could capture 14.4 stops of DR!! Why can't I capture this entire scene in a single shot?!?!?!!!1!!11 I didn't bring any ND filters because this is the uberawesomedonkeyshitcameraoftheyearpureawesomeness!!!!!"

The twit trying to capture a landscape with 14.4 stops of DR in a single shot CANNOT because the sensor is only capable of 13.2 stops of DR! The twit of a landscape photographer is trying to capture 1.2 extra stops (about 2.3x as much light) in a single shot, and his camera simply isn't capable of doing so. He could take two shots, offset +/- 2 EV, and combine them in post with HDR, but there is no other way his camera is going to capture 14.4 stops of DR.

THAT ^^^^^ UP THERE ^^^^^ IS MY POINT about the D800. It is not a 14.4-stop camera. It is a 13.2-stop camera. You can move levels around in post to your heart's content, dither and expand the LEVELS YOU HAVE. But if you don't capture certain shadow or highlight detail TO START WITH....you CAN'T CREATE IT LATER. All you're doing is averaging and dithering the 13.2 stops you actually captured to SIMULATE more DR. Ironically, that doesn't really do anyone any good, since computer screens are, at most, capable of about 10 stops of DR (assuming you have a super-awesome 10-bit RGB LED display), and usually only capable of about 8 stops of DR (if you have a nice high-end 8-bit display), and for those of you unlucky enough to have an average $100 LCD screen, you're probably stuck with only 6 stops of DR. Print is even more limited. An average fine art or canvas print might have 5 or 6 stops. A print on a high-Dmax gloss paper might have as much as 7 stops of DR.

There is little benefit to "digital DR" that is higher than the sensor's native DR. You're not gaining any information you didn't start out with; you're simply redistributing the information you have in a different way by, say, downscaling with a clever algorithm to maximize shadow DR. But if you didn't record shadow detail higher than pure black to start with, no amount of software wizardry will make that black detail anything other than black. And even if you do redistribute detail within the shadows, midtones, or highlights...if your image has 14 stops of DR you can't actually SEE IT. Not on a screen. Not in print. You have to compress it, merge those many stops into fewer stops, and thus LOSE detail, to view it on a computer screen or in print.



In my original example that started this thread...my camera DID record the information I recovered. I am not claiming, have not claimed, and will not claim that my 7D is capable of anything more than 11.12 stops of DR, because that's what the sensor gets (at least according to DXO). My original post was simply noting that one can make the BEST USE of that hardware DR by exposing to the right. Canon cameras offer a lot of highlight exposure latitude, and based on my accidental overexposure of a dragonfly, I've learned you can not only ETTR a little...you can ETTR a LOT with a modern Canon camera (i.e. 7D, 5D III, 1D IV, 1D X). You can really pack in the highlights and recover a tremendous amount of information in post.

However, the same facts of reality regarding hardware DR that exist for the D800 also exist for the 7D. DXO Mark lists its "Print DR" for the 7D at 11.73 stops. Same as with the D800 above, if I try to photograph a landscape with 11.73 stops of DR, I'm going to either block up the shadows a small amount or blow some of the highlights a small amount. No way around that. I am going to have to compromise by about 2/3 of a stop one way or another.
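For anyone following the arithmetic, here is a trivial Python helper for the stops-to-contrast-ratio conversions used above. The specific DR figures are just the DxO numbers quoted in this thread, and the helper names are my own:

[code]
def stops_to_ratio(stops):
    """Convert a dynamic range in stops (EV) to a linear contrast ratio."""
    return 2.0 ** stops

def fits_in_one_shot(scene_stops, sensor_stops):
    """True if the scene's brightness range fits the sensor DR in one exposure."""
    return scene_stops <= sensor_stops

# A 14.4-stop scene spans ~2.3x more light range than a 13.2-stop sensor can record.
print(stops_to_ratio(14.4 - 13.2))      # ~2.30
print(fits_in_one_shot(14.4, 13.2))     # False -> bracket/HDR or a grad ND
print(fits_in_one_shot(11.73, 11.12))   # False -> give up ~0.6 stop somewhere
[/code]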
 
Upvote 0
cliffwang said:
Chuck Alaimo said:
Nishi Drew said:
D800 complaints? Sure,

NO, the D800 has bloody marvellous AF. Fast and accurate. This rumor is a leftover of the QC problem they had with left focus alignment in the beginning. It's gone now. Get over it.
NO, it does very well indeed with high ISOs. Not compared to a 1Dx of course, but that's not a reasonable comparison, is it.
NO, the big files don't slow things down (unless you have a shitty computer or you're a sports photographer)

As to the OP's question - 'Who said Canon cameras suck?' - Well it sure wasn't me. They're excellent cameras. Get over it.

That's the hyperbole talking. To bring this down to earth, here's a quote from a wedding forum, a Nikon owner advising another Nikon owner:

"As an owner of the D800, it never comes out at weddings, I stick to the D3s. As Brady said, it is just far too slow of a shooter. Plus, the files sizes are too much of a hassle to drag around and edit. A 16 bit one layer tiff is 289.2 mb per file. I would be looking at a D700, D3 or D3s."

I don't think file size is a problem for today's computer systems. For weddings you need to shoot many photos in low light. I doubt the D800 can handle low light as well as the 5D3. I believe many wedding photographers would rather use the 5D3, not the D800. However, if you are not taking photos in low light, the D800 does give more of an advantage than the 5D3.
Note, the quote I was using there was from a Nikon user who shoots weddings and owns a D800; this person was giving upgrade advice to another wedding shooter. I personally believe that the size of D800 files would be hard to work with when you're trying to move quickly through 3000 shots. I also own a 5D3, and for wedding work it's freaking fantastic. Again though, you may feel that the file sizes are not too big; the point of the matter is that NIKON users feel the files are too big for wedding work --- again, as the quote says, "As an owner of the D800, it never comes out at weddings, I stick to the D3." And likewise, that's where the mk3 shines.
 
Upvote 0
Chuck Alaimo said:
cliffwang said:
Chuck Alaimo said:
Nishi Drew said:
D800 complaints? Sure,

NO, the D800 has bloody marvellous AF. Fast and accurate. This rumor is a leftover of the QC problem they had with left focus alignment in the beginning. It's gone now. Get over it.
NO, it does very well indeed with high ISOs. Not compared to a 1Dx of course, but that's not a reasonable comparison, is it.
NO, the big files don't slow things down (unless you have a shitty computer or you're a sports photographer)

As to the OP's question - 'Who said Canon cameras suck?' - Well it sure wasn't me. They're excellent cameras. Get over it.

That's the hyperbole talking. To bring this down to earth, here's a quote from a wedding forum, a Nikon owner advising another Nikon owner:

"As an owner of the D800, it never comes out at weddings, I stick to the D3s. As Brady said, it is just far too slow of a shooter. Plus, the files sizes are too much of a hassle to drag around and edit. A 16 bit one layer tiff is 289.2 mb per file. I would be looking at a D700, D3 or D3s."

I don't think file size is a problem for today's computer systems. For weddings you need to shoot many photos in low light. I doubt the D800 can handle low light as well as the 5D3. I believe many wedding photographers would rather use the 5D3, not the D800. However, if you are not taking photos in low light, the D800 does give more of an advantage than the 5D3.
Note, the quote I was using there was from a Nikon user who shoots weddings and owns a D800; this person was giving upgrade advice to another wedding shooter. I personally believe that the size of D800 files would be hard to work with when you're trying to move quickly through 3000 shots. I also own a 5D3, and for wedding work it's freaking fantastic. Again though, you may feel that the file sizes are not too big; the point of the matter is that NIKON users feel the files are too big for wedding work --- again, as the quote says, "As an owner of the D800, it never comes out at weddings, I stick to the D3." And likewise, that's where the mk3 shines.
I won't say file size is not a problem if, as you point out, people need to deal with thousands of shots regularly. Moreover, picking the D800 would be a big mistake for a wedding photographer, IMO. I don't think there's anything to argue about here: the 5D3 is good for high ISO, and the D800 is good for low ISO. The argument is about DR.
 
Upvote 0
hjulenissen said:
Quasimodo said:
I have a question: Am I understanding the statements in this thread correctly if I say that you cannot increase the DR in your image because it is a hardware limit? The reason for my question is that I have been looking at videos on YouTube for good tips on black and white conversion in Photoshop, and in several of these videos they claim that you can increase the DR by using layers and tweaking Levels and Curves....
DR means different things to different people.

If the highlights are clipped, and the shadow details are buried in noise, then information is lost. If a pair of pixels "should" have been [256, 257] but were clipped to [256, 256], then information is lost, and no clever software (or clever photoshop operator) can reliably know if the true values were [256, 257] or [256, 256].

And if you could see how big the difference is in the number of photons acquired by the sensor between the values later converted to 256 and 257, you would get a headache :)
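A minimal Python sketch of the information loss hjulenissen describes, using the same made-up [256, 257] pixel pair (the white point of 256 is just the toy value from that example):

[code]
import numpy as np

true_values = np.array([256, 257, 300])  # what the scene "really" was
white_point = 256                         # full-scale code of this toy ADC

clipped = np.minimum(true_values, white_point)
print(clipped)   # [256 256 256]

# Once the pixels land on the same code, no raw converter, layer stack or
# curve can reliably tell whether the original was 256, 257, or anything brighter.
[/code]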
 
Upvote 0
Mikael Risedal said:
Same thing with a 14.4 DR from a 14-bit ADC. WTF, that's impossible.

Not at all. The Nikon D3x has a column-wise 12-bit ADC, and the sensor is read more than once.

Actually, that's incorrect. The Nikon D3x has a 14-bit ADC. This is a quote directly from DXO's own review of the D3x:

Key sensor characteristics
The Nikon D3X features a very high resolution full-frame format CMOS sensor with 24.6Mpix. The D3X and the Sony A900 are the only two cameras at that level of resolution within the professional D-SLR category, but the D3X features a 14-bit Analog/Digital (A/D) converter which, as shown below, plays a significant role in boosting its capture performance when compared to the Sony A900 (with only a 12-bit A/D converter).

The D3X’s lower ISO setting (down to ISO 78) compared to other Nikons certainly help as some dxomark metrics such as dynamic range are considered at lowest ISO values.

Key performance factors
The D3X sensor shows exceptional dynamic range with a max DR of 13.7 bits in “Print” mode. And it is quite an amazing performance compared to all other cameras with similar sensor technologies (Canon 1Ds MKIII, 5D MKII, Sony A900). Interestingly, the Nikon D3X sensor does follow the theoretical rule of plus 1 f-Stop of dynamic range when ISO sensitivity is divided by 2. Compared to A900, dynamic range is better by about 1 stop across the whole ISO range (50-6400).

It is the first DSLR CMOS sensor with more than 12-bit effective depth of information! Its 14-bit A/D makes a difference and we can expect exceptional tone scale reproduction with such a camera.
Further the D3X’s Color Depth is also exceptional, with a maximum value of 24.7 bits of color discrimination performance (in normalized “Print” mode). As the D3X color responses are pretty close to those of the D3, we can expect the same high quality color rendering.

The Nikon D3X has some limitations at high ISO sensitivity, and its dynamic range is lower than, for instance, the Canon 5D MKII for speeds above 800 (manufacturer value).

Overall, however, Nikon D3X definitely takes the lead within its category and will be remembered as the first camera clearly demonstrating more than 12-bit effective depth of information.

Nikon D3X DxOMark review - January 15, 2009
Reference: http://www.dxomark.com/index.php/Publications/DxOMark-Reviews/DxOMark-review-for-the-Nikon-D3X

@Mikael: You, sir, really need to start getting your facts straight. You keep publishing information that is either inaccurate, misleading, or flat-out wrong. That's not helping your arguments at all, and it's just regurgitating a lot of the misleading and inaccurate data DXO piles up all over these forums. Please get your facts straight first, then post.
 
Upvote 0
jrista…. I really want to thank you for this topic and the other fascinating posts over the last few days regarding DR; the debate has, I think, on the whole been excellent. I don't even begin to understand the science being discussed (hell, I don't even know what ADC stands for), but I do get your point « Reply #95 on: Today at 01:33:15 PM ». You don't need to be a scientist to understand that.

Anyway, to my question(s): I've read many times about exposing to the right but have been reluctant to try it, as I would prefer, if anything, to expose to the left to lift my shutter speed and reduce camera shake. I'm just amazed by the example you posted, though, so I want to try this for myself. I understand that once a highlight is clipped it's gone, so simply setting exposure for a dark area and letting the highlights take care of themselves (overexposing) isn't enough. Is the easiest way to do this in a fast-changing situation to bracket, say, 0, +1, +2 or +1, +2, +3 and use the brightest non-clipped file? Is the histogram the best way to do it, or is there another way?
Also, when you talk about amazing highlight recovery in Lightroom 4.1 with -4 EV exposure correction and 60% highlight recovery, what would be the equivalent options in, say, DPP or Photoshop?
 
Upvote 0
So the Mark III has less DR than the D800. Does this really affect everyone THAT much? It seems really exaggerated. I guess if a scene demands it, I like to do an HDR. I try to process it as minimally as possible, with the goal of only reproducing what my eyes saw in that scene. This seems to work well for me. It just seems that everyone arguing about a few digits of DR is pointless. IDK, maybe I just don't get it.

Have enjoyed reading jrista's posts.
 
Upvote 0
zim said:
jrista…. I really want to thank you for this topic and the other fascinating posts over the last few days regarding DR; the debate has, I think, on the whole been excellent. I don't even begin to understand the science being discussed (hell, I don't even know what ADC stands for), but I do get your point « Reply #95 on: Today at 01:33:15 PM ». You don't need to be a scientist to understand that.

Thanks. I've been trying to explain that since the D800 came out, and no one seemed to understand my point. Hopefully that little narrative gets it across now.

zim said:
Anyway, to my question(s): I've read many times about exposing to the right but have been reluctant to try it, as I would prefer, if anything, to expose to the left to lift my shutter speed and reduce camera shake. I'm just amazed by the example you posted, though, so I want to try this for myself. I understand that once a highlight is clipped it's gone, so simply setting exposure for a dark area and letting the highlights take care of themselves (overexposing) isn't enough. Is the easiest way to do this in a fast-changing situation to bracket, say, 0, +1, +2 or +1, +2, +3 and use the brightest non-clipped file? Is the histogram the best way to do it, or is there another way?

Well, if you really want to start ETTR, it's best to experiment for a while. Whatever it is that you photograph most, experiment a lot and learn where your highlights blow out. You could try to just take a bunch of shots each time you photograph something, with EV 0, +1, +2, +3, +4, etc. But that is really time consuming to do as a matter of practice. It WILL be helpful to do that when you first start, as you will simply need a variety of samples to figure out where, for the kinds of things you photograph, your highlights really tend to blow out (such that they are unrecoverable). You should also use the in-camera highlight warning feature. That feature in Canon cameras is based on the JPEG previews, so it will usually start the blinkies a little before you actually do overexpose so much that you can't recover. I've learned that with RAW, you can usually handle at least a small amount of highlight warning blinking when previewing your photos in-camera. In some cases, a LOT of the photo may blink (as was the case with my original sample image of the dragonfly). You really can't know ahead of time if you've actually blown out or not. If I think I have, I pull back some...maybe 1/3 to 2/3 of a stop.

Another excellent tool is the in-camera histogram. USE THIS! It's really your best tool. When you photograph something, check the histogram. That should really be a matter of practice, actually. You'll start to get a feel for what the histogram means, and when it indicates you've blown out your highlights. Again, the histogram is based on a JPEG conversion of the photo taken, rather than the RAW, so it won't be 100% accurate. You can usually get away with a little bit of the histogram riding up the right edge of the display. How much it can ride up will depend on the camera, the scene, and the overall key of the photo (high key, low key, etc.)

It will take time and experimentation, but you'll eventually just get a "feel" for what your camera is telling you, and you'll start to intuitively know when you have or have not actually blown your highlights from the in-camera preview with highlight warning and the in-camera histogram. Also, if you ever feel that you've gone too far, you should always pull exposure down a bit, by at least 1/3 of a stop, and take another shot. If you are photographing action that only occurs once, it's better not to push ETTR that far. You can still expose to the right, but you don't want to go so far that your histogram is riding the right edge. It's better to keep the histogram at least a couple of pixels away from the right edge, and probably a little more than that. The key difference between shadows and highlights is that with shadows, you just didn't capture enough, but lacking shadow detail doesn't mean your photograph is unusable. You can always lift shadows, sometimes a lot, and even if there is noise, FPN, banding, whatever...there are ways to clean that up and get good shadow detail. On the flip side, if you blow your highlights...they are gone, for good. You can't recover them, and if you overexpose enough, you might just blow more than highlights. So when you aren't sure you'll be able to re-take a shot, play it safe. Either just expose normally, or if you are comfortable with your ETTR skills, just ETTR less...give those highlights some physical headroom on the sensor. You usually only need 1/3 to 2/3 of a stop, but if you have a lot of bright daylight pounding down on a baseball player in a white jersey, you might want to drop exposure by a whole stop or so.

(Note: The true benefit of the D800 is not really that it doesn't have any noise...it's that you don't have to spend time cleaning that noise up. On the flip side, having shadow noise doesn't mean your photograph is a throwaway or that you can't recover the shadows...it just means you DO have to spend time cleaning up all the noisy junk in the shadows before your photo is finally acceptable. ;) )
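For what it's worth, zim's bracket-and-pick-the-brightest idea can also be checked offline. This is only a rough sketch: the 14-bit white level, the 1% clipping threshold, and the function names are my own assumptions, not anything the camera or DPP actually does:

[code]
import numpy as np

def clipped_fraction(raw, white_level):
    """Fraction of pixels at (or essentially at) sensor saturation."""
    return np.mean(raw >= white_level * 0.99)

def best_ettr_frame(frames, white_level=16383, max_clip=0.01):
    """Return the most-exposed frame with < max_clip of its pixels blown.

    `frames` is a list of raw arrays ordered from darkest to brightest,
    e.g. the 0 / +1 / +2 EV bracket discussed above.
    """
    keep = None
    for raw in frames:                      # brighter frames come later
        if clipped_fraction(raw, white_level) < max_clip:
            keep = raw
    return keep

# Example with synthetic 14-bit data: the +2 EV frame clips, so +1 EV wins.
rng = np.random.default_rng(1)
base = rng.uniform(500, 5000, size=(100, 100))
frames = [np.clip(base * 2 ** ev, 0, 16383) for ev in (0, 1, 2)]
print(best_ettr_frame(frames) is frames[1])   # True
[/code]

In the field, the in-camera histogram and highlight warning do the same job much faster; this is just the same logic written down.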

zim said:
Also, when you talk about amazing highlight recovery in Lightroom 4.1 with -4 EV exposure correction and 60% highlight recovery, what would be the equivalent options in, say, DPP or Photoshop?

Photoshop, yes...since that uses ACR, which ultimately uses exactly the same RAW processing engine as Lightroom. As for DPP, I couldn't say. It uses a different RAW processing engine, developed by Canon. Technically speaking, I would kind of expect Canon's own RAW processor to produce better results...although that hasn't always proven true. I think DPP is able to extract more DR out of the average .CR2 file; however, its demosaicing algorithm is somewhat wanting (it tends to leave jagged edges and color artifacts around, whereas ACR/LR's demosaicing algorithm is AHD-based and produces very clean results).

DPP might be able to do even greater wonders with exposure recovery, and work even greater magic than -4 EV recovery, if you can put up with the demosaicing.
 
Upvote 0
One problem that appears is if you are already doing workarounds for other problems (focus stacking, stitching, ...). Having several "layers" of time-consuming, error-prone workarounds can detract from the experience of photography and lead to a loss of "good shots".

I get what you are saying. I guess for me, I know that if I am going to go out and do some landscape shots, some of them may require bracketed shots or focus stacking like you mentioned. In that case, it's usually not a time crunch, because my subject matter really isn't going anywhere. And for me, I don't mind post-processing some of those things.

I guess it's just a situational thing.
 
Upvote 0
jrista said:
What you can do in software doesn't matter. Dynamic range benefits what you do in-camera. It doesn't matter if you can use clever software algorithms to massage the 13.2 stops of DR in an original image to fabricate artificial data to extract 14.0, 14.4, or 16 stops of "digital DR" (which is not the same thing as hardware sensor DR). I'll try to demonstrate again; maybe someone will get it this time.

"I am composing a landscape scene on-scene, in-camera. I meter the brightest and darkest parts of my scene, and it's 14.4 stops exactly! HA! I GOT 'DIS! I compose my scene with the D800's live view, and fiddle with my exposure trying to get the histogram to fit entirely between the extreme left edge and the extreme right edge. Yet, for the life of me, I CAN'T. Either my histogram rides up the right edge a bit (the highlights), or it rides up the left edge a bit (the shadows). This is really annoying. DXO said this stupid camera could capture 14.4 stops of DR!! Why can't I capture this entire scene in a single shot?!?!?!!!1!!11 I didn't bring any ND filters because this is the uberawesomedonkeyshitcameraoftheyearpureawesomeness!!!!!"

The twit trying to capture a landscape with 14.4 stops of DR in a single shot CANNOT because the sensor is only capable of 13.2 stops of DR! The twit of a landscape photographer is trying to capture 1.2 extra stops (about 2.3x as much light) in a single shot, and his camera simply isn't capable of doing so. He could take two shots, offset +/- 2 EV, and combine them in post with HDR, but there is no other way his camera is going to capture 14.4 stops of DR.

THAT ^^^^^ UP THERE ^^^^^ IS MY POINT about the D800. It is not a 14.4-stop camera. It is a 13.2-stop camera. You can move levels around in post to your heart's content, dither and expand the LEVELS YOU HAVE. But if you don't capture certain shadow or highlight detail TO START WITH....you CAN'T CREATE IT LATER. All you're doing is averaging and dithering the 13.2 stops you actually captured to SIMULATE more DR. Ironically, that doesn't really do anyone any good, since computer screens are, at most, capable of about 10 stops of DR (assuming you have a super-awesome 10-bit RGB LED display), and usually only capable of about 8 stops of DR (if you have a nice high-end 8-bit display), and for those of you unlucky enough to have an average $100 LCD screen, you're probably stuck with only 6 stops of DR. Print is even more limited. An average fine art or canvas print might have 5 or 6 stops. A print on a high-Dmax gloss paper might have as much as 7 stops of DR.

There is little benefit to "digital DR" that is higher than the sensor's native DR. You're not gaining any information you didn't start out with; you're simply redistributing the information you have in a different way by, say, downscaling with a clever algorithm to maximize shadow DR. But if you didn't record shadow detail higher than pure black to start with, no amount of software wizardry will make that black detail anything other than black. And even if you do redistribute detail within the shadows, midtones, or highlights...if your image has 14 stops of DR you can't actually SEE IT. Not on a screen. Not in print. You have to compress it, merge those many stops into fewer stops, and thus LOSE detail, to view it on a computer screen or in print.

Again, and I agree with Zim, I have learned a lot from this discussion about hardware capabilities, screen capabilities, and print capabilities. I am waiting for my Pixma Pro-1 printer, and because of this latest information from you, I will go straight to checking how many stops it can reproduce. So my uninformed question yielded even more information :)

Thank you.
 
Upvote 0