Any thoughts on how the 5d3 will compare on dxo mark to the Nikon D800?

Status
Not open for further replies.
LetTheRightLensIn said:
straub said:
LetTheRightLensIn said:
That is the difference: if you care more about highlights you expose less and save more of them; if you care more about shadows you expose longer and lose more highlights.

Yes, but this hasn't got anything to do with the issue I'm talking about. Even in an optimally "exposed-to-the-right" capture, the very bottom end of DR will be quantized beyond repair. Which is why the "12-stop DR vs 14-stop DR on a 14-bit signal" path argument is pointless.

LetTheRightLensIn said:
and don't forget that DxO "print" numbers are based on a normalization to 8MP, so you can get things like 14.5 stops on a 14-bit camera

Not really. Anything above 14 stops in a 14-bit signal path is zero. *If* the ADCs use dithering, *some* light below EV(-14) might register in the output, but in the end you will never know if it's EV(-14.4) or EV(-13.7).
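The dithering point above can be illustrated with a toy model (a sketch, not any real ADC: the `adc` functions and the 0.3-LSB signal are made up for the demo). With uniform dither, a signal below one LSB survives *on average*, but no single sample tells you its exact value:

```python
import math
import random

rng = random.Random(0)

def adc(x):
    """Idealized truncating ADC: output only in whole-LSB steps."""
    return math.floor(x)

def adc_dithered(x):
    """Same ADC with 1 LSB of uniform dither added before quantization.
    The *average* output recovers a sub-LSB signal, but any single
    sample remains ambiguous (it is just 0 or 1)."""
    return math.floor(x + rng.random())

signal = 0.3  # a steady signal 0.3 LSB above black

plain = [adc(signal) for _ in range(100_000)]
dith = [adc_dithered(signal) for _ in range(100_000)]

mean_plain = sum(plain) / len(plain)  # 0.0: the signal vanishes completely
mean_dith = sum(dith) / len(dith)     # ~0.3: recovered, but only statistically
print(mean_plain, round(mean_dith, 2))
```

Averaging many samples is what makes the recovery work; a single dithered reading is just 0 or 1, which is the point above about not being able to tell EV(-14.4) from EV(-13.7).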

Yes, really: normalization means you can effectively get higher numbers at a certain scale.

And use the cameras yourself, compare, and tell me you don't see a large, noticeable, usable difference.

You're missing the point. Downscaling can only make more effective use of what is already there or eliminate information; it cannot create additional information. You can improve noise characteristics by averaging noise from multiple pixels (eliminating information), and you can improve the detail of each pixel relative to the others by multisampling (making more effective use).

To increase DR, you would have to fabricate information, since it does not exist to start with. You can certainly do that: you can reduce contrast, which will "stretch" tones across a greater range; however, if you stretch too far, you'll get "holes" between bits of real information that have to be filled in with generated relative values. But if you do that with Nikon images to produce a "print" that has 14.4 stops of DR, you could certainly do the same thing with Canon images to produce a "print" that has at least 12.5 stops of DR, and if you really wanted to push the envelope, you could easily massage a "print" image to have any level of contrast you want stretched across as great a dynamic range as you want. Fabricating information in post, however, won't prevent you from clipping highlights when you literally don't have enough DR to capture a scene with wide dynamic range... so even if you can somehow finagle 14.4 stops of print DR, it won't help you in-camera.
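For what it's worth, the noise-averaging mechanism the two sides are circling can be simulated directly (a sketch with made-up numbers: a noise-only patch with 1% read noise and 16:1 binning). Averaging blocks of pixels lowers the noise floor by the square root of the block size, which raises the *measured* engineering DR without inventing any scene information:

```python
import math
import random

rng = random.Random(42)

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def measured_dr(noise_std, full_scale=1.0):
    """Engineering dynamic range in stops: full scale over the noise floor."""
    return math.log2(full_scale / noise_std)

# A noise-only patch with 1% read noise, then 16:1 block averaging
# (roughly a 4x downscale in each dimension).
n, block = 100_000, 16
pixels = [rng.gauss(0.0, 0.01) for _ in range(n)]
binned = [sum(pixels[i:i + block]) / block for i in range(0, n, block)]

gain = measured_dr(std(binned)) - measured_dr(std(pixels))
print(round(gain, 1))  # ~2.0 stops, i.e. log2(sqrt(16))
```

Both views in the thread are consistent with this: the downscaled image really does measure more stops, and no new scene detail was created to get them.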

DXO has shown their colors, and I think Mt. Spokane is entirely correct in his assessment: Nikon is a paid member of DXO, whereas Canon is not, so it's no surprise that Canon cameras fare so poorly against Nikon cameras, when a few simple side-by-side eyeballed comparisons indicate that outside of resolution and ISO differences, both cameras produce stellar IQ and do not differ in the context of real-world photography.
 
jrista, you speak affirmatively, but your analysis isn't correct. Additive processes increase signal linearly but noise only as the square root, because uncorrelated noise partially cancels against itself while signal adds coherently. Thus the increase in DR from downscaling.

These are complex subjects, and I wish I knew enough about the specific domain to apply the math affirmatively, which I don't. But please don't mistake suspect math for absolute truth, any more than you might mistake a suspect photograph for it.
 
peederj said:
jrista, you speak affirmatively, but your analysis isn't correct. Additive processes increase signal linearly but noise only as the square root, because uncorrelated noise partially cancels against itself while signal adds coherently. Thus the increase in DR from downscaling.

That is true if the DR is limited by noise. However, I don't believe additive processing, or any processing for that matter, can increase the final DR beyond the initial measurement limitations (14 bits/stops). I.e., if your measured original range is limited to [a, b], then any additive (linear) processing will result in a range of [N*a, N*b] for some N, which is exactly as many stops as the original range.
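The range argument above can be checked in a couple of lines (a sketch; the 1..16384 range stands in for a 14-bit signal path):

```python
import math

def stops(lo, hi):
    """Dynamic range in stops between the darkest and brightest usable levels."""
    return math.log2(hi / lo)

a, b = 1.0, 16384.0         # a 14-stop range
print(stops(a, b))          # 14.0
print(stops(5 * a, 5 * b))  # 14.0: linear gain N rescales both ends; the ratio is unchanged
```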
 
Woody said:
Doesn't matter. In terms of DXOmark ranking or sensor scores, they will still trail the D800 by a MASSIVE margin. ;D

It also doesn't matter, since the DXO scoring system is pathologically inept. Check D800 vs D4, for example: the D800 wins the Landscape category due to an extra stop of DR at ISO 100. As mentioned previously, this amounts to one extra level of brightness information (on top of the 16382 levels which the D4 already offers). Past ISO 400, the D4 is consistently one stop better. Yet somehow the D800 scores way higher in this category.
 
briansquibb said:
The D800 may be better at low ISO - but is it going to be so much better that you could tell on a 36 x 24 print?

If not - is it relevant?

Resolution-wise, not so much at normal viewing distances, although it would be nice to have more real captured pixels to print.
For most mainstream kinds of images, there is no big advantage to using the D800 IMO, even at 36x24" prints. I'd honestly prefer to use my Canons for familiarity's sake and the glass I have access to.

BUT.. when it comes to some of the high DR range scenes I shoot - AB so freakin' LUTELY! :)

The D800's lower-noise low-ISO files will allow me to push dark areas up to the levels I want without having to concern myself about the noise becoming a visual distraction. And I'd say that's an issue even with small prints like 18x12", or even down to 12x8", depending on camera (banding) and processing factors.

You can see some examples of the critical areas I'd run into, as I described in this post last night:

www.canonrumors.com/forum/index.php/topic,5101.0.html


And that's not even a high DR scene. I'm thinking more about the storm systems I like to shoot in fast-changing lighting. I have to allow a stop or more at the top end to not blow out textural detail on a suddenly sunlit bit of cloud, leaving me less to work with at the dark end. Any body that can give me clean shadows under these conditions, which I can process up later without clipping the highlights to get it, is the tool I want to use for this kind of shot. My old 40D could do this; none of the semi-pro Canon bodies I've used since do it as nicely. A new little D5100 is going to get a workout this spring in this very kind of shooting. If it does well, its big bro could get ordered.

If I'm takin' pictures of the old folks or some event, this kind of extreme post-processing is not required or wanted, thus minimizing the impact of a technically better (at low ISO) sensor.

The technically better sensor is of value to those few of us who push the limits of said sensors, and post-processing.

And multi-exposure HDR is not always a viable option when you're in the middle of some fast weather, the scenery isn't sitting still. :D
 
randplaty said:
Great discussion. Very interesting. Can somebody tell me in layman terms why 14 bits can only hold 14 stops of DR?

It's just that they have chosen a linear encoding scheme. You could have a (partial) 14-stop range of light represented in 2 bits, but you'd be losing a lot of information doing so; or in 16 bits, but two of those bits would never be used (on the most significant end) or would just carry random noise (on the least significant end).

Now if enough customers don't understand dynamic range, but a few misguided opinion leaders start crowing about it, companies will be tempted to use those 16 bits, even though two are just noise, as "marketing bits" to claim more dynamic range than they actually have. Others will argue you need 16 bits to represent 14 stops because of quantization distortion. But dither solves that problem. The number of simultaneously capturable stops is the valuable number, and yes, more is absolutely better there.

So 14 bits is all you need for 14 stops, since both are logarithmic scales: each stop is a doubling of light, and each bit in base 2 (binary) doubles the potential size of the numerical representation, so a 1:1 encoding scheme can be used.
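The 1:1 matching described above can be made concrete by counting how many linear code values land inside each stop (a sketch; `codes_per_stop` is a made-up helper). The brightest stop gets half of all codes, and each stop down gets half as many:

```python
def codes_per_stop(bits=14):
    """Distinct linear code values inside each stop, brightest stop first.
    Stop s covers codes 2**(s-1) .. 2**s - 1, i.e. 2**(s-1) values."""
    return [2 ** (s - 1) for s in range(bits, 0, -1)]

print(codes_per_stop())
# [8192, 4096, 2048, ..., 2, 1]: the darkest stop is left with a single code,
# which is why shadow tones are the first thing quantization (and noise) eats.
```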
 
peederj said:
randplaty said:
Great discussion. Very interesting. Can somebody tell me in layman terms why 14 bits can only hold 14 stops of DR?

It's just that they have chosen a linear encoding scheme. You could have a (partial) 14-stop range of light represented in 2 bits, but you'd be losing a lot of information doing so; or in 16 bits, but two of those bits would never be used (on the most significant end) or would just carry random noise (on the least significant end).

Now if enough customers don't understand dynamic range, but a few misguided opinion leaders start crowing about it, companies will be tempted to use those 16 bits, even though two are just noise, as "marketing bits" to claim more dynamic range than they actually have. Others will argue you need 16 bits to represent 14 stops because of quantization distortion. But dither solves that problem. The number of simultaneously capturable stops is the valuable number, and yes, more is absolutely better there.

So 14 bits is all you need for 14 stops, since both are logarithmic scales: each stop is a doubling of light, and each bit in base 2 (binary) doubles the potential size of the numerical representation, so a 1:1 encoding scheme can be used.

I don't understand it either.

For example, say you have a sensor capable of capturing a maximum of 2 stops of light, but with a variety of gradations (i.e. billions of intermediate light intensities). So if you use a 2-bit per color channel conversion, you'll be able to capture only 4 possible states for each color in the CMOS RGB pattern, which are:

  • 0% of specific color intensity corresponding to 00
  • 33% of specific color intensity corresponding to 01
  • 66% of specific color intensity corresponding to 10
  • 100% of specific color intensity corresponding to 11

Wouldn't it be better to use at least 65,536 possible states (16 bits) for color luminance conversion in this case?

FYI, 14 bits correspond to 16,384 possible states for each of the three colors.

P.S. Please correct me if I'm wrong :)
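The 2-bit example above can be written out directly (a sketch; `quantize` is a made-up helper mapping a normalized intensity onto evenly spaced codes):

```python
def quantize(intensity, bits):
    """Map a normalized intensity in [0, 1] onto 2**bits evenly spaced codes."""
    full_scale = 2 ** bits - 1
    return round(intensity * full_scale)

# The four possible 2-bit states: 0%, ~33%, ~66%, 100% of full scale.
print([quantize(x, 2) for x in (0.0, 0.33, 0.66, 1.0)])  # [0, 1, 2, 3]

# With 16 bits, the same range would get 65,536 states instead of 4.
print(quantize(0.5, 16))  # 32768
```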
 
straub said:
Woody said:
Doesn't matter. In terms of DXOmark ranking or sensor scores, they will still trail the D800 by a MASSIVE margin. ;D

It also doesn't matter, since the DXO scoring system is pathologically inept. Check D800 vs D4, for example: the D800 wins the Landscape category due to an extra stop of DR at ISO 100. As mentioned previously, this amounts to one extra level of brightness information (on top of the 16382 levels which the D4 already offers). Past ISO 400, the D4 is consistently one stop better. Yet somehow the D800 scores way higher in this category.

Err, it matters a lot if you shoot landscapes. Who da hell cares about high ISO, since it clips your DR so much in all cameras.
 
psolberg said:
Err, it matters a lot if you shoot landscapes. Who da hell cares about high ISO, since it clips your DR so much in all cameras.

According to their DR data, the D4 can deliver 16382 levels of brightness (~13 stops), and the D800 16384. Do you think those two extra levels will make your landscape photo pop?
 
dilbert said:
nightbreath said:
I don't understand it either.

For example, say you have a sensor capable of capturing a maximum of 2 stops of light, but with a variety of gradations (i.e. billions of intermediate light intensities). So if you use a 2-bit per color channel conversion, you'll be able to capture only 4 possible states for each color in the CMOS RGB pattern, which are:

  • 0% of specific color intensity corresponding to 00
  • 33% of specific color intensity corresponding to 01
  • 66% of specific color intensity corresponding to 10
  • 100% of specific color intensity corresponding to 11

Wouldn't it be better to use at least 65,536 possible states (16 bits) for color luminance conversion in this case?

FYI, 14 bits correspond to 16,384 possible states for each of the three colors.

This depends on the ADC (analogue-to-digital converter).

I understand this. But why are 14 bits supposed to be enough for a sensor capable of capturing 14 stops of light? Is it a limitation of the CMOS sensor, which is not able to pass more accurate data and so makes having more bits per channel redundant? Or is it a limitation of the pixel size, where the amount of light gathered by each pixel is not enough to give reliable results across all ISOs for 16 bits? Or is it just a marketing move to limit the ADC to 14 bits with the purpose of enabling fast burst speeds?
 
Let's remember that light is quantized, and brightness can be thought of as photons per second. Right? So your potential precision goes down as your brightness level goes down, and an encoding scheme that mirrors that fact becomes the best overall trade-off between precision and data complexity.

Floating-point ADCs are certainly possible, but people who know a whole lot more about these things than I do have not seen the use of putting them into production. And I don't see it either.

Otherwise, adding bits to an encoding scheme that exceed the resolving ability of the photocell is wasteful and can only serve as a cheap marketing gimmick. Warning of that is my purpose in writing in this thread.
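The "photons per second" point above is just Poisson statistics: the signal-to-noise ratio of a photon count N is sqrt(N), so darker tones are inherently less precise. A small simulation shows it (a sketch with made-up photon rates, using a textbook Knuth-style Poisson sampler):

```python
import math
import random

rng = random.Random(7)

def poisson(lam):
    """Knuth-style Poisson sampler; fine for the modest rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        p *= rng.random()
        k += 1
    return k - 1

snr = {}
for lam in (4, 16, 64, 256):
    draws = [poisson(lam) for _ in range(20_000)]
    mean = sum(draws) / len(draws)
    var = sum((d - mean) ** 2 for d in draws) / len(draws)
    snr[lam] = mean / math.sqrt(var)
    print(f"{lam:3d} photons: SNR ~ {snr[lam]:5.1f} (sqrt({lam}) = {math.sqrt(lam):.1f})")
```

Each stop down halves the photon count, so the achievable precision shrinks with it; that is why spending equal code density on every stop (e.g. 16-bit linear on a 14-stop sensor) buys nothing but noise in the lowest bits.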
 
jrista said:
You're missing the point. Downscaling can only make more effective use of what is already there or eliminate information; it cannot create additional information. You can improve noise characteristics by averaging noise from multiple pixels (eliminating information), and you can improve the detail of each pixel relative to the others by multisampling (making more effective use).

To increase DR, you would have to fabricate information, since it does not exist to start with. You can certainly do that: you can reduce contrast, which will "stretch" tones across a greater range; however, if you stretch too far, you'll get "holes" between bits of real information that have to be filled in with generated relative values. But if you do that with Nikon images to produce a "print" that has 14.4 stops of DR, you could certainly do the same thing with Canon images to produce a "print" that has at least 12.5 stops of DR, and if you really wanted to push the envelope, you could easily massage a "print" image to have any level of contrast you want stretched across as great a dynamic range as you want. Fabricating information in post, however, won't prevent you from clipping highlights when you literally don't have enough DR to capture a scene with wide dynamic range... so even if you can somehow finagle 14.4 stops of print DR, it won't help you in-camera.

DXO has shown their colors, and I think Mt. Spokane is entirely correct in his assessment: Nikon is a paid member of DXO, whereas Canon is not, so it's no surprise that Canon cameras fare so poorly against Nikon cameras, when a few simple side-by-side eyeballed comparisons indicate that outside of resolution and ISO differences, both cameras produce stellar IQ and do not differ in the context of real-world photography.

They are getting the extra bits not by magic but by spatial averaging, so they get fractional-bit contributions from surrounding pixels. Of course, as you filter that high-frequency noise, you also filter the high-frequency detail. So sure, if you want to maximize the resolution advantage of the D800 over the 5D3, and you then wonder how the dynamic range or whatnot will compare between the two *when doing that*, then you compare using the 100% view numbers; but that is just for curiosity's sake, to see what you'd get when taking full advantage of the extra resolution.

But it isn't fair to then say that some higher-MP camera is only so much better than some lower-resolution camera in terms of noise or DR, because the 100% view treats noise of different frequencies as if they were the same frequency, which is not fair to the higher-MP camera: the lower-MP cam has already automatically filtered out that high-frequency noise, so you want to compare them normalized to the same output size. You need to do that, otherwise you penalize the camera with more MP for having the potential to get a higher-frequency look at things, which is completely unfair. You could otherwise get a 2MP camera using Canon D30 sensor technology appearing to perform much better than the 22MP 5D3.
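The normalization described above can be put into numbers (a sketch, assuming uncorrelated per-pixel noise; 8MP is the "print" reference mentioned earlier in the thread, and the megapixel figures are the cameras' approximate output resolutions):

```python
import math

def print_dr_gain(sensor_mp, reference_mp=8.0):
    """Stops of DR gained when downsampling to the reference size,
    assuming uncorrelated per-pixel noise: log2(sqrt(pixel ratio))."""
    return math.log2(math.sqrt(sensor_mp / reference_mp))

print(round(print_dr_gain(36.3), 2))  # D800 (~36.3MP): ~+1.09 stops over per-pixel DR
print(round(print_dr_gain(22.1), 2))  # 5D3 (~22.1MP): ~+0.73 stops
```

This is how a 14-bit camera can post a normalized figure above 14 stops: the per-pixel number gets a resolution-dependent bonus, and the higher-MP camera gets a bigger one.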
 