Any thoughts on how the 5D3 will compare on DxOMark to the Nikon D800?

Yasmin said:
briansquibb said:
Personally I am not convinced about the DxO measurements - they don't pass the common sense tests

I tend to agree. Their website shows the Nikon D700 rated at 80 and the 5D Mark II at 79?

DXO's tests are very consistent in terms of the numbers they pump out. They should never really be taken as "real-world" results, because they are not. The only real value of DXO numbers is their consistency, which makes it easy to compare the raw, low-level capabilities of any camera, regardless of make or model. That's handy...to a degree. Just make sure to salt generously in comparisons like the one above; in general, DPR is probably a MUCH better measure of real-world performance than DXO.
 
Upvote 0
Concerning the DXO banter, I will say I really like their lens tests. I don't look much at the overall "DXO Mark," but the individual field maps at various f-stops are very useful for me. Obviously a sample of one, but it still gives a good indication of what to expect. So props to them.
 
Upvote 0
Here are two sample images of black cars, one shot with a 5DC and the other with a 7D. Both images were taken with the same 70-200 f/4L, on bright sunny days, within 45 minutes of sunset, so the variation in light quality and lens quality is next to zero. Both images are out-of-camera raws converted to JPEGs with no post-processing.

According to DxOMark, the 7D has 11.7 stops of DR, while the 5DC has 11.1 stops. Call me crazy, but the 5D seems to pull out substantially greater shadow detail, especially in the foliage. IMHO, the difference is so obvious I don't even have to label which image came from the 5D and which came from the 7D. I very much prefer the color and contrast of the 5D's images, but again, according to DxOMark, the overall sensor scores of the two bodies are very similar (66 vs. 71).

CTSV-13-1.jpg


_MG_6748.jpg
 
Upvote 0
My feeling is that the combined DXO score is not useful (what should the individual weighting of each test be?), but that the individual tests, done consistently, give valuable technical info to add to field-experience testing with the various bodies. Nikon has really made strides in that test, and in real life, with low-light performance - we can safely accept that. Also, relative to the aggregate score, I don't know (and they don't know) how many "points" of difference are statistically meaningful.
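To make the weighting point concrete, here's a toy example in Python (the sub-scores and weights are made up for illustration - not DxO's actual data or formula): the same two sets of test results produce opposite "overall" rankings depending purely on how the individual tests are weighted.

```python
# Toy illustration (hypothetical numbers): how the weighting of individual
# tests can flip a combined "overall score" between two cameras.

# Hypothetical per-test results on a 0-100 scale (not real DxO data).
camera_a = {"dynamic_range": 90, "color_depth": 75, "low_light": 70}
camera_b = {"dynamic_range": 75, "color_depth": 80, "low_light": 88}

def overall(scores, weights):
    """Weighted average of the individual test scores."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Weighting scheme 1: dynamic range dominates -> camera A "wins".
w1 = {"dynamic_range": 0.6, "color_depth": 0.2, "low_light": 0.2}
# Weighting scheme 2: low light dominates -> camera B "wins".
w2 = {"dynamic_range": 0.2, "color_depth": 0.2, "low_light": 0.6}

for name, w in [("DR-heavy", w1), ("low-light-heavy", w2)]:
    print(name, round(overall(camera_a, w), 1), "vs", round(overall(camera_b, w), 1))
```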
 
Upvote 0
skitron said:
Concerning the DXO banter, I will say I really like their lens tests. I don't look much at the overall "DXO Mark," but the individual field maps at various f-stops are very useful for me. Obviously a sample of one, but it still gives a good indication of what to expect. So props to them.

I agree. Indeed, I found out that Canon's crop-sensor lenses are much better than Nikon's. I didn't compare how the high-end glass performs for Canon and Nikon. So now I know that my APS-C sensor has much less DR, a little less color depth, and much more noise at a given ISO than Nikon's crop sensor (which is actually manufactured by Sony), but at least the resolution delivered by the corresponding lenses is much better for Canon than for Nikon.
 
Upvote 0
briansquibb said:
Personally I am not convinced about the DxO measurements - they don't pass the common sense tests

DxOMark rates the Canon 70-200mm f/2.8L IS version 1 better than version 2, while I have yet to encounter a photographer who thinks version 1 is better. So I think photographic quality is determined only to a limited extent by these numbers and measurements.
 
Upvote 0
V8Beast said:
Here are two sample images of black cars, one shot with a 5DC and the other with a 7D. Both images were taken with the same 70-200 f/4L, on bright sunny days, within 45 minutes of sunset, so the variation in light quality and lens quality is next to zero. Both images are out-of-camera raws converted to JPEGs with no post-processing.

According to DxOMark, the 7D has 11.7 stops of DR, while the 5DC has 11.1 stops. Call me crazy, but the 5D seems to pull out substantially greater shadow detail, especially in the foliage. IMHO, the difference is so obvious I don't even have to label which image came from the 5D and which came from the 7D. I very much prefer the color and contrast of the 5D's images, but again, according to DxOMark, the overall sensor scores of the two bodies are very similar (66 vs. 71).

CTSV-13-1.jpg


_MG_6748.jpg

I think this right here demonstrates exactly why DXO has something to offer. Your opinion here is simply that...your opinion. If you actually held a poll about those two photographs, I would be willing to bet that you would NOT get a 90%/10% ratio, where most people could tell just by looking at those photographs which was which. I would bet such a poll would end up closer to a 60%/40% ratio. That wouldn't be entirely because how each of us sees is subjective, but also due to differences in computer screens, screen calibrations, etc. To me, those photos look relatively similar; however, I have a calibrated screen tuned for post-processing photographs for final print. Because I print and judge my print quality by how things look on-screen, the blacks in those photos look pretty even-keel. I'd be willing to bet, however, that one of them would indeed stand out as having "better" blacks if I viewed it on the screens I have at work, as they are calibrated for an entirely different purpose and are a bit lower contrast (which would enhance shadow detail).

There is also the simple point that we don't know for sure how dark the deepest shadows are in the leaves of the trees in the 5DC shot. They may look "better" simply because they are not nearly as deep as the ones the 7D had to work with. That may be the case with all the 5DC shadows, whereas the 7D may have had to deal with deeper shadows everywhere. You can't really make an objective comparison with two entirely different shots like that...you don't know for sure exactly how the shadows of each shot compare. You need a consistent, calibrated photographic source to properly measure the differences (even if they are "useless differences"), and that is an area where DXO excels.

DXOMark publishes low-level measurements run through a standard set of mathematical formulas. While their numbers may seem odd, I find them valuable at times, if for no other purpose than to demonstrate that physical hardware specifications alone do not make a picture. The best example is DXO's MF camera ratings, which generally look rather crummy compared to the latest and greatest from Sony, Nikon, and Canon. Empirically, modern-day digital MF sucks (regardless of niche). Practically, those cameras are still the best money can buy (by a long shot) for the niches they serve.
 
Upvote 0
Mt Spokane Photography said:
If Nikon were a big customer of yours, and Canon were not, what would you do?

I sure wouldn't give Canon better ratings. The DXO Mark has weightings assigned to different sensor characteristics, and is easy to rig. If one had something to gain by it, that is.

http://www.dxo.com/us/image_quality/customers2

Fair point. I actually think they DO unfairly weight their "Print DR" numbers. I think that becomes blatantly obvious with the D800's 14.4 stops of DR, which is 0.4 stops beyond what is theoretically possible with a 14-bit sensor. Since every bit doubles the numerical space of the one before it, it would be mathematically impossible to achieve 14.4 stops of DR from a single linear 14-bit sample; that would require a 15th bit of numeric precision (which we know is not there)...unless you are doing something rather unscrupulous.
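For what it's worth, here's that arithmetic as a quick Python sketch. It assumes a single, purely linear 14-bit value per pixel, with no downscaling or other normalization applied afterward - which is the scenario the argument above is about.

```python
import math

def max_linear_stops(bits):
    """Stops between the largest code and the smallest non-zero code
    of a purely linear integer encoding with the given bit depth."""
    max_code = 2 ** bits - 1          # e.g. 16383 for a 14-bit ADC
    return math.log2(max_code / 1)    # smallest non-zero code is 1

print(round(max_linear_stops(14), 2))  # ~14.0 stops
print(round(max_linear_stops(15), 2))  # ~15.0 stops -> a 15th bit would be needed for more
```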

(I have also often wondered why they call it "Print DR," when in reality an actual print is far more limited in terms of DR than a camera...to around 5-7 stops in most cases, even on the absolute BEST papers, supporting the highest dMax imaginable, with the most cutting-edge pigment ink technologies.)
 
Upvote 0
jrista said:
I think this right here demonstrates exactly why DXO has something to offer. Your opinion here is simply that...your opinion. If you actually held a poll about those two photographs, I would be willing to bet that you would NOT get a 90%/10% ratio, where most people could tell just by looking at those photographs which was which. I would bet such a poll would end up closer to a 60%/40% ratio.

It's interesting how different people see the same set of images differently. It's funny that you bring up a "poll," because I did just that after these shots were taken. I sent these two images out to a half-dozen colleagues of mine to see if they were able to determine which image came from each camera. Every single one of them correctly identified which shot came from which body, and unanimously agreed that the image shot with the 5D looked substantially better. Keep in mind these are professional automotive photographers, each with decades of experience, who routinely scrutinize images like this. So you're right. The ratio wasn't 90/10. It was more like 100% of respondents agreeing with my assessment :)

In fairness, viewing shrunk-down images on a message board does hide some of the obvious differences. If you're really bored, I'd be happy to email you both JPEGs, but I ain't sending the damn raws :)

On a somewhat related note, clients generally have no idea what kind of equipment their contributors shoot with. All they know is the quality of the images you submit. That said, my editor immediately complained that the images shot with the 7D lacked contrast and shadow detail and had blown-out highlights, despite my best efforts to address these issues in post.

That wouldn't be entirely because how each of us sees is subjective, but also due to differences in computer screens, screen calibrations, etc. To me, those photos look relatively similar; however, I have a calibrated screen tuned for post-processing photographs for final print. Because I print and judge my print quality by how things look on-screen, the blacks in those photos look pretty even-keel. I'd be willing to bet, however, that one of them would indeed stand out as having "better" blacks if I viewed it on the screens I have at work, as they are calibrated for an entirely different purpose and are a bit lower contrast (which would enhance shadow detail).

The difference is obvious on a $h!tty monitor as well :)

There is also the simple point that we don't know for sure how dark the deepest shadows are in the leaves of the trees in the 5DC shot. They may look "better" simply because they are not nearly as deep as the ones the 7D had to work with. That may be the case with all the 5DC shadows, whereas the 7D may have had to deal with deeper shadows everywhere. You can't really make an objective comparison with two entirely different shots like that...you don't know for sure exactly how the shadows of each shot compare. You need a consistent, calibrated photographic source to properly measure the differences (even if they are "useless differences"), and that is an area where DXO excels.

You make a good point. This is by no means a scientific test, and it would never stand up in a lab. However, it was never meant to be a scientific test. It just so turns out I had a car to shoot, my 5D took a dump, so I busted out the 7D as a backup, using it in the same manner and with the same technique I always shoot with. You can question the difference in background lighting in the foliage between the two shots, but you weren't there :) All I can tell you is that, in terms of the backgrounds, the image captured with the 5D looks MUCH more like what I saw through the viewfinder than the image captured with the 7D. There were all kinds of beautifully backlit green pine needles in the 7D's viewfinder, but none of that showed up in the captured image.

DXOMark publishes low-level measurements run through a standard set of mathematical formulas. While their numbers may seem odd, I find them valuable at times, if for no other purpose than to demonstrate that physical hardware specifications alone do not make a picture. The best example is DXO's MF camera ratings, which generally look rather crummy compared to the latest and greatest from Sony, Nikon, and Canon. Empirically, modern-day digital MF sucks (regardless of niche). Practically, those cameras are still the best money can buy (by a long shot) for the niches they serve.

I'm not saying this to be a smart@ss, but have you shot with multiple bodies at length in order to assess how DxOMark's ratings stand up to your own personal observations? I find that sometimes their rankings seem legit, while at other times they're completely off. For instance, I shot with a 20D for a long time before moving up to a 5D. The IQ of the 7D reminded me a lot of the 20D, and sure enough, both bodies rank similarly in DxOMark's ratings. I'd say their rankings of the 5D, 1DII, 1DsII, and 1DsIII also seem somewhat useful when compared to my personal experiences with those bodies. That said, according to DxO the 20D and 7D aren't that far off IQ-wise compared to the 5D, but I'd beg to differ. Your results may vary :)
 
Upvote 0
@V8Beast: I guess my point got lost in all the rest. To keep things simple:

* DXOMark-type results generally represent empirical tests that compare hardware in a statistically accurate manner.

* DPReview-type results generally represent real-world tests that compare hardware in a practical manner.

* Personal observation results generally represent opinions that may or may not jibe with the opinions of others.

All three of the above forms of evaluation are valuable. Even personal observations are very valuable, as under close scrutiny most photographers can tell the minor differences between cameras. (How they interpret those differences, and whether they choose to see one camera or another as better, is where subjectivity comes into play.) Real-world tests provide value in that they allow us to perform practical comparisons of gear in situations that we can relate to. Empirical tests provide value in that they allow us to evaluate information well beyond what may be practical, real, or even meaningful.

Everyone has their opinions about DXOMark, DPR, and subjective opinion, and not everyone uses ALL of the sources of information available to formulate their own opinions. I'm just saying that while DXOMark's results may seem odd, and may jibe with each other at times and wildly contradict each other at others, most of their data is still consistent and empirical in relation to itself, and that has value. I wouldn't recommend using their numbers as a sole source of information, though...they DO tend to be rather odd at times. ;)
 
Upvote 0
jrista said:
@V8Beast: I guess my point got lost in all the rest.

I wouldn't say that. I just found your assessment that 90 percent of observers wouldn't be able to tell the difference between the two posted images to be quite bold :) That said, my test subjects weren't exactly your typical observers. I'd say that their eyes are well trained :) It's quite possible that, if presented with the two images in question, the general public would have a much more difficult time distinguishing any differences between them.

I wouldn't recommend using their numbers as a sole source of information, though...they DO tend to be rather odd at times. ;)

That's the point that seems to get lost in all the e-hysteria. The DxO stuff is useful at times, but some people seem to think it's the be-all, end-all authority for judging image quality. How can you possibly attempt to objectively judge a medium (photography) that's so inherently subjective? You don't need a lab test to determine whether or not you like the images a camera produces.
 
Upvote 0
peederj said:
Oh, OK, I'm now interpreting you as criticizing the fact that it's a linear 14-bit fixed-point encoding scheme coming off the ADCs. Well, there's absolutely nothing wrong with linear encoding if it encompasses the full dynamic range of the photocell. Using a floating-point encoding scheme would usually be done to throw information away, i.e. to compress the data that is less interesting. Since we are interested in every gradation of the visible light spectrum equally, linear encoding is the high-end way to do it, not a compromise.

I'm not saying there is anything wrong with it; I'm just saying that making a big deal about a 2-stop advantage with this technology is wrong :-) The two stops don't help much with the most common use case for DR, which is recovering shadow detail.
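To put rough numbers on that, here's a quick Python sketch of how an idealized, noise-free 14-bit linear encoding (full scale treated as 16384 - real sensors obviously differ) distributes its raw codes across the stops:

```python
# Rough sketch (assuming an ideal, noise-free 14-bit linear encoding):
# how many distinct raw codes land in each stop below clipping.

BITS = 14
MAX_CODE = 2 ** BITS  # treat full scale as 16384 for simplicity

for stop in range(1, BITS + 1):
    hi = MAX_CODE // (2 ** (stop - 1))   # upper bound of this stop (exclusive)
    lo = MAX_CODE // (2 ** stop)         # lower bound of this stop (inclusive)
    print(f"stop -{stop:2d}: codes {lo:5d}..{hi - 1:5d} -> {hi - lo:5d} distinct values")
```

The top stop alone gets half of all the raw codes, while the bottom two stops share only three codes between them.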
 
Upvote 0
V8Beast said:
For those who aren't as technically inclined in things electronic, what does this mean in plain English? :)

It simply means that of the supposed extra two stops of DR on the D800, the only brightness values that can actually be resolved are EV(-12 1/2), EV(-13), and EV(-14). Anything between EV(-12) and EV(-14) is quantized to one of these three EVs, so there is hardly any detail left, and certainly no gradations. Check https://en.wikipedia.org/wiki/File:Neighborhood_watch_bw.png; that is pretty much what is captured between EV(-13) and EV(-14).

From the use-case perspective of using the extra DR to recover shadow detail, the 2-stop advantage in this context is nonexistent. And as such, using it as a metric for a sensor "score" is simply wrong, IMO.
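Here's a tiny Python illustration of the quantization argument above, under the same idealized assumptions (14-bit linear scale, full scale at 16384, read noise ignored):

```python
FULL_SCALE = 2 ** 14   # idealized 14-bit linear scale

for ev in (-12.0, -12.5, -13.0, -14.0):
    raw = FULL_SCALE * (2 ** ev)   # linear raw value for a tone that many stops below clipping
    print(f"EV({ev:+.1f}) -> raw value {raw:.2f}")

# EV(-12.0) -> raw value 4.00
# EV(-12.5) -> raw value 2.83
# EV(-13.0) -> raw value 2.00
# EV(-14.0) -> raw value 1.00
# Every tone in those bottom two stops has to be rounded to a raw code of
# 1, 2, 3, or 4, so there's essentially no room for gradation down there.
```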
 
Upvote 0
straub said:
peederj said:
Oh Ok I'm now interpreting you as criticizing the fact it's a linear 14 bit fixed point encoding scheme coming off the ADCs. Well there's absolutely nothing wrong with linear encoding if it encompasses the full dynamic range of the photocell. Using a floating point encoding scheme would usually be done to throw information away, i.e. compress the data that was less interesting. Since we are interested in every gradation of the visible light spectrum equally, using linear encoding is the high end way to do it, not a compromise.

I'm not saying there is anything wrong with it, I'm just saying that to make a big deal about a 2-stop advantage with this technology is wrong :-) The two stops don't help much with the most common use case for DR, which is to recover shadow detail.

I would STRONGLY dispute that the most common use case for DR is to recover shadow detail. Canon purposely caters their DR to the highlights, as it's only in the highlights where you can literally CLIP information and prevent any recovery at all (you literally can't "clip" at the black end such that detail is unrecoverable...you can only compress blacks together and possibly mash blacks into the noise floor, but even then, you usually can still recover something, and with dark frames, you have the potential to recover a lot.) Every additional bit of DR doubles the number of luminance steps you can achieve...and they are pro-actively allocated to HIGHLIGHTS FIRST, then to darker tones. Most of the blather that ensued on this forum shortly after the 5D III announcement was people complaining about the bottom 2-3 stops of DR, which generally account for maybe 20 or so distinct levels? Highlights have thousands of levels allocated to them, and the more the better from a raw theoretical standpoint. Dynamic range is NOT primarily allocated to the shadows, particularly in Canon cameras (the 5D line itself is a supreme example of why...as it's extremely popular with wedding photographers, who without question need highlight range more than shadow range for bright white wedding dresses, shiny bridesmaids and the like.)

The arguments for shadow DR come primarily from landscape photographers and a few more niche markets, who tend to shoot scenes with dynamic ranges that (sometimes far) exceed the range of a camera. Regardless of how much DR you have, unless you are lucky enough to find a sunrise or sunset at exactly the right time where you only need about two extra stops, you're going to have to compromise between shadows and highlights, and the best way to compensate is with graduated neutral density filters (even if you have 14 total stops of usable DR). Landscape photographers, however, do not make up the vast majority of actual or potential 5D III users.
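Since this highlight-vs-shadow asymmetry is the crux of the argument, here's a toy Python simulation of it. The numbers (read noise, black-level offset, and the frame-averaging trick standing in for the various recovery techniques mentioned above) are hypothetical and purely illustrative: clipped highlights collapse to one value for good, while shadow detail buried in noise can still be partially pulled back out.

```python
import random

random.seed(0)
FULL_SCALE = 16383     # 14-bit ADC ceiling
BLACK_LEVEL = 512      # hypothetical offset so noise isn't clipped at zero
READ_NOISE = 8.0       # hypothetical read noise, in raw counts

def expose(scene_value):
    """One simulated raw sample: offset + Gaussian read noise, clipped to the ADC range."""
    sample = scene_value + BLACK_LEVEL + random.gauss(0.0, READ_NOISE)
    return min(max(round(sample), 0), FULL_SCALE)

# Highlights: two different scene values above full scale both land on 16383,
# so the difference between them is gone and cannot be recovered.
print(expose(20000), expose(40000))

# Shadows: a faint tone near the noise floor is mangled in any single frame,
# but averaging many frames recovers a usable estimate of it.
faint = 5
single = expose(faint) - BLACK_LEVEL
stacked = sum(expose(faint) for _ in range(200)) / 200 - BLACK_LEVEL
print(single, round(stacked, 1))   # one noisy sample vs. an average close to 5
```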
 
Upvote 0
jrista said:
Every additional bit of DR doubles the number of luminance steps you can achieve...and they are pro-actively allocated to HIGHLIGHTS FIRST, then to darker tones.

Yes, and because of this I assumed a 12-stop DR (i.e. the 5D3) to have a brightness range of [4, 16383] and a 14-stop DR (i.e. the D800) a range of [1, 16383], in which case I considered the highlight range to be equal. My understanding is that the extra stops the D800 offers are primarily at the bottom end of the DR, due to lower read noise etc.

jrista said:
Most of the blather that ensued on this forum shortly after the 5D III announcement was people complaining about the bottom 2-3 stops of DR, which generally account for maybe 20 or so distinct levels?

This was my original point: people (and apparently DXO) are making a big deal of the extra 2 stops (4 levels in total; 3 stops of bottom DR would be 8 levels), when in the real world it doesn't matter at all, since the brightness data is quantized beyond repair. I'm not usually very good at making my point clearly :-)
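For anyone who wants to check that level counting, here it is spelled out (same idealized 14-bit linear scale as before, full scale treated as 16384):

```python
FULL_SCALE = 2 ** 14

def codes_in_bottom_stops(n_stops):
    """Non-zero raw codes lying within the bottom n_stops of a 14-stop linear scale."""
    return FULL_SCALE // (2 ** (14 - n_stops))

print(codes_in_bottom_stops(2))  # 4 -> levels 1..4 cover the bottom two stops
print(codes_in_bottom_stops(3))  # 8 -> levels 1..8 cover the bottom three stops
```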
 
Upvote 0