Any thoughts on how the 5D3 will compare on DxOMark to the Nikon D800?

For more depth of field, use the wide end of your zoom range.
If you are someone who shoots JPEG, then maybe the 5DMKIII; it has very good JPEG output at high ISO straight out of the camera.
If you shoot raw, the D800 is just as good as the 5DMKIII.
Frame rate is not going to make any difference.
Which camera can autofocus best is going to matter most; my guess is the D800 will be better, but wait for the reviews to find out for sure.
 
Arun said:
Well, the tests are not rubbish, you just have to know how to interpret them.

The problem is how to normalize measurements between sensors of different sizes and different pixel densities. DxOMark has a standard way of doing it.

The thing to remember is that one gains dynamic range and noise performance as one downsamples. Further, since the DxO ISO score is based on the highest ISO at which 9 EV of dynamic range and a suitable signal-to-noise ratio both exist, and as you downsample you gain both dynamic range and signal-to-noise, the ISO score also grows correspondingly.

For instance, the D800 in the fullframe (FX) mode will have higher DxOMark scores (by a stop or so) than the very same D800 in the crop (DX) mode.

DxOMark could have chosen some other single size, say 16 megapixels; that would change all of the scores by a fixed amount. But in effect, DxOMark is trying to answer the question: how would cameras compare at a fixed print size?

I think the measure that pixel peepers want is: how would cameras compare if each pixel were printed at a fixed size (so that a high-megapixel camera would have a larger printout than a low-megapixel one)? That is also a legitimate measure to ask for, but it is not what DxOMark provides.

I couldn't agree more with your last paragraph. I do value the DxO score because I print to a target size based on customer needs and not based on a fixed pixel size. If I did the latter, I'd end up selling an 8x10 that won't fit in the 8x10 frame the customer has ;D For me, and I suspect for most people (which is why DxO does what it does), we target a size and want to know what different cameras can do at that size. The choice may be different each time, but as you say, the results are shifted by a similar amount and thus no cameras swap places.

There is a lot of talk about doing per-pixel comparisons, but I find those more entertaining than useful because I don't sell 100% crops of a JPEG at PC monitor resolution, no customer has ever asked for a 100% crop before buying, and I know for a fact that whatever I see on the screen will never look the same printed, since it depends on the printer quality, printing method, medium quality and type, final output size, viewing distance, viewing conditions, etc. So when I see people obsessing over the shadows under a rock at ISO 12800, without even thinking about whether the image had to be taken like that, or what the final display medium will be, I can't help but laugh in disbelief. ;D

I just hope my eyes survive the flood of bad pictures taken in bad light at ISO 51K just because they could. 8)
 
straub said:
I sincerely don't get the fuss about 12/13/14 stops of DR. From what I've read, both D800 and 5D3 use 14-bit ADCs.

With 14-bit ADCs, anything above ~10 stops is pretty much useless. An EV 10 stops from saturation (value 16383) registers as value 15 (without taking noise into account). That leaves a grand total of 15 luminance values to represent *all* the extra DR above that, and specifically 8 values for the next stop of DR.

Now, if the cameras happen to use 16-bit ADCs, then they've got two extra stops of usable DR.

IMO SNR at 18% is far more important as far as IQ goes.

The 14.4-stop numbers and such come from DxO normalizing to 8MP output. So the 5D2 is listed at, say, 11.8 for the 8MP norm, but measured as-is, straight off the camera, it is more like 11.2. Also, you don't appear to be doing the numbers right.
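If it helps, here is a rough sketch of what that normalization amounts to. This is only a back-of-the-envelope model assuming simple pixel averaging and the usual sqrt(N) noise scaling, not DxO's actual pipeline, but it lands in the right ballpark for the 5D2 numbers above:

```python
import math

def normalized_dr(measured_dr_stops, native_mp, target_mp=8.0):
    """Approximate DxO-style 'Print' normalization: averaging down to
    target_mp improves SNR by sqrt(native_mp / target_mp), which is
    worth 0.5 * log2(native_mp / target_mp) extra stops."""
    return measured_dr_stops + 0.5 * math.log2(native_mp / target_mp)

# 5D2: roughly 21 MP and ~11.2 stops measured per-pixel
print(round(normalized_dr(11.2, 21.0), 1))   # ~11.9, close to the quoted 11.8 for the 8MP norm
```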
 
peederj said:
Your analysis does not jibe with my understanding of DSP at all. I am not an expert on camera sensors, so I won't provide an alternative explanation, but I do not believe your math is properly applied here in practice.

My understanding, and it's been nearly a decade since I did this stuff in school, is that you've got 14 bits and therefore 16,384 possible brightness levels.

Each stop of brightness has twice as much light as the previous.

So the top stop in your exposure will have 8192 levels in it, stop 2 will have 4096, then 2048, then 1024, and stop 5 will have 512 levels. You can see where this is going, but stop 10 will only have 8 levels in it - while there is value to those levels being smooth, there is a pretty fundamental problem when it comes to using them to extract detail.

Now if DxO are downsampling the image, then that helps some. If you've got 4 pixels going into one, then I think (if I recall correctly) that gives you an extra 2 bits of usable range.

This is where HDR shines. If you do a +2, 0, -2 HDR bracket - you have 8192 levels in stop 1, 4096 levels in stop 2, 10240 levels in stop 3, 5120 levels in stop 4, 10752 levels in stop 5, 5376 levels in stop 6, 2688 levels in stop 7, 1344 levels in stop 8, 672 levels in stop 9, 336 levels in stop 10, 168 levels in stop 11 and so on...

Search for ETTR for more discussion on this.
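If anyone wants to check the arithmetic, here's a quick back-of-the-envelope script. It assumes an ideal 14-bit ADC and a +/-2 EV bracket where each scene stop simply sums the code values from whichever frames haven't clipped it; with those assumptions it reproduces the numbers above:

```python
def levels_in_stop(stop, bits=14):
    """Raw code values available in the Nth stop down from saturation
    for an ideal ADC (stop 1 = the brightest stop)."""
    return (1 << bits) >> stop        # 8192, 4096, 2048, ... for 14 bits

def bracket_levels(stop, offsets=(-2, 0, +2), bits=14):
    """Levels available in each stop of a combined bracket, assuming the
    frames are simply summed and a frame contributes nothing to a stop
    it has clipped."""
    darkest = min(offsets)
    total = 0
    for offset in offsets:
        local = stop - (offset - darkest)   # where this scene stop lands in that frame
        if 1 <= local <= bits:              # outside this range: clipped, or below the ADC floor
            total += levels_in_stop(local, bits)
    return total

print([levels_in_stop(s) for s in range(1, 6)])    # [8192, 4096, 2048, 1024, 512]
print([bracket_levels(s) for s in range(1, 12)])   # [8192, 4096, 10240, 5120, 10752, 5376, 2688, 1344, 672, 336, 168]
```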
 
grahamsz said:
but stop 10 will only have 8 levels in it - while there is value to those levels being smooth, there is a pretty fundamental problem when it comes to using them to extract detail

This is also an area where I'm a little skeptical that we can trust raw's rawness.

For that 10th stop, the camera will be reading out luminance values like

0,2,2,6,2,7,7,4,7,3,7,4,5,5,4,3,2,0,6,0,4,2,5,3

Since people aren't really using that range to find detail, I can see it being really tempting for a camera designer to clip them or apply some kind of noise reduction or binning on the data coming off the camera. That would give nice clean blacks and people do like those.
 
Again, I am not an expert on how the math applies to camera sensors specifically, but I suggest studying the concept of dither and how, because of it, a 14-bit ADC is completely adequate to fully represent a sensor photocell that has 14 stops of dynamic range. Making it a 16-bit ADC will recover no additional information whatsoever, but will just carry additional noise.

And the additional noise the 16-bit ADC will convey will neither harm, nor help, image quality. The conveyance of it may slow down readout and processing though.

HDR is better thought of as extending headroom. It doesn't lower the noise floor of a system. It raises the roof. It will not give more precision to values within the base dynamic range.

Digital blanking is another matter, and would be a NR strategy, with plenty of pitfalls.
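A toy illustration of the idea (this is just the statistics of dithered quantization, not a claim about how any real sensor pipeline is built): a signal sitting well below one ADC step is completely lost by plain quantization, but survives averaging when noise/dither is present ahead of the quantizer.

```python
import random

def quantize(x, step=1.0):
    """Ideal quantizer with the given step size (one ADC code = one step)."""
    return step * round(x / step)

signal = 0.3      # a steady signal well under one LSB
n = 100_000
random.seed(1)

plain    = sum(quantize(signal) for _ in range(n)) / n
dithered = sum(quantize(signal + random.gauss(0, 0.5)) for _ in range(n)) / n

print(plain)      # 0.0  -- without dither the sub-LSB signal simply vanishes
print(dithered)   # ~0.3 -- with dither it survives averaging, at the cost of added noise
```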
 
straub said:
I sincerely don't get the fuss about 12/13/14 stops of DR. From what I've read, both D800 and 5D3 use 14-bit ADCs.

With 14-bit ADCs, anything above ~10 stops is pretty much useless. An EV 10 stops from saturation (value 16383) registers as value 15 (without taking noise into account). That leaves a grand total of 15 luminance values to represent *all* the extra DR above that, and specifically 8 values for the next stop of DR.

Now, if the cameras happen to use 16-bit ADCs, then they've got two extra stops of usable DR.

IMO SNR at 18% is far more important as far as IQ goes.
Not sure about the 5DMKIII, but the D800 uses 16-bit processing and outputs at 14 bits. This is copied from the brochure:
14-bit A/D conversion and 16-bit image processing for rich tones and natural colors

Tonal gradation is where an image transforms from simply representing life to taking on a life of its own. The D800 does exactly that, with cutting-edge image processing that injects vital energy into your images. Black is rendered as pitch black, and shadow details are subtle and rich. Even under harsh, high-contrast light, where some cameras can fail, the D800’s gradation remains smooth with abundant detail and tone all the way up the scale to pure white.
 
Ew said:
FPS is definitely key when shooting kids going bonkers inside w/ poor lighting. This is why I went for the 7D vs the mk2. 7D w/ 28 1.8 @ 2.8, ISO 1600 has been the work horse. Looking forward to the 5D3 for more ISO and cleaner images.

That or valium. Put it in their sugar cookies and you can shoot them with a pinhole camera if you want.
 
grahamsz said:
This is also an area where I'm a little skeptical that we can trust raw's rawness.

For that 10th stop, the camera will be reading out luminance values like

0,2,2,6,2,7,7,4,7,3,7,4,5,5,4,3,2,0,6,0,4,2,5,3

Since people aren't really using that range to find detail, I can see it being really tempting for a camera designer to clip them or apply some kind of noise reduction or binning on the data coming off the camera. That would give nice clean blacks and people do like those.

Lens-cap-on black frame experiments with my 5D2 seem to show that the RAW file does contain that "10th stop junk", and it is recoverable; but normally the software (at least the Adobe software that I use) defaults the black level to a value like 5. You have to be a little perverse in order to let that noise in.
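As a tiny illustration of what a default black point like that does to the deep-shadow values quoted above (plain numpy arithmetic; the actual Adobe pipeline is obviously more involved):

```python
import numpy as np

# the "10th stop junk" values quoted above
raw = np.array([0, 2, 2, 6, 2, 7, 7, 4, 7, 3, 7, 4, 5, 5, 4, 3, 2, 0, 6, 0, 4, 2, 5, 3])

black_level = 5
clipped = np.clip(raw - black_level, 0, None)  # subtract the black point, clamp negatives to 0

print(raw.mean())        # 3.75  -- there is still structure in the raw numbers
print(clipped.mean())    # ~0.42 -- most of it is discarded with a black level of 5
print(clipped)           # a nice "clean black", much as described above
```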
 
peederj said:
Again, I am not an expert on how the math applies to camera sensors specifically, but I suggest studying the concept of dither and how, because of it, a 14-bit ADC is completely adequate to fully represent a sensor photocell that has 14 stops of dynamic range. Making it a 16-bit ADC will recover no additional information whatsoever, but will just carry additional noise.

Do you have some sources I can read on that? An accurate 16-bit ADC will always be better than an accurate 14-bit one (assuming they can operate at the same sample rate). Now I suspect we can't actually build one, or can't get enough usable data from the sensor to render the last couple of bits useful.

You pretty much get one stop of range for each bit in your ADC, but only if you consider 1 bit an adequate amount of depth for the last stop.
 
http://en.wikipedia.org/wiki/Dither

So yes, a 14-bit ADC can adequately represent a 14-stop photocell, and the accuracy is going to be limited by the photocell's noise floor (unless it is actually more than 14 stops, measured in dB from the noise floor to clipping). HDR raises the point of clipping but can't do anything about the noise floor, which is often based on physical limits like Brownian motion etc.
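For reference, the stops-to-dB conversion (amplitude convention, about 6 dB per stop):

```python
import math

def stops_to_db(stops):
    """Dynamic range in dB for a given number of stops (each stop is a
    doubling of signal amplitude, i.e. about 6.02 dB)."""
    return stops * 20 * math.log10(2)

print(round(stops_to_db(14), 1))   # ~84.3 dB from noise floor to clipping
```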

Arun said:
Lens-cap-on black frame experiments with my 5D2 seem to show that the RAW file does contain that "10th stop junk", and it is recoverable; but normally the software (at least the Adobe software that I use) defaults the black level to a value like 5. You have to be a little perverse in order to let that noise in.

So reading that these systems use a 14-bit ADC but 16-bit DSP, it's the DSP stage that is the likely culprit. The ADC will represent the state of the photocell very well, but doing DSP without introducing distortion and noise requires a deeper numerical space to work within. Otherwise you either have what's called lossy truncation, which you seem to indicate with your "default black level" report, or you have to dither each calculation, which brings up your noise level logarithmically. So the sensors and ADCs are likely not the rate limit on quality here; it's the fact that doing, say, 32-bit floating point DSP efficiently requires all the power and heat you see on your laptop. Though that is getting better with all the R&D being poured into cellphones and tablets.

I don't know much about the architecture of camera implementations, but I just caution against over-interpreting a quoted figure like "14-bit ADC vs 16-bit ADC". There are what are known as "marketing bits" that just hold noise but make people think they are getting something better. Effective dynamic range from sensor to memory card is a more useful statistic... I'm guessing the DSP stage is the opportunity for improvement, given what I'm reading.
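A contrived example of the lossy-truncation point (nothing to do with any real DIGIC/EXPEED internals, just integer arithmetic): scaling data down and back up inside the same integer space destroys deep-shadow levels, while a wider intermediate representation preserves them.

```python
def via_14bit_ints(value, gain=0.1):
    """Scale down and back up, truncating to integers at every step,
    as a stand-in for doing all the DSP in the same space as the data."""
    down = int(value * gain)            # truncation: the fractional part is gone
    return int(down / gain)

def via_wider_intermediate(value, gain=0.1):
    """Same operation, but keeping the intermediate result in floating point."""
    return round((value * gain) / gain)

shadows = list(range(1, 16))            # deep-shadow raw code values
print([via_14bit_ints(v) for v in shadows])          # [0, 0, ..., 0, 10, 10, 10, 10, 10, 10]
print([via_wider_intermediate(v) for v in shadows])  # 1 through 15: every level survives
```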
 
peederj said:
http://en.wikipedia.org/wiki/Dither

So yes, a 14-bit ADC can adequately represent a 14-stop photocell, and the accuracy is going to be limited by the photocell's noise floor (unless it is actually more than 14 stops, measured in dB from the noise floor to clipping). HDR raises the point of clipping but can't do anything about the noise floor, which is often based on physical limits like Brownian motion etc.

I know what dithering is when it pertains to display. I've taken enough computer graphics courses to know the basics of dithering an image.

Where I'm drawing a complete blank is where you suggest that the sensor is somehow able to dither the incoming image. I haven't designed a semiconductor more complex than a single FET or NPN, but I honestly can't imagine how you'd dither an image at capture time. Moreover, I can't imagine why you'd do it at that point; it seems like the DIGIC would be the place to do it.
 
It's done at the stage of analog to digital conversion as a means of eliminating quantization distortion and representing the full dynamic range within the bitspace. At least, so I would guess. It may also be used elsewhere in the system, it's a generally good thing when dealing with any quantized DSP.

If they aren't using dithered ADCs, then you are right that 14 bits will not cut it for 14 stops... but if that were the case, we simply wouldn't have the dynamic range they are delivering. We would have more like the 10 stops you are claiming, but really, I think those bits are being lost at the DSP stage and not the ADC stage. All my posts on this subject have been addressed to the person who claimed the 5D3 and D800 were weak because they only had 14-bit ADCs, and I wanted to argue (though I am not an expert on photo sensors!) that that argument was likely false.
 
Why is everyone taking pictures in the dark?

Photography is about capturing the light. I think it's nuts that so many "camera testers" care so much about these insane ISO levels. What were you photographing a few years ago when high ISO was terrible? Or when you had 100-speed film loaded in your camera?

I think it's all a bit out of hand. And I don't want to hear the line "pros need these better tools blah blah blah..."

Pros have been capturing images for generations! It's all marketing now. ALL of these cameras are capable of capturing amazing images. Enjoy the camera you choose, and learn its strengths and weaknesses.
 
briansquibb said:
What is the limit of sRGB then?

In what context? (There weren't any quotes there, so I'm not sure where this conversation continues from.)

Generally speaking, sRGB is a color space, which doesn't so much change the number of representable colors as change the saturation and luminosity extents and the white & black points of colors when modeled within that color space. You'll always have 24 bits (8-bit RGB) or 48 bits (16-bit RGB) of integer precision for each pixel; however, with say AdobeRGB or ProPhotoRGB, the appearance of those colors when rendered may differ in comparison to sRGB, despite technically being "the same" color. In larger color spaces, a fully saturated "red" may appear more red and more saturated than in sRGB (and whether you have the ability to actually observe that would depend on whether your viewing device supports a gamut larger than sRGB itself! :P)

Image Color Management (ICM) converts color information from one color space to another in L*a*b* (Lab) space, and colors are usually represented as high-precision floating point numbers when doing so... so the number of mathematically representable colors is essentially "infinite". When converting back out of Lab to RGB, you may lose precision, and depending on the distribution of specific floating point color values in Lab, discrete color values in RGB may coalesce or end up clipped (this sometimes depends on rendering intent, such as Absolute, Relative, or Perceptual). Hence the reason why it's useful to keep photos in the widest gamut (color space) possible until you actually have a reason to convert to sRGB (a smaller gamut).
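A small sketch of that last point, using the standard published linear-sRGB to XYZ (D65) matrix; the example XYZ triple is just an illustrative saturated green that a wide-gamut space can hold but sRGB cannot:

```python
import numpy as np

# Standard linear-sRGB -> CIE XYZ (D65) matrix; its inverse goes back.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
XYZ_TO_SRGB = np.linalg.inv(SRGB_TO_XYZ)

# Illustrative saturated green: a real color, representable in a wide gamut
# like ProPhoto, chosen here so that it falls outside sRGB.
xyz = np.array([0.20, 0.50, 0.05])

linear_srgb = XYZ_TO_SRGB @ xyz
print(linear_srgb)                     # negative channels = outside the sRGB gamut
print(np.clip(linear_srgb, 0.0, 1.0))  # gamut clipping: the color has to change to fit
```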
 
pdirestajr said:
Why is everyone taking pictures in the dark?

Photography is about capturing the light. I think it's nuts that so many "camera testers" care so much about these insane ISO levels. What were you photographing a few years ago when high ISO was terrible? Or when you had 100-speed film loaded in your camera?

I think it's all a bit out of hand. And I don't want to hear the line "pros need these better tools blah blah blah..."

Pros have been capturing images for generations! It's all marketing now. ALL of these cameras are capable of capturing amazing images. Enjoy the camera you choose, and learn its strengths and weaknesses.


I just gave somebody the highlight of the month,
http://www.canonrumors.com/forum/index.php?topic=4936.msg97931;topicseen#new

and you get the post of the month.

I just got mine this very night!
(photo attached)
 
pdirestajr said:
Why is everyone taking pictures in the dark?

Photography is about capturing the light. I think it's nuts that so many "camera testers" care so much about these insane ISO levels. What were you photographing a few years ago when high ISO was terrible? Or when you had 100-speed film loaded in your camera?

I think it's all a bit out of hand. And I don't want to hear the line "pros need these better tools blah blah blah..."

Pros have been capturing images for generations! It's all marketing now. ALL of these cameras are capable of capturing amazing images. Enjoy the camera you choose, and learn its strengths and weaknesses.


It's not just about those who need high-ISO ability to shoot in low available light.

It's also important to capture a good quality "dark" along with light and to do it with fidelity. Much as you might compare the silences within a musical passage to a loud crescendo. Do you want to hear hiss instead of silence? Or worse yet, a quiet but distracting hum or high-pitched tone? (the latter being the equivalent of data from most Canon bodies)

A system that can capture the detail in the light areas and the detail in dark areas in one shot, without adding anything artificial (e.g. distracting pattern noise) will give the photographer/artist much more to work with.

Not all of us need or want that ability, but some do. Especially those who may be printing poster-size and larger.
 
peederj said:
It's done at the stage of analog to digital conversion as a means of eliminating quantization distortion and representing the full dynamic range within the bitspace. At least, so I would guess. It may also be used elsewhere in the system, it's a generally good thing when dealing with any quantized DSP.

If they aren't using dithered ADCs, then you are right that 14 bits will not cut it for 14 stops... but if that were the case, we simply wouldn't have the dynamic range they are delivering. We would have more like the 10 stops you are claiming, but really, I think those bits are being lost at the DSP stage and not the ADC stage. All my posts on this subject have been addressed to the person who claimed the 5D3 and D800 were weak because they only had 14-bit ADCs, and I wanted to argue (though I am not an expert on photo sensors!) that that argument was likely false.

Found a decent summary here

http://www.analog.com/library/analogDialogue/archives/40-02/adc_noise.html

That's actually quite interesting stuff, although I'm not sure how applicable it is in a camera setting.
 