D810 vs. 5D Mk3

Etienne said:
The 5DIII with 35 f/2 IS is a great combo, both for video and photos. For video, the IS is so good that the shots sometimes look like they are on a tripod. Can't do that with Sony or Nikon, and f/2 gives great low light performance as well as shallow DOF.

Not sure why you say 'can't do that with Sony or Nikon'...

Have you ever tried Sony's 'Active' image stabilization? Granted, I believe it only works with certain lenses, but the combination of optical stabilization in the lens with digital image stabilization (from accelerometer data, IIRC) leads to shots so steady you wouldn't believe they were taken on anything other than a Steadicam!

Furthermore, Canon's not alone in having very well stabilized lenses in its lineup. Although the 16-35mm f/4L IS is quite stellar (rated to 4 stops of IS by CIPA standards, which are pretty stringent, so it really is that good), so is the VR on the Nikon 70-200mm f/4 (also rated to 4 stops CIPA). It's incredibly steady. Oh, and Fuji's 50-140mm has the best IS I've ever seen; rated at 5 stops CIPA, it's so good it's a little unsettling.

Actually, I see IS getting better from generation to generation. Generally, the newer the lens, the better the IS. So with Canon putting out so many stellar lenses of late, yeah, there may be an advantage there. And no one will argue that Canon doesn't have an incredible lens family. Just that, e.g., for me that doesn't matter much, b/c there are excellent alternatives in other brands. For example, Sigma now makes the best 35mm and 50mm AF primes, hands down, and is brand agnostic. The Nikon 70-200 f/4 VR is much better than the Canon 70-200 f/4L IS, which has serious left/right softness issues that vary from shot to shot b/c of the IS element (even the Sony FE 70-200 f/4 doesn't have this issue). Etc. etc.

And if we're talking about video, the Sony A7S will do far better with those Canon lenses than a 5DIII would... and will still use the IS on those lenses with the Metabones adapter. Btw, the OSS on the Sony 16-35 is also incredible.

My point being - there are very credible alternatives from other brands that make such generalizations misleading.
 
LetTheRightLensIn said:
FEBS said:
sarangiman said:
dtaylor said:
sarangiman said:
Normalized difference is 11.7 vs 14.8 for the D810...

And that's where any knowledgeable person stops reading. >14 stops...from a linear 14-bit ADC...kind of impossible ;)
Nope, that's where an unknowledgeable person stops reading, one who doesn't understand the math behind resampling.

Or a person who thinks he understands. How could you get more than 14 stops out of a 14-bit ADC? No math can solve this. Yes, you might interpolate and estimate what a value might be, but that is not correct. 14 bits means 14 bits maximum in theory, and in practice it will be lower. There is no logic and no math to find the REAL values outside the sampled values.

No, the other guy was right.

(And they are not saying you get more than 14 stops when taking full advantage of the resolution the sensor is capable of, just at an 8MP detail equivalent, which is what they use as the standard to compare all cameras.)

So the 11.7 and 14.8 are based on the fact that they recalculate the dynamic range to an 8MP detail equivalent? That means that higher resolution will always have an advantage, simply because the D810 has 36MP and the 5D3 has 22MP. So even if the sensors perform the same, differing only in MP, the higher-MP one will have the higher normalized figure.

That tells me nothing, in my opinion. The best horsepower-to-weight ratio only tells me which car wins that particular comparison, not that it has the best engine or is the best car. The same goes, in my opinion, for the recalculation to 8MP.

However, I fully agree that the D810 is a very nice camera that Nikon has placed on the market. But please don't use figures like that to compare these cameras.
 
FEBS said:
So the 11.7 and 14.8 are based on the fact that they recalculate the dynamic range to an 8MP detail equivalent? That means that higher resolution will always have an advantage, simply because the D810 has 36MP and the 5D3 has 22MP. So even if the sensors perform the same, differing only in MP, the higher-MP one will have the higher normalized figure.

That tells me nothing, in my opinion. The best horsepower-to-weight ratio only tells me which car wins that particular comparison, not that it has the best engine or is the best car. The same goes, in my opinion, for the recalculation to 8MP.

However, I fully agree that the D810 is a very nice camera that Nikon has placed on the market. But please don't use figures like that to compare these cameras.

No, you misunderstand.

You say 'even if the sensors perform the same... then the highest MP sensor will have the highest normalized figure.'

Yes, that's right. And it's absolutely valid.

Think about what it means for a higher resolution sensor - of the same size - to have the 'same (pixel-level) performance' (before normalization). A higher res sensor necessarily has smaller pixels, which means each pixel has a smaller FWC (full-well capacity). For that pixel to have the same pixel-level DR, it'd have to have lower read noise than the lower res sensor's pixels.

Therefore, it's no surprise it also has higher normalized DR.
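
To put rough numbers on that, here's the usual engineering definition of per-pixel DR (a sketch that ignores shot noise and pattern noise):

```latex
% Per-pixel (engineering) dynamic range, in stops:
\mathrm{DR}_{\text{pixel}} = \log_2\!\left(\frac{\mathrm{FWC}}{\sigma_{\text{read}}}\right)
% Halving the pixel area roughly halves FWC, so holding DR_pixel constant
% forces sigma_read down by the same factor; that lower read noise is
% exactly what pays off once you downsample to a common output size.
```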
 
sarangiman said:
FEBS said:
So the 11.7 and 14.8 are based on the fact that they recalculate the dynamic range to an 8MP detail equivalent? That means that higher resolution will always have an advantage, simply because the D810 has 36MP and the 5D3 has 22MP. So even if the sensors perform the same, differing only in MP, the higher-MP one will have the higher normalized figure.

That tells me nothing, in my opinion. The best horsepower-to-weight ratio only tells me which car wins that particular comparison, not that it has the best engine or is the best car. The same goes, in my opinion, for the recalculation to 8MP.

However, I fully agree that the D810 is a very nice camera that Nikon has placed on the market. But please don't use figures like that to compare these cameras.

No, you misunderstand.

You say 'even if the sensors perform the same... then the highest MP sensor will have the highest normalized figure.'

Yes, that's right. And it's absolutely valid.

Think about what it means for a higher resolution sensor - of the same size - to have the 'same (pixel-level) performance' (before normalization). A higher res sensor necessarily has smaller pixels, which means each pixel has a smaller FWC (full-well capacity). For that pixel to have the same pixel-level DR, it'd have to have lower read noise than the lower res sensor's pixels.

Therefore, it's no surprise it also has higher normalized DR.

But surely, if each pixel has the same well capacity, even though the smaller one performs 'better' for its size, the range of light they can both accurately record is the same, therefore the 'true DR' of the sensor* is the same, for instance the highlights will be blown at the same photon numbers.

*True DR would be the difference in light levels between a pixel that only registers black, to when it is full such that one more photon will not register.

Normalization is a nice way of comparing different things, but it doesn't reflect true DR recording capacity, and truthfully shouldn't be labeled DR. This is one of the many reasons there is such a difference of opinion between people who love tests and equations, and people who look at the differences in images.

Noise and banding are what truthfully differentiate the current sensors, and that difference is nowhere near this mythical 3.1 stops of "DR". People who regularly use or work files from both know the differences are in the shadows and are closer to two stops: Canon files can be lifted 3 stops in the shadows with very high-quality results; Exmor files can be lifted closer to five stops in the shadows, but by the intrinsic nature of gamma curves they lose a lot of tonality if you need to do that.

As was evident in a recent post here with A7R RAW files available, large areas of 5-stop-lifted shadow detail hold almost no tonality, which mitigates the usefulness of the capability. That doesn't mean Canon shouldn't have it; it just means that when we do have it, don't expect to get the same results from a 'normal' exposure and an underexposed image that is then lifted to 'normal'. Tonality does not work like that, and that was demonstrated in another thread here recently too.
 
privatebydesign said:
But surely, if each pixel has the same well capacity, even though the smaller one performs 'better' for its size, the range of light they can both accurately record is the same, therefore the 'true DR' of the sensor* is the same, for instance the highlights will be blown at the same photon numbers.

*True DR would be the difference in light levels between a pixel that only registers black, to when it is full such that one more photon will not register.
...
Normalization is a nice way of comparing different things, but it doesn't reflect true DR recording capacity, and truthfully shouldn't be labeled DR. This is one of the many reasons there is such a difference of opinion between people who love tests and equations, and people who look at the differences in images.
...

Assuming you view the picture at a normal viewing distance, a normal DSLR will have more pixels than your eye can resolve. So each photoreceptor in your eye will receive light from multiple pixels, summed with a weight proportional to how much of their area emits light that reaches that photoreceptor, which is precisely how bilinear downsampling works. So yes, you will be able to perceive more DR from the higher-MP sensor, unless you pixel peep at 100%, which is the only case where per-pixel DR is of actual interest in a picture.
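
If you want to convince yourself numerically, here's a minimal simulation of that summing (Gaussian read noise only, shot noise ignored, a plain 2x2 box average standing in for the eye/downsampling; all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 4.0        # mean level of a very dark tone, in electrons
read_noise = 8.0    # read noise in electrons, so per-pixel SNR = 0.5

# A flat dark patch, 1000x1000 pixels
patch = signal + rng.normal(0, read_noise, (1000, 1000))

def snr(x):
    return x.mean() / x.std()

print(f"per-pixel SNR:     {snr(patch):.2f}")   # ~0.5: 'lost in noise'

# Average 2x2 blocks into one output pixel, halving the noise
down = patch.reshape(500, 2, 500, 2).mean(axis=(1, 3))
print(f"after 2x2 average: {snr(down):.2f}")    # ~1.0: now detectable
```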
 
sarangiman said:
No, you misunderstand.

You say 'even if the sensors perform the same... then the highest MP sensor will have the highest normalized figure.'

Yes, that's right. And it's absolutely valid.

Think about what it means for a higher resolution sensor - of the same size - to have the 'same (pixel-level) performance' (before normalization). A higher res sensor necessarily has smaller pixels, which means each pixel has a smaller FWC (full-well capacity). For that pixel to have the same pixel-level DR, it'd have to have lower read noise than the lower res sensor's pixels.

Therefore, it's no surprise it also has higher normalized DR.

I do understand what you mean. However, what's the meaning of a higher normalized DR? Where and how can I see that in my photo? It's just the same as the horsepower-to-weight ratio, which doesn't tell me anything about the performance of a car on a track.

That's the reason I have a problem with such a meaningless figure. I completely agree with privatebydesign's remark on this: "Normalization is a nice way of comparing different things, but it doesn't reflect true DR recording capacity, and truthfully shouldn't be labeled DR."

The DR of a sensor is basically set by the number of bits used in the A/D converter. The rest of the camera (optics, light metering, ...) can only decrease this maximum DR further. But talking about a normalized DR is totally meaningless for the photo.
 
msm said:
privatebydesign said:
But surely, if each pixel has the same well capacity, even though the smaller one performs 'better' for its size, the range of light they can both accurately record is the same, therefore the 'true DR' of the sensor* is the same, for instance the highlights will be blown at the same photon numbers.

*True DR would be the difference in light levels between a pixel that only registers black, to when it is full such that one more photon will not register.
...
Normalization is a nice way of comparing different things, but it doesn't reflect true DR recording capacity, and truthfully shouldn't be labeled DR. This is one of the many reasons there is such a difference of opinion between people who love tests and equations, and people who look at the differences in images.
...

Assuming you view the picture at a normal viewing distance, a normal DSLR will have more pixels than your eye can resolve. So each photoreceptor in your eye will receive light from multiple pixels, summed with a weight proportional to how much of their area emits light that reaches that photoreceptor, which is precisely how bilinear downsampling works. So yes, you will be able to perceive more DR from the higher-MP sensor, unless you pixel peep at 100%, which is the only case where per-pixel DR is of actual interest in a picture.

Are you sure you can see this? It's 2^14 * 2^14 * 2^14 = 16384 * 16384 * 16384 = 4,398,046,511,104 possible combinations over the RGB colors together. You know that printing and displaying photos on monitors allows far fewer combinations. So theoretically, I can follow that you can increase the DR by adding multiple pixels together. But how can I see that in my photo?

Pixel peeping, or taking a crop at 100%, is frequently done in practice, and, as you mentioned, there we don't see any increase in DR. So why use a normalized value to compare something that can't be seen in practice?
 
msm said:
privatebydesign said:
But surely, if each pixel has the same well capacity, even though the smaller one performs 'better' for its size, the range of light they can both accurately record is the same, therefore the 'true DR' of the sensor* is the same, for instance the highlights will be blown at the same photon numbers.

*True DR would be the difference in light levels between a pixel that only registers black, to when it is full such that one more photon will not register.
...
Normalization is a nice way of comparing different things, but it doesn't reflect true DR recording capacity, and truthfully shouldn't be labeled DR. This is one of the many reasons there is such a difference of opinion between people who love tests and equations, and people who look at the differences in images.
...

Assuming you view the picture at a normal viewing distance, a normal DSLR will have more pixels than your eye can resolve. So each photoreceptor in your eye will receive light from multiple pixels, summed with a weight proportional to how much of their area emits light that reaches that photoreceptor, which is precisely how bilinear downsampling works. So yes, you will be able to perceive more DR from the higher-MP sensor, unless you pixel peep at 100%, which is the only case where per-pixel DR is of actual interest in a picture.

I, in my ignorance, disagree.

The brightest bright and the darkest dark will have almost identical values (indeed, we have to rely on an output medium, and in that case they will be identical); where a downsampled image might achieve increased IQ is in tonality, because the averaging, assuming the displaying medium can also differentiate and display the subtleties and your eye can perceive them, will have a greater number of possible combinations.

What it won't do is have a brighter bright or darker dark, and surely that is the measure of DR, not how many divisions that same range is divided into?

To me, in my simple way, a measure of a camera's DR is the difference in luminosity between how dark and how bright objects in the scene can be while it still records detail in those brights and darks; two pixels with the same charge capacity have the same DR potential (taking all other things like read noise etc. into account) regardless of their size. How is that wrong?
 
FEBS said:
..
So theoretically, I can follow that you can increase the DR by adding multiple pixels together. But how can I see that in my photo?

By viewing your image from a greater distance, so you can no longer see each individual pixel. Then the smallest object your eye can resolve will consist of multiple pixels (more of them in the case of the higher-MP sensor), the sum of which can hold many more of those RGB combinations than a single pixel can. The normalized DXO DR score is based on the assumption that your eye resolves 8 megapixels.
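
For what it's worth, if you grant that 8 MP assumption, the size of the normalization boost follows directly (assuming uncorrelated per-pixel noise):

```latex
% Averaging k = N / 8\,\mathrm{MP} pixels into each output pixel improves
% SNR by \sqrt{k}, so normalized DR exceeds per-pixel DR by:
\Delta\mathrm{DR} = \tfrac{1}{2}\log_2\!\left(\frac{N}{8\,\mathrm{MP}}\right)\ \text{EV}
% e.g. a 36 MP sensor gains about 0.5 * log2(36/8), roughly 1.1 EV.
```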
 
msm said:
FEBS said:
..
So theoretically, I can follow that you can increase the DR by adding multiple pixels together. But how can I see that in my photo?

By viewing your image from a greater distance, so you can no longer see each individual pixel. Then the smallest object your eye can resolve will consist of multiple pixels (more of them in the case of the higher-MP sensor), the sum of which can hold many more of those RGB combinations than a single pixel can. The normalized DXO DR score is based on the assumption that your eye resolves 8 megapixels.

No, the DXO DR "score" is based on an 8"x12" print at 300ppi. It has nothing to do with pixel visibility or human acuity. 8MP is an essentially arbitrary number, chosen because at the time 'all' cameras had around 8MP; they could just as easily have gone up, or down. I can assure you the eye cannot resolve a pixel in an 8"x12" print at a normal viewing distance (normally considered to be the diagonal of the print), even before the printer algorithm introduces its dithering. You need a very good magnifying glass to see the micro dots of layered ink on a print.

 
privatebydesign said:
msm said:
FEBS said:
..
So theoretically, I can follow that you can increase the DR by adding multiple pixels together. But how can I see that in my photo?

By viewing your image from a greater distance, so you can no longer see each individual pixel. Then the smallest object your eye can resolve will consist of multiple pixels (more of them in the case of the higher-MP sensor), the sum of which can hold many more of those RGB combinations than a single pixel can. The normalized DXO DR score is based on the assumption that your eye resolves 8 megapixels.

No, the DXO DR "score" is based on an 8"x12" print at 300ppi. It has nothing to do with pixel visibility or human acuity. 8MP is an essentially arbitrary number, chosen because at the time 'all' cameras had around 8MP; they could just as easily have gone up, or down. I can assure you the eye cannot resolve a pixel in an 8"x12" print at a normal viewing distance (normally considered to be the diagonal of the print), even before the printer algorithm introduces its dithering. You need a very good magnifying glass to see the micro dots of layered ink on a print.


At least implicitly, they also make assumptions about viewing distance. Viewed from 5 m away, the perceived DR would obviously be higher.

Also, Apple's Retina displays are around 300 PPI, and some are around the size of that print, so I doubt you need that good a magnifying glass.
 
msm said:
privatebydesign said:
msm said:
FEBS said:
..
So theoretically, I can follow that you can increase the DR by adding multiple pixels together. But how can I see that in my photo?

By viewing your image from a greater distance, so you can no longer see each individual pixel. Then the smallest object your eye can resolve will consist of multiple pixels (more of them in the case of the higher-MP sensor), the sum of which can hold many more of those RGB combinations than a single pixel can. The normalized DXO DR score is based on the assumption that your eye resolves 8 megapixels.

No, the DXO DR "score" is based on an 8"x12" print at 300ppi. It has nothing to do with pixel visibility or human acuity. 8MP is an essentially arbitrary number, chosen because at the time 'all' cameras had around 8MP; they could just as easily have gone up, or down. I can assure you the eye cannot resolve a pixel in an 8"x12" print at a normal viewing distance (normally considered to be the diagonal of the print), even before the printer algorithm introduces its dithering. You need a very good magnifying glass to see the micro dots of layered ink on a print.

At least implicitly, they also make assumptions about viewing distance. Viewed from 5 m away, the perceived DR would obviously be higher.

Also, Apple's Retina displays are around 300 PPI, and some are around the size of that print, so I doubt you need that good a magnifying glass.

No, it wouldn't. The difference between the lightest light and the darkest dark would be identical, so the DR would be the same.

The largest and smallest values are constant; there is, and can be, no change in DR.

The averaging you are talking about does not result in 'more DR'; it results in a greater number of values within that same range, or, put another way, greater tonality in increments so small they are beyond our eyes' capacity to differentiate.


You could update that to include the contrast ratios of screens too, but that is still fixed.
 
privatebydesign said:
The largest and smallest values are constant; there is, and can be, no change in DR.

The averaging you are talking about does not result in 'more DR'; it results in a greater number of values within that same range, or, put another way, greater tonality in increments so small they are beyond our eyes' capacity to differentiate.


Except in photography there is always noise, and the smallest observable quantity depends on the amount of noise you have. DXO uses the following definition of DR:

Dynamic range is defined as the ratio between the highest and lowest gray luminance a sensor can capture. However, the lowest gray luminance makes sense only if it is not drowned by noise, thus this lower boundary is defined as the gray luminance for which the SNR is larger than 1. The dynamic range is a ratio of gray luminance; it has no defined unit per se, but it can be expressed in Ev, or f-stops.

See more here:
http://www.dxomark.com/About/In-depth-measurements/Measurements/Noise
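
In symbols, that definition is simply (my paraphrase, not DxO's own notation):

```latex
% Ratio of the saturation level to the lowest gray luminance
% still meeting SNR > 1, expressed in EV (stops):
\mathrm{DR} = \log_2\!\left(\frac{L_{\text{sat}}}{L_{\mathrm{SNR}=1}}\right)
```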

Anyways, I think it is a waste of my time to discuss these things on this forum, so I'll leave this discussion now. A quick look at the countless other threads on the subject should give a hint as to why.
 
privatebydesign said:
What it won't do is have a brighter bright or darker dark, and surely that is the measure of DR, not how many divisions that same range is divided into?

No, sensor DR is the difference between the brightest bright it can record (where it clips) and the darkest dark that is not lost in noise.

privatebydesign said:
But surely, if each pixel has the same well capacity, even though the smaller one performs 'better' for its size, the range of light they can both accurately record is the same, therefore the 'true DR' of the sensor* is the same, for instance the highlights will be blown at the same photon numbers.

*True DR would be the difference in light levels between a pixel that only registers black, to when it is full such that one more photon will not register.

Here you're not considering that when you downsize, you average pixels, which increases SNR for the area of pixels averaged. And you do know that areas with SNR < 1 can reach SNR = 1 with enough averaging, right? Therefore, darker tones can be pulled up to SNR = 1, and therefore calculated DR can increase.

privatebydesign said:
Normalization is a nice way of comparing different things, but it doesn't reflect true DR recording capacity, and truthfully shouldn't be labeled DR. This is one of the many reasons there is such a difference of opinion between people who love tests and equations, and people who look at the differences in images.

Nonsense. There are those who can do both: love the math, and correlate the science/math to image-quality differences. There's a reason for controlled tests: when done right, they reflect real-world differences in actual images.

Those in sensor design know this.

privatebydesign said:
Noise and banding are what truthfully differentiate the current sensors, and that difference is nowhere near this mythical 3.1 stops of "DR". People who regularly use or work files from both know the differences are in the shadows and are closer to two stops: Canon files can be lifted 3 stops in the shadows with very high-quality results; Exmor files can be lifted closer to five stops in the shadows, but by the intrinsic nature of gamma curves they lose a lot of tonality if you need to do that.

Again, no. Do the proper side-by-side, and it's not a 'mythical' difference. But you have to know how to do the test right, i.e. don't mistake photon shot noise in an exposure 3 EV under for sensor noise.

The respected Bill Claff's data, or a higher SNR cutoff, shows a difference of 2.5 EV. So now we're arguing about half a stop?

The point is that there are almost no tones in the 14-bit D810 file that can't be used b/c of read noise. If you can't use them, it's b/c you didn't collect enough light down there to begin with. That's impressive, b/c it means the only way you can really get anything better is to use a bigger sensor, for the same reason that high-ISO performance would improve with a larger sensor: collecting more light.

Furthermore, I've said time and again: it's not about 'how many stops you can push'. It's about which particular tones in the 14-bit file you can and can't work with. You cannot simplify it to 'Exmor can be pushed X stops and Canon can be pushed Y stops'. That's just dead wrong, if you're trying to be rigorous or quantitative, anyway.

privatebydesign said:
As was evident in a recent post here with A7R RAW files available, large areas of 5-stop-lifted shadow detail hold almost no tonality, which mitigates the usefulness of the capability. That doesn't mean Canon shouldn't have it; it just means that when we do have it, don't expect to get the same results from a 'normal' exposure and an underexposed image that is then lifted to 'normal'. Tonality does not work like that, and that was demonstrated in another thread here recently too.

Ok, but that has to do with photon shot noise. It's the same reason some people find extremely high-ISO shots unacceptable: because tones are made with too little light. Same with tones down in the depths of the 14-bit file: they're made with too little light.

So basically what you're arguing now is that you want a DR measure with a higher SNR cutoff. That's fine, but just realize what it is you're actually saying.

FEBS said:
Pixel peeping, or taking a crop at 100%, is frequently done in practice, and, as you mentioned, there we don't see any increase in DR. So why use a normalized value to compare something that can't be seen in practice?

B/c it's not fair to show this:

[Image: D600_vs_D800-ScreenDR.png (DxOMark 'Screen', i.e. per-pixel, DR comparison of the D600 and D800)]


... when in reality actual visual comparisons of DR will not place the D800 behind the D600, so the following normalized comparison is more accurate:

[Image: D600_vs_D800-PrintDR.png (DxOMark 'Print', i.e. normalized, DR comparison of the D600 and D800)]


Again, not sure how we could make it any clearer - you normalize to simulate a comparison at the same viewing size. Downsampling decreases noise, which increases SNR, which means lower tones make it up to your SNR cutoff for DR, which means DR has to increase.

No one's arguing anything about the absolute number and whether or not it reflects exactly the DR someone may actually find usable.

But you have to normalize for comparisons.
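
And the 'Print' numbers follow from the per-pixel ('Screen') numbers by nothing more than the square-root-of-averaging rule. A minimal sketch; the Screen DR inputs (~11.0 EV for the 5D III, ~13.7 EV for the D810) are approximate figures from memory, so treat them as assumptions:

```python
import math

def print_dr(screen_dr_ev, megapixels, ref_mp=8.0):
    """DxO-style normalization: averaging N/8MP pixels per output pixel
    improves SNR by sqrt(N/8MP), i.e. adds 0.5*log2(N/8MP) EV of DR."""
    return screen_dr_ev + 0.5 * math.log2(megapixels / ref_mp)

print(f"5D III: {print_dr(11.0, 22.1):.1f} EV")   # ~11.7 EV
print(f"D810:   {print_dr(13.7, 36.3):.1f} EV")   # ~14.8 EV
```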
 
privatebydesign said:
No, it wouldn't. The difference between the lightest light and the darkest dark would be identical, so the DR would be the same.


The largest and smallest values are constant; there is, and can be, no change in DR.

The averaging you are talking about does not result in 'more DR'; it results in a greater number of values within that same range, or, put another way, greater tonality in increments so small they are beyond our eyes' capacity to differentiate.

This is where you're going wrong.

The 'smallest possible value' is defined by the signal that is just above your SNR threshold (just above the noise floor).

Averaging increases SNR, so darker signals now make it up to your SNR threshold. So you've now increased your range of usable signals or tones, and therefore DR has increased.

Do a side-by-side with the D600 and D800 at equal viewing sizes. Do you really think the D800 has worse DR?

Perhaps a good exercise would be for you to actually measure DR by doing some SNR analyses from wedge shots yourself, before you talk about this stuff so confidently? I'm being serious, not trying to be rude. You seem to almost have a grasp of this stuff, and I feel you would finally 'get it' if you did some analyses yourself.
 
msm said:
So yes, you will be able to perceive more DR from the higher-MP sensor, unless you pixel peep at 100%, which is the only case where per-pixel DR is of actual interest in a picture.

Shoot a transmission step wedge. Note the number of gray squares. Keep down sampling it until a black square turns gray. Or, if you prefer, make a really big print and keep backing away from it until black squares turn gray ;)

Down sampling does not change the range of tones you have. It allows you to reduce the impact of noise and thereby better detect finer detail that is composed of the lowest tones. In that sense you are extending the usable range a bit. But that is all.

Note: noise can be so severe that it obscures the last patch or two of gray in such a test, in which case downsampling would reveal them. But there's nowhere near that much noise at base ISO on any of these cameras.
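
For what it's worth, here's a toy version of that wedge test (a sketch with made-up sensor numbers, read noise only, shot noise ignored) showing the 'reveal' effect:

```python
import numpy as np

rng = np.random.default_rng(1)
read_noise = 30.0      # electrons; arbitrary round number
full_well = 60000.0    # electrons; arbitrary round number

# 14 patches, each one stop darker than the last: a ~13-stop wedge
levels = full_well / 2.0 ** np.arange(14)

def snr_after_binning(level, n):
    """Measured SNR of a flat patch after averaging n sensor pixels
    into each output pixel."""
    patch = level + rng.normal(0, read_noise, (20000, n))
    binned = patch.mean(axis=1)   # n-pixel box average
    return binned.mean() / binned.std()

for n in (1, 4, 16):
    usable = sum(snr_after_binning(lv, n) > 1.0 for lv in levels)
    print(f"{n:2d}-pixel average: {usable} of 14 patches above SNR = 1")
    # counts creep up with n (roughly 11 -> 12 -> 13 here)
```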
 
privatebydesign said:
The brightest bright and the darkest dark will have almost identical values (indeed, we have to rely on an output medium, and in that case they will be identical); where a downsampled image might achieve increased IQ is in tonality, because the averaging, assuming the displaying medium can also differentiate and display the subtleties and your eye can perceive them, will have a greater number of possible combinations.

What it won't do is have a brighter bright or darker dark, and surely that is the measure of DR, not how many divisions that same range is divided into?

You are correct.
 
sarangiman said:
Here you're not considering that when you downsize, you average pixels, which increases SNR for the area of pixels averaged.

And you do know that areas with SNR < 1 can reach SNR = 1 with enough averaging, right? Therefore, darker tones can be pulled up to SNR = 1, and therefore calculated DR can increase.

You are confusing signal (tone variations across 2D space) with dynamic range (the brightest and darkest tones that can be recorded). So is DxO.

Down sampling lets you confidently say that yes, in this tiny region of 2D space we really did detect a tone variation and not just noise fluctuations. It does not mean you recorded a lower minimum tone.

In the transmission step wedge example I always throw out, the signal (the squares in the wedge) is so large to begin with that only extreme noise could obscure it. Therefore you get a true idea of the range of tones that can be recorded.

Furthermore, I've said time and again: it's not about 'how many stops you can push'. It's about which particular tones in the 14-bit file you can and can't work with. You cannot simplify it to 'Exmor can be pushed X stops and Canon can be pushed Y stops'. That's just dead wrong, if you're trying to be rigorous or quantitative, anyway.

It's oversimplified, but it works for most people/situations.
 
sarangiman said:
Perhaps a good exercise would be for you to actually measure DR by doing some SNR analyses from wedge shots yourself, before you talk about this stuff so confidently? I'm being serious, not trying to be rude. You seem to almost have a grasp of this stuff, and I feel you would finally 'get it' if you did some analyses yourself.

I'm being serious when I say that every single person at DxO needs to shoot a transmission step wedge and then print it at different sizes and observe (as opposed to running it through a black box algorithm they designed before trying this test).

It would clarify some things for them, and we might end up with a usable model of DR from their existing database of measurements.
 
dtaylor said:
Shoot a transmission step wedge. Note the number of gray squares. Keep down sampling it until a black square turns gray. Or, if you prefer, make a really big print and keep backing away from it until black squares turn gray ;)

Down sampling does not change the range of tones you have. It allows you to reduce the impact of noise and thereby better detect finer detail that is composed of the lowest tones. In that sense you are extending the usable range a bit. But that is all.

Yes it does. It changes the range of usable tones by making darker tones more usable.

You yourself said it: 'reduce the impact of noise'. Reducing noise = increase in SNR, which can lead to an increase in DR.

It'd help if you properly understood what dynamic range is, and how it's calculated, before you went around misinforming people here.

dtaylor said:
Note: noise can be so severe that it obscures the last patch or two of gray in such a test, in which case downsampling would reveal them. But there's nowhere near that much noise at base ISO on any of these cameras.

Clearly you've never shot an actual 13-stop wedge with a Canon DSLR if you haven't seen any unusable patches with so much noise that SNR drops below 1 or 2. It's not even the last one or two patches: the last ten or so patches drop below SNR = 2 for a Canon 5D III at the pixel level. Normalized to 8MP it's a little better.

And that's the whole point of normalization. It even helps your beloved Canon sensors. :)
 