DxOMark vs. Reality

tron said:
I can understand the improvement in noise when downsampling but can someone prove that downsampling an image (to 8Mpix for example) improves DR?

I ask this because I believe that in practice what is burned out in the highlights has been lost forever, and when there are shadow areas I cannot imagine how a "dark" pixel will benefit from its equally dark neighbours.

Dynamic range is the number of stops between the "saturation point" and the "black point". The saturation point is where the highlights get burned out; this doesn't change. The black point is the point at which the SNR is 0 dB (that is, the signal-to-noise ratio is 1). Downsampling reduces noise, so the SNR at what used to be the black point goes up (to 5 dB, for example), and the new black point after downsampling is some way below that.
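
A quick numerical sketch of this point (Python, with invented signal and noise figures; an illustration, not DxO's actual procedure): a uniform dark patch sitting right at the black point (SNR = 0 dB) gains about 12 dB of SNR when non-overlapping 4x4 blocks are averaged, so the black point of the downsampled image sits well below the old one while the saturation point is untouched.

```python
# Illustrative sketch only (invented numbers, not DxO's procedure):
# a uniform dark patch at the black point (SNR = 0 dB) gains SNR when
# 4x4 blocks are averaged, so the black point moves down after downsampling.
import numpy as np

rng = np.random.default_rng(0)
signal = 2.0   # hypothetical mean level of a deep-shadow patch (DN)
noise = 2.0    # hypothetical per-pixel noise sigma (DN) -> SNR = 1 = 0 dB

pixels = signal + noise * rng.normal(size=(2000, 2000))

def snr_db(img, true_value):
    rms_error = np.sqrt(np.mean((img - true_value) ** 2))
    return 20 * np.log10(true_value / rms_error)

# Average non-overlapping 4x4 blocks: 16 pixels -> 1 downsampled pixel.
small = pixels.reshape(500, 4, 500, 4).mean(axis=(1, 3))

print(f"SNR before: {snr_db(pixels, signal):5.1f} dB")  # ~0 dB: at the black point
print(f"SNR after:  {snr_db(small, signal):5.1f} dB")   # ~12 dB: sqrt(16) = 4x less noise
```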
 

tron

elflord said:
tron said:
I can understand the improvement in noise when downsampling but can someone prove that downsampling an image (to 8Mpix for example) improves DR?

I ask this because I believe that in practice what is burned out in the highlights has been lost forever, and when there are shadow areas I cannot imagine how a "dark" pixel will benefit from its equally dark neighbours.

Dynamic range is the number of stops between the "saturation point" and the "black point". The saturation point is where the highlights get burned out; this doesn't change. The black point is the point at which the SNR is 0 dB (that is, the signal-to-noise ratio is 1). Downsampling reduces noise, so the SNR at what used to be the black point goes up (to 5 dB, for example), and the new black point after downsampling is some way below that.
Thanks!
 
elflord said:
Dynamic range is the number of stops between the "saturation point" and the "black point". The saturation point is where the highlights get burned out; this doesn't change. The black point is the point at which the SNR is 0 dB (that is, the signal-to-noise ratio is 1). Downsampling reduces noise, so the SNR at what used to be the black point goes up (to 5 dB, for example), and the new black point after downsampling is some way below that.
I didn't quite catch this. Could you expand your answer using my example? Here's how I understand downsampling:

Assume that we have 4 neighbouring pixels that in reality represent 4 black squares, but in our RAW file (due to noise) they have the following brightness levels:
1 0
0 3

So when I downsample, I get 1 pixel with brightness level 1 (for example, if I downsample using the formula (1+0+0+3)/4).

That means before downsampling we were looking at an image with an average noise level of 1, and we see the same after downsampling.

So what am I doing wrong compared to all of you who see the noise level reduced after downsampling?

P.S. Sorry for asking silly questions, I'd just like to understand :)
 
well_dunno said:
Those of you who defend DxO scores, why do you have Canon gear?
Because Canon offers the better package. It's not all about the sensor. It's the combination of the sensor, AF, ergonomics, processor and lenses. Similar to cars, where one company offers the better engine, but another company offers the better car. The best engine won't do the job if the car "sucks" ;)
 
BXL said:
well_dunno said:
Those of you who defend DxO scores, why do you have Canon gear?
Because Canon offers the better package. It's not all about the sensor. It's the combination of the sensor, AF, ergonomics, processor and lenses. Similar to cars, where one company offers the better engine, but another company offers the better car. The best engine won't do the job if the car "sucks" ;)
You wouldn't go off-road in a race car or run slick tires in the mud.

Every job needs the right tool.
 
nightbreath said:
elflord said:
Dynamic range is the number of stops between the "saturation point" and the "black point". The saturation point is where the highlights get burned out; this doesn't change. The black point is the point at which the SNR is 0 dB (that is, the signal-to-noise ratio is 1). Downsampling reduces noise, so the SNR at what used to be the black point goes up (to 5 dB, for example), and the new black point after downsampling is some way below that.
I didn't quite catch this. Could you expand your answer using my example? Here's how I understand downsampling:

Assume that we have 4 neighbouring pixels that in reality represent 4 black squares, but in our RAW file (due to noise) they have the following brightness levels:
1 0
0 3

So when I downsample, I get 1 pixel with brightness level 1 (for example, if I downsample using the formula (1+0+0+3)/4).

That means before downsampling we were looking at an image with an average noise level of 1, and we see the same after downsampling.

So what am I doing wrong compared to all of you who see the noise level reduced after downsampling?

P.S. Sorry for asking silly questions, I'd just like to understand :)

There are a couple of issues with the above example. First, the right measure of error is the RMS, not the arithmetic mean (that is, if you take two measurements, +1 and -1, of a variable whose true value is 0, the "error" is certainly not 0).
In your example, the RMS error is

sqrt( (3² + 1² + 0² + 0²) / 4 ) ≈ 1.6.

After you downsample, you get an RMS error of 1 -- that is, sqrt( (1-0)² / 1 ).

Second, the example is a bit unusual because you generally don't expect the error terms to all have the same sign; by averaging the signal, the positive and negative errors cancel each other out.

If you have (for example) a "true" value of 10 and readings 6, 3, 9, 17 (these I just randomly sampled from a normal distribution with mean 10, standard deviation 5), averaging gives 8.75 (an error of 1.25), whereas before we had an RMS error of about 5.4 --

sqrt( ((10-6)² + (10-3)² + (10-9)² + (10-17)²) / 4 ) ≈ 5.4.

In general, the expected value of the error drops by a factor of sqrt(N) -- so in the above example, the expected error at the start is the standard deviation of our distribution, 5. After we average four pixels, the expected error is 2.5 (5/sqrt(4)).
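
A small Python check of the arithmetic above, using the same readings (the script is illustrative only):

```python
# Checking the arithmetic above (illustrative only).
import numpy as np

true_value = 10.0
readings = np.array([6.0, 3.0, 9.0, 17.0])

rms_before = np.sqrt(np.mean((readings - true_value) ** 2))
error_after = abs(readings.mean() - true_value)
print(rms_before)   # ~5.36: close to the distribution's sigma of 5
print(error_after)  # 1.25: the mean is 8.75, off by 1.25

# Over many trials, averaging N = 4 samples shrinks the RMS error by sqrt(4):
rng = np.random.default_rng(1)
trials = rng.normal(10.0, 5.0, size=(100_000, 4))
rms_avg = np.sqrt(np.mean((trials.mean(axis=1) - 10.0) ** 2))
print(rms_avg)      # ~2.5 = 5 / sqrt(4)
```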
 
elflord said:
There are a couple of issues with the above example. First, the right measure of error is the RMS, not the arithmetic mean (that is, if you take two measurements, +1 and -1, of a variable whose true value is 0, the "error" is certainly not 0).
In your example, the RMS error is

sqrt( (3² + 1² + 0² + 0²) / 4 ) ≈ 1.6.

After you downsample, you get an RMS error of 1 -- that is, sqrt( (1-0)² / 1 ).

Second, the example is a bit unusual because you generally don't expect the error terms to all have the same sign; by averaging the signal, the positive and negative errors cancel each other out.

If you have (for example) a "true" value of 10 and readings 6, 3, 9, 17 (these I just randomly sampled from a normal distribution with mean 10, standard deviation 5), averaging gives 8.75 (an error of 1.25), whereas before we had an RMS error of about 5.4 --

sqrt( ((10-6)² + (10-3)² + (10-9)² + (10-17)²) / 4 ) ≈ 5.4.

In general, the expected value of the error drops by a factor of sqrt(N) -- so in the above example, the expected error at the start is the standard deviation of our distribution, 5. After we average four pixels, the expected error is 2.5 (5/sqrt(4)).
Got it now. So it pretty much describes why you cannot normalize without taking environmental parameters into account. DXO is doing something similar to comparing a bee and an airplane from the bee's perspective. In the end we just get artificial results that aren't possible in the real world.
 
Mikael Risedal said:
BXL said:
well_dunno said:
Those of you who defend DxO scores, why do you have Canon gear?
Because Canon offers the better package. It's not all about the sensor. It's the combination of the sensor, AF, ergonomics, processor and lenses. Similar to cars, where one company offers the better engine, but another company offers the better car. The best engine won't do the job if the car "sucks" ;)
There are different kinds of brand fanatics, and the arguments about why something is suddenly not important anymore (when "their" brand is suddenly not the best) can take on quite comical proportions... Some very active writers, among others here at Canon Rumors, are downright rabid -- and although they are intelligent enough to see the connections, an instinctive, almost religious barrier prevents them from seeing what's right in front of their eyes... :)

Oh the irony...
 
Mikael Risedal said:
BXL said:
well_dunno said:
Those of you who defend DxO scores, why do you have Canon gear?
Because Canon offers the better package. It's not all about the sensor. It's the combination of the sensor, AF, ergonomics, processor and lenses. Similar to cars, where one company offers the better engine, but another company offers the better car. The best engine won't do the job if the car "sucks" ;)
There are different kinds of brand fanatics, and the arguments about why something is suddenly not important anymore (when "their" brand is suddenly not the best) can take on quite comical proportions... Some very active writers, among others here at Canon Rumors, are downright rabid -- and although they are intelligent enough to see the connections, an instinctive, almost religious barrier prevents them from seeing what's right in front of their eyes... :)

Religion? What are you talking about, mate? For the vast majority, the biggest size they'll ever go to is the 23" diagonal of their computer screen. Who is printing billboards? If the DxO ratings said that my Canon gear was better than the competition, I for one would not give a damn. Conversely, I don't give a damn about the opposite. If your purchase decision is based on DxO ratings, please be my guest. But maybe one day you'll have to dump all your gear because DxO finds out that the iPhone has a much better sensor! For sure, DxO won't make me deviate a micron from my path. Religion or not.
 
nightbreath said:
Got it now. So it pretty much describes why you cannot normalize without taking environmental parameters into account. DXO is doing something similar to comparing a bee and an airplane from the bee's perspective. In the end we just get artificial results that aren't possible in the real world.
I think you should explain how you conclude this from elflord's post -- it does not follow at all.

The down-sampled figures are a perfectly reasonable way of comparing sensors. They correspond to what you can achieve if you print or view your photographs at the same physical size and print DPI. From the numbers, it looks as if DXO actually measure this after performing the down-sampling, rather than just applying a theoretical perfect adjustment. This means that the down-sampled numbers inherently include any 'environmental parameters' in the result.

No one argued against downsampling when the comparison was 5DII to D700...
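
An idealized version of that normalization is easy to write down: if the per-pixel noise is uncorrelated, downsampling an N-megapixel image to 8 MP improves the SNR by a factor of sqrt(N/8), which extends the measured DR by 0.5 * log2(N/8) stops. The sketch below assumes exactly that idealization; DxO's actual "Print" processing may differ in detail.

```python
# Idealized "print" normalization sketch: assumes uncorrelated per-pixel
# noise, so downsampling to 8 MP buys 0.5 * log2(N / 8) stops of DR.
# (DxO's actual pipeline may differ in detail.)
import math

def print_dr(screen_dr_stops: float, sensor_mp: float, target_mp: float = 8.0) -> float:
    """Per-pixel ("screen") DR plus the stops gained by downsampling to target_mp."""
    return screen_dr_stops + 0.5 * math.log2(sensor_mp / target_mp)

# Hypothetical example: a 36 MP sensor with 13.0 stops of per-pixel DR.
print(f"{print_dr(13.0, 36.0):.2f} stops")  # 14.08: about +1.1 stops at 8 MP
```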
 
MarkII said:
nightbreath said:
Got it now. So it pretty much describes why you cannot normalize without taking environmental parameters into account. DXO is doing something similar to comparing a bee and an airplane from the bee's perspective. In the end we just get artificial results that aren't possible in the real world.
I think you should explain how you conclude this from elflord's post -- it does not follow at all.

The down-sampled figures are a perfectly reasonable way of comparing sensors. They correspond to what you can achieve if you print or view your photographs at the same physical size and print DPI. From the numbers, it looks as if DXO actually measure this after performing the down-sampling, rather than just applying a theoretical perfect adjustment. This means that the down-sampled numbers inherently include any 'environmental parameters' in the result.

No one argued against downsampling when the comparison was 5DII to D700...
I was not being brand-specific, and I'm not defending the 5D Mk II vs. D700 scores. Like others, I didn't have a clue what to look at when I first went to the DXO website wanting to choose my next camera. That's why I'm kind of frustrated now: because now I know what to look for.

As for environmental parameters, I was referring to the ADC bit depth, since it seems that by downsampling you just create data that was not in the image before downsampling (more than 14 stops of data from a 14-bit image). That is, if your measurements include a noise-floor evaluation that gives different results depending on which images you compare against, you just won't make the impression of a trusted source.

Why not come up with an algorithm that measures the whole image rather than each pixel individually? In that case you wouldn't need to downscale at all.

P.S. Please correct me where I'm wrong ;)
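
One correction on the 14-bit point: averaging really can resolve levels below one ADC step, because the read noise dithers the quantizer, so the normalized DR can exceed the per-pixel bit depth without any data being invented. A toy Python demonstration, assuming roughly 1 LSB of Gaussian read noise and a black-level pedestal (real raw files include such an offset so that noise isn't clipped at zero):

```python
# Toy demonstration: with ~1 LSB of Gaussian noise acting as dither, the
# average of many 14-bit samples recovers a signal below one ADC step.
import numpy as np

rng = np.random.default_rng(2)
true_level = 0.3   # hypothetical signal of 0.3 LSB, below one 14-bit step
noise_lsb = 1.0    # read noise of ~1 LSB dithers the quantizer
pedestal = 512     # black-level offset, so the noise isn't clipped at zero

raw = np.round(pedestal + true_level + noise_lsb * rng.normal(size=100_000))
samples = np.clip(raw, 0, 2**14 - 1)  # quantize and clamp to the 14-bit range

print(samples.mean() - pedestal)  # ~0.30: the sub-LSB level emerges from the average
```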
 

infared

The thread's first post has a lot of unproven speculation... but I do not disagree with the basic sentiment. DxO is NOT a place this photographer would go for meaningful information when researching cameras and lenses for purchase. I do not know the reason for their wild fluctuations from what I know to be reality, but they are what they are... and to me what they are is a SUSPECT source that often states opinions out of line with the conclusions of a pool of other reliable and consistent sources. DxOMark backs up its remarks, in my opinion, with suspect "scientific" data. Sometimes their "data" is in line with the general consensus of my other respected sources; many times it is not. I do know that specific products from a manufacturer can vary from item to item, but DxOMark does not present its information with that variation. They present their information as "fact".
I do not waste my time with ANYTHING that DxOMark has to say, because I have found many of their "opinions" to be less than valuable. I feel that anyone with experience and intelligence would do the same.
There are just so many other consistently reliable sources out there to get my information from, sources that increase my awareness and knowledge of photographic tools.
 
I occasionally read their reviews but take them with a huge grain of salt... they keep doing things that make me think they are disorganized and don't have their "stuff" together. There are a few other reviewers that I prefer.

The latest thing I noticed was when I checked to see whether they had reviewed the 135L; this was in September. They didn't have a review, but they did have a note saying they would be posting a review of the 135L in October. So I checked back in November: no review, but a note saying they'd be reviewing it in November... Then I forgot about it until now. I just checked, and they still haven't reviewed the lens; the note has been changed to... (copied from their website)

This product will be tested and reviewed on December on dxomark.com. Stay tuned by subscribing to our newsletter.

So it's December 15th and no review... anyone want to bet that it won't be done in December? Where I come from, if you say one thing and do another, again and again... it just isn't good; you lose credibility.

Also, another typo where they write "on December". Their website is full of typos.

IMO, they are overrated as a "review" website.
 
infared said:
The thread's first post has a lot of unproven speculation... but I do not disagree with the basic sentiment. DxO is NOT a place this photographer would go for meaningful information when researching cameras and lenses for purchase. I do not know the reason for their wild fluctuations from what I know to be reality, but they are what they are... and to me what they are is a SUSPECT source that often states opinions out of line with the conclusions of a pool of other reliable and consistent sources. DxOMark backs up its remarks, in my opinion, with suspect "scientific" data.

DXOmark are THE industry leader in sensor benchmarking. There is nothing "suspect" about their sensor measurements. The minutiae of sensor benchmarking have been debated here, and the overwhelming conclusion of those discussions is that the measurements behind their sensor benchmarks are sound. There is some nitpicking about the way those measurements are aggregated, but that's about it.

I personally would hold off buying a camera body until DxO have tested it. I don't have nearly as much confidence in other sources for testing sensors.

Lenses are a separate issue; there are other sources that do a better job of testing lenses.
 
Zlatko

elflord said:
infared said:
The thread's first post has a lot of unproven speculation... but I do not disagree with the basic sentiment. DxO is NOT a place this photographer would go for meaningful information when researching cameras and lenses for purchase. I do not know the reason for their wild fluctuations from what I know to be reality, but they are what they are... and to me what they are is a SUSPECT source that often states opinions out of line with the conclusions of a pool of other reliable and consistent sources. DxOMark backs up its remarks, in my opinion, with suspect "scientific" data.
DXOmark are THE industry leader in sensor benchmarking. There is nothing "suspect" about their sensor measurements. The minutiae of sensor benchmarking have been debated here, and the overwhelming conclusion of those discussions is that the measurements behind their sensor benchmarks are sound. There is some nitpicking about the way those measurements are aggregated, but that's about it.

I personally would hold off buying a camera body until DxO have tested it. I don't have nearly as much confidence in other sources for testing sensors.

According to this article on Luminous Landscape today --
http://www.luminous-landscape.com/essays/dxomark_sensor_for_benchmarking_cameras2.shtml
-- the overall Camera Sensor score is biased toward single-shot HDR at low ISO settings -- a capability we never had in the past and which we may only infrequently need.

So they may be the industry leader, and they may be scientific, but it seems their big headline-grabbing Camera Sensor score may not be very meaningful to a lot of photography.

We also learn that their Portrait score is misnamed. "Essentially it measures chroma noise in the dark parts of a low-ISO image" -- which may be relevant to some photographers and not at all relevant to others. This explains why two cameras can have the same Portrait score and yet one can be clearly better for actual portraits.
 
Zlatko said:
So they may be the industry leader, and they may be scientific, but it seems their big headline-grabbing Camera Sensor score may not be very meaningful to a lot of photography.

This would be true of any single score. If they gave all the weight to high-ISO performance, you would see medium format backs getting lower scores than point-and-shoots.

Thankfully, they don't publish just the single score -- they publish the three use-case scores and all of the measurements. Their website also makes it easy to plot the measurements of two different cameras on the same axes, so when a new camera gets a surprisingly high or low score, it's easy to determine why.

The Luminous Landscape article is overwhelmingly positive. If the most serious criticism is nitpicking over naming choices, that's a pretty positive review.
 
elflord said:
There is some nitpicking about the way those measurements are aggregated, but that's about it.

Subaru Legacy, Overall Score = 92
BMW 760Li xDrive, Overall Score = 84

Preposterous? Well... the Overall Score is based on a weighted composite of two Use Case Scores, Winter Utility and Summer Utility. Those are based, respectively, on accurate and reliable Measurements of the ability of just the left rear wheel to push the car up a 20-degree incline, and of the towing capacity. But those details are just nitpicking. The Overall Scores clearly show that the Subaru is better.

::)
 