Mikael Risedal said:
Maui5150 said:
DxOMark scores are junk and meaningless.
Please point out why and how the DxO sensor measurement is faulty.
Note that Maui5150 stated 'scores', not 'measurements'. IMO, their sensor measurements are fine; it's their Sensor Scores that are faulty.
First, those scores are derived from the measurements in an incompletely disclosed manner - it's a 'weighted average', but weighted how? DxO themselves have stated, "The DxOMark Camera Sensor score is under normal conditions a weighted average of noise, dynamic range and color sensitivity information. But some non-linearities are deliberately included in the algorithm to avoid a clear weakness in one area from being masked by a strength in other areas" (source). An analogy might be the Dow Jones Industrial Average, which is a price-weighted index - what if the Dow decided to give five of its 30 component companies greater weight in the index, but didn't tell us which companies were the chosen five, or whether it was the same five from day to day? If they did that, the DJIA would have little utility as an index, even though we'd still know the closing share prices of the 30 index companies - much as the DxOMark Scores have little utility as a sensor benchmark, even though we know the results of the individual DxOMark measurements.
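To make the 'weighted how?' problem concrete, here's a minimal Python sketch. Every number in it is invented (none of these subscores or weightings come from DxO): the point is simply that the same two sensors swap rank depending on which hidden weighting is applied.

```python
def overall(subscores, weights):
    """Plain weighted average of three subscores - a stand-in for
    DxO's undisclosed formula (the real one also adds nonlinearities)."""
    return sum(w * s for w, s in zip(weights, subscores)) / sum(weights)

# Invented subscores for two hypothetical sensors; not real DxO data.
sensor_a = (90, 70, 80)
sensor_b = (75, 95, 80)

weights_1 = (0.50, 0.25, 0.25)  # one plausible hidden weighting
weights_2 = (0.25, 0.50, 0.25)  # another, equally plausible

print(overall(sensor_a, weights_1) > overall(sensor_b, weights_1))  # True: A ranks higher
print(overall(sensor_a, weights_2) > overall(sensor_b, weights_2))  # False: B ranks higher
```

Without knowing the weights, a single Overall Score can't tell you which sensor is 'better' in any stable sense.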
Second, the Overall Score is biased toward performance at base ISO. Two of the three subscores (Landscape and Portrait) consider performance only at base ISO, rather than across the full range of available ISOs for the sensor being tested. Not all amplifiers are created equal, and DxO's own measurements clearly show that when comparing two sensors, one may have better DR and color depth at base ISO (e.g. 100) while the other has better DR and color depth at ISO 3200. Yet only base ISO contributes to those subscores. That's a bias in the subscores, and thus it's carried forward into the Overall Score.
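A small sketch of that base-ISO bias, using made-up dynamic-range figures (in EV) for two hypothetical sensors - these are not DxO measurements. A Landscape-style subscore computed at base ISO only ranks sensor A higher, while an average across the tested ISOs would favor sensor B:

```python
# Invented dynamic-range figures in EV; not actual DxO measurements.
dr = {
    "A": {100: 14.8, 3200: 10.2},  # stronger at base ISO
    "B": {100: 13.9, 3200: 11.5},  # stronger at high ISO
}

def base_iso_subscore(sensor):
    # Mimics a Landscape-style subscore: dynamic range at base ISO only.
    return dr[sensor][100]

def mean_dr(sensor):
    # A hypothetical alternative that averages across the tested ISOs
    # (not something DxO does).
    values = dr[sensor].values()
    return sum(values) / len(values)

print(base_iso_subscore("A") > base_iso_subscore("B"))  # True: A wins at base ISO
print(mean_dr("A") < mean_dr("B"))                      # True: B wins on average
```

Which sensor is 'better' depends entirely on where in the ISO range you shoot - information the base-ISO-only subscore throws away.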
Now, put those two together: an Overall Score derived from three subscores, two of which consider only base ISO - a 2:1 bias in favor of base-ISO performance - and which is a weighted average with unknown weightings, affected by intentionally selected but undisclosed nonlinearities. Do the weightings and undisclosed skewings of the algorithm correct for that 2:1 bias, or make it worse? We don't know.
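As a purely speculative sketch of how such a nonlinearity might behave (DxO has not published theirs), consider blending the plain weighted average with the weakest subscore, so that a strength elsewhere can't fully mask a weakness. Both the blending scheme and the `floor_weight` parameter are invented for illustration:

```python
def linear_overall(subscores, weights):
    # Plain weighted average, as in the illustrations above.
    return sum(w * s for w, s in zip(weights, subscores)) / sum(weights)

def nonlinear_overall(subscores, weights, floor_weight=0.3):
    # Speculative: mix in the weakest subscore so it can't be masked.
    # floor_weight is an invented parameter, not a DxO value.
    return ((1 - floor_weight) * linear_overall(subscores, weights)
            + floor_weight * min(subscores))

equal = (1, 1, 1)         # equal weights
balanced = (80, 80, 80)   # invented subscores
lopsided = (95, 95, 50)   # same linear average, one weak area

print(linear_overall(balanced, equal) == linear_overall(lopsided, equal))
# True: tied on the plain average
print(nonlinear_overall(balanced, equal) > nonlinear_overall(lopsided, equal))
# True: the weakness now costs points
```

Any scheme of this kind reshuffles rankings relative to the plain average - and since the actual scheme is undisclosed, a reader can't tell how much of any Overall Score gap comes from the measurements versus the algorithm.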
So, while it's possible to look at the specific measurements in their data (and I applaud DxO for publishing those data), and while it's possible to post those measurement data over and over again, it doesn't change the fact that DxOMark's Overall Score and Use-Case Scores are derived from those data in an ambiguous and undisclosed manner, and that makes them faulty.