Their Scores are biased, in both disclosed and undisclosed ways. Their Sensor Score is weighted toward ISO 100: two of its three metrics are used only at ISO 100, even though they're measured throughout the range, and while they state the overall score is a 'weighted average' of the three subscores, they don't reveal the weighting. Their Lens Score is based on performance under 150 lux of illumination (roughly a dimly lit warehouse), so a lens scores higher when tested on a body with better high-ISO performance (so how is it a 'lens score'?). It follows that when comparing two lenses, one that's worse on every optical measure (sharpness, CA, etc.) can get a higher Score than an optically superior lens, purely because of the bodies they were tested on. That same 150 lux bias means transmission is disproportionately weighted, which is why the 50/1.8 II gets a higher Score than the 600/4L IS II.
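To make the confound concrete, here's a minimal Python sketch of how a score computed from 150 lux test shots folds the body's sensor performance into a 'lens' result. The scoring function and every number here are invented for illustration; DxO's actual formula and weighting are undisclosed.

```python
# Toy model (hypothetical numbers) of a low-light "lens score" that
# conflates sensor performance with lens optics. Not DxO's actual math.

def low_light_lens_score(sharpness_lpmm, t_stop, body_high_iso_factor):
    # At 150 lux, a faster T-stop lets the body stay at a lower ISO,
    # and a better high-ISO sensor masks optical weaknesses, so both
    # leak into the result alongside the lens's own sharpness.
    exposure_gain = 1.0 / t_stop              # brighter lens -> lower ISO needed
    noise_factor = body_high_iso_factor * exposure_gain
    return sharpness_lpmm * noise_factor

# Lens A: optically superior, tested on a body with weaker high-ISO output.
score_a = low_light_lens_score(sharpness_lpmm=60, t_stop=4.0, body_high_iso_factor=1.0)

# Lens B: optically inferior, but bright and mounted on a better sensor.
score_b = low_light_lens_score(sharpness_lpmm=45, t_stop=1.8, body_high_iso_factor=1.3)

print(score_a, score_b)  # 15.0 vs 32.5 -- B "wins" despite worse optics
```

Under this (invented) model, the optically worse lens outscores the better one simply because of its transmission and the body behind it, which is exactly the pattern described above.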
They lost a lot of credibility when they tested the Canon 70-200/2.8L IS II and concluded the original MkI version was better. That contradicted everyone who had used or tested both, and when called on it, they insisted there was no mistake. Yet about a year later they quietly updated their tests of the MkII, which now shows better performance than the original. I suspect they've also botched their testing of the 17-40L: they 'show' it to be nearly as sharp in the extreme corners as in the center wide open (in reality it's mush in the corners at f/4), and wide open it shows as sharper than the 16-35L II stopped down to f/8 (totally false).
Errors like the above aside, their Measurements are useful. But their Scores are biased (so I call them Biased Scores = BS, aka bovine scat). For sensors, the Scores don't apply across the range of uses, and for lenses the Scores aren't even based mainly on the measurements.
A secondary issue is that review/comparison sites like Snapsort use the DxOMark Sensor Scores without linking to the underlying measurements.