« on: April 25, 2014, 02:50:40 PM »
Here's my take on reviews and testing: what passes for "testing" in most reviews and internet reports is not really testing in the strictest sense. Statistically speaking, the sample sizes are far too small to draw conclusions with any meaningful degree of confidence. Nor are test conditions sufficiently configuration-managed to mitigate outside sources of measurement error or data variability. In the case of Sigma, Tokina, Zeiss, etc., the testing should include multiple samples of each lens mounted to multiple bodies of each major available camera model. Further, the configurations of the camera bodies should be recorded, managed, and synced to a standard for each lens/model level 1 configuration. In other words, it would not be valid to test multiple bodies of a camera that have different settings, even if the lens is the same one used across that round of tests.
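To make the sample-size point concrete, here's a toy simulation. All the numbers are made up for illustration (an invented "sharpness score" with assumed copy-to-copy and body/condition variation); the only claim is the statistical one, that a single-copy "review" scatters far more widely around the true average than a properly sized test would.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Hypothetical numbers: suppose a lens model's true mean sharpness score
# is 80 units, with copy-to-copy variation (sd 5) and body/test-condition
# variation (sd 4) stacked on top. These figures are invented.
TRUE_MEAN, COPY_SD, CONDITION_SD = 80.0, 5.0, 4.0

def review_score(n_samples):
    """Average score a 'reviewer' would report after testing
    n_samples copy/body combinations drawn at random."""
    scores = [random.gauss(TRUE_MEAN, COPY_SD) + random.gauss(0, CONDITION_SD)
              for _ in range(n_samples)]
    return statistics.mean(scores)

# A thousand simulated single-copy reviews vs. a thousand 30-copy tests:
one_copy = [review_score(1) for _ in range(1000)]
many_copy = [review_score(30) for _ in range(1000)]

print("spread of single-copy reviews:", round(statistics.stdev(one_copy), 2))
print("spread of 30-copy reviews   :", round(statistics.stdev(many_copy), 2))
```

The single-copy reviews scatter several times more widely than the 30-copy tests (roughly by a factor of the square root of the sample size), which is exactly why one reviewer's number on one day tells you so little about the model as a whole.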
Obviously no layperson has access to this level of equipment, so no layperson can provide a comprehensive account of how a lens truly performs. The best that can be said for any given "result" reported in a review is that on that day, with that camera set the way it was set, with that very lens, this was the result. Often we don't even know enough about the conditions of the test to draw valid conclusions.
Some have noted that when we see trends across multiple reviewers, we can use that as evidence. Strictly speaking, that is not the case without a significant amount of analysis of the tests in question, along the lines of what I outline above. Just because two entities report issues, the results are not necessarily directly correlated unless the conditions under test were held exactly the same. Put another way, we're back to anecdotal evidence. Correlation is not causation.
That isn't to invalidate what was observed. In fact, it probably points to a need for more in-depth, controlled testing to produce results from which a true root cause analysis can be conducted.
Another missing ingredient in most tests is a DIRECT control group. Oh...this reviewer has recorded results for this lens and the OEM lens, you say? Once again, that comparison is only valid if both lenses were tested under exactly the same conditions, with exact, serialized configurations. You want to say, in this case, that this Sigma's focus precision is worse than, say, the EF 50mm f/1.2L? You had better have tested both lenses across a statistically relevant sample of each lens, across a statistically relevant sample of each camera model tested, and under very strict control of the camera settings, target setup, support, light values, etc.
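The same toy approach shows why a one-copy-versus-one-copy comparison is so weak. Again, every number here is invented for illustration: assume lens A is truly a bit better than lens B on some precision score, but copy-to-copy variation is large relative to the gap.

```python
import random

random.seed(7)  # reproducible illustration

# Invented figures: lens A's true mean score is 82, lens B's is 80,
# and individual copies vary around those means with sd 5.
MEAN_A, MEAN_B, COPY_SD = 82.0, 80.0, 5.0

def single_copy_comparison():
    """One reviewer tests one copy of each lens; True if A 'wins'."""
    return random.gauss(MEAN_A, COPY_SD) > random.gauss(MEAN_B, COPY_SD)

trials = 10_000
wrong = sum(not single_copy_comparison() for _ in range(trials)) / trials
print(f"single-copy comparisons rank the lenses backwards "
      f"{wrong:.0%} of the time")
```

With these assumed numbers the one-copy shootout crowns the wrong lens well over a third of the time, which is the sense in which such a comparison is closer to anecdote than evidence.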
Others here will disagree with what I wrote, but what I'm really saying is that we should ask the HARD questions about anything we're reading, especially if we're inclined to base our equipment investments on the data and conclusions.
The only entity I'm aware of with access to a large enough population of lenses, cameras, and valid, calibrated test equipment is LensRentals. When Roger reports trends in test results for a given lens or body, I generally place greater faith in the result as indicative of the true qualities of the equipment in question. But Roger isn't in the business of reviews or equipment testing. His tests are conducted against known baselines and are intended to return the equipment to serviceable condition. That means certain aspects of even his testing are not recorded, or even necessary for his mission. So even his information must be understood as not strictly indicative of the absolute properties of a piece of equipment. He's said as much in one of his blog posts.
In other communities I participate in, we have established relationships with various members of the manufacturers, such that engineers (in some cases the LEAD engineer) come and share their data with us. They participate in the forums to the point that they even allow us to question their data, results, conclusions, etc. Sometimes the data agrees with our outside anecdotes or even controlled testing. Sometimes not.

I think it would be great if we had Canon, Sigma, Tokina, etc. engineers participate here, or at least somewhere. Chuck is a good start, but truthfully, he gets beat up A LOT whenever I've seen him appear. He's also a tech rep, not an actual engineer. And my sense of this forum, so far, is that some here would not be able to play like grown-ups. That happened to one manufacturer on one of the forums I'm talking about, and they left the discussion and forum altogether. It was a loss to the community, caused by a few jackasses who could not respectfully discuss disagreements.

Getting the various reviewers to participate in these discussions from time to time would be valuable as well. I, for one, would ask folks like Bryan some hard questions about their data and methods. Respectfully. Not to give black eyes to manufacturers or reviewers/testers, but to discover and discuss any holes in the data and conclusions. Over time, respectful discussions can help the whole community get better in their area of the sandbox. We know more; they build better products and provide more open data.
Anyway...I'm not saying that the various reviews are all garbage. Rather, I'm imploring people to understand what they are really seeing, and the limitations and assumptions made through the process when the review is produced. They are good data points in the case of several of the well-known sites. But they are not gospel. I, for one, am a long way from pronouncing the Sigma 50mm f/1.4 an AF disaster. It's on my list of equipment acquisitions over the next few months.