FoCal Database for Lens Quality of Focus

AlanF

Desperately seeking birds
CR Pro
Aug 16, 2012
12,444
22,882
Reikan Focal collects the results automatically from the huge number of calibrations we do, and we can compare our "Quality of Focus (QoF)" data with the range of values found by other users.

http://www.reikan.co.uk/focalweb/index.php/2016/08/focal-2-2-add-full-canon-80d-and-1dx-mark-ii-more-comparison-data-and-internal-improvements/

It's a very useful guide to how our copies of lenses perform relative to the rest out there, and to how different bodies react to different lenses. I think the QoF is a measure of the acutance of a lens, measuring the sharpness of a black-white transition. Here is a table for the telephotos I have used on various bodies over the years. The ranges given seem to fit in with the trends I find for my own lenses. Fortunately, my expensive primes are all above the average ranges. My 100-400mm II, which Lensrentals finds to be very consistent over many copies, tends to be in the average ranges, which you would expect.

The comparative values are regularly updated, as I can see from some lens-camera combinations that were not covered until very recently. FoCal is providing an independent database over many copies. It's quite a resource.
 

Attachments

  • Focal_QoF.Statistics.jpg
    568.8 KB · Views: 243
My subscription is too old; I'd have to pay again to get access. It would be interesting, I just wonder how truthful the findings are. If the number is derived from the sharpness of the black-to-white transition, the results would be affected by the quality of the printed target, how much light is used, and the distance from which the test was run. If you look at the 100-400 on the chart, the lowest-resolution body (5D III) scored highest, and they fall off in order of increasing resolution despite the 5D IV having the newest AF system available. As resolution increases, sharpness suffers more with poor technique, such as a slow shutter due to lighting, or clicking OK in FoCal too soon after the manual change of the AFMA value on the body.

The QoF might be relevant when considering just one body, but I don't think it is a good comparison of body/lens combinations.
 
Upvote 0
Differences in lighting as well as other conditions can change the number.
Perhaps the data would help someone who understands all the variables.
As an indication of whether your lens is above average or not, I do not think it is useful.
It is an average, and how do we know that the average is not pulled down by the technical skill of the masses?
It would be interesting to see the highs.
 
Upvote 0
Mar 25, 2011
16,847
1,835
Lens testers know that lenses perform differently on different bodies, but, as others noted, the unknown skill levels of the testers make one take the results with a grain of salt.

The high MP bodies are the most difficult to use to their maximum advantage, just putting one on a tripod is not enough. Testers have had to completely redo their test methodology in order to come up with reasonable test values. Even on concrete, nearby traffic causes issues, so very bright lighting and fast shutter speeds boost the numbers. If testing is done on the floor of a typical home or apartment under less than super bright lighting, the numbers will fall.

As long as there is enough light, FoCal will find the proper AFMA, but believing the QoF values represent what is possible for the highest values is a stretch.
 
Upvote 0

AlanF

Desperately seeking birds
CR Pro
Aug 16, 2012
12,444
22,882
The basic assumption in such crowd-sourcing data is that in a large number of measurements the errors and unknown factors average out so that the (relative) mean values are accurate but the spread is wider than for more carefully controlled measurements. It's a general principle in analysing statistics that a large number of inaccurate measurements often gives a more accurate measure of the true mean than a single precise measurement on one sample.
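That averaging-out principle is easy to demonstrate with a toy simulation (all numbers below are invented for illustration, not real FoCal output): the standard error of the mean of N noisy measurements shrinks as 1/√N, so a large sloppy crowd can pin down a mean more tightly than one careful tester can.

```python
import random
import statistics

random.seed(42)

TRUE_QOF = 1500      # hypothetical "true" mean QoF for a lens model
CROWD_SD = 300       # large per-measurement error from sloppy technique
CAREFUL_SD = 50      # smaller error for a single careful measurement
N = 10_000           # size of the crowd-sourced sample

# 10,000 noisy crowd measurements of the same underlying value
crowd = [random.gauss(TRUE_QOF, CROWD_SD) for _ in range(N)]
crowd_mean = statistics.mean(crowd)

# Standard error of the crowd mean shrinks as 1/sqrt(N)
sem_crowd = CROWD_SD / N ** 0.5    # 300 / 100 = 3.0

print(f"crowd mean: {crowd_mean:.0f} (expected error ~{sem_crowd:.0f})")
print(f"one careful measurement: expected error ~{CAREFUL_SD}")
# The crowd mean lands within a few QoF points of 1500, far tighter
# than the +/-50 uncertainty of the single careful measurement.
```

The caveat, of course, is that this only works if the errors scatter symmetrically around the truth, which is the assumption questioned later in the thread.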

I am sure that many CR members take their cholesterol-lowering drugs, anti-hypertensives etc based on the measured levels of their cholesterol and blood-pressure that have large uncertainties due to "copy (= your own body, not your camera's)" variation and other variables measured in different ways.
 
Upvote 0
For example, I'm embarrassed to make public my recent testing, but I know the value of the results and was only trying to get a rough calibration in the limited time I had.

100-400ii on 5Ds @400mm QoF of 1525
@560mm QoF of 1320

The numbers compare with the bottom of the scale for the 5DsR

Now the embarrassing part. Test done at night, travel tripod on wood floor, target home printed on typical home inkjet and taped to a refrigerator (I know, I know. Only location with enough line of sight), and only lighting available which was a single LED work flood light 30W.

So if that test method still put me within the range of a body that should be sharper than mine, take QoF value comparisons with a grain of salt, even when you do your own testing with a little bit of care.
 
Upvote 0
bluenoser1993 said:
For example, I'm embarrassed to make public my recent testing, but I know the value of the results and was only trying to get a rough calibration in the limited time I had.

100-400ii on 5Ds @400mm QoF of 1525
@560mm QoF of 1320

The numbers compare with the bottom of the scale for the 5DsR

Now the embarrassing part. Test done at night, travel tripod on wood floor, target home printed on typical home inkjet and taped to a refrigerator (I know, I know. Only location with enough line of sight), and only lighting available which was a single LED work flood light 30W.

So if that test method still put me within the range of a body that should be sharper than mine, take QoF value comparisons with a grain of salt, even when you do your own testing with a little bit of care.

The refrigerator was running and a 30W bulb. Obviously you have a bad lens since it scored so low. ::)
 
Upvote 0
AlanF said:
The basic assumption in such crowd-sourcing data is that in a large number of measurements the errors and unknown factors average out so that the (relative) mean values are accurate but the spread is wider than for more carefully controlled measurements. It's a general principle in analysing statistics that a large number of inaccurate measurements often gives a more accurate measure of the true mean than a single precise measurement on one sample.

I am sure that many CR members take their cholesterol-lowering drugs, anti-hypertensives etc based on the measured levels of their cholesterol and blood-pressure that have large uncertainties due to "copy (= your own body, not your camera's)" variation and other variables measured in different ways.

I think if they organized the data to allow comparisons under the same lighting and exposure, it would give you a better database to compare against. I haven't paid attention to see if they collect that data.
 
Upvote 0
It's possible they're collecting the data, but kelvin is the only thing I see in the report. I remember older versions used to show the EV level during set up, but it doesn't anymore (not that I saw, anyway). I agree, if they could group the results by shutter speed it would compare better.

The 30W was the LED value, not the incandescent equivalent, but still way too low. Not to mention LED is not the best for AF anyway.
 
Upvote 0
Jul 21, 2010
31,228
13,091
AlanF said:
The basic assumption in such crowd-sourcing data is that in a large number of measurements the errors and unknown factors average out so that the (relative) mean values are accurate but the spread is wider than for more carefully controlled measurements. It's a general principle in analysing statistics that a large number of inaccurate measurements often gives a more accurate measure of the true mean than a single precise measurement on one sample.

Given normal variance, yes. However, in the case of lens testing the variance is not normally distributed, it's skewed – there is proper testing, which will in effect yield the highest value possible for that copy of the lens; there is less proper (or improper) testing, which will yield lower values for the same lens copy; but there is no 'more proper' testing that will yield higher values. The mean is generally not a useful summary statistic for a skewed distribution. Further, that skewed distribution is superimposed on the presumably normally distributed copy-to-copy lens variance.
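That skew argument can be sketched numerically (all values here are invented, not real FoCal data): model copy-to-copy variation as roughly normal, and technique as a penalty that can only subtract from the achievable peak. The pooled mean then sits well below the true capability, while the top results come closer to it.

```python
import random
import statistics

random.seed(0)
N = 5_000

# Copy-to-copy lens variation: assume roughly normal around the design value
true_peak = [random.gauss(1500, 60) for _ in range(N)]

# Technique can only subtract from the achievable peak, never add to it.
# Model the loss as exponential: most tests lose a little, a long tail loses a lot.
measured = [q - random.expovariate(1 / 150) for q in true_peak]

print(f"mean of true peaks:     {statistics.mean(true_peak):7.1f}")  # ~1500
print(f"mean of measurements:   {statistics.mean(measured):7.1f}")   # ~1350, dragged down
print(f"median of measurements: {statistics.median(measured):7.1f}") # also below truth
print(f"best measurement:       {max(measured):7.1f}")               # approaches the best copies
```

Under these assumptions the crowd mean is a biased estimate of lens capability, which is the point being made about the database's 'typical' bands.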

However, the accuracy of user-derived QoF values is really not the major concern here. Rather, the bigger issue is that you are comparing your own measured absolute QoF value (the peak of the curve) with the absolute QoF values measured by other users (semi-arbitrary color coding of the Y-axis values as 'better'/green, 'typical'/blue, and 'poor'/red):

image02-1024x683.png


Why is comparing absolute QoF values a bad thing? Well, let's review what Reikan themselves had to say about it when FoCal v1.9 was released:

[quote author=Reikan]
First, it’s important to understand that FoCal works by analysing the relative differences between the QoF numbers, not the absolute value. For example, suppose you have two measurements during a test that give QoF values of 3000 and 1500 – the most important piece of information here is that the second value is 50% of the first value. If you change the lighting and target image, you may find that the actual QoF values for the same measurements are 2000 and 1000, but the end result is the same – the second is still 50% of the first.
...
As we have said from the release of FoCal, the absolute QoF value is unimportant, so you cannot compare the numbers from one test to another.
[/quote]

But with v2.0, all of a sudden the absolute QoF is important, and all of a sudden you can compare the numbers from one test to another. So what changed 'from the release of FoCal' to the release of v2.0? Oh yeah, they developed a database and now require people to pay for access to those data. File that under things that make you go hmmmmm...
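The relative-vs-absolute distinction is easy to illustrate with the numbers from Reikan's own v1.9 explanation (a toy sketch; the dictionary keys are made up, not FoCal output):

```python
# Two runs of the same lens under different lighting/target conditions.
# Keys and values are illustrative only.
run_a = {"peak": 3000, "at_afma_offset": 1500}
run_b = {"peak": 2000, "at_afma_offset": 1000}  # dimmer light scales everything down

# Normalised to each run's own peak, the two runs agree exactly...
print(run_a["at_afma_offset"] / run_a["peak"])  # 0.5
print(run_b["at_afma_offset"] / run_b["peak"])  # 0.5

# ...whereas the absolute peaks (3000 vs 2000) differ only because the
# conditions changed, not because the lens or body did. Comparing those
# absolute peaks across users is what the v2 database invites.
```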
 
Upvote 0
neuroanatomist said:
But with v2.0, all of a sudden the absolute QoF is important, and all of a sudden you can compare the numbers from one test to another. So what changed 'from the release of FoCal' to the release of v2.0? Oh yeah, they developed a database and now require people to pay for access to those data. File that under things that make you go hmmmmm...

Is it like the ADHD conspiracy theory and Ritalin, promoting a disorder for an existing drug to treat? If you have it, why not find a way to sell it.

Surely there is not a monetary motivation on Reikan's part.
 
Upvote 0

AlanF

Desperately seeking birds
CR Pro
Aug 16, 2012
12,444
22,882
neuroanatomist said:
AlanF said:
The basic assumption in such crowd-sourcing data is that in a large number of measurements the errors and unknown factors average out so that the (relative) mean values are accurate but the spread is wider than for more carefully controlled measurements. It's a general principle in analysing statistics that a large number of inaccurate measurements often gives a more accurate measure of the true mean than a single precise measurement on one sample.

Given normal variance, yes. However, in the case of lens testing the variance is not normally distributed, it's skewed – there is proper testing, which will in effect yield the highest value possible for that copy of the lens; there is less proper (or improper) testing, which will yield lower values for the same lens copy; but there is no 'more proper' testing that will yield higher values. The mean is generally not a useful summary statistic for a skewed distribution. Further, that skewed distribution is superimposed on the presumably normally distributed copy-to-copy lens variance.

However, the accuracy of user-derived QoF values is really not the major concern here. Rather, the bigger issue is that you are comparing your own measured absolute QoF value (the peak of the curve) with the absolute QoF values measured by other users (semi-arbitrary color coding of the Y-axis values as 'better'/green, 'typical'/blue, and 'poor'/red):

image02-1024x683.png


Why is comparing absolute QoF values a bad thing? Well, let's review what Reikan themselves had to say about it when FoCal v1.9 was released:

[quote author=Reikan]
First, it’s important to understand that FoCal works by analysing the relative differences between the QoF numbers, not the absolute value. For example, suppose you have two measurements during a test that give QoF values of 3000 and 1500 – the most important piece of information here is that the second value is 50% of the first value. If you change the lighting and target image, you may find that the actual QoF values for the same measurements are 2000 and 1000, but the end result is the same – the second is still 50% of the first.
...
As we have said from the release of FoCal, the absolute QoF value is unimportant, so you cannot compare the numbers from one test to another.

But with v2.0, all of a sudden the absolute QoF is important, and all of a sudden you can compare the numbers from one test to another. So what changed 'from the release of FoCal' to the release of v2.0? Oh yeah, they developed a database and now require people to pay for access to those data. File that under things that make you go hmmmmm...
[/quote]

First of all, they are not charging extra for the comparison data: it comes free with at least my version 2.4 with FoCal Pro. "FoCal users have been uploading calibration and test results for over 4 years, the database contains literally tens of millions of data points across tens of thousands of camera and lens combinations. Starting from FoCal 2.0, FoCal Pro users started to benefit from information showing how their camera and lens compares to other FoCal users." https://www.reikan.co.uk/focalweb/index.php/2016/08/focal-2-2-add-full-canon-80d-and-1dx-mark-ii-more-comparison-data-and-internal-improvements/ (They did charge for earlier versions and I don't know when it became free.)

Secondly, I do a really bad job of using FoCal, and I know the spread from many repeat runs. I have a target set up on a wall in the garden, simply laser-printed on regular photocopying paper, and it sometimes gets soaked in the rain and dries out (though I do change it occasionally). The light levels vary from uniformly very dull, to changing so much when the sun comes out from behind a cloud that the software warns me, to brightly illuminated. Here is the chart of the FoCal values with the spread of my values below. Despite my rotten technique, which should skew them below average, the spread of my QoF values parallels the database and tends to be above the spread of typical values.
 

Attachments

  • Focal_QoF_Statistics.jpeg
    497.2 KB · Views: 190
Upvote 0
AlanF said:
Secondly, I do a really bad job of using FoCal, and I know the spread from many repeat runs. I have a target set up on a wall in the garden, simply laser-printed on regular photocopying paper, and it sometimes gets soaked in the rain and dries out (though I do change it occasionally). The light levels vary from uniformly very dull, to changing so much when the sun comes out from behind a cloud that the software warns me, to brightly illuminated. Here is the chart of the FoCal values with the spread of my values below. Despite my rotten technique, which should skew them below average, the spread of my QoF values parallels the database and tends to be above the spread of typical values.

I think all I would draw from this is that, despite how you feel about your technique, your technique is performing better than the average of the widely sampled group. Without data that shows the top level of performance from other lenses, I wouldn't credit the results to the lens. Perhaps if you had hand-picked each of your lenses for the best copy when you bought them, then maybe you could make the claim that it is the lens.
 
Upvote 0

AlanF

Desperately seeking birds
CR Pro
Aug 16, 2012
12,444
22,882
takesome1 said:
AlanF said:
Secondly, I do a really bad job of using FoCal, and I know the spread from many repeat runs. I have a target set up on a wall in the garden, simply laser-printed on regular photocopying paper, and it sometimes gets soaked in the rain and dries out (though I do change it occasionally). The light levels vary from uniformly very dull, to changing so much when the sun comes out from behind a cloud that the software warns me, to brightly illuminated. Here is the chart of the FoCal values with the spread of my values below. Despite my rotten technique, which should skew them below average, the spread of my QoF values parallels the database and tends to be above the spread of typical values.

I think all I would draw from this is that, despite how you feel about your technique, your technique is performing better than the average of the widely sampled group. Without data that shows the top level of performance from other lenses, I wouldn't credit the results to the lens. Perhaps if you had hand-picked each of your lenses for the best copy when you bought them, then maybe you could make the claim that it is the lens.

You cannot logically derive that my technique is performing better than average. What you can draw from the data is that the spread of my values is above the average. The reason for the better than average results could be a better technique or a better range of samples. Given the description of my technique, it is unlikely it is better than average.
 
Upvote 0
AlanF said:
takesome1 said:
AlanF said:
Secondly, I do a really bad job of using FoCal, and I know the spread from many repeat runs. I have a target set up on a wall in the garden, simply laser-printed on regular photocopying paper, and it sometimes gets soaked in the rain and dries out (though I do change it occasionally). The light levels vary from uniformly very dull, to changing so much when the sun comes out from behind a cloud that the software warns me, to brightly illuminated. Here is the chart of the FoCal values with the spread of my values below. Despite my rotten technique, which should skew them below average, the spread of my QoF values parallels the database and tends to be above the spread of typical values.

I think all I would draw from this is that, despite how you feel about your technique, your technique is performing better than the average of the widely sampled group. Without data that shows the top level of performance from other lenses, I wouldn't credit the results to the lens. Perhaps if you had hand-picked each of your lenses for the best copy when you bought them, then maybe you could make the claim that it is the lens.

You cannot logically derive that my technique is performing better than average. What you can draw from the data is that the spread of my values is above the average. The reason for the better than average results could be a better technique or a better range of samples. Given the description of my technique, it is unlikely it is better than average.

If I cannot derive that your technique is better than average, then you cannot make the claim that "it is unlikely it is better than average", since there is no data provided describing the skill level of the average user.

I based my assumptions on a few things: one is that you are a regular poster to this forum. Second, you are serious enough that you would break down your data to compare. Both are things I can relate to. Both would lead me to believe that on your worst day you are probably testing better than the average tester.

The average user may at best be one of the individuals who comes to the forum with one post asking why his camera is taking soft pictures. The forum ends up recommending FoCal, corn flake boxes and television screens to perform AFMA. Without knowing who the purchasers of FoCal are, we just do not know.
 
Upvote 0
Jul 21, 2010
31,228
13,091
AlanF said:
First of all, they are not charging extra for the comparison data: it comes free with at least my version 2.4 with FoCal Pro. "FoCal users have been uploading calibration and test results for over 4 years, the database contains literally tens of millions of data points across tens of thousands of camera and lens combinations. Starting from FoCal 2.0, FoCal Pro users started to benefit from information showing how their camera and lens compares to other FoCal users." https://www.reikan.co.uk/focalweb/index.php/2016/08/focal-2-2-add-full-canon-80d-and-1dx-mark-ii-more-comparison-data-and-internal-improvements/ (They did charge for earlier versions and I don't know when it became free.)

Indeed, I read that. But note that they've moved from buying a major version of FoCal (which was the case for v1: you bought it and got all the updates perpetually until the next major version; for me, that was 3 years) to an annual subscription model that includes the database. So while database access came with your v2.4 purchase, after your annual Included Updates period ends, you'll lose access to the database unless you pay for another year of updates (which you likely will not need unless you buy a new camera or they include a feature you can't live without).


AlanF said:
Secondly, I do a really bad job of using FoCal, and I know the spread from many repeat runs. I have a target set up on a wall in the garden, simply laser-printed on regular photocopying paper, and it sometimes gets soaked in the rain and dries out (though I do change it occasionally). The light levels vary from uniformly very dull, to changing so much when the sun comes out from behind a cloud that the software warns me, to brightly illuminated. Here is the chart of the FoCal values with the spread of my values below. Despite my rotten technique, which should skew them below average, the spread of my QoF values parallels the database and tends to be above the spread of typical values.

Agree with takesome1 here. I'm not surprised at all that, as a scientist, your 'really bad job' is still better than the average person's typical effort.
 
Upvote 0
AlanF said:
First of all, they are not charging extra for the comparison data: it comes free with at least my version 2.4

Secondly, I do a really bad job of using FoCal

They aren't charging extra if you are a new subscriber, but in the past there wasn't a limit on getting the updates. You didn't have to worry about getting a new body that wasn't supported; you just got the update once it was available. Once this idea of database sharing came up, your subscription became time-limited, and all the users that created that database now have to subscribe again if they want access to it.

On your second point, did you read my test method? The target was taped to an operating refrigerator! The light level was low enough that the target setup option wouldn't work; I kept getting the message "focus couldn't be achieved". Those results, and I'm sure worse ones, are part of the database.
 
Upvote 0

AlanF

Desperately seeking birds
CR Pro
Aug 16, 2012
12,444
22,882
neuroanatomist said:
Agree with takesome1 here. I'm not surprised at all that, as a scientist, your 'really bad job' is still better than the average person's typical effort.

As a scientist, I always have more unpublished data to present to the referees to counter their arguments. Here are some lenses where I scored below average (blue is the average spread): my old 100-400, now gone; 40mm f/2.8, borrowed and returned; Sigma 35/2, sent back; and EF-S 55-250 II, which is actually pretty good. In contrast, my favourite, the 400mm DO II + 1.4TC + 5DS R, which is almost off scale. If my technique is better than average, the first four must have been total cr*p, rescued by my outstanding skills.

You are right, I am not upgrading FoCal until I buy a new body with which it is not currently compatible. For the time being, I have all the data for my existing lenses saved.
 

Attachments

  • Canon_EF100-400mm f_4.5-5.6L IS USM_400mm.jpg
    180.5 KB · Views: 164
  • Canon_EF40mm f_2.8 STM_40mm.jpg
    203.3 KB · Views: 152
  • Sigma_EF35mm f_2 IS USM_35mm.jpg
    168.1 KB · Views: 147
  • Canon_EF-S55-250mm f_4-5.6 IS STM_250mm.jpeg
    170.6 KB · Views: 155
  • Canon EOS 5DS R_EF400mm f_4 DO IS II USM +1.4x III_560mm.jpg
    109.2 KB · Views: 149
Upvote 0
AlanF said:
neuroanatomist said:
Agree with takesome1 here. I'm not surprised at all that, as a scientist, your 'really bad job' is still better than the average person's typical effort.

As a scientist, I always have more unpublished data to present to the referees to counter their arguments. Here are some lenses where I scored below average (blue is the average spread): my old 100-400, now gone; 40mm f/2.8, borrowed and returned; Sigma 35/2, sent back; and EF-S 55-250 II, which is actually pretty good. In contrast, my favourite, the 400mm DO II + 1.4TC + 5DS R, which is almost off scale. If my technique is better than average, the first four must have been total cr*p, rescued by my outstanding skills.

You are right, I am not upgrading FoCal until I buy a new body with which it is not currently compatible. For the time being, I have all the data for my existing lenses saved.

The peaks in the charts indicate good runs; the second is probably the weakest.
On further review, the original call stands. :p

I have seen very bad runs, and these just do not qualify.
 
Upvote 0