It doesn't matter what kind of sensor you have: low resolution, high resolution, or tomorrow's resolution. A convolved result is a convolved result, and in this case stability (or the lack thereof) doesn't really apply the way it might when trying to denoise or deblur.
Let me repeat: the question here is NOT to reconstruct the image before the convolution. It is to reconstruct the convolution kernel, knowing what the original image was and what the convolved image is. This is a very different problem, and a well-posed one. In the case under discussion, we have two kernels but more than two bodies, so you get a system of equations, etc.
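To make the point concrete, here is a small numerical sketch (my own toy setup, not anything from the article): when both the sharp image and the blurred image are known, the kernel falls out of a regularized division in the Fourier domain. The image, kernel, and the regularization constant `eps` are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A known "scene" and a known blur kernel (ground truth for the demo).
scene = rng.standard_normal((64, 64))
true_kernel = np.zeros((64, 64))
true_kernel[:3, :3] = 1.0 / 9.0  # 3x3 box blur, top-left anchored

# Blur via circular convolution (FFT), standing in for the camera.
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(true_kernel)))

# Estimate the kernel: K = Y * conj(X) / (|X|^2 + eps), a regularized
# spectral division. This is well posed because X is known exactly.
X = np.fft.fft2(scene)
Y = np.fft.fft2(blurred)
eps = 1e-8
kernel_est = np.real(np.fft.ifft2(Y * np.conj(X) / (np.abs(X) ** 2 + eps)))

print(np.max(np.abs(kernel_est - true_kernel)))  # tiny residual
```

Compare with blind deconvolution, where X is unknown and the same division is hopelessly ill posed.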
You are talking about reverse engineering the actual lens PSF from an image produced by a grid of spatially offset red, green, and blue pixels (likely covered by microlenses), further interpolated by software to produce the kind of RGB color pixels we see on a screen and analyze with tools like Imatest (or DXO's software).
Why not read their description first? They do not demosaic, and they test each channel separately. Unfortunately, they recently decided to hide the data. They do the slanted-edge test, which averages over many pixels and makes it possible to estimate well what the effect of the pixels is based on their number alone (but not the AA filter strength). It is the "purest" test I have seen, but again, the data is hidden now.
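For readers unfamiliar with the slanted-edge idea, here is a toy sketch of the principle (my own simplification, not any test site's actual pipeline): a nearly vertical edge is sampled on a pixel grid, the rows are re-registered using the known slant to build a super-sampled edge profile (ESF), and its derivative (LSF) Fourier-transforms into an MTF estimate. The edge softness, slope, and oversampling factor are all invented for the demo.

```python
import numpy as np

H, W = 100, 64
slope = 0.05                      # edge shifts 0.05 px per row
xs = np.arange(W, dtype=float)

# Synthesize a slanted edge: dark left, bright right, smooth transition.
img = np.empty((H, W))
for r in range(H):
    edge_pos = W / 2 + slope * r
    img[r] = 1.0 / (1.0 + np.exp(-(xs - edge_pos) / 0.8))

# Re-register each row by its known sub-pixel edge shift and bin the
# samples into a 4x over-sampled edge spread function (ESF).
bins = 4 * W
esf_sum = np.zeros(bins)
esf_cnt = np.zeros(bins)
for r in range(H):
    idx = np.floor((xs - slope * r) * 4).astype(int)
    ok = (idx >= 0) & (idx < bins)
    np.add.at(esf_sum, idx[ok], img[r, ok])
    np.add.at(esf_cnt, idx[ok], 1)
valid = esf_cnt > 0
esf = esf_sum[valid] / esf_cnt[valid]

# Differentiate to get the line spread function, then take its spectrum.
lsf = np.diff(esf)
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                     # normalize so MTF(0) = 1
print(mtf[:4])                    # starts at 1 and falls off
```

The averaging over many rows is exactly why the method is robust to noise and to where the edge falls relative to the pixel grid.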
Your article is an interesting start, but you are assuming a Gaussian PSF. An actual PSF is most definitely not Gaussian, nor is it constant across the frame (i.e., it changes as you leave the center and approach the corners). Do a search for "spot diagram" to see actual lens PSFs produced mathematically from detailed and accurate lens specifications; even for the best of lenses, outside of the most central on-axis results, a PSF can be wildly complicated.
I did say that it is not a Gaussian, but for many purposes it is close enough. This is not my formula; it first appeared in a paper on optics and has been used many times since then. If somebody can point out a reference, I will put it there right away.
I also explained why its variation across the frame is not a problem: you just apply the formula in different regions with different values.
I also mentioned there that my point is not that particular formula but the general principle: multiple blur factors combine as a single convolution with the convolution of their kernels. This is a much more universal principle.
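That principle is easy to check numerically. In the Gaussian approximation it reduces to the familiar "widths add in quadrature" rule: convolving two Gaussian kernels gives a Gaussian whose sigma is the root sum of squares. The sigmas below are arbitrary demo values.

```python
import numpy as np

s1, s2 = 1.5, 2.0                 # two blur widths, in pixels (made up)
x = np.arange(-30, 31, dtype=float)

def gauss(sigma):
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()            # normalized discrete Gaussian kernel

# Blur applied after blur = one convolution with the convolved kernels.
combined = np.convolve(gauss(s1), gauss(s2), mode="full")

# Measure the width of the combined kernel from its second moment.
xc = np.arange(combined.size, dtype=float)
mu = (xc * combined).sum()
sigma_meas = np.sqrt(((xc - mu) ** 2 * combined).sum())
print(sigma_meas, np.hypot(s1, s2))  # both ≈ 2.5
```

The quadrature rule is exact for Gaussians; for real, non-Gaussian kernels the convolution principle still holds, only the closed-form width formula becomes an approximation.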
Not to mention that you have to guess the kernel in the first place, so whatever your result, it is immediately affected by what you think the lens is capable of.
There is not much to guess. The AA filter can very well be approximated with a Gaussian, and the effect of the pixels can simply be computed. The only weakness is that you do not really know whether the AA filter strength is proportional to the pixel density. But then you have data for many bodies.
But even if you had the absolute lens data, what would you do with it? You never answered that question. Let me help you: to see how the lens performs on a future 5D4 or whatever, you need to make some assumptions about the AA filter, and then you need to apply the formula that you do not like. There is no way around that factor.
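As a hedged sketch of what "applying the formula" would look like: if lens, AA filter, and pixel aperture each act roughly like a Gaussian blur, the system blur is their convolution, so the widths add in quadrature. Every number below is invented purely for illustration, and scaling the AA strength with pixel pitch is exactly the assumption discussed above.

```python
import math

# Hypothetical blur widths, in microns (illustrative only).
sigma_lens  = 3.0   # lens blur at some aperture and field position
sigma_aa    = 2.0   # AA filter strength (assumed)
sigma_pixel = 1.5   # effective blur of the pixel aperture

# System blur on the current body: quadrature sum of the components.
sigma_system = math.sqrt(sigma_lens**2 + sigma_aa**2 + sigma_pixel**2)
print(round(sigma_system, 3))

# "Predicting a future body": halve the pixel pitch and assume the AA
# filter scales with it, while the lens term stays fixed.
sigma_system_new = math.sqrt(sigma_lens**2 + (sigma_aa / 2)**2
                             + (sigma_pixel / 2)**2)
print(round(sigma_system_new, 3))
```

Note how the lens term dominates once the sensor terms shrink, which is why the prediction is insensitive to modest errors in the AA assumption.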
Personally, I wouldn't trust any site that provided "lens resolution" results reverse-engineered from an image produced by any sensor. I would actually rather take the "camera system" tests than have someone tell me their best guess for lens performance.
You probably never go for an MRI or a CT scan, then, because instead of just "seeing" what is inside, they "reverse engineer" it, i.e., they compute it.
Hmm, DXO's own description on the lens tests page begs to differ:
DxOMark's comprehensive camera lens test result database allows you to browse and select lenses for comparison based on its characteristics, brand, type, focal range, aperture and price.
This is nitpicking. Try to get the resolution numbers: you cannot get them without choosing a body, and the results are always displayed with the body clearly visible. Their articles are poorly written, but it is not rocket science to realize what you are looking at.