Deconvolution is an unstable process and, in practice, can only be done to a small degree (generally speaking, without going into details). This is textbook material.
Some software "solutions" do not recover detail, they create it.
There are other approaches that create pleasant-looking images - they basically sharpen what is left rather than recover detail which is lost.
If you have prior knowledge of what type of object it is, and if the blur is not too strong, it can be done more successfully. Some algorithms do exactly that - they look for edges, for example. The problem is that they can "find" edges even where there are none.
In a single-pass process, I'd agree, deconvolution is unstable. However, if we take deblur tools as an example...in a single primary pass they can recover the majority of an image, from what looks like a complete and total loss, to something that you can at the very least clearly identify and garner some small details from. Analysis of the result of that first pass could identify primary edges, objects, and shapes, allowing secondary, tertiary, etc. passes to be "more informed" than the first, and avoid the artifacts and phantom edges of the first pass.
Again, we can't know with 100% accuracy all of the information required to perfectly reproduce an original scene from an otherwise inaccurate photograph. I do believe, however, that we can derive a lot of information from an image by processing it multiple times, using the "richer" information of each pass to better inform subsequent passes. The process wouldn't be fast - possibly quite slow - but I think a lot of "lost" information can be recovered to a useful degree of accuracy.
Deconvolving a Gaussian blur, for example, is unstable - that is a theorem. It does not matter what you do; you just cannot recover something which is lost in the noise and in the discretization process. Such problems are known as ill-posed. Google the backward heat equation, for example. It is a standard example in the theory of PDEs of an exponentially unstable process. The heat equation actually describes convolution with a Gaussian, and the backward heat equation is the deconvolution.
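To make "exponentially unstable" concrete, here is a tiny 1-D numpy sketch (my own toy illustration, not from any textbook or tool): blur a step edge with a Gaussian, add noise a million times smaller than the signal, and apply the exact inverse filter. The high frequencies that the blur attenuated by exp(-k^2) get amplified back by exp(+k^2), so the microscopic noise completely swamps the result.

```python
import numpy as np

n = 512
signal = np.zeros(n)
signal[n // 2:] = 1.0                  # a simple step edge

freqs = np.fft.fftfreq(n)              # spatial frequency, cycles per sample
sigma = 4.0                            # Gaussian blur width in samples
H = np.exp(-2.0 * (np.pi * sigma * freqs) ** 2)   # Fourier transform of the Gaussian kernel

blurred = np.fft.ifft(np.fft.fft(signal) * H).real
noisy = blurred + 1e-6 * np.random.randn(n)       # noise far below one grey level

# Exact (naive) inverse filter: divide by H.
restored = np.fft.ifft(np.fft.fft(noisy) / H).real
print(np.abs(restored - signal).max())            # astronomically large: the inverse blew up
```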
I am not proclaiming that we can 100% perfectly recover the original state of an image. Even with regularization, there are certainly limits. However, I think we can recover a lot, and with some guesswork and information fabrication, we can get very close, even if some information remains lost.
There are various deconvolution techniques to "solve" the problem anyway. They reverse some of the blur to a small extent and take a very wild guess about what is lost. In one way or another, those are known as regularization techniques. If you look carefully at what they "recover", those are not small details but rather large ones. Here is one of the best examples I found. As impressive as this may be, you can easily see that small details are gone, but the process is still usable to read text, for example. There is a lot of fake "detail" as well, like all those rings, etc.
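For the curious, here is roughly what a regularized inverse looks like - a generic Tikhonov/Wiener-style filter written in plain numpy, my own sketch rather than the method from that example. The "balance" term is the wild guess: it stops the division from blowing up where the blur wiped out the signal, but whatever fell below the noise floor there stays gone, and pushing the balance too low is exactly what produces those rings.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Toy Gaussian PSF, normalized to sum to 1 (real lens PSFs are messier)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, balance=1e-3):
    """Tikhonov/Wiener-style regularized inverse filter in the frequency domain."""
    # Embed the PSF in a full-size array with its center at pixel (0, 0),
    # so the FFT model matches an ordinary centered convolution.
    pad = np.zeros_like(blurred, dtype=float)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    G = np.fft.fft2(blurred)
    # Small balance -> close to the exact inverse (sharp, noisy, ringing);
    # large balance -> heavy damping (smooth, soft). Lost detail stays lost either way.
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + balance)
    return np.real(np.fft.ifft2(F_hat))
```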
That is a great article, and a good example of what deconvolution can do. I know it is not a perfect process...but you have to be somewhat amazed at what a little math and image processing can do. That guy's blurred sample image was almost completely blurred, and he recovered a lot of it. Not everyone is going to be recovering completely defocused images...the far more frequent case is slightly defocused images, in which case the error rate and magnitude are far lower (usually invisible), and the process is quite effective. I really love my 7D, but it does have its AF quirks. There are too many times when I end up with something ever so slightly out of focus (usually a bird's eye), and a deblur tool is useful (Topaz In Focus produces nearly perfect results).
I'd point out that in the further examples from that link, the halos (waveform halos, or rings) are pretty bad. Topaz In Focus has the same problem, although not as seriously. From the description, it seems as though his PSF (blur function, as he put it) is fairly basic (a simple Gaussian, although I think he mentioned a Laplacian function, which would probably be better). If you've ever pointed your camera at a point light source in a dark environment, defocused it as much as possible, and looked at the results at 100%, you can see the PSF is quite complex. It is clearly a waveform, but usually with artifacts (I call the effect "Rocks in the Pond", given how they affect the diffraction pattern). I don't know what the fspecial function of Matlab can do, however I'd imagine a Laplacian function would be best to model the waveform of a point light source.
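For what it's worth, defocus is often modelled as a uniform disk ("pillbox") rather than a Gaussian - still a simplification that ignores the rings and aberrations you see in a real defocused point source, but usually a closer starting point than a Gaussian. A small numpy sketch (the function name is mine) that could be dropped into the Wiener filter above in place of the Gaussian PSF:

```python
import numpy as np

def disk_psf(size, radius):
    """Geometric defocus model: a uniform disk of the given radius, sum = 1.
    Ignores diffraction rings and lens aberrations, so it is only a first
    approximation to a real defocus PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = ((xx**2 + yy**2) <= radius**2).astype(float)
    return psf / psf.sum()
```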
Is it not possible to further inform the algorithm with multiple passes, identifying kernels of what are likely improperly deconvolved pixels, and re-run the process from the original blurred image? Rinse, repeat, with better information each time...such as more insight into the PSF or noise function? I haven't tried writing my own deblur tool...so it's an honest question. The gap is information...we lack enough information about the original function that did the blurring in the first place. With further image analysis after each attempt to deblur, we could continually re-inform the algorithm with richer, more accurate information. I don't see why a multi-pass deblurring deconvolution process couldn't produce better results with fewer artifacts and finer details.
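Schemes in that spirit do exist: Richardson-Lucy deconvolution, for example, refines its estimate of the sharp image over many passes, and blind-deconvolution variants also re-estimate the PSF between passes. Here is a bare-bones numpy/scipy sketch of the non-blind version, purely as an illustration of the idea (not what any particular deblur tool actually does):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Iterative deconvolution: each pass re-blurs the current estimate,
    compares it with the observed image, and uses the ratio to correct the
    estimate. Assumes a non-negative image and a PSF that sums to 1.
    A blind variant would also update the PSF between passes."""
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Each iteration is one "pass", so in principle you could stop partway, analyze the intermediate result (edges, objects, a better noise or PSF estimate) and feed that back in, which is essentially what blind and prior-informed methods try to do.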