Pixel density, resolution, and diffraction in cameras like the 7D II

jrista said:
Radiating said:
Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.

If you compare a sharp lens at f/11, like a super telephoto, with a soft lens at f/11, the sharp lens still looks sharper despite both being at the diffraction limit.

What 24MP does is allow the whole system to be sharper, because a weaker AA filter can be used. Diffraction is the best AA filter on earth; current ones degrade the image by 20%, which is a lot.

Where do you get that 20% figure? I can't say I've experienced that with anything other than the 100-400 @ 400mm f/5.6...however in that case, I presume the issue is the lens, not the AA filter...

MTF tests of the D800 and D800E back to back

hjulenissen said:
Radiating said:
Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.
If the AA filter is an expensive/complex component, increasing the sensel density until diffraction takes care of prefiltering is definitely one possible approach.
What 24MP does is it allows the whole system to be sharper due to a weaker AA filter. Diffraction is the best AA filter on earth, current ones degrade the image by 20% which is a lot.
Diffraction is dependent on aperture, not a constant function. In practice, one never has perfect focus (and most of us don't shoot flat brick walls), so defocus affects the PSF. Lenses and motion further extend the effective PSF. The AA filter is one more component. I have seen compelling arguments that the total PSF might as well be modelled as a Gaussian, due to the many contributors that change with all kinds of parameters.

Claiming that the AA filter degrades "image quality" (?) by 20% is nonsense. Practical comparisons of the Nikon D800 vs D800E suggest that under some ideal conditions the difference in detail is practically none once both are optimally sharpened. In other conditions (high noise), you may not be able to sharpen the D800 to the point where it offers detail comparable to the D800E. Manufacturers don't include AA filters because they _like_ throwing in more components, but because when the total effective PSF is too small compared to the pixel pitch, you can get annoying aliasing that tends to look worse, and is harder to remove, than slight blurring.

-h

You can't compare an unsharpened D800E image to a sharpened D800 image; that's not how information processing works.

The AA filter destroys incoming information from the lens, irreversibly. Sharpening can trick MTF tests into scoring higher numbers, but that is beside the point.

Yes, diffraction changes with aperture, but if you always shoot below f/5.6 you could ditch the AA filter without consequence, and those images would be sharper than ones taken with the same camera with an AA filter.
 
Upvote 0

jrista

Radiating said:
jrista said:
Radiating said:
Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.

If you compare a sharp lens at f/11, like a super telephoto, with a soft lens at f/11, the sharp lens still looks sharper despite both being at the diffraction limit.

What 24MP does is allow the whole system to be sharper, because a weaker AA filter can be used. Diffraction is the best AA filter on earth; current ones degrade the image by 20%, which is a lot.

Where do you get that 20% figure? I can't say I've experienced that with anything other than the 100-400 @ 400mm f/5.6...however in that case, I presume the issue is the lens, not the AA filter...

MTF tests of the D800 and D800E back to back

Do you have a link? Because 20% is insane, and I don't believe that figure. The most sharply focused images would look completely blurry if an AA filter imposed a 20% cost on IQ...it just isn't possible.

Radiating said:
hjulenissen said:
Radiating said:
Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.
If the AA filter is an expensive/complex component, increasing the sensel density until diffraction takes care of prefiltering is definitely one possible approach.
What 24MP does is it allows the whole system to be sharper due to a weaker AA filter. Diffraction is the best AA filter on earth, current ones degrade the image by 20% which is a lot.
Diffraction is dependent on aperture, not a constant function. In practice, one never has perfect focus (and most of us don't shoot flat brick walls), so defocus affects the PSF. Lenses and motion further extend the effective PSF. The AA filter is one more component. I have seen compelling arguments that the total PSF might as well be modelled as a Gaussian, due to the many contributors that change with all kinds of parameters.

Claiming that the AA filter degrades "image quality" (?) by 20% is nonsense. Practical comparisons of the Nikon D800 vs D800E suggest that under some ideal conditions the difference in detail is practically none once both are optimally sharpened. In other conditions (high noise), you may not be able to sharpen the D800 to the point where it offers detail comparable to the D800E. Manufacturers don't include AA filters because they _like_ throwing in more components, but because when the total effective PSF is too small compared to the pixel pitch, you can get annoying aliasing that tends to look worse, and is harder to remove, than slight blurring.

-h

You can't compare an unsharpened D800E image to a sharpened D800 image; that's not how information processing works.

The AA filter destroys incoming information from the lens, irreversibly. Sharpening can trick MTF tests into scoring higher numbers, but that is beside the point.

No, it is not irreversible. The D800E is the perfect example of the fact that it is indeed REVERSIBLE. The AA filter is a convolution filter for certain frequencies. Convolution can be reversed with deconvolution. So long as you know the exact function of the AA filter, you can apply an inverse and restore the information. The D800E does exactly that...the first layer of the AA filter blurs high frequencies at Nyquist by a certain amount, and the second layer does the inverse to unblur those frequencies, restoring them to their original state.
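
To make the "known convolution can be inverted" argument concrete, here is a minimal numpy sketch of my own (purely illustrative; it is not how the D800E's optical cancellation actually works): blur a signal with a known, hypothetical AA-like kernel, then undo it in the frequency domain with a regularized (Wiener-style) inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.random(256)                 # stand-in for one row of image data

# Hypothetical AA-filter-like blur: a small, known, symmetric kernel.
kernel = np.array([0.25, 0.5, 0.25])
kpad = np.zeros_like(signal)
kpad[:3] = kernel
kpad = np.roll(kpad, -1)                 # center the kernel on index 0

H = np.fft.fft(kpad)                     # transfer function of the known blur
blurred = np.fft.ifft(np.fft.fft(signal) * H).real

# Regularized inverse (Wiener-style): divide by H where it is large,
# damp the division where |H| is tiny so rounding/noise doesn't explode.
eps = 1e-3
W = np.conj(H) / (np.abs(H) ** 2 + eps)
restored = np.fft.ifft(np.fft.fft(blurred) * W).real

print("max restoration error:", np.max(np.abs(restored - signal)))
```

Note that the one frequency this particular kernel drives exactly to zero (Nyquist) is not recovered, which is exactly where the stability arguments later in this thread come in.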
 
Upvote 0

jrista

hjulenissen said:
jrista:
My gripe was with your claims that seemingly everything can and should be described using amplitude, phase and frequency.

A stochastic process (noise) has characteristics that vary from time to time and from realisation to realisation. This means that talking about the amplitude of noise tends to be counterproductive. What does not change (at least in stationary processes) are the statistical parameters: variance, PSD, etc.

What you want to learn from the response to a swept-frequency/amplitude sinusoid is probably the depth of the modulation. Sure, the sine has a phase, but if it cannot tell us anything, why should we bother? If you do believe that it tells us anything, please tell me, instead of explaining once more what a sine wave is or how to Fourier-transform anything.

-h

Not "should", but "can". I am not sure I can explain anymore, as I think your latching on to meaningless points. I don't know your background, so if I'm explaining things you already know, apologies.

The point isn't about amplitude, simply that noise can be described as a set of frequencies in a Fourier series. Eliminate the wavelets that most closely represent noise (not exactly an easy thing to do without affecting the rest of the image, but not impossible either), and you leave behind only the wavelets that represent the image.

Describing an image as a discrete set of points of light, which disperse with known point spread functions, is another way to describe an image. In that context, you can apply other transformations to sharpen, deblur, etc.

The point was not to state that images should only ever be described as a Fourier series. Simply that they "can". Just as they "can" be described in terms of PSF and PSD.
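
As a deliberately simplistic sketch of "an image can be described as a set of frequencies", here is my own toy example in numpy. The masking rule is arbitrary and stands in for the genuinely hard part, which is deciding which components are noise:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((128, 128))                  # stand-in for a grayscale image

F = np.fft.fft2(image)                          # the image as a grid of frequency coefficients
Fshift = np.fft.fftshift(F)                     # put the zero frequency in the center

# Purely illustrative mask: keep a central low-frequency disc, attenuate everything else.
h, w = image.shape
yy, xx = np.mgrid[:h, :w]
radius = np.hypot(yy - h / 2, xx - w / 2)
mask = np.where(radius < 30, 1.0, 0.2)          # arbitrary cutoff and attenuation factor

filtered = np.fft.ifft2(np.fft.ifftshift(Fshift * mask)).real

# The transform itself loses nothing: inverting the untouched spectrum
# reproduces the original image to floating-point precision.
roundtrip = np.fft.ifft2(F).real
print("round-trip error (no masking):", np.max(np.abs(roundtrip - image)))
print("change introduced by masking: ", np.max(np.abs(filtered - image)))
```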
 
Upvote 0

jrista

To provide a better example than I can give myself, here is a denoising algorithm that uses an FFT and wavelet inversion to deband, as well as deconvolution of the PSF to remove random noise:

http://lib.semi.ac.cn:8080/tsh/dzzy/wsqk/spie/vol6623/662316.pdf

This really sums up my point...simply that an image can be processed in different contexts, via different modeling, to remove noise while preserving image data. I am not trying to say that modeling an image signal as a Fourier series is better or the only way, or that treating it as a discrete set of point light sources described by a PSF is invalid. Both are valid.
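
For what it's worth, the frequency-domain debanding idea can be sketched very crudely in numpy. This is my own toy version, not the algorithm from the linked paper: purely horizontal banding concentrates in the column of the 2-D spectrum where the horizontal frequency is zero, so notching that column (apart from the DC term) suppresses the bands.

```python
import numpy as np

rng = np.random.default_rng(2)
clean = rng.random((256, 256))
banding = 0.3 * np.sin(0.7 * np.arange(256))[:, None]   # synthetic row-wise bands
noisy = clean + banding

F = np.fft.fft2(noisy)

# Purely horizontal structure lives at horizontal frequency kx = 0.
# Zero that column except the DC term (which carries the mean brightness).
# Caveat: this also removes genuine purely-horizontal image content,
# which is why real debanding algorithms are far more careful.
F[1:, 0] = 0.0
debanded = np.fft.ifft2(F).real

print("row-mean spread before:", np.std(noisy.mean(axis=1)))
print("row-mean spread after: ", np.std(debanded.mean(axis=1)))
```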
 
Upvote 0

jrista

hjulenissen said:
jrista said:
Not "should", but "can". I am not sure I can explain anymore, as I think your latching on to meaningless points. I don't know your background, so if I'm explaining things you already know, apologies.

The point isn't about amplitude, simply that noise can be described as a set of frequencies in a Fourier series. Eliminate the wavelets that most closely represent noise (not exactly an easy thing to do without affecting the rest of the image, but not impossible either), and you leave behind only the wavelets that represent the image.

Describing an image as a discrete set of points of light, which disperse with known point spread functions, is another way to describe an image. In that context, you can apply other transformations to sharpen, deblur, etc.

The point was not to state that images should only ever be described as a Fourier series. Simply that they "can". Just as they "can" be described in terms of PSF and PSD.
Signal and noise are generally hard to separate. In a few cases, they may not be (as in "low-frequency signal, high-frequency noise" (->smoothing noise reduction) or "wide-band signal, narrow-band noise" (->notch filtering)). In any case, the DFT is information-preserving, meaning that anything that can be done to a chunk of data in the frequency-domain can be done to the data in the spatial-domain. It might be harder to do, but it can be done.

If you have additive noise and know the exact values, it is trivial to subtract it from the signal in either the spatial domain or the frequency domain. I don't see any practical situation where you have this knowledge.

If you know the PDF of the noise for a given camera, for a given setting, spatial frequency, signal level, etc., you can have some prior information about the noise and how to mitigate it. If you treat noise as a deterministic phenomenon, it seems really hard to gain prior knowledge. You might have some insanely complex model of the world, try to fit the data to the model, and assume that the modelling error is noise. However, such nonlinear processes tend to have some nasty artifacts.

-h

It is true that you don't have precise knowledge of the exact specifications of a camera for a given setting, spatial frequency, signal level, etc. There is a certain amount of guesswork involved, but I think a lot of information can be derived by analyzing the signal you have. The link I provided in my last post demonstrates debanding using a Fourier transform and wavelet inversion, and further denoising (for your Poisson and Gaussian noise) with standard PSF deconvolution. I don't believe there is any specific prior knowledge; what knowledge is used is derived. Is it 100% perfect? No, of course not. I think it's good enough, though, and while there is some softening in the final images, the results are pretty astounding. I've used Topaz Denoise 5, which applies a lot of this kind of advanced denoising theory. Its random noise removal is OK...I actually wouldn't call it as good as LR4's standard NR. When it comes to debanding, however, Topaz Denoise 5 does a phenomenal job with very little impact on the rest of the image, and it does so in wavelet space using a Fourier transform. (Sometimes you get softer light/dark bands as an artifact of the algorithm, but they are very hard to notice in most cases, and are more acceptable than the much harsher banding noise.)

I won't deny that non-linear processing can produce some nasty artifacts. Topaz In-Focus is a deblurring tool. Its intended use is to correct small inaccuracies in focus; however, it can also be used to deblur images that are highly defocused. It is an interesting demonstration of the power of deconvolution: even when an image is almost entirely blurred, you can recover it well enough to make out moderately fine detail, including text. Under such extreme circumstances, however, you do tend to get "wave halos" around primary objects...which is indeed one of those nasty artifacts. One would assume, though, that the artifact is a consequence of an inadequate algorithm with some kind of repeating error. If so, with further refinement, the error could be corrected...no?
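
On the multi-pass idea: iterative deconvolution schemes already work roughly that way, refining the same estimate over many passes against the blurred data. Here is a bare-bones Richardson-Lucy sketch in numpy/scipy (the Gaussian PSF is made up for the demo; I have no idea what Topaz actually uses internally):

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=9, sigma=1.5):
    """A made-up Gaussian PSF for the demo; real PSFs are messier, as discussed later."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    psf = np.outer(g, g)
    return psf / psf.sum()

def richardson_lucy(observed, psf, n_iter=30):
    """Plain Richardson-Lucy deconvolution: refine the estimate a little on every pass."""
    estimate = observed.copy()
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (reblurred + 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

rng = np.random.default_rng(3)
truth = np.full((128, 128), 0.2)
truth[32:96, 32:96] = 1.0                       # a bright square with hard edges
psf = gaussian_psf()
observed = fftconvolve(truth, psf, mode="same") + 0.005 * rng.standard_normal(truth.shape)

restored = richardson_lucy(observed, psf, n_iter=30)
print("mean error, blurred: ", np.mean(np.abs(observed - truth)))
print("mean error, restored:", np.mean(np.abs(restored - truth)))
```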
 
Upvote 0
Still quite a lot of misinformation in this thread, but before addressing any of that, here is some thread-title-relevant material, quite decoupled from the current discussion.

About pixel density, resolution and diffraction

Here's an image series that shows pretty graphically what jrista probably intended with this thread. Diffraction is NOT a problem. Lens sharpness is NOT a problem. And they won't be, for quite a long time yet... We would have to double the number of MPs, and then double it twice more, before having trouble (with "better than mediocre" lenses, of course...!)

Well, the Canon 400/5.6 is no slouch, but on the other hand it's no sharpness monster either. In the following image series it was used wide open from a sturdy tripod, at 1/60s shutter speed for both cameras: A) the 5Dmk2 and then B) the Pentax Q.
The Q has a pixel pitch of ~1.55µm, equivalent to an APS sensor of about 150MP!
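
Quick sanity check of those figures (simple arithmetic; I'm assuming a 23.6x15.6mm APS-C frame and a 36x24mm full frame):

```python
def megapixels(width_mm, height_mm, pitch_um):
    """Pixel count (in MP) for a sensor of the given size and pixel pitch."""
    pitch_mm = pitch_um / 1000.0
    return (width_mm / pitch_mm) * (height_mm / pitch_mm) / 1e6

print("APS-C at 1.55 um pitch:     ", round(megapixels(23.6, 15.6, 1.55)), "MP")   # ~153 MP
print("Full frame at 1.55 um pitch:", round(megapixels(36.0, 24.0, 1.55)), "MP")   # ~360 MP
```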

I'll begin by explaining what the images are and what they show. Both originals were developed from raw with CaptureOne and sharpened individually. No noise reduction was applied.

1 - 5D2, full frame scaled down.
2 - 5D2, 1:1 pixel scale, cropped to the ~6.2x4.55mm center of the 5D2 sensor - the same size as the Q sensor
3 - Pentax Q full frame, downsampled to same size
4 - Pentax Q, 1:1 pixel scale. This is like a 100% crop from a ~360MP FF camera
5 - What the 5D2 looks like when upsampled to the same presentation size.

[Images: 1 - C02_full.jpg, 2 - C02_frame.jpg, 3 - Q02_full.jpg, 4 - Q02_crop.jpg, 5 - C02_interp.jpg]


So, at F5.6, you can see that the amount of red longitudinal CA in the 400/5.6 is a much bigger problem than diffraction - on a 150MP APS sensor.
And, in the last two images (4+5) you can clearly see that the 5Dmk2 isn't even close to scratching the surface of what the 400/5.6 is capable of. Note the difference in the feather pins, lower right of the image. Also note that the small-pixel image is a LOT less noisy than the 5D2 when scaled down to the same presentation size.

A.L., from whom I borrowed these images (with permission), is on assignment in South Korea at the moment, so unfortunately I can't get at the raw files.

I've done similar comparisons with the small Nikon 1-series and the FF D800. Same result there. The smaller pixels have a lot less noise at low ISOs, and the D800 isn't even close to resolving the same amount of detail as the smaller camera. Not even with cheap lenses like the 50/1.8 and the 85/1.8. But I thought a 5Dmk2 comparison would be more acceptable here... :)
 
Upvote 0
Deconvolution is an unstable process and can practically be done only to a small degree (generally speaking, without going into details). This is textbook material.

Some software "solutions" do not recover detail, they create it.

There are other approaches that create pleasant-looking images - they basically sharpen what is left, but do not recover detail that is lost.

If you have prior knowledge of what type of object it is, and if the blur is not too strong, it can be done more successfully. Some algorithms do that - they look for edges, for example. The problem is that they can "find" edges even where there are none.
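
A tiny numerical illustration of that instability (my own sketch): blur a signal with a Gaussian transfer function, add an almost imperceptible amount of noise, and apply a naive frequency-domain inverse. The attenuated high frequencies come back multiplied by enormous factors, so the "restored" signal is dominated by amplified noise.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 256
x = rng.random(n)                                   # "true" signal

# Gaussian blur, written directly as a transfer function.
freqs = np.fft.fftfreq(n)                           # cycles per sample
sigma = 2.0
H = np.exp(-2.0 * (np.pi * sigma * freqs) ** 2)     # Fourier transform of a Gaussian

blurred = np.fft.ifft(np.fft.fft(x) * H).real
noisy = blurred + 1e-6 * rng.standard_normal(n)     # almost imperceptible noise

naive = np.fft.ifft(np.fft.fft(noisy) / H).real     # divide by numbers as small as |H|.min()

print("smallest |H|:", H.min())                     # how strongly high frequencies were attenuated
print("error of the naive inverse:", np.max(np.abs(naive - x)))
```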
 
Upvote 0

jrista

Plamen said:
Deconvolution is an unstable process and can practically be done only to a small degree (generally speaking, without going into details). This is textbook material.

Some software "solutions" do not recover detail, they create it.

There are other approaches that create pleasant-looking images - they basically sharpen what is left, but do not recover detail that is lost.

If you have prior knowledge of what type of object it is, and if the blur is not too strong, it can be done more successfully. Some algorithms do that - they look for edges, for example. The problem is that they can "find" edges even where there are none.

In a single-pass process, I'd agree, deconvolution is unstable. However, if we take deblur tools as an example...in a single primary pass they can recover the majority of an image, from what looks like a complete and total loss, to something that at the very least you can clearly identify and garner some small details from. Analysis of the final image of that first pass could identify primary edges, objects, and shapes, allowing secondary, tertiary, etc. passes to be "more informed" than the first, and avoid artifacts and phantom edge detection from the first pass.

Again, we can't know with 100% accuracy all of the information required to perfectly reproduce an original scene from an otherwise inaccurate photograph. I do believe, however, that we can derive a lot of information from an image by processing it multiple times, utilizing the "richer" information of each pass to better-inform subsequent passes. The process wouldn't be fast, possibly quite slow, but I think a lot of "lost" information can be recovered, to a usefully accurate precision.
 
Upvote 0

jrista

TheSuede said:
About pixel density, resolution and diffraction

Here's an image series that shows pretty graphically what jrista probably intended with this thread. Diffraction is NOT a problem. Lens sharpness is NOT a problem. And they won't be, for quite a long time yet... We would have to double the number of MPs, and then double it twice more, before having trouble (with "better than mediocre" lenses, of course...!)

Well, the Canon 400/5.6 is no slouch, but on the other hand it's no sharpness monster either. In the following image series it was used wide open from a sturdy tripod, at 1/60s shutter speed for both cameras: A) the 5Dmk2 and then B) the Pentax Q.
The Q has a pixel pitch of ~1.55µm, equivalent to an APS sensor of about 150MP!

I'll begin by explaining what the images are and what they show. Both originals were developed from raw with CaptureOne and sharpened individually. No noise reduction was applied.

1 - 5D2, full frame scaled down.
2 - 5D2, 1:1 pixel scale, cropped to the ~6.2x4.55mm center of the 5D2 sensor - the same size as the Q sensor
3 - Pentax Q full frame, downsampled to same size
4 - Pentax Q, 1:1 pixel scale. This is like a 100% crop from a ~360MP FF camera
5 - What the 5D2 looks like when upsampled to the same presentation size.

[Images: 1 - C02_full.jpg, 2 - C02_frame.jpg, 3 - Q02_full.jpg, 4 - Q02_crop.jpg, 5 - C02_interp.jpg]


So, at F5.6, you can see that the amount of red longitudinal CA in the 400/5.6 is a much bigger problem than diffraction - on a 150MP APS sensor.
And, in the last two images (4+5) you can clearly see that the 5Dmk2 isn't even close to scratching the surface of what the 400/5.6 is capable of. Note the difference in the feather pins, lower right of the image. Also note that the small-pixel image is a LOT less noisy than the 5D2 when scaled down to the same presentation size.

A.L., from whom I borrowed these images (with permission), is on assignment in South Korea at the moment, so unfortunately I can't get at the raw files.

I've done similar comparisons with the small Nikon 1-series and the FF D800. Same result there. The smaller pixels have a lot less noise at low ISOs, and the D800 isn't even close to resolving the same amount of detail as the smaller camera. Not even with cheap lenses like the 50/1.8 and the 85/1.8. But I thought a 5Dmk2 comparison would be more acceptable here... :)

Thanks for the examples! Great demonstration of what can be done, even with a 150mp APS-C equivalent (384mp FF equivalent) sensor. I could see a 150mp APS-C being possible...Canon has created a 120mp APS-H. I wonder if a 384mp FF is plausible...
 
Upvote 0

Skulker

That's a very interesting post from TheSuede.

It would be good to see the original files, and details of the adapter used to mount that lens on that camera. I was wondering if there is a bit of an extension-tube effect?

Generally, if something seems too good to be true, there is a clue that there may be a catch. Any result that indicates a camera like the Pentax Q can outperform a camera like the 5D2 is an interesting one.
 
Upvote 0
It's a standard PQ>>EF adapter. There's no extension (obviously, since that would offset focus...).

And there's no "catch". This is generally known truth - among anyone that's ever tried, or has any kind of education within the field. In fact, the PQ is almost 50% better in light-efficiency per square mm.

At higher ISOs (lower photometric exposures) the PQ suffers from a much higher integrated sum of read noise (RN) per mm², so large pixels still win for high-ISO applications.

Both were shot at 1/60s, f/5.6 - so the exact same photometric exposure. There is no glass in the adapter, of course, which means the cameras were fed the EXACT same amount of light per mm² of sensor real estate. Both were at ISO 200 but required slightly different offsets - the Canon is a large camera, giving a lot more headroom in the raw files. The PQ is really an interchangeable-lens compact camera with a small sensor, so as little headroom as technically possible is used to offset RN. ISO 12232:2006 (and CIPA DC-008) allows defining ISO according to the camera exposure for medium gray, so that's generally OK (even if the Canon overstates ISO by a bit too much...). I'm not sure Capture One treats either of them absolutely correctly.
 
Upvote 0

jrista

Skulker said:
That's a very interesting post from TheSuede.

It would be good to see the original files, and details of the adapter used to mount that lens on that camera. I was wondering if there is a bit of an extension-tube effect?

Generally, if something seems too good to be true, there is a clue that there may be a catch. Any result that indicates a camera like the Pentax Q can outperform a camera like the 5D2 is an interesting one.

Outperform is a broad word without an appropriate context. The Pentax SENSOR, from a spatial standpoint, definitely outperforms the 5D II. That is a simple function of pixels per unit area, and the Pentax plainly and simply has more. There is nothing really "too good to be true" about that fact.

There are numerous areas where performance can be measured and compared, in addition to the sensor. For one, despite its greater spatial resolution, the Pentax will suffer at higher ISO due to its smaller pixel size (which is still much smaller, despite it being a BSI sensor). It just can't compare to the significantly greater surface area of the 5D II's pixels, which should perform well at high ISO.

The Pentax outperforms in terms of sheer spatial resolution, but I would say the 5D II outperforms in most other areas, such as ISO performance, image resolution, camera build and ergonomics, the use of a huge optical viewfinder, shutter speed range, etc.
 
Upvote 0
jrista said:
Plamen said:
Deconvolution is an unstable process and can practically be done only to a small degree (generally speaking, without going into details). This is textbook material.

Some software "solutions" do not recover detail, they create it.

There are other approaches that create pleasant-looking images - they basically sharpen what is left, but do not recover detail that is lost.

If you have prior knowledge of what type of object it is, and if the blur is not too strong, it can be done more successfully. Some algorithms do that - they look for edges, for example. The problem is that they can "find" edges even where there are none.

In a single-pass process, I'd agree, deconvolution is unstable. However, if we take deblur tools as an example...in a single primary pass they can recover the majority of an image, from what looks like a complete and total loss, to something that at the very least you can clearly identify and garner some small details from. Analysis of the final image of that first pass could identify primary edges, objects, and shapes, allowing secondary, tertiary, etc. passes to be "more informed" than the first, and avoid artifacts and phantom edge detection from the first pass.

Again, we can't know with 100% accuracy all of the information required to perfectly reproduce an original scene from an otherwise inaccurate photograph. I do believe, however, that we can derive a lot of information from an image by processing it multiple times, utilizing the "richer" information of each pass to better-inform subsequent passes. The process wouldn't be fast, possibly quite slow, but I think a lot of "lost" information can be recovered, to a usefully accurate precision.

Deconvolution of a Gaussian blur, for example, is unstable; that is a theorem. It does not matter what you do, you just cannot recover something that is lost in the noise and in the discretization process. Such problems are known as ill-posed. Google the backward heat equation, for example: it is a standard example in the theory of PDEs of an exponentially unstable process. The (forward) heat equation actually describes convolution with a Gaussian, and the backward one is the deconvolution.
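
For anyone who wants the one-line version (standard textbook material, written here in my own notation): a Gaussian blur of width \sigma multiplies each spatial frequency \omega of the image by

```latex
\widehat{g_\sigma}(\omega) = e^{-\sigma^2 \omega^2 / 2},
\qquad \text{so exact deconvolution must multiply by} \qquad
\widehat{g_\sigma}(\omega)^{-1} = e^{+\sigma^2 \omega^2 / 2}.
```

Any noise or rounding error at frequency \omega is amplified by that same exponentially growing factor - the same growth that makes the backward heat equation ill-posed - so frequencies whose blurred amplitude has fallen below the noise floor are unrecoverable.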

There are various deconvolution techniques to "solve" the problem anyway. They reverse, to a small extent, some of the blur, and take a very wild guess about what is lost. In one way or another, those are known as regularization techniques. If you look carefully at what they "recover", those are not small details but rather large ones. Here is one of the best examples I found. As impressive as this may be, you can easily see that small details are gone, but the process is still usable to read text, for example. There is a lot of fake "detail" as well, like all those rings, etc.


 
Upvote 0

jrista

Plamen said:
jrista said:
Plamen said:
Deconvolution is an unstable process and can practically be done only to a small degree (generally speaking, without going into details). This is textbook material.

Some software "solutions" do not recover detail, they create it.

There are other approaches that create pleasant-looking images - they basically sharpen what is left, but do not recover detail that is lost.

If you have prior knowledge of what type of object it is, and if the blur is not too strong, it can be done more successfully. Some algorithms do that - they look for edges, for example. The problem is that they can "find" edges even where there are none.

In a single-pass process, I'd agree, deconvolution is unstable. However, if we take deblur tools as an example...in a single primary pass they can recover the majority of an image, from what looks like a complete and total loss, to something that at the very least you can clearly identify and garner some small details from. Analysis of the final image of that first pass could identify primary edges, objects, and shapes, allowing secondary, tertiary, etc. passes to be "more informed" than the first, and avoid artifacts and phantom edge detection from the first pass.

Again, we can't know with 100% accuracy all of the information required to perfectly reproduce an original scene from an otherwise inaccurate photograph. I do believe, however, that we can derive a lot of information from an image by processing it multiple times, utilizing the "richer" information of each pass to better-inform subsequent passes. The process wouldn't be fast, possibly quite slow, but I think a lot of "lost" information can be recovered, to a usefully accurate precision.

Deconvolution of a Gaussian blur, for example, is unstable; that is a theorem. It does not matter what you do, you just cannot recover something that is lost in the noise and in the discretization process. Such problems are known as ill-posed. Google the backward heat equation, for example: it is a standard example in the theory of PDEs of an exponentially unstable process. The (forward) heat equation actually describes convolution with a Gaussian, and the backward one is the deconvolution.

I am not proclaiming that we can 100% perfectly recover the original state of an image. Even with regularization, there are certainly limits. However, I think we can recover a lot, and with some guesswork and information fabrication we can get very close, even if some information remains lost.


Plamen said:
There are various deconvolution techniques to "solve" the problem anyway. They reverse, to a small extent, some of the blur, and take a very wild guess about what is lost. In one way or another, those are known as regularization techniques. If you look carefully at what they "recover", those are not small details but rather large ones. Here is one of the best examples I found. As impressive as this may be, you can easily see that small details are gone, but the process is still usable to read text, for example. There is a lot of fake "detail" as well, like all those rings, etc.

That is a great article, and a good example of what deconvolution can do. I know it is not a perfect process...but you have to be somewhat amazed at what a little math and image processing can do. That guy's sample image was almost completely blurred, and he recovered a lot of it. Not everyone is going to be recovering completely defocused images...the far more frequent case is slightly defocused images, in which case the error rate and magnitude are far lower (usually invisible), and the process is quite effective. I really love my 7D, but it does have its AF quirks. There are too many times when I end up with something ever so slightly out of focus (usually a bird's eye), and a deblur tool is useful (Topaz In Focus produces nearly perfect results).

I'd point out that in the further examples from that link, the halos (waveform halos, or rings) are pretty bad. Topaz In Focus has the same problem, although not as severe. From the description, it seems as though his PSF (blur function, as he put it) is fairly basic (a simple Gaussian, although I think he mentioned a Laplacian function, which would probably be better). If you've ever pointed your camera at a point light source in a dark environment, defocused it as much as possible, and looked at the results at 100%, you can see the PSF is quite complex. It is clearly a waveform, but usually with artifacts (I call the effect "Rocks in the Pond", given how they affect the diffraction pattern). I don't know what Matlab's fspecial function can do; however, I'd imagine a Laplacian function would be best to model the waveform of a point light source.

Is it not possible to further inform the algorithm with multiple passes, identifying kernels of what are likely improperly deconvolved pixels, and re-run the process from the original blurred image? Rinse, repeat, with better information each time...such as more insight into the PSF or noise function? I haven't tried writing my own deblur tool...so it's an honest question. The gap is information...we lack enough information about the original function that did the blurring in the first place. With further image analysis after each attempt to deblur, we could continually re-inform the algorithm with richer, more accurate information. I don't see why a multi-pass deblurring deconvolution process couldn't produce better results with fewer artifacts and finer details.
 
Upvote 0
jrista said:
That is a great article, and a good example of what deconvolution can do. I know it is not a perfect process...but you have to be somewhat amazed at what a little math and image processing can do.

I am. And I am a mathematician. :)

Is it not possible to further inform the algorithm with multiple passes, identifying kernels of what are likely improperly deconvolved pixels, and re-run the process from the original blurred image? Rinse, repeat, with better information each time...such as more insight into the PSF or noise function?

Even if you know the blur kernel, this is highly unstable. The easiest way to understand it is to consider the Fourier transform: high frequencies are attenuated, and once they drop close to the level of the noise and the other errors, they are gone forever. Whatever you do, they are gone.

BTW, if the blur is done with a "sharp" kernel, like a disk with a sharp edge, this is a much better behaved problem and allows better deconvolution.
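
My summary of why the sharp-edged kernel behaves better (normalizations glossed over): the transform of a Gaussian dies off exponentially, while the transform of a uniform disk of radius a (a defocus-style pillbox) only decays polynomially, so far more of its frequencies stay above the noise floor, apart from the isolated zeros of the Bessel term.

```latex
\widehat{g_\sigma}(\omega) = e^{-\sigma^2 \omega^2 / 2}
\qquad \text{vs.} \qquad
\widehat{d_a}(\omega) = \frac{2\,J_1(a\omega)}{a\omega} \;\sim\; \omega^{-3/2}.
```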
 
Upvote 0
Two things very often glossed over by "pure" mathematicians regarding deconvolution are that:

1) The Bayer reconstruction interpolation destroys quite a lot of the valid information. The resulting interpolated values (2 colors per pixel) often have a PSNR lower than 10 in detailed areas. This makes using those values in the deconvolution a highly unstable process.

2) The PSF is modulated in four (five, but curvature of field is often unimportant if small) major patterns as you go radially outwards from the optical center. The growth rates of those modulations can be higher-order functions, and the growth of one does not necessarily have the same order, or even the same average growth, as another.

Since having a good PSF model is the absolute base requirement for good deconvolution, this causes quite a lot of problems outside the central 15-20% of the image height. Deconvolution on actual images only works with really, really good lenses and images captured with high PSNR (low noise) - which rather limits the use cases.

The only real application at the moment (for normal photography) outside sharpening already "almost" sharp images is removing camera shake blur. Camera shake blur is - as long as you stay within reasonable limits - a very well defined PSF that is easy to analyze and deconvolve.
 
Upvote 0
Deconvolution in the Bayer domain (before interpolation, "raw conversion") is actually counterproductive, and totally destructive to the underlying information.

The raw Bayer image is not continuous; it is sparsely sampled. This makes deconvolution impossible, even in continuous-hue object areas containing "just" brightness changes. If the base signal is sparsely sampled and the underlying material has higher resolution than the sampling, you get an underdetermined system (http://en.wikipedia.org/wiki/Underdetermined_system). This is numerically unstable, and hence impossible to deconvolve.

- UNLESS the AA filter OR the diffraction effect is so strong that the induced blur spans two pixel widths!
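
A minimal illustration of the "sparsely sampled => underdetermined" point, using a toy 1-D model I made up (not an actual Bayer pipeline): if every other sample of a blurred signal is discarded, the forward model has half as many equations as unknowns, so there is no unique solution to invert.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
x = rng.random(n)                                   # unknown "scene" (1-D toy model)

# Blur: a simple 3-tap convolution written as a circulant matrix.
kernel = np.array([0.25, 0.5, 0.25])
A = np.zeros((n, n))
for i in range(n):
    for offset, weight in zip((-1, 0, 1), kernel):
        A[i, (i + offset) % n] = weight

S = np.eye(n)[::2]                                  # keep every other sample ("sparse sampling")
y = S @ A @ x                                       # what actually gets recorded

# 32 equations, 64 unknowns: the system is rank-deficient, so no unique inverse exists.
print("equations, unknowns:", (S @ A).shape)
print("rank:", np.linalg.matrix_rank(S @ A))

x_min_norm, *_ = np.linalg.lstsq(S @ A, y, rcond=None)
print("error of the minimum-norm 'solution':", np.max(np.abs(x_min_norm - x)))
```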

Movement sensors in mobile devices today are almost always MEMS, either capacitive or piezo-based. The cheapest way to correct shake blur in production today is to use high-speed filming and then combine the images after registration; there are systems on the market today capable of doing this. This method is actually computationally cheaper than deconvolution, but it increases electronic noise contamination.

Applying a center-based PSF to an entire image often does little "harm" to the edges, unless the lens has a lot of coma or very different sagittal/meridional focus planes (astigmatism) - which unfortunately is quite common even in good lenses. Coma makes the PSF very asymmetrical: all scatter is spread in a fan-like shape out towards the edge of the image from the optical center. Deconvolving this with a rotationally symmetric PSF gives a lot of overshoot, which in turn causes massive increases in noise and exaggerated radial contrast. This can look really strange, and often worse than the original.
 
Upvote 0

jrista

TheSuede said:
Deconvolution in the Bayer domain (before interpolation, "raw conversion") is actually counterproductive, and totally destructive to the underlying information.

I think that would only be the case if you were trying to remove blur. In my experience, removing banding noise in the raw data is more effective than removing it in post. That may simply be because of the nature of banding noise, which is non-image information. I would presume that Bayer interpolation performed AFTER banding noise removal would produce a better image as a result, no?
 
Upvote 0
hjulenissen said:
If you do linear convolution, stability can be guaranteed (e.g. by using a finite impulse response).
I am not sure what that means. Let me make it more precise: convolution with a smooth (and fast-decaying) function is a textbook example of an unstable transform. It is like 2+2=4.

EDIT: I mean, inverting it is unstable.
 
Upvote 0