Pixel density, resolution, and diffraction in cameras like the 7D II

jrista said:
I think that would only be the case if you were trying to remove blur. In my experience, removal of banding noise in RAW is more effective than removing it in post. That may simply be because of the nature of banding noise, which is non-image information. I would presume that bayer interpolation performed AFTER banding noise removal would produce a better image as a result, no?

Yes, of course. One exception I didn't mention, since we were only (or I thought we were only) discussing sharpening right now.

Banding has a weighting effect on the R vs. B chroma/luminance mix in the interpolation stage. It also overstates the influence of the slightly stronger columns (banding is mostly column errors in Canon cameras) on the total. This means that weak-contrast horizontal lines in the (ideal) image get less weight than they should if the banding is stronger. Interpolation schemes like the one Adobe uses (which is highly directional) react very badly to this; it almost amplifies the initial error into the final image.

Since it's a linear function - banding separates into a black offset and an amplification (gain) offset, and both seem to be very linear in most cases I've seen - the influence isn't disruptive, so some of it can be repaired after interpolation too, but not as well as if you do it before sending the raw into the interpolation engine.
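
A minimal sketch of the "repair it before interpolation" idea (an illustration only, not any particular converter's pipeline; it assumes the black offsets can be estimated from the sensor's masked optical-black rows):

Code:
% Estimate and subtract a per-column black offset from the raw mosaic
% BEFORE demosaicing. 'raw' is assumed to be a double matrix of raw values
% whose first 16 rows are a masked (optical-black) strip.
ob_strip   = raw(1:16, :);                         % rows that see no light
col_offset = median(ob_strip, 1);                  % robust per-column offset estimate
raw_fixed  = raw - repmat(col_offset, size(raw,1), 1);
% Gain-type (amplification) banding would instead need a per-column
% multiplicative correction, estimated from a flat-field frame.

Because this runs on the mosaic, the demosaic step never sees the column error - which is exactly the ordering argued for above.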
 
Upvote 0

jrista
TheSuede said:
Yes, of course. One exception I didn't mention, since we were only (or I thought we were only) discussing sharpening right now. [...]

Aye, sorry. Wavelet deconvolution of a RAW was primarily used for debanding, which is why I mentioned it before. As for the rest, yes, we were talking about sharpening.
 
Upvote 0
Plamen said:
hjulenissen said:
Plamen said:
Even if you know the blur kernel, this is highly unstable.
If you do linear convolution, stability can be guaranteed (e.g. by using a finite impulse response).
I am not sure what that means. Let me make it more precise: convolution with a smooth (and fast decaying) function is a textbook example of an unstable transform. It is like 2+2=4.

Well, even if you can get the convolution PROCESS stable (i.e. numerically stable), there's no guarantee that the result isn't resonant. I'm guessing this is where you're talking right past each other.

And many of the de-stabilizing problems in the base material (the raw image) are synthetic - they stem from results that are weakly determined (like noise) or totally undetermined (like interpolation errors from having too weak AA-filters)... The totally undetermined errors are worst, since they are totally unpredictable.

The only really valid and reasonable way we have to deal with this right now is to shoot with a camera that has "more MP than we really need", and then downsample the result. Doing deconvolution sharpening on an image that has been scaled down to 1:1.5 or 1:2.0 (9MP or 5.5MP from a 5D2) usually yields reasonably accurate results. You can choose much more aggressive parameters without risking resonance.

I wouldn't even care if the camera I bought next year had 60MP but only gave me 15MP raw files - as long as the internal scaling algorithms were of reasonable quality. A 15MP [almost] pixel-perfect image is way more than most people need. And it's actually way more real, actual detail than what you get from a 20-24MP camera.

One should not confuse technical resolution with image detail - since resolution is almost always linearly defined (along one linear axis) and most often not color-accurate. And the resolution-to-detail ratio in a well-engineered Bayer-based camera is about sqrt(2). You have to downsample to about half the MP of the original raw to get reasonably accurate pixel-level detail, even if the RESOLUTION of the raw might approach line-perfect.
Line-perfect = a 5000-pixel-wide sensor that can resolve 5000 vertical lines - that is most cameras today.
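
For the record, the downsample figures quoted above follow directly from the sqrt(2) rule of thumb (a quick check using the 5D2's ~21MP raw):

Code:
% Back-of-envelope check of the figures above.
raw_mp    = 21.0;                   % 5D2 raw resolution, MP
detail_mp = raw_mp / sqrt(2)^2      % ~10.5 MP of "pixel-accurate" detail
mp_1to1p5 = raw_mp / 1.5^2          % ~9.3 MP after a 1:1.5 linear downsample
mp_1to2   = raw_mp / 2.0^2          % ~5.3 MP after a 1:2.0 linear downsample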
 
Upvote 0

jrista
TheSuede said:

One should not confuse technical resolution with image detail - since resolution is almost always linearly defined (along one linear axis) and most often not color-accurate. And the resolution-to-detail ratio in a well-engineered Bayer-based camera is about sqrt(2). You have to downsample to about half the MP of the original raw to get reasonably accurate pixel-level detail, even if the RESOLUTION of the raw might approach line-perfect.
Line-perfect = a 5000-pixel-wide sensor that can resolve 5000 vertical lines - that is most cameras today.

I am not so sure about the accurate pixel-level detail comment. I might extend that to "color-accurate pixel-level detail", given the spatial resolution of red and blue is half that of green. When I combine my 7D with the new 500 II, I get some truly amazing detail at 100% crop:

[Image: VC3kIDp.jpg - 100% crop of a finch, 7D + 500mm f/4L II]

In the crop of the finch above, the per-pixel detail is exquisite. All the detail that is supposed to be there is there; it is accurately reproduced, it is very sharp (that image has had zero post-process sharpening applied), contrast is great, and noise performance is quite good. I've always liked my 7D, but there were definitely times when IQ disappointed. Until I popped on the 500mm f/4 L II. As I was saying originally in the conversation that started this thread...there is absolutely no alternative to high-quality glass, and I think many of the complaints we have about Bayer sensor technology actually boil down to bad glass.

Which, ironically, is NOT a bad thing. I completely agree with you, that the only way to truly preserve all the detail our lenses resolve is to use a sensor that far outresolves the lens and downsample....however that takes us right back to the point we started with: People bitch when their pixel-peeping shows up "soft" detail. (Oh, I so can't wait for the days of 300ppi desktop computer screens...then people won't even be able to SEE a pixel, let alone complain about IQ at pixel level. :))
 
Upvote 0

jrista
hjulenissen said:
TheSuede said:
Deconvolution in the Bayer domain (before interpolation, "raw conversion") is actually counterproductive, and totally destructive to the underlying information.

The raw Bayer image is not continuous, it is sparsely sampled. This makes deconvolution impossible, even in continuous hue object areas containing "just" brightness changes. If the base signal is sparsely sampled and the underlying material is higher resolution than the sampling, you get an under-determined system (http://en.wikipedia.org/wiki/Underdetermined_system). This is numerically unstable, and hence = impossible to deconvolve.
There is no doubt that the CFA introduces uncertainty compared to sampling all colors at each site. I believe I was thinking about cases where we have some prior knowledge, or where an algorithm or photoshop-dude can make correct guesses afterwards. Perhaps what I am suggesting is that debayer and deconvolution ideally should be done jointly.
[Image: cfafft.jpg - 2-D DFT of a CFA (Bayer) sampled image, showing the separated channel spectra]

If the scene is achromatic, then "demosaic" should amount to something like a global WB, and filtering might destroy recoverable detail - the CFA in itself does not reduce the amount of spatial information compared to a filterless sensor. If the channels are nicely separated in the 2-D DFT, you want to follow those segments when deconvolving?

-h

On a per-pixel level, a Bayer sensor is only receiving 30-40% of the information an achromatic sensor is getting. That implies a LOSS of information is occurring due to the filtering of the CFA. You have spatial information, for the same number of samples over the same area...but the information in each sample is anemic compared to what you get with an achromatic sensor. That is the very reason we need to demosaic and interpolate information at all...that can't be a meaningless factor.
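
For anyone who wants to reproduce the kind of spectrum shown in cfafft.jpg, here is a rough sketch (it assumes an RGGB layout and whatever RGB test image you have at hand):

Code:
% Build a Bayer mosaic from an RGB image and look at its 2-D DFT. The CFA
% modulates the chroma information toward the edges/corners of the spectrum,
% which is the channel separation discussed above.
img = double(imread('peppers.png')) / 255;         % any RGB test image will do
[H, W, ~] = size(img);
[cx, cy] = meshgrid(0:W-1, 0:H-1);
R = (mod(cy,2)==0) & (mod(cx,2)==0);               % assumed RGGB layout
B = (mod(cy,2)==1) & (mod(cx,2)==1);
G = ~(R | B);
mosaic = img(:,:,1).*R + img(:,:,2).*G + img(:,:,3).*B;   % raw-like CFA image
imagesc(log(1 + abs(fftshift(fft2(mosaic))))); axis image; colormap gray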
 
Upvote 0
This is something different.

Deconvolution (with a smooth kernel) is a small denominator problem. In the Fourier domain, you divide by something very small for large frequencies (like a Gaussian). Noise and discretization errors make it impossible for large frequencies. As simple as that. Again, textbook material, nothing to discuss really.

Note, you do not even need to have zeros. Also, deconvolution is unstable before you sample (which makes it worse), so there is no need for discrete models.
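
A one-dimensional toy version of that division (numbers chosen arbitrarily) makes the blow-up easy to see:

Code:
% Naive Fourier-domain deconvolution of a Gaussian blur: dividing by an
% almost-zero |K| at high frequencies amplifies even invisible noise.
N = 1024; x = (0:N-1)';
f = double(x > 300 & x < 700);                         % a simple "scene": a box
sigma = 5;
k = exp(-min(x, N-x).^2 / (2*sigma^2)); k = k/sum(k);  % circular Gaussian PSF
g = real(ifft(fft(f).*fft(k))) + 1e-6*randn(N,1);      % blur + essentially invisible noise
f_naive = real(ifft(fft(g)./fft(k)));                  % "just divide"
[max(abs(f)) max(abs(f_naive))]                        % the estimate is swamped by noise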

hjulenissen said:
Plamen said:
hjulenissen said:
If you do linear convolution, stability can be guaranteed (e.g. by using a finite impulse response).
I am not sure what that means. Let me make it more precise: convolution with a smooth (and fast decaying) function is a textbook example of an unstable transform. It is like 2+2=4.

EDIT: I mean, inverting it is unstable.
http://en.wikipedia.org/wiki/Z-transform
http://en.wikipedia.org/wiki/Finite_impulse_response
http://en.wikipedia.org/wiki/Autoregressive_model#Yule-Walker_equations
http://dsp.rice.edu/sites/dsp.rice.edu/files/md/lec15.pdf
"Inverse systems
Many signal processing problems can be interpreted as trying to undo the action of some system. For example, echo cancellation, channel equalization, etc. The problem is illustrated below.
If our goal is to design a system HI that reverses the action of H, then we clearly need H(z)HI(z) = 1, i.e. HI(z) = 1/H(z). Thus, the zeros of H(z) become poles of HI(z), and the poles of H(z) become zeros of HI(z). Recall that H(z) being stable and causal implies that all poles are inside the unit circle. If we want H(z) to have a stable, causal inverse HI(z), then we must have all zeros inside the unit circle (since they become the poles of HI(z)). Combining these, H(z) is stable and causal with a stable and causal inverse if and only if all poles and zeros of H(z) are inside the unit circle. This type of system is called a minimum phase system."


For images you usually want a linear-phase system. A pole can be approximated by a large number of zeros. For image processing, a large delay may not be a problem (it is easily compensated), so a system function of "1" can be replaced by z^(-D).

MATLAB example:
Code:
%% setup an all-pole filter
order = 2;
a = [1 -0.5 0.1];
%% generate a vector of normally distributed noise
n = randn(1024,1);
%% apply the (allpole) filter to the noise
x = filter(1,a,n);
%% apply the inverse (allzero) filter
n_hat = filter(a,1,x);
%% see what happened
sum(abs(n - n_hat))
sum(abs(n))
%% compute the first 10 samples of each filter's impulse response (impz is from the Signal Processing Toolbox)
[h1,t1] = impz(1,a, 10);
[h2,t2] = impz(a,1, 10);
Output:
>>2.3512e-14
>>815.4913
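
A follow-on sketch of the z^(-D) idea (again with arbitrary values): when the blur is not minimum phase its exact inverse is unstable or non-causal, but a delayed FIR approximation can still get very close.

Code:
% Least-squares FIR approximation of the inverse of a non-minimum-phase
% FIR blur, targeting a delayed impulse z^(-D) instead of "1".
b = [0.25 1 0.25]; b = b/sum(b);        % symmetric blur; one zero outside the unit circle
L = 129; D = 64;                        % inverse-filter length and allowed delay
C = toeplitz([b(:); zeros(L-1,1)], [b(1) zeros(1,L-1)]);   % convolution matrix of b
d = zeros(L+numel(b)-1, 1); d(D+1) = 1; % target overall response: a delayed impulse
g = C \ d;                              % least-squares FIR "inverse"
max(abs(conv(b(:), g) - d))             % residual: conv(b,g) is very nearly z^(-D)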


-h
 
Upvote 0

jrista
Plamen said:
Deconvolution (with a smooth kernel) is a small denominator problem. In the Fourier domain, you divide by something very small for large frequencies (like a Gaussian). Noise and discretization errors make it impossible for large frequencies. As simple as that. Again, textbook material, nothing to discuss really.

Given that there are numerous, very effective deconvolution algorithms that operate both on the RAW bayer data as well as demosaiced RGB data, using algorithms in any number of domains including Fourier, which produce excellent results for denoising, debanding, deblurring, sharpening, etc., it would stand to reason that these problems are NOT "impossible" problems to solve.
 
Upvote 0
jrista said:
Plamen said:
Deconvolution (with a smooth kernel) is a small denominator problem. In the Fourier domain, you divide by something very small for large frequencies (like a Gaussian). Noise and discretization errors make it impossible for large frequencies. As simple as that. Again, textbook material, nothing to discuss really.

Given that there are numerous, very effective deconvolution algorithms that operate both on the RAW bayer data as well as demosaiced RGB data, using algorithms in any number of domains including Fourier, which produce excellent results for denoising, debanding, deblurring, sharpening, etc., it would stand to reason that these problems are NOT "impossible" problems to solve.

If you say so... I am just telling you how it is. What you wrote is not related to my post.

Let us try it. I will post a blurred landscape, you will restore the detail. Deal?
 
Upvote 0
hjulenissen said:
Sampling is a nonlinear process, and having denser sampling would probably make it easier to treat it as a linear problem (in lower spatial frequencies), if it comes at no other cost (e.g. Q.E.). Here's to future cameras that "outresolve" their lenses.

I will try one more time. There is no sampling, no pixels. There is a continuous image projected on a piece of paper, blurred by a defocused lens. Sample it, or study it under an electron microscope, whatever. "High enough" frequencies cannot be restored.

You are stuck in the discrete model. Forget about it. The loss happens BEFORE the light even hits the sensor.
 
Upvote 0
hjulenissen said:
TheSuede said:
Well, even if you can get the convolution PROCESS stable (i.e numerically stable) there's no guarantee that the result isn't resonant. I'm guessing this is where you're talking right past each other.
Not sure that I understand what you are saying. If you apply the exact inverse to a noiseless signal, I would not expect ringing in the result. If the inverse is only approximate, it really depends on the measurement and the approximation. If you have a large, "hairy" estimated PSF, you might like its approximate inverse to be too compact and smooth, rather than too large and irregular. If the SNR is really poor, there is perhaps no point in doing deconvolution (good denoising may be more important)

Very deep spectral nulls are a problem that I acknowledged earlier. For anyone but (perhaps) textbook authors, they spell locally poor SNR. The OLPF tends to add a shifted version of the signal to itself, causing a comb-filter of (in principle) 0 gain.

Is this a problem in practice? That depends. Is the first zero within the band that we want to deconvolve? (I would think that they place the first zero close to Nyquist to make a crude lowpass filter, so no.) Is the total PSF dominated by the OLPF, or do other contributors cause it to "blur" into something more Gaussian-like?

-h
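
The comb-filter claim is easy to check with a two-spot OLPF model (a sketch; the shift of one pixel pitch is an assumption):

Code:
% A beam-splitting OLPF that adds a copy of the signal shifted by s pixels
% has (along one axis) |H(w)| = |cos(w*s/2)|, with a true null at w = pi/s.
s = 1;                                          % assumed shift: one pixel pitch
h = [0.5 zeros(1, s-1) 0.5];                    % impulse response of the split
w = linspace(0, pi, 512)';
Hmag = abs(exp(-1i*w*(0:numel(h)-1)) * h(:));   % magnitude response on [0, pi]
[min(Hmag) w(Hmag == min(Hmag))/pi]             % the null lands at Nyquist (w = pi) for s = 1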

Actually, Mr Nyquist was recently taken out behind a barn and shot... The main suspect is Mr Bayer. He made the linear sampling theorem totally irrelevant at pixel scale magnifications by adding sparse sampling into the image data. :)

To make this even more clear, imagine a strongly saturated color detail in an image. It may be a thin hairline, or a single point - doesn't matter. If that detail is smaller than two pixel widths, the information it leaves on the sensor is totally dependent on spatial relations, where the detail lands on the CFA pattern of the sensor.

**If the detail is centered on a "void", a different CFA color pixel, it will leave no trace in the raw file. We literally won't know it was there, just by looking at the raw file.
**If the detail is centered on a same-color pixel in the CFA on the other hand, it will have 100% effect.

The reason you cannot apply deconvolution to raw data (and actually not to interpolated data with low statistical reliability either...) is rather easy to see... Look at the attached image. It is the red channel result of letting the lens project pattern "A" onto the Bayer red pattern "B".

Can you, just by looking at result "C" get back to image "A"? Deconvolution is of no help here, since the data is sparsely sampled - 75% of the data is missing.
[Image: NondeterminedConvolution.png - test pattern "A", the Bayer red sites "B", and the sparse red-channel result "C"]
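
The two "void" cases above can be written out in a few lines (a toy sketch, assuming an RGGB layout):

Code:
% A one-photosite-sized saturated red detail either lands on a red CFA site
% or it does not; the raw red plane records all of it or none of it.
[cx, cy] = meshgrid(0:7, 0:7);
red_sites = (mod(cy,2)==0) & (mod(cx,2)==0);    % assumed RGGB layout
miss = zeros(8); miss(3,4) = 1;                 % detail centered on a green site
hit  = zeros(8); hit(3,3)  = 1;                 % detail centered on a red site
raw_red_miss = miss .* red_sites;               % all zeros: no trace in the raw file
raw_red_hit  = hit  .* red_sites;               % full response at one photosite
[nnz(raw_red_miss) nnz(raw_red_hit)]            % -> [0 1]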
 
Upvote 0
hjulenissen said:
Plamen said:
I will try one more time. There is no sampling, no pixels. There is a continuous image projected on a piece of paper, blurred by a defocused lens. Sample it, or study it under an electron microscope, whatever. "High enough" frequencies cannot be restored.

You are stuck in the discrete model. Forget about it. The loss happens BEFORE the light even hits the sensor.
While it is interesting chatting about what we would do if we had access to a continuous-space, general convolver that we could insert in front of our camera sensors, we don't have one, do we?
That is the problem, we do not. Blurring happens before the discretization. Even if you had your ideal sensor, you still have a problem.
What we have is a discrete sampling sensor, and a very discrete dsp that can do operations on discrete data.

I have no idea what you mean by "high enough" frequency. Either SNR or Nyquist should put a limit on recoverable high-frequency components. The question, I believe, is what can be done up to that point, and how it will look to humans.

Like I said, I believe that I have proven you wrong in one instance. As you offer only claims, not references, how are we to believe your other claims?

I already said what "high enough" means. I can formulate it precisely but you will not be able to understand it. Yes, noise has something to do with it, but also how fast the Fourier transform decays.

I do not make "claims". If you have the background, you will understand what I mean. If not, nothing helps.

BTW, I do research in related areas and know what I am talking about.
 
Upvote 0

jrista
Plamen said:
I already said what "high enough" means. I can formulate it precisely but you will not be able to understand it. Yes, noise has something to do with it, but also how fast the Fourier transform decays.

I do not make "claims". If you have the background, you will understand what I mean. If not, nothing helps.

BTW, I do research in related areas and know what I am talking about.

Simply proclaiming that you are smarter than everyone else doesn't help the discussion. It's cheap and childish. Why not enlighten us in a way that doesn't require a Ph.D. to understand what you are attempting to explain, so the discussion can continue? I don't know as much as the three of you, however I would like to learn. Hjulenissen and TheSuede are very helpful in their part of the debate...however you, in your more recent posts, have just become snide, snarky and egotistical. Grow up a little and contribute to the discussion, rather than try to shut it down.
 
Upvote 0

I did already. And I was not the first to try to shut it down. OK, last attempt.

The image projected on the sensor is blurred with some kernel (a.k.a. PSF), most often smooth (has many derivatives). One notable exception is motion blur when the kernel can be an arc of a curve - things change there. The blur is modeled by a convolution. Note: no pixels here. Imagine a sensor painted over with smooth paint.

Take the Fourier transform of the convolution. You get a product of FTs (next paragraph). Now, since the PSF is smooth, say something like a Gaussian, its FT decays rapidly. The effect of the blur is to multiply the high frequencies by a function which is very small there. You could just divide to get a reconstruction - right? But you divide something small + noise/errors by something small again. This is the well-known small denominator problem, google it. Beyond some frequency, determined by the noise level and by how fast the FT of the PSF decays, you have more noise than signal. That's it, basically. The usual techniques basically cut near that frequency in one way or another.
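
A rough sense of where that cutoff lands (my numbers, assuming a Gaussian PSF):

Code:
% For a Gaussian PSF, |K(w)| ~ exp(-sigma^2*w^2/2), so the usable band ends
% roughly where that factor drops to the noise-to-signal ratio.
sigma = 5;                                  % assumed PSF width, in pixel pitches
nsr   = 1e-3;                               % assumed noise-to-signal ratio
w_max = sqrt(2*log(1/nsr)) / sigma          % ~0.74 rad per pixel pitch (Nyquist would be pi)
% A wider PSF or a higher noise level shrinks this band further; nothing much
% beyond w_max can be recovered reliably, whatever the algorithm.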

The errors that I mentioned can have many causes. For example, not knowing the exact PSF or errors in its approximation/discretization, even if we somehow knew it. Then usual noise, etc.

This class of problems is known as ill-posed problems. There are people spending their lives and careers on them. There are journals devoted to them. Deconvolution is perhaps the simplest example; it is equivalent to the backward solution of the heat equation (for Gaussian kernels).

Here is a reference from a math paper, see the example in the middle of the page. I know the author, he is a very well-respected specialist. Do not expect to read more there than I told you, because this is such a simple problem that it can only serve as an introductory example to the theory.

Again - no need to invoke sensors and pixels at all. They can only make things worse by introducing more errors.

The main mistake so many people here make - they are so deeply "spoiled" by numerics and discrete models that they automatically assume that we have a discrete convolution. Well, we do not. We sample an image which is already convolved. This makes the problem even more ill-posed.
 
Upvote 0

jrista

Thanks for the links. Let me read before I respond.
 
Upvote 0
hjulenissen said:
Everything that you say here has been mentioned (in some form) several times in the thread, I believe. If that is the ground-breaking flaw in prior posts that you wanted to point out, you must not have seen those posts?
Why did you object to my previous posts then? So all this has been mentioned before, I did mention it again, but you felt the need to deny it?
I think that the main mistake you are making is using a patronizing tone without actually reading or comprehending what others are writing. That makes it difficult to turn this discussion into the interesting discussion it should have been. If you really are a scientist working on deconvolution, I am sure that we could learn something from you, but learning is a lot easier when words are chosen more wisely than yours.

The discussion was civil until you decided to proclaim superiority and become patronizing. Read your own posts again.

I am done with this.
 
Upvote 0
hjulenissen said:
Now, if all thread contributors agree that noise and hard-to-characterize PSF kernels are the main practical obstacles to deconvolution (along with the sampling process and color filtering), this thread can be more valuable to the reader.

If that's what you're going for, perhaps a definition of all acronyms used would be of use. This thread rapidly went from fairly straightforward to deeply convoluted (pun intended) and jargon-heavy.

I think I have an approximate understanding of the current line of discussion; however, I'm not sure how it relates back to the OP (it seems to have shifted from whether it makes sense to have a higher spatial resolution sensor in a diffraction-limited case to whether one can always algorithmically fix images which may have a host of problems).
 
Upvote 0

jrista
hjulenissen said:
Plamen said:
Why did you object my previous posts then? So all this has been mentioned before, I did mention it again, but you felt the need to deny it?
I objected to your claim that convolution with a smooth (and fast decaying) function did not have a stable inverse. I believe that I have shown an example of the opposite. If what you tried to say was that SNR and unknown PSF is a problem, I wish you would have said so.
http://www.canonrumors.com/forum/index.php?topic=13249.msg239997#msg239997

Now, if all thread contributors agree that noise and hard-to-characterize PSF kernels are the main practical obstacles to deconvolution (along with the sampling process and color filtering), this thread can be more valuable to the reader.

I think Plamen's point is that noise and PSF kernels ARE hard to characterize. We can't know the effect atmosphere has on the image the lens is resolving. Neither can we truly know enough about the characteristics of noise (of which there are many varieties, not just photon shot noise that follows a Poisson distribution). We can't know how the imperfections in the elements of a lens affect the PSF, etc.

According to one of the articles he linked, the notion of an ill-posed problem fundamentally comes down to how much knowledge we "can" have about all the potential sources of error, and to the fact that small amounts of error in the source data can result in large amounts of error in the final solution. Theoretically, assuming we had the capacity for infinite knowledge, along with the capability of infinite measurement and infinite processing power, I don't see why the notion of an ill-posed problem would even exist. However, given the simple fact that we can't know all the factors that may lead to error in the fully convolved image projected by a lens (even before it is resolved and converted into a signal by an imaging sensor), we cannot PERFECTLY deconvolve that image.

To Hjulenissen's point, I don't think anyone here is actually claiming we can perfectly deconvolve any image. The argument is that we can use deconvolution to closely approximate, to a level "good enough", the original image such that it satisfies viewers....in most circumstances. Can we perfectly and completely deconvolve a totally blurred image? No. With further research, the gathering of further knowledge, better estimations, more advanced algorithms, and more/faster compute cycles, I think we could deconvolve an image that is unusably blurred into something that is more usable, if not completely usable. That image would not be 100% exactly what actually existed in the real-world 3D scene...but it could be good enough. There will always be limits to how far we can push deconvolution...beyond a certain degree, the error in the solution to any of the problems we try to solve with deconvolution will eventually become too large to be acceptable.

Finally, to the original point that started this tangent of discussion...why higher-resolution sensors are more valuable. TheSuede pointed out that because of the nature of a Bayer sensor, where sampling is sparse, that only poses a problem for the final output so long as the highest input frequencies are as high as or higher than the sampling frequency. When the sampling frequency outresolves the highest spatial frequencies of the image, preferably by a factor of 2x or more, the potential for rogue error introduced by the sampling process itself (i.e. Moire) approaches zero. That is basically what I was stating with my original post, and ignoring the potential that post-process deconvolution may offer, assuming we eventually do end up with sensors that outresolve the lenses by more than a factor of two (i.e. the system is always operating in a diffraction- or aberration-limited state), image quality should be BETTER than when the system has the potential to operate at a state of normalized frequencies. Additionally, a sensor that outresolves, or oversamples, should make it easier to deconvolve....sharpen, denoise, deband, deblur, correct defocus, etc.
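
The oversampling point is easy to see with a small aliasing check (a toy sketch; the frequencies are chosen arbitrarily):

Code:
% A 0.35 cycles/pixel detail is safely below Nyquist on the dense sensor,
% but aliases to a false ~0.30 cycles/pixel detail when pixel density is halved.
fx = 0.35; x = 0:255;
fine   = cos(2*pi*fx*x);                    % dense sampling (Nyquist = 0.5 cy/px)
coarse = fine(1:2:end);                     % half the pixel density
F1 = abs(fft(fine));   F1 = F1(1:end/2); [~, i1] = max(F1);
F2 = abs(fft(coarse)); F2 = F2(1:end/2); [~, i2] = max(F2);
[(i1-1)/numel(fine) (i2-1)/numel(coarse)]   % ~[0.35 0.30]: the coarse sensor reports false detail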
 
Upvote 0

jrista
hjulenissen said:
jrista said:
To Hjulenissen's point, I don't think anyone here is actually claiming we can perfectly deconvolve any image. The argument is that we can use deconvolution to closely approximate, to a level "good enough", the original image such that it satisfies viewers....in most circumstances. Can we perfectly and completely deconvolve a totally blurred image? No. With further research, the gathering of further knowledge, better estimations, more advanced algorithms, and more/faster compute cycles, I think we could deconvolve an image that is unusably blurred into something that is more usable, if not completely usable. That image would not be 100% exactly what actually existed in the real-world 3D scene...but it could be good enough. There will always be limits to how far we can push deconvolution...beyond a certain degree, the error in the solution to any of the problems we try to solve with deconvolution will eventually become too large to be acceptable.
Is it not fundamentally a problem of information?

If you have perfect knowledge of the "scrambling" carried out by a _pure LTI process_, you can in principle invert it (possibly with a delay that can be compensated) or as closely as you care, and in practice you usually can come close.

Yes, it is fundamentally a problem of information. This is where everyone is on the same page, I just think the page is interpreted a bit differently. Plamen's point is that we simply can't have all the information necessary to deconvolve an image beyond certain limits, and that those limits are fairly restrictive. We can't have perfect knowledge, and I don't think that any of the processes that convolve are "pure" in any sense. Hence the notion that the problem is ill-posed.


hjulenissen said:
Even with perfect knowledge of the statistics of an (e.g additive) noise corruption, you cannot usually recreate the original data perfectly. You would need deterministic knowledge of the actual noise sequence, something that is unrealistic (mother nature tends to not tell us beforehand how the dice will turn out).

Aye, again to Plamen's point...because we cannot know deterministically what the actual noise is, the problem is ill-posed. That does not mean we cannot solve the problem, it just means we cannot arrive at a perfect solution. We have to approximate, cheat, hack, fabricate, etc. to get a reasonable result...which again is subject to certain limitations. Even with a lot more knowledge and information than we have today, it is unlikely a completely blurred image from a totally defocused lens could ever be restored to artistic usefulness. We might be able to restore such an image well enough that it could be used for, say, a police investigation of a stolen car. Conversely, we could probably never fully restore the image to a degree that it would satisfy the need for near-perfect reproduction of a scene that could be printed large.

hjulenissen said:
Even with good estimation of the corrupting PSF, practical systems tends to have variable PSF. If you try to do 30dB of gain at an assumed spectral null, and that null have moved slightly so the corruption gain is no longer -30dB but -5dB, you are in trouble.

Real systems have both variable/unknown PSF and noise. Simple theory and backs-of-envelopes are nice to have, but good and robust solutions might be expected to have all kinds of inelegant band-aids and perceptually motivated hacks to make it actually work.

Completely agree here. All we need is a result good enough to trick the mind into thinking we have what we want. For a policeman investigating a car theft, that point may be reached when he can read a license plate from a blurry photo. For a fine art nature photographer, that point could be reached when the expected detail resolves...even if some of it is fabricated.

hjulenissen said:
jrista said:
assuming we eventually do end up with sensors that outresolve the lenses by more than a factor of two (i.e. the system is always operating in a diffraction- or aberration-limited state), image quality should be BETTER than when the system has the potential to operate at a state of normalized frequencies. Additionally, a sensor that outresolves, or oversamples, should make it easier to deconvolve....sharpen, denoise, deband, deblur, correct defocus, etc.
I agree. As density approaches infinity, the discretely sampled sensor approaches an ideal "analog" sensor. As we move towards a single/few-bit photon-counting device, the information present in a raw file would seem to be somewhat different. Perhaps we simple linear-system people would have to educate ourselves in quantum physics to understand how the information should be interpreted?

Yet, people seem very fixated on questions like "when will Nikon deliver lenses that outresolve the D800 36 MP FF sensor?" Perhaps it is only human to long for a flat passband up to the Nyquist frequency, no matter how big you would have to print in order to appreciate it?

I think people just want sharp results straight out of the camera. It is one thing to understand the value behind a "soft" image that has been highly oversampled, with the expectation that you will always downsample for any kind of output...including print.

Most people don't think that way. They look at the pixels they have and think: This doesn't look sharp! That's pretty much all it really boils down to, and probably all it will ever boil down to. :p

hjulenissen said:
http://ericfossum.com/Presentations/2011%20December%20Photons%20to%20Bits%20and%20Beyond%20r7%20web.pdf (slide 39 onwards)

Quanta Imaging Sensor

Does anyone understand why the 3D convolution of the "jots" in X,Y,t is claimed to be a non-linear convolution?

That link didn't load. However, given the mention of "jots", it reminds me of this paper:

Gigapixel Digital Film Sensor Proposal
 
Upvote 0