
Show Posts



Messages - Plamen

1
Everything that you say here has been mentioned (in some form) several times in the thread, I believe. If that is the ground-breaking flaw in prior posts that you wanted to point out, you must not have seen those posts?
Why did you object to my previous posts, then? So all this has been mentioned before, I mentioned it again, and yet you felt the need to deny it?
Quote
I think that the main mistake you are making is using a patronizing tone without actually reading or comprehending what they are writing. That makes it difficult to turn this discussion into the interesting one it should have been. If you really are a scientist working on deconvolution, I am sure that we could learn something from you, but learning is a lot easier when words are chosen more wisely than yours.

The discussion was civil until you decided to proclaim superiority and become patronizing. Read your own posts again.

I am done with this.

2
I forgot to say which example in the last link: Example 3.18.

3
Reviews / Re: Review - Canon EF 85 f/1.2L II
« on: March 05, 2013, 09:45:26 PM »
5DII should be left out of that equation. WORST. AUTOFOCUS. EVER.

Hey, the center AF point of the 5DII was pretty good for still subjects. :P
Two shots with the "worst AF ever" body and the 85LII: ;)



4
Reviews / Re: Review - Canon EF 85 f/1.2L II
« on: March 05, 2013, 09:27:15 PM »
Have you had the chance to compare the Canon to the Sigma or other 85mm primes?

I have, but not at the same time. The Sigma AF was very erratic and distance-dependent. It is highly subjective, but I liked the bokeh of the Canon better. BTW, I did not buy either one.

5
Simply proclaiming that you are smarter than everyone else doesn't help the discussion. It's cheap and childish. Why not enlighten us in a way that doesn't require a Ph.D. to understand what you are attempting to explain, so the discussion can continue? I don't know as much as the three of you; however, I would like to learn. Hjulenissen and TheSuede are very helpful in their part of the debate... however, you, in your more recent posts, have just become snide, snarky and egotistical. Grow up a little and contribute to the discussion, rather than try to shut it down.

I did already. And I was not the first to try to shut it down. OK, last attempt.

The image projected on the sensor is blurred with some kernel (a.k.a. the PSF), most often a smooth one (it has many derivatives). One notable exception is motion blur, where the kernel can be an arc of a curve - things change there. The blur is modeled by a convolution. Note: no pixels here. Imagine a sensor painted over with smooth paint.
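
In symbols (my notation, just restating the model above: $f$ is the ideal continuous image, $k$ the PSF, $g$ what lands on the sensor):

$$ g(x) = (k * f)(x) = \int k(x-y)\, f(y)\, dy, \qquad \hat{g}(\xi) = \hat{k}(\xi)\,\hat{f}(\xi). $$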

Take the Fourier transform of the convolution: you get a product of FTs (see the formula above). Now, since the PSF is smooth, say something like a Gaussian, its FT decays rapidly. The effect of the blur is to multiply the high frequencies by a function which is very small there. You could just divide to get a reconstruction, right? But then you divide something small + noise/errors by something small again. This is the well-known small denominator problem; google it. Beyond some frequency, determined by the noise level and by how fast the FT of the PSF decays, you have more noise than signal. That's it, basically. The usual techniques cut near that frequency in one way or another.
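
Here is a minimal 1-D sketch of that argument (my own illustration, in base MATLAB to match the example further down; the scene, PSF width and noise level are all made up):

Code:
%% naive deconvolution: dividing by the tiny FT of a smooth PSF
N = 1024;
x = zeros(N,1); x(200:220) = 1; x(500) = 2;   % a toy "scene": an edge and a point
t = (-N/2:N/2-1)';
k = exp(-t.^2/(2*4^2)); k = k/sum(k);         % Gaussian PSF, sigma = 4 samples
K = fft(ifftshift(k));                        % FT of the (centered) PSF
y = real(ifft(fft(x).*K)) + 1e-3*randn(N,1);  % blur, then add a little noise
x_naive = real(ifft(fft(y)./K));              % divide small + noise by small
norm(x_naive - x)/norm(x)                     % huge relative error: noise wins
semilogy(abs(fftshift(K)))                    % the small denominator itself

At half the sampling rate |K| is about 1e-34 here, so the noise in those frequency bins gets amplified by about 1e34: beyond the frequency where |K| sinks under the noise, the "reconstruction" is pure amplified noise.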

The errors that I mentioned can have many causes: for example, not knowing the exact PSF, or errors in its approximation/discretization even if we somehow knew it. Then the usual noise, etc.

This class of problems is known as ill-posed problems (in Hadamard's sense: a solution should exist, be unique, and depend continuously on the data; deconvolution fails the last condition). There are people spending their lives and careers on them. There are journals devoted to them. Deconvolution is perhaps the simplest example; it is equivalent to the backward solution of the heat equation (for Gaussian kernels).

Here is a reference from a math paper; see the example in the middle of the page. I know the author; he is a very well respected specialist. Do not expect to read more there than I told you, because this is such a simple problem that it can only serve as an introductory example to the theory.

Again - no need to invoke sensors and pixels at all. They can only make things worse by introducing more errors.

The main mistake so many people here make: they are so deeply "spoiled" by numerics and discrete models that they automatically assume that we have a discrete convolution. Well, we do not. We sample an image which is already convolved. This makes the problem even more ill-posed.

6
I will try one more time. There is no sampling, no pixels. There is a continuous image projected on a piece of paper, blurred by a defocused lens. Sample it, or study it under an electron microscope, whatever. "High enough" frequencies cannot be restored.

You are stuck in the discrete model. Forget about it. The loss happens BEFORE the light even hits the sensor.
While it is interesting chatting about what we would do if we had access to a continuous-space, general convolver that we could insert in front of our camera sensors, we don't have one, do we? What we have is a discrete sampling sensor, and a very discrete DSP that can do operations on discrete data.

That is the problem - we do not. Blurring happens before the discretization. Even if you had your ideal sensor, you still have a problem.

I have no idea what you mean by "high enough" frequency. Either SNR or Nyquist should put a limit on recoverable high-frequency components. The question, I believe, is what can be done up to that point, and how it will look to humans.

Like I said, I believe that I have proven you wrong in one instance. As you offer only claims, not references, how are we to believe your other claims?

I already said what "high enough" means. I can formulate it precisely but you will not be able to understand it. Yes, noise has something to do with it, but also how fast the Fourier transform decays.

I do not make "claims". If you have the background, you will understand what I mean. If not, nothing helps.

BTW, I do research in related areas and know what I am talking about.

7

Sampling is a nonlinear process, and having denser sampling would probably make it easier to treat this as a linear problem (in lower spatial frequencies), if it comes at no other cost (e.g. Q.E.). Here's to future cameras that "outresolve" their lenses.

I will try one more time. There is no sampling, no pixels. There is a continuous image projected on a piece of paper, blurred by a defocused lens. Sample it, or study it under an electron microscope, whatever. "High enough" frequencies cannot be restored.

You are stuck in the discrete model. Forget about it. The loss happens BEFORE the light even hits the sensor.

8
Deconvolution (with a smooth kernel) is a small denominator problem. In the Fourier domain, you divide by something which is very small at large frequencies (for a Gaussian kernel, for example). Noise and discretization errors make it impossible at large frequencies. As simple as that. Again, textbook material, nothing to discuss really.

Given that there are numerous, very effective deconvolution algorithms that operate both on the RAW Bayer data and on demosaiced RGB data, using algorithms in any number of domains including Fourier, which produce excellent results for denoising, debanding, deblurring, sharpening, etc., it would stand to reason that these problems are NOT "impossible" problems to solve.

If you say so... I am just telling you how it is. What you wrote is not related to my post.

Let us try it. I will post a blurred landscape, you will restore the detail. Deal?

9
This is something different.

Deconvolution (with a smooth kernel) is a small denominator problem. In the Fourier domain, you divide by something which is very small at large frequencies (for a Gaussian kernel, for example). Noise and discretization errors make it impossible at large frequencies. As simple as that. Again, textbook material, nothing to discuss really.

Note, you do not even need to have zeros. Also, deconvolution is unstable before you sample (which makes it worse), so there is no need for discrete models.

If you do linear convolution, stability can be guaranteed (e.g. by using a finite impulse response).
I am not sure what that means. Let me make it more precise: convolution with a smooth (and fast-decaying) function is a textbook example of an unstable transform. It is like 2+2=4.

EDIT: I mean, inverting it is unstable.
http://en.wikipedia.org/wiki/Z-transform
http://en.wikipedia.org/wiki/Finite_impulse_response
http://en.wikipedia.org/wiki/Autoregressive_model#Yule-Walker_equations
http://dsp.rice.edu/sites/dsp.rice.edu/files/md/lec15.pdf
"Inverse systems
Many signal processing problems can be interpreted as trying to undo the action of some system. For example, echo cancellation, channel equalization, etc. The problem is illustrated below.
If our goal is to design a system HI that reverses the action of H, then we clearly need H(z)HI(z) = 1. In the case where H(z) is rational, H(z) = B(z)/A(z), this means HI(z) = A(z)/B(z). Thus, the zeros of H(z) become poles of HI(z), and the poles of H(z) become zeros of HI(z). Recall that H(z) being stable and causal implies that all poles are inside the unit circle. If we want H(z) to have a stable, causal inverse HI(z), then we must have all zeros inside the unit circle (since they become the poles of HI(z)). Combining these, H(z) is stable and causal with a stable and causal inverse if and only if all poles and zeros of H(z) are inside the unit circle. This type of system is called a minimum phase system."


For images you usually want a linear-phase system. A pole can be approximated by a large number of zeros. For image processing, a large delay may not be a problem (it is easily compensated), so a system function of "1" can be replaced by z^(-D) (see the sketch after the example below).

MATLAB example:
Code:
%% set up a 2nd-order all-pole (IIR) filter
a = [1 -0.5 0.1];     % both poles are inside the unit circle
%% generate a vector of normally distributed noise
n = randn(1024,1);
%% apply the (all-pole) filter to the noise
x = filter(1,a,n);
%% apply the inverse (all-zero, FIR) filter -- exact, since the inverse of 1/A(z) is A(z)
n_hat = filter(a,1,x);
%% see what happened: reconstruction error vs. signal scale
sum(abs(n - n_hat))
sum(abs(n))
%% plot filter time-domain impulse responses
[h1,t1] = impz(1,a, 10);
[h2,t2] = impz(a,1, 10);
Output:
ans = 2.3512e-14
ans = 815.4913


-h
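
The z^(-D) remark above is easy to demonstrate as well. A minimal sketch (my own, not from the quoted post; it assumes the Signal Processing Toolbox for convmtx): a zero outside the unit circle has no stable causal inverse, but a least-squares FIR inverse with some delay gets very close.

Code:
%% delayed FIR approximation of a non-minimum-phase inverse
b = [1 -1.25];                 % zero at z = 1.25, outside the unit circle
N = 64; D = 32;                % inverse filter length and allowed delay
C = convmtx(b.', N);           % (N+1) x N matrix, so C*h = conv(b,h)
d = zeros(N+1,1); d(D+1) = 1;  % target: a pure delay z^(-D)
h = C \ d;                     % least-squares FIR "inverse"
norm(conv(b.',h) - d)          % small residual: b convolved with h ~ a delayed impulse

The exact stable inverse here is anti-causal with taps decaying like 0.8^n, so each extra sample of delay D buys roughly another factor of 0.8 in the residual.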

10
If you do linear convolution, stability can be guaranteed (e.g. by using a finite impulse response).
I am not sure what that means. Let me make it more precise: convolution with a smooth (and fast-decaying) function is a textbook example of an unstable transform. It is like 2+2=4.

EDIT: I mean, inverting it is unstable.

11

That is a great article, and a good example of what deconvolution can do. I know it is not a perfect process...but you have to be somewhat amazed at what a little math and image processing can do.

I am. And I am a mathematician. :)

Quote
Is it not possible to further inform the algorithm with multiple passes, identifying kernels of what are likely improperly deconvolved pixels, and re-run the process from the original blurred image? Rinse, repeat, with better information each time...such as more insight into the PSF or noise function?

Even if you know the blur kernel, this is highly unstable. The easiest way to understand it is to consider the Fourier transform. High frequencies are attenuated, and once they get close to the level of the noise and the other errors, they are gone forever. Whatever you do, they are gone.

BTW, if the blur is done with a "sharp" kernel, like a disk with a sharp edge, this is a much better-behaved problem and allows better deconvolution.
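
This is easy to see in the Fourier domain. A quick sketch (my own, base MATLAB; a 1-D boxcar stands in for the sharp-edged disk):

Code:
%% spectrum of a smooth vs. a sharp-edged blur kernel of similar width
N = 1024; t = (-N/2:N/2-1)';
g = exp(-t.^2/(2*4^2)); g = g/sum(g);   % smooth: Gaussian, sigma = 4
b = double(abs(t) <= 5); b = b/sum(b);  % sharp-edged: boxcar, half-width 5
semilogy(abs(fftshift(fft(ifftshift(g))))); hold on
semilogy(abs(fftshift(fft(ifftshift(b)))))
legend('Gaussian PSF','boxcar PSF')

The Gaussian's spectrum collapses exponentially; the boxcar's is a |sinc| that decays only polynomially (it does have isolated zeros), so far more frequencies stay above a given noise floor and the division is much better conditioned.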

12
Deconvolution is an unstable process and can practically be done to a small degree only (generally speaking, without going into details). This is textbook material.

Some software "solutions" do not recover detail; they create it.

There are other approaches that create pleasant-looking images - basically, they sharpen what is left, but do not recover detail which is lost.

If you have prior knowledge of what type of object it is, and if the blur is not too strong, it can be done more successfully. Some algorithms may do that - look for edges, for example. The problem is, they can "find" edges even where there are none.

In a single-pass process, I'd agree, deconvolution is unstable. However, if we take deblur tools as an example...in a single primary pass they can recover the majority of an image, from what looks like a complete and total loss, to something that at the very least you can clearly identify and garner some small details from. Analysis of the final image of that first pass could identify primary edges, objects, and shapes, allowing secondary, tertiary, etc. passes to be "more informed" than the first, and avoid artifacts and phantom edge detection from the first pass.

Again, we can't know with 100% accuracy all of the information required to perfectly reproduce an original scene from an otherwise inaccurate photograph. I do believe, however, that we can derive a lot of information from an image by processing it multiple times, utilizing the "richer" information of each pass to better-inform subsequent passes. The process wouldn't be fast, possibly quite slow, but I think a lot of "lost" information can be recovered, to a usefully accurate precision.

Inverting a convolution with a Gaussian, for example, is unstable; it is a theorem. It does not matter what you do; you just cannot recover something which is lost in the noise and in the discretization process. Such problems are known as ill-posed. Google the backward heat equation, for example. It is a standard example in the theory of PDEs of an exponentially unstable process. The heat equation actually describes convolution with the Gaussian, and the backward one is the deconvolution.
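
For the record, the correspondence is this (standard textbook facts, in my notation):

$$ u_t = \Delta u, \quad u(\cdot,0) = f \;\Longrightarrow\; u(\cdot,t) = G_t * f, \qquad G_t(x) = (4\pi t)^{-n/2} e^{-|x|^2/(4t)}, $$

and in the Fourier domain $\hat{u}(\xi,t) = e^{-|\xi|^2 t}\,\hat{f}(\xi)$. Running the equation backward multiplies each mode by $e^{+|\xi|^2 t}$, so noise at frequency $\xi$ is amplified exponentially - that is the instability.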

There are various deconvolution techniques to "solve" the problem anyway. They reverse, to a small extent, some of the blur, and take a very wild guess about what is lost. In one way or another, these are known as regularization techniques. If you look carefully at what they "recover", those are not small details but rather large ones. Here is one of the best examples I found. As impressive as this may be, you can easily see that small details are gone, but the process is still usable to read text, for example. There is a lot of fake "detail" as well, like all those rings, etc.
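
To make the "reverse a little, guess the rest" point concrete, here is a minimal sketch of Tikhonov-style regularization (my own toy illustration, not the algorithm behind that example or any particular product):

Code:
%% regularized deconvolution: coarse detail returns, fine detail does not
N = 1024;
x = zeros(N,1); x(200:220) = 1; x(500) = 2;    % toy scene: edge + point source
t = (-N/2:N/2-1)';
k = exp(-t.^2/(2*4^2)); k = k/sum(k);          % Gaussian PSF, sigma = 4
K = fft(ifftshift(k));
y = real(ifft(fft(x).*K)) + 1e-3*randn(N,1);   % blurred + noisy
lambda = 1e-4;                                 % regularization weight (hand-tuned)
X = conj(K).*fft(y)./(abs(K).^2 + lambda);     % Tikhonov / Wiener-like inverse
x_reg = real(ifft(X));
plot(1:N, x, 'k', 1:N, x_reg, 'r')
legend('original','regularized estimate')

Where |K|^2 >> lambda the filter behaves like 1/K (real deblurring); where |K|^2 << lambda it goes to zero - those frequencies are simply given up. That is exactly the pattern described above: the edge comes back, the point source comes back widened and reduced, and there is some ringing that was never in the scene.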



13
Deconvolution is an unstable process and can practically be done to a small degree only (generally speaking, without going into details). This is textbook material.

Some software "solutions" do not recover detail; they create it.

There are other approaches that create pleasant-looking images - basically, they sharpen what is left, but do not recover detail which is lost.

If you have prior knowledge of what type of object it is, and if the blur is not too strong, it can be done more successfully. Some algorithms may do that - look for edges, for example. The problem is, they can "find" edges even where there are none.

14
I have the same formula, derived in a mathematical way under some assumptions, here. It is actually a formula that first appeared in some publications in optics, but I cannot find the references.

Excellent post, BTW.

15
EOS Bodies - For Stills / Re: Transition from Nikon to Canon
« on: March 03, 2013, 02:17:53 AM »
With all due respect to Nikon, the OP did not ask whether switching to Canon was a good idea. His question was very different.

There is a very good reason to shoot with the same system as your friends. You can exchange lenses, and get advice from people you trust.
