
Author Topic: Pixel density, resolution, and diffraction in cameras like the 7D II  (Read 27110 times)

TheSuede

  • PowerShot G1 X II
  • ***
  • Posts: 54
    • View Profile
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #60 on: March 04, 2013, 10:52:20 PM »
I think that would only be the case if you were trying to remove blur. In my experience, removal of banding noise in RAW is more effective than removing it in post. That may simply be because of the nature of banding noise, which is non-image information. I would presume that Bayer interpolation performed AFTER banding noise removal would produce a better image as a result, no?

Yes, of course. That is one exception I didn't mention, since we were only (or I thought we were only) discussing sharpening right now.

Banding skews the weighting of the R vs. B chroma/luminance mix in the interpolation stage. It also overstates the influence of the slightly stronger columns (banding is mostly a column error in Canon cameras) on the total. This means that weak-contrast horizontal lines in the (ideal) image get less weight than they should whenever the banding is stronger. Interpolation schemes like the one Adobe uses (which is highly directional) react very badly to this; they almost amplify the initial error in the final image.

Since it's a linear function - banding separates into a black offset and an amplification offset, and both seem to be very linear in most cases I've seen - the influence isn't disruptive, so some of it can be repaired after interpolation too; but not as well as if you do it before sending the raw into the interpolation engine.
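
A minimal MATLAB sketch of the idea - purely illustrative, assuming the raw mosaic is in a matrix called raw and that its first eight rows are optically black (masked); real per-camera calibration is more involved than this:
Code: [Select]
% Estimate and subtract per-column banding from the raw mosaic, BEFORE demosaic.
raw = double(raw);
dark = raw(1:8, :);                        % masked rows carry only offset + read noise
colOffset = mean(dark, 1);                 % per-column black-level estimate
colOffset = colOffset - mean(colOffset);   % leave the global black level untouched
corrected = raw - repmat(colOffset, size(raw,1), 1);
% 'corrected' is what should be fed into the interpolation engine.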


jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4461
  • EOL
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #61 on: March 04, 2013, 11:23:16 PM »
I think that would only be the case if you were trying to remove blur. In my experience, removal of banding noise in RAW is more effective than removing it in post. That may simply be because of the nature of banding noise, which is non-image information. I would presume that Bayer interpolation performed AFTER banding noise removal would produce a better image as a result, no?

Yes, of course. That is one exception I didn't mention, since we were only (or I thought we were only) discussing sharpening right now.

Banding skews the weighting of the R vs. B chroma/luminance mix in the interpolation stage. It also overstates the influence of the slightly stronger columns (banding is mostly a column error in Canon cameras) on the total. This means that weak-contrast horizontal lines in the (ideal) image get less weight than they should whenever the banding is stronger. Interpolation schemes like the one Adobe uses (which is highly directional) react very badly to this; they almost amplify the initial error in the final image.

Since it's a linear function - banding separates into a black offset and an amplification offset, and both seem to be very linear in most cases I've seen - the influence isn't disruptive, so some of it can be repaired after interpolation too; but not as well as if you do it before sending the raw into the interpolation engine.

Aye, sorry. Wavelet deconvolution of a RAW file was primarily used for debanding, which is why I mentioned it before. As for the rest, yes, we were talking about sharpening.

TheSuede

  • PowerShot G1 X II
  • ***
  • Posts: 54
    • View Profile
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #62 on: March 04, 2013, 11:28:08 PM »
Even if you know the blur kernel, this is highly unstable.
If you do linear convolution, stability can be guaranteed (e.g. by using a finite impulse response).
I am not sure what that means. Let me make it more precise: convolution with a smooth (and fast-decaying) function is a textbook example of an unstable transform. It is like 2+2=4.

Well, even if you can get the convolution PROCESS stable (i.e. numerically stable), there's no guarantee that the result isn't resonant. I'm guessing this is where you're talking right past each other.

And many of the destabilizing problems in the base material (the raw image) are synthetic - they stem from data that is weakly determined (like noise) or totally undetermined (like interpolation errors from having too weak an AA filter)... The totally undetermined errors are the worst, since they are totally unpredictable.

The only really valid and reasonable way we have to deal with this right now is to shoot with a camera that has "more MP than we really need", and then downsample the result. Doing deconvolution sharpening on an image that has been scaled down to 1:1.5 or 1:2.0 (9MP or 5.5MP from a 5D2) usually yields reasonably accurate results. You can choose much more aggressive parameters without risking resonance.

I wouldn't even care if the camera I bought next year had 60MP but only gave me 15MP raw files - as long as the internal scaling algorithms were of reasonable quality. A 15MP [almost] pixel-perfect image is way more than most people need. And it's actually way more real, actual detail than what you get from a 20-24MP camera.

One should not confuse technical resolution with image detail - resolution is almost always defined linearly (along one axis) and is most often not color-accurate. The resolution-to-detail ratio in a well-engineered Bayer-based camera is about sqrt(2), so you have to downsample to about half the MP of the original raw to get reasonably accurate pixel-level detail, even if the RESOLUTION of the raw might approach line-perfect.
Line-perfect = a sensor 5000 pixels wide that can resolve 5000 vertical lines - which is most cameras today.
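
As a rough MATLAB illustration of that downsample-then-sharpen workflow (Image Processing Toolbox; img is assumed to be a demosaiced, linear grayscale image and the parameters are arbitrary):
Code: [Select]
% Scale down 2:1 first, then sharpen aggressively; at 100% the same settings would ring.
small = imresize(img, 0.5, 'lanczos3');                   % half the linear resolution, ~1/4 the MP
sharp = imsharpen(small, 'Radius', 1.0, 'Amount', 1.5);   % far more aggressive than advisable at 1:1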

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4461
  • EOL
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #63 on: March 05, 2013, 12:05:59 AM »
Even if you know the blur kernel, this is highly unstable.
If you do linear convolution, stability can be guaranteed (e.g. by using a finite impulse response).
I am not sure what that means. Let me make it more precise: convolution with a smooth (and fast-decaying) function is a textbook example of an unstable transform. It is like 2+2=4.

One should not confuse technical resolution with image detail - resolution is almost always defined linearly (along one axis) and is most often not color-accurate. The resolution-to-detail ratio in a well-engineered Bayer-based camera is about sqrt(2), so you have to downsample to about half the MP of the original raw to get reasonably accurate pixel-level detail, even if the RESOLUTION of the raw might approach line-perfect.
Line-perfect = a sensor 5000 pixels wide that can resolve 5000 vertical lines - which is most cameras today.

I am not so sure about the accurate pixel-level detail comment. I might extend that to "color-accurate pixel-level detail", given that the spatial resolution of red and blue is half that of green. When I combine my 7D with the new 500 II, I get some truly amazing detail at 100% crop:

[attached image: 100% crop of a finch, 7D + 500mm f/4 L II]

In the crop of the finch above, the per-pixel detail is exquisite. All the detail that is supposed to be there is there, it is accurately reproduced, it is very sharp (that image has had zero post-process sharpening applied), contrast is great, and noise performance is quite good. I've always liked my 7D, but there were definitely times when IQ disappointed. Until I popped on the 500mm f/4 L II. As I was saying originally in the conversation that started this thread...there is absolutely no substitute for high-quality glass, and I think many of the complaints we have about Bayer sensor technology really boil down to bad glass.

Which, ironically, is NOT a bad thing. I completely agree with you that the only way to truly preserve all the detail our lenses resolve is to use a sensor that far outresolves the lens and downsample...however, that takes us right back to the point we started with: people bitch when their pixel-peeping shows "soft" detail. (Oh, I so can't wait for the days of 300ppi desktop computer screens...then people won't even be able to SEE a pixel, let alone complain about IQ at the pixel level. :))

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4461
  • EOL
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #64 on: March 05, 2013, 02:31:06 AM »
Deconvolution in the Bayer domain (before interpolation, "raw conversion") is actually counterproductive, and totally destructive to the underlying information.

The raw Bayer image is not continuous; it is sparsely sampled. This makes deconvolution impossible, even in continuous-hue object areas containing "just" brightness changes. If the base signal is sparsely sampled and the underlying material is higher resolution than the sampling, you get an under-determined system (http://en.wikipedia.org/wiki/Underdetermined_system). This is numerically unstable, and hence impossible to deconvolve.
There is no doubt that the CFA introduces uncertainty compared to sampling all colors at each site. I believe I was thinking about cases where we have some prior knowledge, or where an algorithm or Photoshop dude can make correct guesses afterwards. Perhaps what I am suggesting is that debayering and deconvolution ideally should be done jointly.

If the scene is achromatic, then "demosaic" should amount to something like a global WB, and filtering might destroy recoverable detail - the CFA in itself does not reduce the amount of spatial information compared to a filterless sensor. If the channels are nicely separated in the 2-D DFT, you want to follow those segments when deconvolving?

-h

On a per-pixel level, a Bayer sensor is only receiving 30-40% of the information an achromatic sensor is getting. That implies a LOSS of information is occurring due to the filtering of the CFA. You have spatial information, for the same number of samples over the same area...but the information in each sample is anemic compared to what you get with an achromatic sensor. That is the very reason we need to demosaic and interpolate information at all...that can't be a meaningless factor.
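
A toy MATLAB sketch of how sparse the per-channel sampling really is in an RGGB mosaic (numbers are only illustrative):
Code: [Select]
[H, W] = deal(4, 6);                          % toy sensor size
[r, c] = ndgrid(1:H, 1:W);
isR = mod(r,2)==1 & mod(c,2)==1;              % red sites
isB = mod(r,2)==0 & mod(c,2)==0;              % blue sites
isG = ~isR & ~isB;                            % green sites
fprintf('R %.0f%%, G %.0f%%, B %.0f%% of sites\n', ...
    100*mean(isR(:)), 100*mean(isG(:)), 100*mean(isB(:)));
% Full RGB at every site means 3*H*W unknowns from only H*W measurements:
fprintf('%d unknowns from %d samples -> underdetermined\n', 3*H*W, H*W);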

Plamen

  • Canon AE-1
  • ***
  • Posts: 78
    • View Profile
    • Math and Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #65 on: March 05, 2013, 08:06:35 AM »
This is something different.

Deconvolution (with a smooth kernel) is a small-denominator problem. In the Fourier domain, you divide by something very small at high frequencies (with a Gaussian kernel, for example). Noise and discretization errors make it impossible at high frequencies. As simple as that. Again, textbook material, nothing to discuss really.

Note, you do not even need to have zeros. Also, deconvolution is unstable before you even sample (which makes it worse), so there is no need for discrete models.
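
Spelled out for a Gaussian kernel (just a sketch of the same point; hats denote Fourier transforms and n is the noise):
\[
\widehat{y}(\xi)=\widehat{h}(\xi)\,\widehat{x}(\xi)+\widehat{n}(\xi),
\qquad
\widehat{x}_{\mathrm{naive}}(\xi)=\frac{\widehat{y}(\xi)}{\widehat{h}(\xi)}
=\widehat{x}(\xi)+\frac{\widehat{n}(\xi)}{\widehat{h}(\xi)},
\qquad
\widehat{h}(\xi)=e^{-2\pi^{2}\sigma^{2}\xi^{2}}.
\]
Because h-hat decays so fast, the noise term n-hat/h-hat dominates at high frequencies - that is the small denominator.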

If you do linear convolution, stability can be guaranteed (e.g. by using a finite impulse response).
I am not sure what that means. Let me make it more precise: convolution with a smooth (and fast-decaying) function is a textbook example of an unstable transform. It is like 2+2=4.

EDIT: I mean, inverting it is unstable.
http://en.wikipedia.org/wiki/Z-transform
http://en.wikipedia.org/wiki/Finite_impulse_response
http://en.wikipedia.org/wiki/Autoregressive_model#Yule-Walker_equations
http://dsp.rice.edu/sites/dsp.rice.edu/files/md/lec15.pdf
"Inverse systems
Many signal processing problems can be interpreted as trying to undo the action of some system. For example, echo cancellation, channel deconvolution, etc. The problem is illustrated below.
If our goal is to design a system HI that reverses the action of H, then we
clearly need H(z)HI(z) = 1. In the case where Thus, the zeros of H(z) become poles of HI(z), and the poles of H(z) become zeros of HI(z). Recall that H(z) being stable and causal implies that all poles are inside the unit circle. If we want H(z) to have a stable, causal inverse HI(z), then we must have all zeros inside the unit circle, (since they become the poles of HI(z).) Combining these, H(z) is stable and causal with a stable and causal inverse if and only if all poles and zeros of H(z) are inside the unit circle. This type of system is called a minimum phase system."


For images you usually want a linear-phase system. A pole can be approximated by a large number of zeros. For image processing, a large delay may not be a problem (it is easily compensated), so a system function of "1" can be replaced by z^(-D)

MATLAB example:
Code: [Select]
%% set up an order-2 all-pole (AR) filter; the roots of a lie inside the unit circle
a = [1 -0.5 0.1];
%% generate a vector of normally distributed noise
n = randn(1024,1);
%% apply the (allpole) filter to the noise
x = filter(1,a,n);
%% apply the inverse (allzero) filter
n_hat = filter(a,1,x);
%% see what happened
sum(abs(n - n_hat))
sum(abs(n))
%% compute the first 10 samples of each filter's time-domain impulse response
[h1,t1] = impz(1,a, 10);
[h2,t2] = impz(a,1, 10);
Output:
>>2.3512e-14
>>815.4913
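
And a follow-on sketch of the "a pole can be approximated by a large number of zeros" point (impz again needs the Signal Processing Toolbox): the exact inverse of the all-zero filter A(z) is the all-pole filter 1/A(z), and truncating its impulse response gives a long FIR that approximates that inverse.
Code: [Select]
a = [1 -0.5 0.1];                 % minimum-phase FIR: both zeros inside the unit circle
hInv = impz(1, a, 64);            % first 64 taps of the IIR inverse 1/A(z)
n = randn(1024,1);
x = filter(a, 1, n);              % signal shaped by the all-zero filter
n_hat = filter(hInv, 1, x);       % long-FIR approximation of the exact inverse
max(abs(n - n_hat))               % tiny; limited only by the 64-tap truncation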


-h

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4461
  • EOL
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #66 on: March 05, 2013, 11:22:00 AM »
Deconvolution (with a smooth kernel) is a small-denominator problem. In the Fourier domain, you divide by something very small at high frequencies (with a Gaussian kernel, for example). Noise and discretization errors make it impossible at high frequencies. As simple as that. Again, textbook material, nothing to discuss really.

Given that there are numerous, very effective deconvolution algorithms that operate on both the raw Bayer data and demosaiced RGB data, using approaches in any number of domains (including Fourier) and producing excellent results for denoising, debanding, deblurring, sharpening, etc., it would stand to reason that these problems are NOT "impossible" to solve.
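
For example (a hedged sketch, not any particular vendor's pipeline: MATLAB's Image Processing Toolbox functions with arbitrary parameters, and img assumed to be a demosaiced image):
Code: [Select]
psf = fspecial('gaussian', 15, 2);            % assumed blur kernel
blurred = imfilter(im2double(img), psf, 'replicate');
restored = deconvlucy(blurred, psf, 20);      % 20 Richardson-Lucy iterations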


Plamen

  • Canon AE-1
  • ***
  • Posts: 78
    • View Profile
    • Math and Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #67 on: March 05, 2013, 02:23:14 PM »
Deconvolution (with a smooth kernel) is a small-denominator problem. In the Fourier domain, you divide by something very small at high frequencies (with a Gaussian kernel, for example). Noise and discretization errors make it impossible at high frequencies. As simple as that. Again, textbook material, nothing to discuss really.

Given that there are numerous, very effective deconvolution algorithms that operate on both the raw Bayer data and demosaiced RGB data, using approaches in any number of domains (including Fourier) and producing excellent results for denoising, debanding, deblurring, sharpening, etc., it would stand to reason that these problems are NOT "impossible" to solve.

If you say so... I am just telling you how it is. What you wrote is not related to my post.

Let us try it. I will post a blurred landscape, you will restore the detail. Deal?

Plamen

  • Canon AE-1
  • ***
  • Posts: 78
    • View Profile
    • Math and Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #68 on: March 05, 2013, 02:27:09 PM »

Sampling is a nonlinear process, and having denser sampling would probably make it easier to treat it as a linear problem (at lower spatial frequencies), if it comes at no other cost (e.g. Q.E.). Here's to future cameras that "outresolve" their lenses.

I will try one more time. There is no sampling, no pixels. There is a continuous image projected on a piece of paper, blurred by a defocused lens. Sample it, or study it under an electron microscope, whatever. "High enough" frequencies cannot be restored.

You are stuck in the discrete model. Forget about it. The loss happens BEFORE the light even hits the sensor.
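
To put rough numbers on "high enough" (illustrative figures only): for a Gaussian PSF, the attenuation at spatial frequency f is exp(-2*pi^2*sigma^2*f^2), and once that falls below roughly 1/SNR the frequency is effectively gone, no matter how you sample afterwards.
Code: [Select]
sigma = 0.010;                         % PSF standard deviation in mm (10 microns, assumed)
SNR   = 100;                           % assumed signal-to-noise ratio
f = linspace(0, 200, 2001);            % spatial frequency on the sensor, cycles/mm
H = exp(-2*pi^2*sigma^2*f.^2);         % FT magnitude of the Gaussian PSF
fCut = f(find(H > 1/SNR, 1, 'last'));  % beyond this, 1/H amplifies noise above the signal
fprintf('usable band ends near %.0f cycles/mm\n', fCut);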

TheSuede

  • PowerShot G1 X II
  • ***
  • Posts: 54
    • View Profile
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #69 on: March 05, 2013, 05:58:47 PM »
Well, even if you can get the convolution PROCESS stable (i.e. numerically stable), there's no guarantee that the result isn't resonant. I'm guessing this is where you're talking right past each other.
Not sure that I understand what you are saying. If you apply the exact inverse to a noiseless signal, I would not expect ringing in the result. If the inverse is only approximate, it really depends on the measurement and the approximation. If you have a large, "hairy" estimated PSF, you might prefer its approximate inverse to be too compact and smooth rather than too large and irregular. If the SNR is really poor, there is perhaps no point in doing deconvolution (good denoising may be more important).

Very deep spectral nulls are a problem that I acknowledged earlier. For anyone but (perhaps) textbook authors, they spell locally poor SNR. The OLPF tends to add a shifted version of the signal to itself, causing a comb filter with (in principle) zero gain at its nulls.

Is this a problem in practice? That depends. Is the first zero within the band that we want to deconvolve? (I would think that they place the first zero close to Nyquist to make a crude lowpass filter, so no.) Is the total PSF dominated by the OLPF, or do other contributors cause it to "blur" into something more Gaussian-like?

-h

Actually, Mr Nyquist was recently taken out behind a barn and shot... The main suspect is Mr Bayer. He made the linear sampling theorem totally irrelevant at pixel-scale magnifications by adding sparse sampling to the image data. :)

To make this even more clear, imagine a strongly saturated color detail in an image. It may be a thin hairline, or a single point - doesn't matter. If that detail is smaller than two pixel widths, the information it leaves on the sensor is totally dependent on spatial relations, where the detail lands on the CFA pattern of the sensor.

• If the detail is centered on a "void" - a pixel of a different CFA color - it will leave no trace in the raw file. We literally won't know it was there just by looking at the raw file.
• If the detail is centered on a same-color pixel in the CFA, on the other hand, it will have 100% effect.

The reason you cannot apply deconvolution to raw data (and actually not to interpolated data with low statistical reliability either...) is rather easy to see. Look at the image to the far right: it is the red-channel result of letting the lens project pattern "A" onto the Bayer red pattern "B".

Can you, just by looking at result "C", get back to image "A"? Deconvolution is of no help here, since the data is sparsely sampled - 75% of it is missing.
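
A quick MATLAB stand-in for that A/B/C illustration (red channel of an RGGB mosaic, toy-sized):
Code: [Select]
imgA = zeros(8, 8);  imgA(:, 3) = 1;          % "A": a one-pixel-wide saturated red line
[r, c] = ndgrid(1:8, 1:8);
redSites = mod(r,2)==1 & mod(c,2)==1;         % "B": where the CFA actually samples red
fprintf('line on column 3: %d red samples see it\n', nnz(imgA .* redSites));
imgA2 = zeros(8, 8);  imgA2(:, 4) = 1;        % the same line shifted one pixel, onto a "void"
fprintf('line on column 4: %d red samples see it\n', nnz(imgA2 .* redSites));
Shift the line by a single pixel and the red channel goes from recording it on every other row to not recording it at all; no deconvolution can bring back data that was never captured.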

Plamen

  • Canon AE-1
  • ***
  • Posts: 78
    • View Profile
    • Math and Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #70 on: March 05, 2013, 07:18:30 PM »
I will try one more time. There is no sampling, no pixels. There is a continuous image projected on a piece of paper, blurred by a defocused lens. Sample it, or study it under an electron microscope, whatever. "High enough" frequencies cannot be restored.

You are stuck in the discrete model. Forget about it. The loss happens BEFORE the light even hits the sensor.
While it is interesting chatting about what we would do if we had access to a continuous-space, general convolver that we could insert in front of our camera sensors, we don't have one, do we?

That is the problem, we do not. Blurring happens before the discretization. Even if you had your ideal sensor, you still have a problem.
What we have is a discrete sampling sensor, and a very discrete DSP that can do operations on discrete data.

I have no idea what you mean by "high enough" frequency. Either SNR or Nyquist should put a limit on recoverable high-frequency components. The question, I believe, is what can be done up to that point, and how it will look to humans.

Like I said, I believe that I have proven you wrong in one instance. As you offer only claims, not references, how are we to believe your other claims?

I already said what "high enough" means. I can formulate it precisely, but you will not be able to understand it. Yes, noise has something to do with it, but so does how fast the Fourier transform decays.

I do not make "claims". If you have the background, you will understand what I mean. If not, nothing helps.

BTW, I do research in related areas and know what I am talking about.

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4461
  • EOL
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #71 on: March 05, 2013, 08:16:55 PM »
I will try one more time. There is no sampling, no pixels. There is a continuous image projected on a piece of paper, blurred by a defocused lens. Sample it, or study it under an electron microscope, whatever. "High enough" frequencies cannot be restored.

You are stuck in the discrete model. Forget about it. The loss happens BEFORE the light even hits the sensor.
While it is interesting chatting about what we would do if we had access to a continuous-space, general convolver that we could insert in front of our camera sensors, we don't have one, do we?

That is the problem, we do not. Blurring happens before the discretization. Even if you had your ideal sensor, you still have a problem.
What we have is a discrete sampling sensor, and a very discrete DSP that can do operations on discrete data.

I have no idea what you mean by "high enough" frequency. Either SNR or Nyquist should put a limit on recoverable high-frequency components. The question, I believe, is what can be done up to that point, and how it will look to humans.

Like I said, I believe that I have proven you wrong in one instance. As you offer only claims, not references, how are we to believe your other claims?

I already said what "high enough" means. I can formulate it precisely, but you will not be able to understand it. Yes, noise has something to do with it, but so does how fast the Fourier transform decays.

I do not make "claims". If you have the background, you will understand what I mean. If not, nothing helps.

BTW, I do research in related areas and know what I am talking about.

Simply proclaiming that you are smarter than everyone else doesn't help the discussion. It's cheap and childish. Why not enlighten us in a way that doesn't require a Ph.D. to understand what you are attempting to explain, so the discussion can continue? I don't know as much as the three of you; however, I would like to learn. Hjulenissen and TheSuede are very helpful in their part of the debate...however you, in your more recent posts, have just become snide, snarky and egotistical. Grow up a little and contribute to the discussion, rather than trying to shut it down.

Plamen

  • Canon AE-1
  • ***
  • Posts: 78
    • View Profile
    • Math and Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #72 on: March 05, 2013, 09:05:38 PM »
Simply proclaiming that you are smarter than everyone else doesn't help the discussion. It's cheap and childish. Why not enlighten us in a way that doesn't require a Ph.D. to understand what you are attempting to explain, so the discussion can continue? I don't know as much as the three of you; however, I would like to learn. Hjulenissen and TheSuede are very helpful in their part of the debate...however you, in your more recent posts, have just become snide, snarky and egotistical. Grow up a little and contribute to the discussion, rather than trying to shut it down.

I did already. And I was not the first to try to shut it down. OK, last attempt.

The image projected on the sensor is blurred with some kernel (a.k.a. the PSF), which is most often smooth (has many derivatives). One notable exception is motion blur, where the kernel can be an arc of a curve - things change there. The blur is modeled by a convolution. Note: no pixels here. Imagine a sensor painted over with smooth paint.

Take the Fourier transform of the convolution. You get a product of FTs (see the next paragraph). Now, since the PSF is smooth, say something like a Gaussian, its FT decays rapidly. The effect of the blur is to multiply the high frequencies by a function which is very small there. You could just divide to get a reconstruction - right? But then you divide something small plus noise/errors by something small again. This is the well-known small-denominator problem; Google it. Beyond some frequency, determined by the noise level and by how fast the FT of the PSF decays, you have more noise than signal. That's it, basically. The usual techniques basically cut off near that frequency in one way or another.

The errors that I mentioned can have many causes: for example, not knowing the exact PSF, or errors in its approximation/discretization even if we somehow knew it. Then the usual noise, etc.

This class of problems is known as ill-posed problems. There are people spending their lives and careers on them. There are journals devoted to them. Deconvolution is perhaps the simplest example; it is equivalent to solving the heat equation backward in time (for Gaussian kernels).

Here is a reference from a math paper; see the example in the middle of the page. I know the author; he is a very well respected specialist. Do not expect to read more there than I have told you, because this is such a simple problem that it can only serve as an introductory example to the theory.

Again - no need to invoke sensors and pixels at all. They can only make things worse by introducing more errors.

The main mistake so many people here make: they are so deeply "spoiled" by numerics and discrete models that they automatically assume we have a discrete convolution. Well, we do not. We sample an image which has already been convolved. This makes the problem even more ill-posed.
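
A 1-D MATLAB toy of the same thing (arbitrary numbers, purely illustrative): naive Fourier division blows up the noise, while a regularized version simply gives up beyond the cutoff frequency described above.
Code: [Select]
N = 512;  x = double(rand(1, N) > 0.5);           % a "scene" with sharp detail
f = [0:N/2, -N/2+1:-1] / N;                       % DFT frequencies, cycles/sample
H = exp(-2*pi^2*3^2*f.^2);                        % Gaussian blur, sigma = 3 samples
y = real(ifft(fft(x) .* H)) + 0.001*randn(1, N);  % blurred signal plus a little noise
xNaive  = real(ifft(fft(y) ./ H));                        % divide by something tiny: explodes
xWiener = real(ifft(fft(y) .* H ./ (H.^2 + 1e-4)));       % regularized: cuts off the high band
fprintf('naive error %.1e, regularized error %.2f\n', norm(xNaive - x), norm(xWiener - x));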


jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4461
  • EOL
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #73 on: March 06, 2013, 12:01:46 AM »
Simply proclaiming that you are smarter than everyone else doesn't help the discussion. It's cheap and childish. Why not enlighten us in a way that doesn't require a Ph.D. to understand what you are attempting to explain, so the discussion can continue? I don't know as much as the three of you; however, I would like to learn. Hjulenissen and TheSuede are very helpful in their part of the debate...however you, in your more recent posts, have just become snide, snarky and egotistical. Grow up a little and contribute to the discussion, rather than trying to shut it down.

I did already. And I was not the first to try to shut it down. OK, last attempt.

The image projected on the sensor is blurred with some kernel (a.k.a. the PSF), which is most often smooth (has many derivatives). One notable exception is motion blur, where the kernel can be an arc of a curve - things change there. The blur is modeled by a convolution. Note: no pixels here. Imagine a sensor painted over with smooth paint.

Take the Fourier transform of the convolution. You get a product of FTs (see the next paragraph). Now, since the PSF is smooth, say something like a Gaussian, its FT decays rapidly. The effect of the blur is to multiply the high frequencies by a function which is very small there. You could just divide to get a reconstruction - right? But then you divide something small plus noise/errors by something small again. This is the well-known small-denominator problem; Google it. Beyond some frequency, determined by the noise level and by how fast the FT of the PSF decays, you have more noise than signal. That's it, basically. The usual techniques basically cut off near that frequency in one way or another.

The errors that I mentioned can have many causes: for example, not knowing the exact PSF, or errors in its approximation/discretization even if we somehow knew it. Then the usual noise, etc.

This class of problems is known as ill-posed problems. There are people spending their lives and careers on them. There are journals devoted to them. Deconvolution is perhaps the simplest example; it is equivalent to solving the heat equation backward in time (for Gaussian kernels).

Here is a reference from a math paper; see the example in the middle of the page. I know the author; he is a very well respected specialist. Do not expect to read more there than I have told you, because this is such a simple problem that it can only serve as an introductory example to the theory.

Again - no need to invoke sensors and pixels at all. They can only make things worse by introducing more errors.

The main mistake so many people here make: they are so deeply "spoiled" by numerics and discrete models that they automatically assume we have a discrete convolution. Well, we do not. We sample an image which has already been convolved. This makes the problem even more ill-posed.

Thanks for the links. Let me read before I respond.

Plamen

  • Canon AE-1
  • ***
  • Posts: 78
    • View Profile
    • Math and Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #74 on: March 06, 2013, 12:49:54 AM »
Forgot to say which example in the last link - Example 3.18.
