
Author Topic: Pixel density, resolution, and diffraction in cameras like the 7D II

CarlTN

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #30 on: February 28, 2013, 02:44:04 PM »
Jrista, "+1" on you starting a new thread with this.  Thanks for all the useful info!

You mention NR software like Topaz for debanding, but what software works best for the luminance noise reduction? 

I mentioned before that the luminance noise from my cousin's 5D3, such as at ISO 4000, had a very hard, pebble-like grain structure that gets recorded at maybe 5 or 6 pixels across. The luminance slider in ACR CS5 had very little effect on it until it got above 80%, so more detail was sacrificed. With my 50D's files, the luminance grain is much smaller relative to the pixels, so the luminance slider has a far greater effect in its lower range.

I have practiced the art of optimizing a file in ACR before I ever even open it in Photoshop, but is it possible that this isn't always the best approach for noise reduction?  I still think it is, but I'm trying to be open minded and learn new things!


jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #31 on: February 28, 2013, 03:32:08 PM »
I don't think I've claimed Rayleigh is a "brick wall".
I am sorry, this was a general rant, not targeted at you.
Quote
I'd call Dawes' the brick wall, as anything less than that and you have two unresolved points of light.
https://www.astronomics.com/Dawes-limit_t.aspx
"This “Dawes’ limit” (which he determined empirically simply by testing the resolving ability of many observers on white star pairs of equal magnitude 6 brightness) only applies to point sources of light (stars). Smaller separations can be resolved in extended objects, such as the planets. For example, Cassini’s Division in the rings of Saturn (0.5 arc seconds across), was discovered using a 2.5” telescope – which has a Dawes’ limit of 1.8 arc seconds!"
Quote
The problem with Rayleigh, at least in the context of spatial resolution in the photographic context...is that detail becomes nearly inseparably mired with noise. At very low levels of contrast, even assuming you have extremely intelligent and effective deconvolution, detail at MTF 10 could never really be "certain"....is it detail...or is it noise?
I guess you can never be "certain" at lower spatial frequencies either? As long as we are dealing with PDFs, it is a matter of probability? How wide is the tail of a Poisson distribution?

I see no theoretical problem at MTF10 as long as the resulting SNR is sufficient (which it, of course, usually is not).

-h
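A rough numerical sketch of that point (the photon counts are assumed, illustrative values): whether a 10% modulation survives shot noise depends entirely on how many photons are behind it.

Code:
import math

# Can a 10% modulation (MTF10) be told apart from photon shot noise?
# Mean signal of N photons -> shot noise sqrt(N), so the modulation's SNR
# is roughly 0.1 * N / sqrt(N) = 0.1 * sqrt(N).
for n_photons in (100, 1_000, 10_000, 60_000):
    snr = 0.1 * math.sqrt(n_photons)
    print(f"{n_photons:>6} photons/pixel -> SNR of an MTF10 detail ~ {snr:.1f}")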

Well, I am not necessarily talking about frequencies...just contrast (which would really be amplitude rather than frequency.) It would be correct to assume that low MTF can affect amplitude regardless of frequency, however I think it has a more apparent impact at higher frequencies than lower.
My Photography
Current Gear: Canon 5D III | Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: SBIG STT-8300M | Canon EF 300mm f/2.8 L II

TheSuede

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #32 on: February 28, 2013, 05:49:48 PM »
Well, I am not necessarily talking about frequencies...just contrast (which would really be amplitude rather than frequency.) It would be correct to assume that low MTF can affect amplitude regardless of frequency, however I think it has a more apparent impact at higher frequencies than lower.

Contrast is meaningless as a metric - until you have both amplitude contrast AND frequency. This is inherently implied in MTF, as it is defined as contrast over frequency.... Contrast is just a difference in brightness. It doesn't become "detail" until the contrast is present at a high spatial frequency.

In practice (I've written and also quantified many Bayer interpolation schemes) you need at least MTF20 - a Michelson contrast of 0.2 - to get better than 50% pixel estimation accuracy when interpolating a raw image (based on Bayer of course).

That 0.2 of contrast by physical necessity INCLUDES noise. Even the best non-local schemes cannot accurately estimate a detail at the pixel level when that detail has a contrast lower than approximately twice the average noise power of the surrounding pixels.

The only way to get past this is true oversampling, and that does not occur in normal cameras until you're at F16-F22. In that case no interpolation estimation is needed - just pure interpolation. At that point you can be certain that no detail in the projected image will be small enough to "fall in between" two pixels of the same color on the sensor.
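A minimal sketch of those two thresholds, with made-up patch values, just to make the arithmetic concrete (the "2x the noise" figure is TheSuede's rule of thumb above, not a universal constant):

Code:
def michelson(i_max, i_min):
    """Michelson contrast: (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

# Hypothetical local patch values (arbitrary units) and a measured noise sigma.
i_max, i_min = 132.0, 88.0
noise_sigma = 9.0

contrast = michelson(i_max, i_min)       # 44 / 220 = 0.2, i.e. "MTF20"
amplitude = (i_max - i_min) / 2.0        # half the peak-to-peak swing

print(f"Michelson contrast: {contrast:.2f}")
# Rule-of-thumb check: is the detail's swing at least ~2x the local noise?
print("above ~2x noise" if amplitude >= 2 * noise_sigma else "buried in noise")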

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #33 on: March 01, 2013, 12:24:36 AM »
Well, I am not necessarily talking about frequencies...just contrast (which would really be amplitude rather than frequency.) It would be correct to assume that low MTF can affect amplitude regardless of frequency, however I think it has a more apparent impact at higher frequencies than lower.

Contrast is meaningless as a metric - until you have both amplitude contrast AND frequency. This is inherently implied in MTF, as it is defined as contrast over frequency.... Contrast is just a difference in brightness. It doesn't become "detail" until the contrast is present at a high spatial frequency.

In practice (I've written and also quantified many Bayer interpolation schemes) you need at least MTF20 - a Michelson contrast of 0.2 - to get better than 50% pixel estimation accuracy when interpolating a raw image (based on Bayer of course).

That 0.2 of contrast by physical necessity INCLUDES noise. Even the best non-local schemes cannot accurately estimate a detail at the pixel level when that detail has a contrast lower than approximately twice the average noise power of the surrounding pixels.

The only way to get past this is true oversampling, and that does not occur in normal cameras until you're at F16-F22. In that case no interpolation estimation is needed - just pure interpolation. At that point you can be certain that no detail in the projected image will be small enough to "fall in between" two pixels of the same color on the sensor.

I don't think we are saying different things... I agree that having a certain minimum contrast is necessary for detail at high frequencies to be discernible as detail.

I am not sure I completely follow...some of the grammar is confusing. In an attempt to clarify for other readers, I think you are saying that, because of the nature of a Bayer-type sensor, an MTF of no less than 20% is necessary to demosaic detail from a Bayer sensor's RAW data such that it could actually be perceived as different from noise in the rendered image. Again...I don't disagree with that in principle, however with post-process sharpening you CAN extract a lot of high-frequency detail that is low in contrast. The moon is a superb example of this, where detail most certainly exists at contrast levels below 20%, as low as Rayleigh, and possibly even lower.

The only time when it doesn't matter is at very narrow apertures, where the sensor is thoroughly outresolving the lens, and the finest resolved element of detail is larger than a pixel.

(I believe that is what TheSuede is saying...)

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #34 on: March 01, 2013, 12:32:10 PM »
Well, I am not necessarily talking about frequencies...just contrast (which would really be amplitude rather than frequency.) It would be correct to assume that low MTF can affect amplitude regardless of frequency, however I think it has a more apparent impact at higher frequencies than lower.
I am talking about something like this:


Increasing (spatial) frequency from left to right, increasing contrast from top to bottom. As we are talking about light and imaging, I think that amplitude/phase are cumbersome properties.

Another way to put that would be Frequency (low to high) on the X axis, and Amplitude (high to low) on the Y axis. :) Contrast is simply the amplitude of the frequency wave. At the bottom, the amplitude is high and constant across the whole length of the image, while frequency increases from left to right. At the top, the amplitude is essentially zero (a flat line). In the middle, the amplitude is about 50%, while again frequency increases from left to right.

"Spatial" frequencies are exactly that...you don't have a waveform without frequency, amplitude, and phase. Technically speaking, the image above is also modulated in both frequency and amplitude, with a phase shift of zero.

If you convolve this image with a sensible PSF, you will get blurring, affecting the high frequencies the most. As convolution is a linear process, high-contrast and low-contrast parts will be affected equally. Now, if you add noise to the image, the SNR will be more affected in low-contrast than in high-contrast areas.

With the image above, you could convolve it with a sensible PSF, and deconvolve it perfectly. Noise, however, would actually affect both the low frequency parts of the image as well as the low contrast parts. The high frequency high contrast parts are actually the most resilient against noise...everything else is susceptible (which is why noise at high ISO tends to show up much more in OOF backgrounds than in a detailed subject.)
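Continuing the sketch above (it assumes the chart array from the previous snippet, values in [0, 1]): blur with a Gaussian stand-in for the PSF, then add simulated shot noise, and the low-contrast rows are the first to sink into the noise floor.

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

full_well = 10_000                            # assumed photons at pure white
blurred = gaussian_filter(chart, sigma=1.5)   # PSF hits high frequencies hardest

rng = np.random.default_rng(0)
photons = rng.poisson(blurred * full_well)    # shot noise scales with the signal

# Shot noise is set by the *mean* level (~sqrt(N)), while the modulation swing
# shrinks toward the top rows, so low-contrast detail drowns in noise first.
print(f"shot noise at mid grey ~ {np.sqrt(full_well / 2):.0f} photons")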



If the "ideal mathematical idea" of this image is recorded as a finite number of Poisson-distributed photons, you get some uncertainty. The uncertainty will be largest where there are a few photons representing a tiny feature, and smallest where there are many photons representing a smooth (large) feature. My point was simply that the uncertainty is there for the entire image. However unilkely, that image that seems to portray a Lion at the zoo _could_ really be of a soccer game, only shot noise altered it.

Assuming an image affected solely by Poisson-distributed photons, then theoretically, yes. However, the notion that an image of a soccer game might end up looking like a lion at the zoo would only really be probable at extremely low exposures. SNR would have to be near zero, such that the Poisson distribution of photon strikes left the majority of the sensor untouched, leaving more guesswork and less structure in figuring out what the image is. As the signal strength increases, the uncertainty shrinks, and the chance of a soccer game being mistaken for a lion at the zoo diminishes to near zero. Within the first couple of stops of exposure, the relative uncertainty drops well below 1, and assuming you fully expose the sensor (i.e. ETTR) to maximize SNR, it should be well below 0.1. As uncertainty drops, the ease with which we can remove photon noise should increase.
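To put rough numbers on that (the 60,000 e- full well is an assumed figure), the relative shot-noise uncertainty 1/sqrt(N) falls quickly as exposure approaches saturation:

Code:
import math

full_well = 60_000
for stops_down in range(7):
    n = full_well / 2 ** stops_down
    print(f"{stops_down} stops below full well: 1/sqrt(N) = {1 / math.sqrt(n):.4f}")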

However...noise is a disjoint factor from MTF. The two are not mutually exclusive, however they are distinct factors, and as such they can be affected independently with high precision deconvolution. You can, for example, eliminate banding noise while barely touching the spatial frequencies of the image.

TheSuede

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #35 on: March 01, 2013, 07:35:05 PM »
I am talking about something like this:

Another way to put that would be Frequency (low to high) on the X axis, and Amplitude (high to low) on the Y axis. :) Contrast is simply the amplitude of the frequency wave.
.../cut/...

No. Using the word "amplitude" as a straight replacement for the word "contrast" (red-marked text) - is actually very misleading.

The amplitude is not equal to contrast in optics, and especially not when you're talking about visual contrast. Contrast, as normal people speak of it, is in most cases closely related to [amplitude divided by average level]. And so are MTF figures - this is not a coincidence.

An amplitude of +/-10 is a relatively large contrast if the average level is 20
-giving an absolute amplitude swing from 10 to 30 >> an MTF of 0.5
But if the average level is 100, the swing is 90 to 110 >> MTF is only 0.1. That's a much lower contrast, and a lot harder to see or accurately reproduce.

Contrast is what we "see", not amplitude swing.

And no, noise in general is not disjoint from MTF... Patterned noise is separable from image detail in an FFT, and you can eliminate most of it without disturbing the underlying material. Poisson noise, or any other non-patterned noise, on the other hand isn't separable by any known algorithm. And since the FFT of Poisson is basically a Gauss bell curve, you remove Poisson noise by applying a Gaussian blur... Any attempt to reconstruct the actual underlying material will be - at worst - a wild guess, and - at best - an educated guess. The educated guess is still a guess, and the reliability of the result is highly dependent on the non-local surrounds.

The Gaussian blur radius you need to apply to dampen non-patterned noise by a factor "X" is (again, not by coincidence!) almost exactly the same as the amount of downwards shift in MTF that you get.

As noise suppression algorithms get smarter and smarter, the number of correct guesses/estimates in a given image with a given amount of noise will continue to increase (the correlation to reality will get better and better) - but they're still guesses. That's good enough for most commercial use, though. What we're doing today in commercial post-processing regarding noise reduction is mostly adapting to psycho-visuals. We find ways to make the viewer THINK: -"Ah... That looks good, that must be right" - by finding which types of noise patterns humans react strongly to, and then trying to avoid creating those patterns when blurring the image (all noise suppression is blurring!) and making/estimating new sharp edges.
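A toy numpy illustration of the separability point: periodic banding shows up as isolated spikes in the 2-D spectrum and can be notched out, while the random noise floor has no such spikes to grab onto (the noise levels and the threshold here are arbitrary):

Code:
import numpy as np

# A flat grey "scene" with random noise plus horizontal banding.
rng = np.random.default_rng(1)
img = rng.normal(0.5, 0.02, (256, 256))
banding = 0.05 * np.sin(2 * np.pi * 8 * np.arange(256) / 256)
img += banding[:, None]                      # stripes varying down the frame

spec = np.fft.fft2(img)
mag = np.abs(spec)

# The banding concentrates into a couple of spectral spikes; random noise does
# not.  Knock out the strong off-centre spikes (a crude notch filter).
notch = mag > 20 * np.median(mag)
notch[0, 0] = False                          # keep the DC (mean level) term
spec[notch] = 0
debanded = np.fft.ifft2(spec).real

stripe_before = np.abs(np.fft.fft2(img)[8, 0]) * 2 / img.size
stripe_after = np.abs(np.fft.fft2(debanded)[8, 0]) * 2 / img.size
print(f"stripe amplitude: {stripe_before:.3f} -> {stripe_after:.3f}")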

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #36 on: March 01, 2013, 09:26:31 PM »
I am talking about something like this:

Another way to put that would be Frequency (low to high) on the X axis, and Amplitude (high to low) on the Y axis. :) Contrast is simply the amplitude of the frequency wave.
.../cut/...

No. Using the word "amplitude" as a straight replacement for the word "contrast" (red-marked text) - is actually very misleading.

The amplitude is not equal to contrast in optics, and especially not when you're talking about visual contrast. Contrast, as normal people speak of it, is in most cases closely related to [amplitude divided by average level]. And so are MTF figures - this is not a coincidence.

An amplitude of +/-10 is a relatively large contrast if the average level is 20
-giving an absolute amplitude swing from 10 to 30 >> an MTF of 0.5
But if the average level is 100, the swing is 90 to 110 >> MTF is only 0.1. That's a much lower contrast, and a lot harder to see or accurately reproduce.

Contrast is what we "see", not amplitude swing.

And no, noise in general is not disjoint from MTF... Patterned noise is separable from image detail in an FFT, and you can eliminate most of it without disturbing the underlying material. Poisson noise, or any other non-patterned noise, on the other hand isn't separable by any known algorithm. And since the FFT of Poisson is basically a Gauss bell curve, you remove Poisson noise by applying a Gaussian blur... Any attempt to reconstruct the actual underlying material will be - at worst - a wild guess, and - at best - an educated guess. The educated guess is still a guess, and the reliability of the result is highly dependent on the non-local surrounds.

The Gaussian blur radius you need to apply to dampen non-patterned noise by a factor "X" is (again, not by coincidence!) almost exactly the same as the amount of downwards shift in MTF that you get.

As noise suppression algorithms get smarter and smarter, the number of correct guesses/estimates in a given image with a given amount of noise will continue to increase (the correlation to reality will get better and better) - but they're still guesses. That's good enough for most commercial use, though. What we're doing today in commercial post-processing regarding noise reduction is mostly adapting to psycho-visuals. We find ways to make the viewer THINK: -"Ah... That looks good, that must be right" - by finding which types of noise patterns humans react strongly to, and then trying to avoid creating those patterns when blurring the image (all noise suppression is blurring!) and making/estimating new sharp edges.

Well, I can't speak directly to optics specifically.

I was thinking more in the context of the image itself, as recorded by the sensor. The image is a digital signal. There is more than one way to "think about" an image, and in one sense any image can be logically decomposed into discrete waves. Any row or column of pixels, any block of pixels, however you want to decompose it, could be treated as a Fourier series. The whole image can even be projected onto a three-dimensional surface shaped by a composition of waves in the X and Y axes, with amplitude defining the Z axis.

Performing such a decomposition is very complex, I won't deny that. Sure, a certain amount of guesswork is involved, and it is not perfect. Some algorithms are blind, and use multiple passes to guess the right functions for deconvolution, choosing the one that produces the best result. It is possible, however, to closely reproduce the inverse of the Poisson noise signal, apply it to the series, and largely eliminate that noise...with minimal impact to the rest of the image. Banding noise can be removed the same way. The process of doing so accurately is intense, and requires a considerable amount of computing power. And since a certain amount of guesswork IS involved, it can't be done perfectly without affecting the rest of the image at all. But it can be done fairly accurately with minimal blurring or other impact.

Assuming the image is just a digital signal, which in turn is just a composition of discrete waveforms, opens up a lot of possibilities. It would also mean that, assuming we generate a wave for just the bottom row of pixels in the sample image (the one without noise)...we have a modulated signal of high amplitude and increasing frequency. The "contrast" of each line pair in that wave is fundamentally determined by the amplitude of the wavelet. The row half-way up the image would have half the amplitude...which leads to what we would perceive as less contrast.

Perhaps it is incorrect to say that amplitude itself IS contrast, I guess I wouldn't dispute that. A shrinking amplitude around the middle gray tone of the image as a whole does directly lead to less contrast as you move up from the bottom row of pixels to the top in that image. Amplitude divided by average level sounds like a good way to describe it then, so again, I don't disagree. I apologize for being misleading.

I'd also offer that there is contrast on multiple levels. There is the overall contrast of the image (or an area of the image), as well as "microcontrast". If we use the noisy version of the image I created, the bottom row could not be represented as a single smoothly modulated wave. It is the combination of the base waveform of increasing frequency, as well as a separate waveform that represents the noise. The noise increases contrast on a per-pixel level, without hugely affecting the contrast of the image overall.

Perhaps this is an incorrect way of thinking about real light passing through a real lens in analog form. I know far less about optics. I do believe Hjulenissen was talking about algorithms processing a digital image on a computer, in which case discussing spatial frequencies of a digital signal seemed more appropriate. And in that context, a white/black line pair's contrast is directly affected by amplitude (again, sorry for the misleading notion that amplitude IS contrast...I agree that is incorrect.)
« Last Edit: March 01, 2013, 09:58:24 PM by jrista »


Radiating

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #37 on: March 01, 2013, 11:03:38 PM »
Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.

If you look at a sharp lens at f/11, like a super telephoto, and a soft lens at f/11, the sharp lens looks sharper despite both being at the diffraction limit.

What 24MP does is it allows the whole system to be sharper due to a weaker AA filter. Diffraction is the best AA filter on earth, current ones degrade the image by 20% which is a lot.

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #38 on: March 02, 2013, 01:22:43 AM »
Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.

If you look at a sharp lens at f/11, like a super telephoto, and a soft lens at f/11, the sharp lens looks sharper despite both being at the diffraction limit.

What 24MP does is it allows the whole system to be sharper due to a weaker AA filter. Diffraction is the best AA filter on earth, current ones degrade the image by 20% which is a lot.

Where do you get that 20% figure? I can't say I've experienced that with anything other than the 100-400 @ 400mm f/5.6...however in that case, I presume the issue is the lens, not the AA filter...

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #39 on: March 02, 2013, 12:32:38 PM »
Perhaps this is an incorrect way of thinking about real light passing through a real lens in analog form. I know far less about optics. I do believe Hjulenissen was talking about algorithms processing a digital image on a computer, in which case discussing spatial frequencies of a digital signal seemed more appropriate. And in that context, a white/black line pair's contrast is directly affected by amplitude (again, sorry for the misleading notion that amplitude IS contrast...I agree that is incorrect.)
"Light" is an electro-magnetic wave, or can at least be treated as one. Radio is also based on electro-magnetic waves. In radio, you have coherent transmitters and receivers, and properties of the waveform (amplitude, phase, frequency) can contain information, or be used to improve directional behaviour etc. In regular imaging applications, we tend to treat light in a somewhat different way. The sun (and all other illuminators except LASERs) are incoherent. Imaging sensors dont record the light "waveform" at a given spatial/temporal region, but rather records intensity within a frequency band. This treats light in a slightly more statistical manner, just like static noise on a radio. What is the frequency of filtered white noise? What is its phase? Amplitude? Such terms does not make sense, but its Spectral Power Density, its Variance does make sense.

Well, I understand the nature of light, its wave-particle duality, all of that. I am just not an optical engineer, so I am not sure if there is any knowledge in that field that would give a different understanding of exactly what happens to light in the context of optical imaging. That said, you are thinking about light as a particle. I am actually not thinking about light at all...but rather the spatial frequencies of an image, or in the context of a RAW image on a computer (well past the point where physical light is involved), a digital signal.

I'm not sure if I can describe it such that you'll understand or not...but think of each pixel as a sample of a wave. Relative to its neighboring pixels, it is either lighter, darker, or the same tone. If we have a black pixel next to a white pixel, the black pixel is the trough of a "wave", and the white pixel is the crest. If we have white-black-white-black, we have two full "wavelengths" next to each other. The amplitude of a spatial wave is the difference between the average tone and its trough or crest. In the case of our white-black-white-black example, the average tone is middle gray. Spatial frequencies exist in two dimensions, along both the X and the Y axis. I'll see if I can find a way to plot one of the pixel rows as a wave.

In the image file that I attached, once printed and illuminated using a light bulb (or the sun), it is the intensity that is modulated (possibly through some nonlinear transform in the printer driver). The amount of black ink varies, and this means that more photons will be absorbed in "black" regions than in "white". The amplitude and phase properties of the resulting light are of little relevance. The frequency properties are also of little relevance, as the image (and hopefully the illuminant) should be flat-spectrum. If you point your camera at such a print, it will record the number of photons (again, intensity).

When it comes to the modulation of the intensity in the figure, this was probably done with a sinusoid of sweeping frequency (left-right) and amplitude (up-down). The phase of the modulation does not matter much, as we are primarily interested in how the imaging system under test reduces the modulation at different spatial frequencies, and (more difficult) whether this behaviour is signal-level dependent (like USM sharpening would be). If you change the phase of the modulation by 180 degrees, you would still be able to learn the same about the lens/sensor/... used to record the test image.

So, again, all that is thinking about light directly as a waveform or particle. That is entirely valid, however there are other ways of thinking about the image produced by the lens. The image itself is composed of frequencies based on the intensity of each pixel. A grayscale image is much easier to demonstrate with than a color image, so I'll use that to demonstrate:



The image above models the bottom row of pixels from your image as a spatial frequency. I've stretched that single row of pixels to be 10 pixels tall, simply so it can be seen better. I've plotted the spatial waveform below. The concept is abstract...it is not a physical property of light. It is simply a way of modeling the oscillations inherent to the pixels of an image based on their level. This is a very simplistic example...we have the luxury of an even-toned middle gray as our "zero energy" level. Assuming the image is an 8-bit image, we have 256 levels. Levels 0-127 are negative power, levels 128-255 are positive power. The intensity of each pixel in the image oscillates between levels 0 and 255, thus producing a wave...with frequency and amplitude. Phase exists too...if we shift the whole image to the left or right, and appropriately fill in the pixels to the opposite side, we have all of the properties that represent a wave.

Noise can be modeled the same way...only as a different wave with different characteristics. The noise channel from the full-sized image below is shown at the bottom of the wave model above (although it is not modeled itself...I can't really do that well in Photoshop.) Thought of as a Fourier series, the noise wave and the image wave are composable and decomposable facets of the image.
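A small numpy sketch of that row-as-a-wave idea (the row here is a made-up chirp standing in for the bottom row of the chart): recentre the 8-bit values around mid grey and take the Fourier transform of that "wave".

Code:
import numpy as np

n = 512
x = np.linspace(0.0, 1.0, n)
row = np.round(127.5 + 127.5 * np.sin(2 * np.pi * 30 * x ** 2)).astype(np.uint8)

wave = (row.astype(float) - 127.5) / 127.5        # map 0..255 onto -1..+1
spectrum = np.abs(np.fft.rfft(wave)) / (n / 2)    # amplitude per frequency bin

print(f"strongest bin: {spectrum[1:].argmax() + 1}, amplitude {spectrum[1:].max():.2f}")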

MTF with Noise: [attached image]
MTF Plot: [attached image]
Noise channel: [attached image]

Radiating

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #40 on: March 02, 2013, 08:39:35 PM »
Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.

If you look at a sharp lens at f/11, like a super telephoto, and a soft lens at f/11, the sharp lens looks sharper despite both being at the diffraction limit.

What 24MP does is it allows the whole system to be sharper due to a weaker AA filter. Diffraction is the best AA filter on earth, current ones degrade the image by 20% which is a lot.

Where do you get that 20% figure? I can't say I've experienced that with anything other than the 100-400 @ 400mm f/5.6...however in that case, I presume the issue is the lens, not the AA filter...

MTF tests of the D800 and D800E back to back

Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.
If the AA filter is an expensive/complex component, increasing the sensel density until diffraction takes care of prefiltering is definitely one possible approach.
Quote
What 24MP does is it allows the whole system to be sharper due to a weaker AA filter. Diffraction is the best AA filter on earth, current ones degrade the image by 20% which is a lot.
Diffraction is dependent on aperture, not a constant function. In practice, one never has perfect focus (and most of us don't shoot flat brick walls), so defocus affects the PSF. Lenses and motion further extend the effective PSF. The AA filter is one more component. I have seen compelling arguments that the total PSF might as well be modelled as a Gaussian, due to the many contributors that change with all kinds of parameters.

Claiming that the AA filter degrades "image quality" (?) by 20% is nonsense. Practical comparisons of the Nikon D800 vs D800E suggest that under some ideal conditions, the difference in detail is practically none once both are optimally sharpened. In other conditions (high noise), you may not be able to sharpen the D800 to the point where it offers detail comparable to the D800E. Manufacturers don't include AA filters because they _like_ throwing in more components, but because when the total, effective PSF is too small compared to the pixel pitch, you can get annoying aliasing that tends to look worse and is harder to remove than slight blurring.

-h

You can't compare an unsharpened D800E image to a sharpened D800 image, that's not how information processing works.

The AA filter destroys incoming information from the lens, irreversibly. Sharpening can trick MTF tests into scoring higher numbers, but that is beside the point.

Yes, diffraction changes with aperture, but if you always shoot below f/5.6 you can ditch the AA filter without consequence, and those images shot below f/5.6 would be sharper than those taken with the same camera with an AA filter.
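For a rough sense of where diffraction starts doing the AA filter's job, here is a sketch with assumed numbers (550 nm light and a ~4.3 um pixel pitch for a high-density APS-C sensor). Around f/16-f/22 the diffraction spot is several pixels wide, which lines up with TheSuede's oversampling remark earlier in the thread.

Code:
wavelength_um = 0.55        # assumed green light
pixel_pitch_um = 4.3        # assumed high-density APS-C pitch

for f_number in (2.8, 5.6, 8, 11, 16, 22):
    airy_um = 2.44 * wavelength_um * f_number    # first-minimum Airy diameter
    print(f"f/{f_number:<4} Airy diameter {airy_um:5.1f} um "
          f"({airy_um / pixel_pitch_um:.1f}x the pixel pitch)")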
« Last Edit: March 02, 2013, 08:41:14 PM by Radiating »

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #41 on: March 02, 2013, 10:13:02 PM »
Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.

If you look at a sharp lens at f/11, like a super telephoto, and a soft lens at f/11, the sharp lens looks sharper despite both being at the diffraction limit.

What 24MP does is it allows the whole system to be sharper due to a weaker AA filter. Diffraction is the best AA filter on earth, current ones degrade the image by 20% which is a lot.

Where do you get that 20% figure? I can't say I've experienced that with anything other than the 100-400 @ 400mm f/5.6...however in that case, I presume the issue is the lens, not the AA filter...

MTF tests of the D800 and D800E back to back

Do you have a link? Because 20% is insane, and I don't believe that figure. The most sharply focused images would look completely blurry if an AA filter imposed a 20% cost on IQ...it just isn't possible.

Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.
If the AA filter is an expensive/complex component, increasing the sensel density until diffraction takes care of prefiltering is definitely one possible approach.
Quote
What 24MP does is it allows the whole system to be sharper due to a weaker AA filter. Diffraction is the best AA filter on earth, current ones degrade the image by 20% which is a lot.
Diffraction is dependent on aperture, not a constant function. In practice, one never has perfect focus (and most of us don't shoot flat brick walls), so defocus affects the PSF. Lenses and motion further extend the effective PSF. The AA filter is one more component. I have seen compelling arguments that the total PSF might as well be modelled as a Gaussian, due to the many contributors that change with all kinds of parameters.

Claiming that the AA filter degrades "image quality" (?) by 20% is nonsense. Practical comparisons of the Nikon D800 vs D800E suggest that under some ideal conditions, the difference in detail is practically none once both are optimally sharpened. In other conditions (high noise), you may not be able to sharpen the D800 to the point where it offers detail comparable to the D800E. Manufacturers don't include AA filters because they _like_ throwing in more components, but because when the total, effective PSF is too small compared to the pixel pitch, you can get annoying aliasing that tends to look worse and is harder to remove than slight blurring.

-h

You can't compare an unsharpened D800E image to a sharpened D800 image, that's not how information processing works.

The AA filter destroys incoming information from the lens, irreversibly. Sharpening can trick MTF tests into scoring higher numbers, but that is beside the point.

No, it is not irreversible. The D800E is the perfect example of the fact that it is indeed REVERSIBLE. The AA filter is a convolution filter for certain frequencies. Convolution can be reversed with deconvolution. So long as you know the exact function of the AA filter, you can apply an inverse and restore the information. The D800E does exactly that...the first layer of the AA filter blurs high frequencies at Nyquist by a certain amount, and the second layer does the inverse to unblur those frequencies, restoring them to their original state.
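As a generic illustration of undoing a known convolution (not the actual D800E optics or a real AA filter response; the two-tap kernel below is made up), a regularised inverse filter in numpy looks like this. The regularisation is needed exactly where the kernel response approaches zero, which is also why such a recovery can never be perfect once noise is present.

Code:
import numpy as np

rng = np.random.default_rng(2)
signal = rng.random(256)                       # stand-in image row
kernel = np.array([0.5, 0.5])                  # assumed 2-tap blur

blurred = np.convolve(signal, kernel, mode="same")

H = np.fft.rfft(kernel, n=signal.size)         # kernel frequency response
eps = 1e-2
inverse = np.conj(H) / (np.abs(H) ** 2 + eps)  # Wiener-like regularised inverse

recovered = np.fft.irfft(np.fft.rfft(blurred) * inverse, n=signal.size)
# `recovered` tracks `signal` closely except near the edges and near Nyquist,
# where |H| -> 0 and the information is genuinely attenuated.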
« Last Edit: March 03, 2013, 12:46:58 PM by jrista »

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #42 on: March 03, 2013, 12:25:20 PM »
jrista:
My gripe was with your claims that seemingly everything can and should be described using amplitude, phase, and frequency.

A stochastic process (noise) has characteristics that vary from time to time and from realisation to realisation. This means that talking about the amplitude of noise tends to be counterproductive. What does not change (at least in stationary processes) are the statistical parameters: variance, PSD, etc.

What you want to learn from the response to a swept-frequency/amplitude sinusoid is probably the depth of the modulation. Sure, the sine has a phase, but if it cannot tell us anything, why should we bother? If you do believe that it tells us anything, please tell me, instead of explaining once more what a sine wave is or how to Fourier-transform anything.

-h
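A tiny sketch of that distinction: three realisations of the same noise process have different peak "amplitudes", but essentially the same variance (the sigma of 2.0 is arbitrary).

Code:
import numpy as np

rng = np.random.default_rng(4)
for run in range(3):
    noise = rng.normal(0.0, 2.0, 10_000)     # three draws from the same process
    print(f"run {run}: peak {np.abs(noise).max():.2f}, variance {noise.var():.2f}")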

Not "should", but "can". I am not sure I can explain anymore, as I think your latching on to meaningless points. I don't know your background, so if I'm explaining things you already know, apologies.

The point isn't about amplitude, simply that noise can be described as a set of frequencies in a Fourier series. Eliminate the wavelets that most closely represent noise (not exactly an easy thing to do without affecting the rest of the image, but not impossible either), and you leave behind only the wavelets that represent the image.
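A toy, single-level Haar sketch of that idea (the threshold is an assumed noise scale; real denoisers use multilevel 2-D transforms and more careful thresholds): shrink the small detail coefficients that are mostly noise, and rebuild.

Code:
import numpy as np

rng = np.random.default_rng(3)
clean = np.sin(np.linspace(0, 8 * np.pi, 256))
noisy = clean + rng.normal(0, 0.3, clean.size)

avg = (noisy[0::2] + noisy[1::2]) / 2       # Haar approximation coefficients
det = (noisy[0::2] - noisy[1::2]) / 2       # Haar detail coefficients

thresh = 0.3                                # assumed noise scale
det = np.sign(det) * np.maximum(np.abs(det) - thresh, 0.0)   # soft threshold

denoised = np.empty_like(noisy)
denoised[0::2] = avg + det                  # inverse Haar step
denoised[1::2] = avg - det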

Describing an image as a discrete set of points of light, which disperse with known point spread functions, is another way to describe an image. In that context, you can apply other transformations to sharpen, deblur, etc.

The point was not to state that images should only ever be described as a Fourier series. Simply that they "can". Just as they "can" be described in terms of PSF and PSD.
« Last Edit: March 03, 2013, 12:31:13 PM by jrista »


Plamen

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #43 on: March 03, 2013, 12:44:00 PM »
I have the same formula, derived in a mathematical way, under some assumptions, here. It is actually a formula that first appeared in some publications in optics but I cannot find the references.

Excellent post, BTW.

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #44 on: March 03, 2013, 01:33:20 PM »
Just to provide a better example than I can give myself, here is a denoising algorithm that uses both FFT and wavelet inversion to deband, as well as deconvolution of the PSF to remove random noise:

http://lib.semi.ac.cn:8080/tsh/dzzy/wsqk/spie/vol6623/662316.pdf

This really sums up my point...simply that an image can be processed in different contexts, via different modeling, to remove noise while preserving image data. I am not trying to say that modeling an image signal as a Fourier series is better or the only way, or that treating it as a discrete set of point light sources described by a PSF is invalid. Both are valid.
