
Author Topic: Pixel density, resolution, and diffraction in cameras like the 7D II  (Read 26650 times)

yyz

  • SX50 HS
  • **
  • Posts: 2
    • View Profile
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #15 on: February 28, 2013, 08:56:30 AM »
I need to support thewaywewalk:

Though I completely support jrista in arguing that increased pixel density is always a resolution advantage (Zeiss has also published two excellent papers on that subject), there are other disadvantages to smaller pixels.

Reduced energy reaching each pixel, which forces an increase in signal amplification and in turn increases noise, is one of these disadvantages. Fortunately, technology is constantly improving in that area, but at present it is a practical tradeoff.


RGF

  • 1D X
  • *******
  • Posts: 1276
  • How you relate to the issue, is the issue.
    • View Profile
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #16 on: February 28, 2013, 09:49:33 AM »
Reduced energy reaching each pixel, which forces an increase in signal amplification and in turn increases noise, is one of these disadvantages. Fortunately, technology is constantly improving in that area, but at present it is a practical tradeoff.
How much of a practical disadvantage does this give the Nikon D800 vs the D600?

http://www.dxomark.com/index.php/Cameras/Compare-Camera-Sensors/Compare-cameras-side-by-side/(appareil1)/834|0/(brand)/Nikon/(appareil2)/792|0/(brand2)/Nikon

Dynamic range->print

-h

This leads me to a question to which I have not yet received a satisfactory answer.

Consider an exposure at ISO 800: why is it that we get better results by setting the ISO to 800 (amplification within the camera via the electronics - analog?) versus taking the same picture at ISO 100 and raising the exposure on the computer? Of course, I am talking about a raw capture.

In both cases the amount of light hitting the sensor will be the same, so the signal and S/N will be the same(?), but amplifying the signal in the camera via electronics seems to give a cleaner image.

Thanks

TheSuede

  • PowerShot G1 X II
  • ***
  • Posts: 54
    • View Profile
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #17 on: February 28, 2013, 10:27:31 AM »
Some interesting points here, though not all of them are 100% based on optical reality (which matches optical theory to 99+% most of the time, as long as you stay within Fraunhofer and quantum limits).

Rayleigh is indeed the limit where MTF has sunk to 7% (or 13%, depending on whether your target is sine or sharp-edge, OTF) - a point where it is very hard to recover any detail by sharpening if your image contains any higher level of noise. It's hard even at low levels of noise. And Rayleigh is defined as: "when the peak of one point's maximum coincides with the first Airy disk null of the neighboring point".

Consider again what this means.
You have two points, at distance X peak-to-peak. The distance you're interested in finding is where they begin to merge enough to totally mask out the void in between. The Rayleigh distance gives "barely discernible differentiation": you can JUST make out that there are two spots, not one wide spot.

But this does not make Rayleigh the pitch of the pixels needed to register that resolution; to register the void in the first place, you have to have one row of pixels between the two points. That means that Rayleigh is a line-pair distance, not a line distance. If Rayleigh is 1mm and the sensor is 10mm, you need 20 pixels to resolve it.

TheSuede

  • PowerShot G1 X II
  • ***
  • Posts: 54
    • View Profile
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #18 on: February 28, 2013, 10:39:51 AM »
So, the real "optical limit in a perfect lens" is:
(For green light, 0.55µm - at F4.0)
1.22 x F4.0 x 0.55µm = 2.684µm is a line PAIR. >>> The pixels have to be 1.34µm

That gives 372lp/mm, or
(24x16mm) / (0.00134mm ^2) = 213MP on an APS sensor (x2.56 = ~550MP FF.

This is for F4.0 - and that seems to be a reasonable point of aperture size to choose. Lower aperture values are EXTREMELY seldomly diffraction limited in normal photographic lenses. Higher aperture values Often are, at least in the center of the iamge field.

This is easily verifiable in reality, by using a camera like the small Pentax Q (that has 1.55µm pixels) with a Canon EF adapter. Many current Canon lenses outresolve the little Pentax sensor (give moire and aliasing effect) at F4.0. You can also verify it by putting the lens in a real traversing-slit MTF bench, some of which have upper image formation resolution limits well below 1µm.
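
These numbers are easy to re-derive for other wavelengths and f-numbers. A minimal Python sketch of the arithmetic above (using the same 24x16mm APS dimensions as in this post):

Code: [Select]
# Diffraction-limited resolution per the Rayleigh line-pair argument above.
wavelength_mm = 0.55e-3                          # green light, 0.55 um
f_number = 4.0

rayleigh_mm = 1.22 * f_number * wavelength_mm    # ~0.002684 mm = one line PAIR
pixel_mm = rayleigh_mm / 2                       # ~0.00134 mm -> 1.34 um pixels

lp_per_mm = 1 / rayleigh_mm                      # ~372 lp/mm
aps_mp = (24 * 16) / pixel_mm**2 / 1e6           # ~213 MP on a 24x16mm APS sensor
print(f"{lp_per_mm:.0f} lp/mm, {aps_mp:.0f} MP APS, {aps_mp * 2.56:.0f} MP FF")
# (Reinforcing point 1 below would roughly double these MP counts for a Bayer sensor.)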
...........

A couple of counter / reinforcing arguments for higher resolution (no noise involved, yet...!):
Reinforcing:
1) The actual resolution of a Bayer sensor is about sqrt(2) x the pixel pitch for randomly oriented image detail. Not all lines or details line up perfectly with "pixel pairs". This increases the needed MP by a factor of two, since spatial resolution has to be sqrt(2) higher.
2) As long as RN + downstream electronic noise is kept low (or you have a hardware-level binning scheme), image detail ACCURACY (not resolution) continues to increase measurably and visibly for about one twofold increase in MP past the theoretical limit. This is maybe the most important part - for me. You get more ACCURATE detail, not just "more" detail.

Counters:
1) Lenses are very rarely truly diffraction limited at - or below - F4.0. The best lens we've ever measured - that can be used on an FF sensor - has a "loss factor" of about 1.1. This means that diffraction + aberration losses can be approximated by [diffraction(actual f/# x 1.1)]. This is valid for large apertures if the lens is still considered sharp (like an 85L at F2.8: very sharp, but definitely not "perfect"). Add in global contrast loss factors too.
2) Sharpness is also limited by shutter speed and vibration in various ways (at some shutter speeds with lighter lenses, the shutter actually induces more vibration into the image than the mirror-slap...!).
3) Sharpness is also limited by subject movement / shutter speed.
4) Sharpness is often limited by the NEED for a deeper DoF, i.e. a smaller aperture - more diffraction.
.....................

From a purely practical PoV (well, I do quite a lot of actual shooting too...) I generally say that a factor of two times more MP than the largest presentation size you need is optimal. This includes both the electrical and the optical side of the equation.

If you NEVER use anything larger than 10MP output sizes, 20MP is good enough. There's very little actual - practical - gain to be had in going to 30 or 40MP (or higher...). This has to do with practical noise considerations.
(And then I'll immediately contradict myself - this isn't valid for FF bodies, if they're less than 16-18MP. At this point, so many lenses are so much sharper than what the camera can accurately resolve that you get aliasing and moire problems very easily...)

weixing

  • Canon 70D
  • ****
  • Posts: 292
    • View Profile
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #19 on: February 28, 2013, 11:21:15 AM »
Can I make a little objection?
You're talking about resolution, but what about the light sensitivity?
Is it right that doubling the MP would halve the amount of light every pixel gets to see?

Or is the increase in megapixels always connected to a higher amplification per pixel? (Which would increase noise.)

This is what I'm worried about when talking about a 24MP APS-C sensor.

Sorry for going off-topic!
Hi,
    IMHO, I'm not worried about a new 24MP APS-C sensor... I think it should perform better than the current 60D 18MP APS-C sensor.

    I got a Canon G15... a 1/1.7" (7.44mm x 5.58mm) 12MP (4000 x 3000) sensor, and its noise performance at ISO 1600 is not that far behind the 60D's 18MP APS-C sensor. If an APS-C sensor were based on the G15 pixel size, it would be a 90+ MP APS-C sensor, so a new 24MP APS-C should not be that bad.
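
    A quick sanity check on that scaling (assuming 22.3mm x 14.9mm for a Canon APS-C sensor):

Code: [Select]
# Scale the G15 pixel pitch up to APS-C area (22.3 x 14.9 mm is an assumption).
g15_w_mm, g15_px_w = 7.44, 4000
pitch_mm = g15_w_mm / g15_px_w                   # ~0.00186 mm (1.86 um)

apsc_area_mm2 = 22.3 * 14.9
equiv_mp = apsc_area_mm2 / pitch_mm**2 / 1e6     # ~96 MP
print(f"pitch {pitch_mm*1000:.2f} um -> ~{equiv_mp:.0f} MP at APS-C size")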

    The attached are the 100% crops shot with the G15 and 60D using the settings below:
ISO 1600, 1/6s and F2.8, NR Standard (the G15's high-ISO NR cannot be turned off when shooting RAW). I processed both with the same settings in DPP (Faithful, sharpening all set to 0 and NR all set to 0) and exported to PS as 16-bit TIFF. Then I cropped both in PS, copied, pasted, and saved as JPEG.

   Have a nice day.

RGF

  • 1D X
  • *******
  • Posts: 1276
  • How you relate to the issue, is the issue.
    • View Profile
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #20 on: February 28, 2013, 11:44:54 AM »
This leads me to a question to which I have not yet received a satisfactory answer.

Consider an exposure at ISO 800: why is it that we get better results by setting the ISO to 800 (amplification within the camera via the electronics - analog?) versus taking the same picture at ISO 100 and raising the exposure on the computer? Of course, I am talking about a raw capture.

In both cases the amount of light hitting the sensor will be the same, so the signal and S/N will be the same(?), but amplifying the signal in the camera via electronics seems to give a cleaner image.

Thanks
I believe that this differs between so-called "ISO-less" cameras and... "non-ISO-less" (sic) cameras. Canon generally belongs to the latter category.

Imagine a pipeline consisting of:
(noise injected)->(amplifier)->(noise injected)->(amplifier)

If the second injection of noise is significant, then you gain SNR by employing the first amplifier. If the first noise source is dominant, then it does not matter which amplifier you use.
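
A toy Monte Carlo of that exact pipeline shows the effect (the noise levels here are invented for illustration, not measured from any real camera):

Code: [Select]
import numpy as np

rng = np.random.default_rng(0)
signal, n = 100.0, 100_000

def snr(gain1, gain2, noise1=2.0, noise2=20.0):
    """(noise injected)->(amplifier)->(noise injected)->(amplifier)"""
    s = signal + rng.normal(0, noise1, n)   # first noise injection
    s = s * gain1                           # first amplifier (in-camera ISO)
    s = s + rng.normal(0, noise2, n)        # second noise injection (downstream)
    s = s * gain2                           # second amplifier (push in post)
    return s.mean() / s.std()

print("8x gain early:", snr(8, 1))   # ~31: second noise source is swamped
print("8x gain late :", snr(1, 8))   # ~5:  second noise source dominates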


The high-DR@low-ISO sensors used by Sony/Nikon seem to give similar quality whether you do the gain in a raw editor or in-camera. There are still disadvantages to this method, though: you cannot use the auto-exposure, the in-camera preview is useless, and the histogram is hard to interpret. You do gain better highlights (DR) in low-light images, though.

-h

Thanks. I think I understand most of what you are saying. However, the amplification via the computer should not introduce any noise. The A-to-D range is reduced from 12 (or 14) bits to 9 (or 11) bits for a 3-stop gain. Shadows may go from 6 bits to 3 bits. Not noise, but posterization?

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4731
  • POTATO
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #21 on: February 28, 2013, 11:56:58 AM »
I am trying to correlate the resulting image from a DSLR exposed at Rayleigh to how well a viewer of that image at its native print size could resolve detail at an appropriate viewing distance, hence the reference to vision. In and of itself, Rayleigh is not a constant or anything like that. The reason MTF 80, 50, 10 (really 9%, Rayleigh) and 0 (or really just barely above 0%, Dawes') are used is that they correlate to specific aspects of human perception regarding a print of said image. MTF 50 is generally referred to as the best measure of resolution that produces output we consider well resolved... sharp... good contrast and high acutance. MTF 10 would be the limit of useful resolution, and does not directly correlate with sharpness or IQ... it is simply the finest level of detail resolvable such that each pixel can be differentiated (excluding any effects image noise may have, which can greatly impact the viability of MTF 10 - another reason it is not particularly useful for measuring photographic resolution).
But deconvolution can shift MTF curves, can it not? At MTF 0, it is really hard to see how any future technology might dig up any details (a hypothetical image scaler might make good details based on lower-resolution images + good models of the stuff that is in the picture, but I think that is beside the point here).

Deconvolution can shift MTF curves; however, for deconv algorithms to be most effective, as they would need to be at Rayleigh, and even more so at Dawes', you need to know something about the nature of the diffraction you are working with. Diffraction is the convolution, and for a given point light source at Dawes', you would need to know its nature - the PSF - to properly deconvolve it. The kind of deconv algorithms we have today, such as denoise and debanding and the like, are useful up to a degree. We would need some very, very advanced algorithms to deconvolve complex scenes full of near-infinite discrete point light sources at MTF 0. I can see deconvolution being useful for extracting and enhancing detail at MTF 10... I think we can do that today.

That said, what we can do with software to an image once it is captured was kind of beyond the scope of my original point...which was really simply to prove that higher resolution sensors really do offer benefits over lower resolution sensors, even in diffraction-limited scenarios.


The fundamental question (to my mind, at least) is: "given the system MTF/noise behaviour, after good/optimal deconvolution and noise reduction, how perceptually accurate/pleasing is the end result?". Of course, my question is a lot vaguer and harder to figure out than yours.

I figure that in 10 years, deconvolution will be a lot better (and faster) than today. This has a (small) influence on my actions today, as my raw files are stored long-term.

Sure, I totally agree! I've seen some amazing things done in deconvolution research, and I'm pretty excited about these increasingly advanced algorithms finding their way into commercial software. For one, I really can't wait until high-quality debanding finds its way into Lightroom. People complain a lot about the DR of Canon sensors... however, Canon sensors still pack in the deep shadow pixels, full of detail, underneath all that banding. Remove the banding, and you can recover completely usable deep shadows and greatly expand your DR. I've never seen it get as good as what an Exmor offers out of the box, but it gains at least a stop, stop and a half beyond the 10-11 stops we get natively.

We can certainly look forward to improved digital signal processing and deconvolution. That again was a bit beyond the scope of the original points I was trying to make, hence the lack of any original discussion involving them.
My Photography
Current Gear: Canon 5D III | Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: SBIG STT-8300M | Canon EF 300mm f/2.8 L II


Don Haines

  • Canon EF 300mm f/2.8L IS II
  • *******
  • Posts: 3244
  • Posting cat pictures on the internet since 1986
    • View Profile
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #22 on: February 28, 2013, 12:17:58 PM »
The benefit of a high density APS-C really only comes into play when you can't afford that $13,000 600mm lens, meaning even if you had the 5D III, you could still only use that 400mm lens. You're in a focal-length-limited scenario now. It is in these situations, where you have both a high density APS-C and a lower density FF body, that something like the 18mp 7D or a 24mp 7D II really shines. Even though their pixels aren't as good as the 5D III's (assuming there isn't some radical new technology that Canon brings to the table with the 7D II), you can get more of them on the subject. You don't need to crop as much on the high density APS-C as with the lower density FF. On a size-normalized basis, the noise of the APS-C should be similar to the FF, as the FF would be cropped more (by a factor of 1.6x), so the noise difference can be greatly reduced or eliminated by scaling down.

Well said! Might I add that even if one could afford the $13,000 600mm lens, for many of us who backpack it's just too large to bring along on a multi-day trek through the mountains. Sometimes bigger is not better.
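
To put rough numbers on the focal-length-limited point quoted above (nominal specs; it's the ratio that matters):

Code: [Select]
# Pixels left on a subject that fills the APS-C frame, same lens and distance.
apsc_mp, ff_mp, crop = 18.0, 22.3, 1.6      # 7D vs 5D III (nominal figures)
print(f"APS-C keeps {apsc_mp:.0f} MP on the subject")
print(f"FF cropped 1.6x keeps {ff_mp / crop**2:.1f} MP")   # ~8.7 MP
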
The best camera is the one in your hands

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4731
  • POTATO
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #23 on: February 28, 2013, 12:26:36 PM »
Can jrista take account of the heavy AA filters on Canon sensors in his calcs or have I missed something?

Well first, as I've mentioned in the past, I believe the idea that Canon uses overly strong AA filters is a bit overblown. I thought my 7D had an aggressive AA filter until I first put on the EF 300mm f/2.8 L II in August 2012. I'd been using a 16-35 L II and the 100-400 L. Both are "good" lenses, but neither is a truly "great" lens. Neither seems to have enough resolving power and/or contrast to really do the 7D justice. They get really close, but often enough they fall just a little short, which produces that well-known "soft" look of 7D images.

With the EF 300/2.8 II, 500/4 II, and 600/4 II, with and without 1.4x and 2x TCs, the IQ of the 7D is stellar. I've never seen any of the classic softness that I did with my other lenses. Based on the IQ from using Canon's better lenses, I do NOT believe they have aggressive AA filters... I think they actually have AA filters that are just right! :)

On pentaxforums you will see comparisons between the K5-II and the filterless K5-IIs. Both have the same 16MP sensor, but the -IIs is a whole lot sharper and is claimed to be equivalent to a filtered 24MP camera.

Does this matter in the real world? Well, yes, maybe - if you want to use 100% crops.

Will Canon introduce something like the D7100 in their well-trailed upcoming 7D2, 70D, 700D series?

Somehow I doubt it.

Excellent stuff jrista - thanks for posting.

I'm curious about the difference with the Pentax K5 II. If there is that much of a difference, I'd presume that the AA filter WAS aggressive. From what I have heard, the D800 and D800E, when you use a good lens, do NOT exhibit that much of a difference. In many reviews I've read, the differences were sometimes barely perceptible, with the added cost on the D800E that if you shoot anything with repeating patterns, you can end up with aliasing and moire. There is definitely some improvement in shooting without an AA filter, but I am not sure it is really all it is cracked up to be.

Generally speaking, I would blame the lens for any general softness unless the lens is definitively proven to outresolve the sensor. At the densities sensors have today, lenses are generally only capable of outresolving the sensor in a fairly narrow band of aperture settings... from around f/3.5 to f/8 for FF sensors, and f/3.5 to f/5.6 for APS-C sensors. The higher the density of the sensor, the narrower the range... a 24mp APS-C can probably only be outresolved at around f/4, unless the lens is diffraction limited even at wider apertures. Wider than f/3.5, in the majority of cases, optical aberrations cause softening, in many cases much more than you experience from diffraction even at f/22.
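
To put rough numbers on those bands, here is a small sketch using the common rule of thumb that a sensor becomes diffraction limited once the Airy disk diameter spans about two pixels (the pixel pitches are approximate):

Code: [Select]
# f-number beyond which diffraction alone keeps a lens from outresolving
# the sensor, taking "Airy disk diameter = 2 pixel widths" as the threshold.
wavelength_um = 0.55                               # green light

def diffraction_limited_fstop(pitch_um):
    return 2 * pitch_um / (2.44 * wavelength_um)   # Airy diameter = 2.44*lambda*N

for name, pitch_um in [("~22 MP FF  (6.25 um)", 6.25),
                       ("18 MP APS-C (4.3 um)", 4.3),
                       ("24 MP APS-C (3.9 um)", 3.9)]:
    print(f"{name}: ~f/{diffraction_limited_fstop(pitch_um):.1f}")
# ~f/9.3, ~f/6.4, ~f/5.8 -- roughly matching the f/8 and f/5.6 upper bounds above.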

That said, so long as you pair a high quality lens with a high density sensor on a Canon camera, or for that matter a Nikon camera, I do not believe the AA filter will ever be a serious problem. When it comes to other brands, I don't really know enough. In the case of the K5 II, it really sounds more like the AA version DOES have an aggressive filter, which is why there is a large difference between the AA and non-AA versions.

On pentaxforums you will see comparisons between the K5-II and the filterless K5-IIs. Both have the same 16MP sensor, but the -IIs is a whole lot sharper and is claimed to be equivalent to a filtered 24MP camera.
If you check out luminous-landscape.com there is a nice thread comparing optimally sharpened D800 vs optimally sharpened D800E. I believe the conclusion is that for low-noise situations, the performance is virtually identical.

Aye, this is what I've heard as well. There is a small improvement with the D800E in the right circumstances, but overall it does not seem to be as great as it otherwise sounds on paper. Given the IQ I can get out of the 7D, which does have an AA filter, with top-shelf lenses...I really do not believe it has an aggressive AA filter, and I am quite thankful that the AA filter is there. Without it, I'd never be able to photograph birds...their feathers are moire hell!
« Last Edit: February 28, 2013, 12:28:55 PM by jrista »
My Photography
Current Gear: Canon 5D III | Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: SBIG STT-8300M | Canon EF 300mm f/2.8 L II

3kramd5

  • 7D
  • *****
  • Posts: 440
    • View Profile
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #24 on: February 28, 2013, 12:31:28 PM »
Such a sensor would really be pushing the limits, as well, and probably wouldn't even be physically possible. The pixel pitch of such a sensor would be around 723 nanometers (0.723µm)!

Nokia got halfway there (1.4 micron) with a camera phone... :D
5D3, 5D2, 40D; Various lenses

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4731
  • POTATO
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #25 on: February 28, 2013, 12:34:22 PM »
Some interesting points here, though not all of them are 100% based on optical reality (which matches optical theory to 99+% most of the time, as long as you stay within Fraunhofer and quantum limits).

Rayleigh is indeed the limit where MTF has sunk to 7% (or 13%, depending on whether your target is sine or sharp-edge, OTF) - a point where it is very hard to recover any detail by sharpening if your image contains any higher level of noise. It's hard even at low levels of noise. And Rayleigh is defined as: "when the peak of one point's maximum coincides with the first Airy disk null of the neighboring point".

Consider again what this means.
You have two points, at distance X peak-to-peak. The distance you're interested in finding is where they begin to merge enough to totally mask out the void in between. The Rayleigh distance gives "barely discernible differentiation": you can JUST make out that there are two spots, not one wide spot.

But this does not make Rayleigh the pitch of the pixels needed to register that resolution; to register the void in the first place, you have to have one row of pixels between the two points. That means that Rayleigh is a line-pair distance, not a line distance. If Rayleigh is 1mm and the sensor is 10mm, you need 20 pixels to resolve it.

You've hit it on the head... measuring resolution from a digital sensor at Rayleigh is very difficult because of noise. The contrast level is so low that, when the image is combined with noise (both photon shot and read), there is really no good way to discern whether the difference between two pixels is caused by differences in the detail resolved from the scene or by noise.

There is probably a better "sweet spot" MTF, lower than MTF 50 but higher than MTF @ Rayleigh, that would give us a better idea of how well digital image sensors can resolve detail. Given the complexities involved with using MTF @ Rayleigh, and the ubiquitous acceptance of MTF 50 as the best measure of resolution from a human perception standpoint, I prefer to use MTF 50.
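
For a consistent frame of reference, the diffraction MTF of an ideal lens has a closed form, so the spatial frequencies at MTF 50 and at Rayleigh (~9%) can be computed directly. A sketch for green light at f/4 (note that the ~9% frequency lands on the ~372 lp/mm figure quoted earlier in the thread):

Code: [Select]
import numpy as np
from scipy.optimize import brentq

wavelength_mm, f_number = 0.55e-3, 4.0
f_c = 1 / (wavelength_mm * f_number)    # diffraction cutoff, ~455 cycles/mm

def mtf(f):
    # Diffraction-limited MTF of a circular aperture.
    x = f / f_c
    return (2 / np.pi) * (np.arccos(x) - x * np.sqrt(1 - x**2))

for target in (0.50, 0.09):             # MTF 50 and ~Rayleigh contrast
    freq = brentq(lambda f: mtf(f) - target, 1e-9, f_c * (1 - 1e-9))
    print(f"MTF {target:.0%}: {freq:.0f} lp/mm")   # ~184 and ~373 lp/mm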
My Photography
Current Gear: Canon 5D III | Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: SBIG STT-8300M | Canon EF 300mm f/2.8 L II

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4731
  • POTATO
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #26 on: February 28, 2013, 12:42:28 PM »
"Rayleigh distance gives "barely discernible differentiation, you can JUST make out that there's two spots, not one wide spot."

I guess this is the point of discussion. Like I said earlier, if you have some knowledge of the scene (such as knowing that it is dirac-like stars on a dark sky), or if you have knowledge of the PSF/low noise/access to good deconvolution, you can challenge this limit.

I have no issues with the Rayleigh criterion being a practically important indicator of "diminishing returns". I do have an issue with people claiming that it is an absolute brickwall (nature seems to dislike brickwall filters - perhaps because it assumes acausality?).

From a theoretic perspective it would be interesting to know if there are true, fundamental limits to the information passed onto the sensor (a PSF will to some degree "scramble" the information, making it hard to sort out. That is not the same as removing it altogether). I have seen some hints of such a limit, but I never had much optical physics back in University, and I am too lazy to read up on the theory myself. There were a discussion on dpreview where they talked about a few hundred megapixels/ gigapixel for an FF sensor before the blue sensels could not receive any more spatial information.

At the very least, I assume that we move into quantum trouble sooner or later. When a finite number of photons hits a sensor, there is only so much information to record. If you cannot simultaneously record the precise position and energy of a photon, then that is a limit. I assume that as a sensel approach the wavelength of light, nastyness happens. That is one more area of physics that I do not master.

-h

I don't think I've claimed Rayleigh is a "brick wall". I'd call Dawes' the brick wall, as anything less than that and you have two unresolved points of light. The problem with Rayleigh, at least in the context of spatial resolution in photography, is that detail becomes nearly inseparably mired in noise. At very low levels of contrast, even assuming you have extremely intelligent and effective deconvolution, detail at MTF 10 could never really be "certain"... is it detail, or is it noise? Even low levels of noise can have much higher contrast than detail at MTF 10. Dawes' limit is the brick wall; Rayleigh is the effective limit of resolution for all practical intents, and it leaves a fairly significant degree of uncertainty in discussions like this.

MTF 50 is widely accepted as a moderately lower contrast level at which detail is acceptably perceivable by the human eye in a print at native resolution. In the film days, the perception of a viewer was evaluated from contact prints, so what the film resolved is what the viewer could see. Given the broad use and recognition of MTF 50, it's what I use. Ultimately, it wouldn't really matter if I used MTF 50, MTF 10, or MTF 0... the math works out roughly the same either way, and the relative benefits of a 24mp sensor over an 18mp sensor will still exist. The MTF is really just there to provide a consistent frame of reference, nothing more.
My Photography
Current Gear: Canon 5D III | Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: SBIG STT-8300M | Canon EF 300mm f/2.8 L II

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4731
  • POTATO
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #27 on: February 28, 2013, 12:54:48 PM »
This leads me to a question to which I have not yet received a satisfactory answer.

Consider an exposure at ISO 800: why is it that we get better results by setting the ISO to 800 (amplification within the camera via the electronics - analog?) versus taking the same picture at ISO 100 and raising the exposure on the computer? Of course, I am talking about a raw capture.

In both cases the amount of light hitting the sensor will be the same, so the signal and S/N will be the same(?), but amplifying the signal in the camera via electronics seems to give a cleaner image.

Thanks
I believe that this differs between so-called "ISO-less" cameras and... "non-ISO-less" (sic) cameras. Canon generally belongs to the latter category.

Imagine a pipeline consisting of:
(noise injected)->(amplifier)->(noise injected)->(amplifier)

If the second injection of noise is significant, then you gain SNR by employing the first amplifier. If the first noise source is dominant, then it does not matter which amplifier you use.


The high-DR@low-ISO sensors used by Sony/Nikon seem to give similar quality whether you do the gain in a raw editor or in-camera. There are still disadvantages to this method, though: you cannot use the auto-exposure, the in-camera preview is useless, and the histogram is hard to interpret. You do gain better highlights (DR) in low-light images, though.

-h

Thanks. I think I understand most of what you are saying. However, the amplification via the computer should not introduce any noise. The A-to-D range is reduced from 12 (or 14) bits to 9 (or 11) bits for a 3-stop gain. Shadows may go from 6 bits to 3 bits. Not noise, but posterization?

"Amplification" via software does not introduce noise...however it can enhance the noise present, because at that point, assuming noise exists in the digital signal, it is "baked in". When it comes to Exmor (Sony/Nikon high-DR sensor), the level of noise is extremely low, so pushing exposure around in post is "amplifying" pixels that have FAR less noise than the competition.

The Exmor sensor could be called ISO-LESS because it is primarily a DIGITAL pipeline.

In most sensors, when a pixel is read, analog CDS is applied, analog per-pixel amplification is applied, the columns of a row are read out, the signal is often sent off the sensor die via a bus, a downstream analog amplifier may be applied, and the pixels are finally converted to digital by a high-frequency ADC. Along that whole pipeline there are many chances for noise to be introduced into the analog signal. Canon's sensors are like this, and some of the key sources of noise are the non-uniform response of the CDS circuits (the first source of banding noise), transmission of the signal along a high-speed bus, downstream amplification (which amplifies all the noise in the signal prior to secondary amplification... and which only occurs at the highest ISO settings), and finally large-bucket parallel ADC via high-frequency converters (which is where the second source of banding noise comes from).

Unlike most sensors, the only analog stage in Exmor is the direct read of each pixel. Once a pixel is read, it is sent directly to an ON-DIE ADC, where it is converted directly to digital form; digital CDS is applied, digital amplification is applied, and from that point on the entire signal remains digital. Once the signal is in a digital form, it is, for all intents and purposes, immune to contamination by analog sources of noise. Transmission along a bus, further image processing, etc. all work on bits rather than an analog signal. As such, Exmor IS effectively "ISO-less", since amplification occurs post-ADC. ISO 800 with Exmor is really the same as ISO 100 with a 3-stop exposure boost in post... there is very little difference.
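
The behaviour is easy to model with a toy 14-bit pipeline: apply the gain before the downstream noise and quantization, or push digitally afterwards (all noise figures here are invented for illustration, not measurements of any specific sensor):

Code: [Select]
import numpy as np

rng = np.random.default_rng(1)
n, signal_e = 200_000, 50.0              # photoelectrons in a deep shadow

def snr(analog_gain, read_noise_e, downstream_noise_dn, full_well=60000, bits=14):
    e = rng.poisson(signal_e, n).astype(float)     # photon shot noise
    e += rng.normal(0, read_noise_e, n)            # pixel read noise (pre-gain)
    dn = e * analog_gain * 2**bits / full_well     # analog gain, scaled to DN
    dn += rng.normal(0, downstream_noise_dn, n)    # bus/ADC noise (post-gain)
    dn = np.round(dn)                              # quantization (posterization lives here)
    return dn.mean() / dn.std()                    # a digital push leaves this unchanged

# Noisy downstream stage ("non-ISO-less"): in-camera gain wins clearly.
print("noisy ADC path, 8x analog gain:", snr(8, 3, 6))    # ~6.1
print("noisy ADC path, 1x + push     :", snr(1, 3, 6))    # ~2.1
# Clean on-die ADC (Exmor-like): gain placement barely matters.
print("on-die ADC,     8x analog gain:", snr(8, 2, 0.5))  # ~6.8
print("on-die ADC,     1x + push     :", snr(1, 2, 0.5))  # ~6.5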
My Photography
Current Gear: Canon 5D III | Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: SBIG STT-8300M | Canon EF 300mm f/2.8 L II


jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4731
  • POTATO
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #28 on: February 28, 2013, 12:56:46 PM »
The benefit of a high density APS-C really only comes into play when you can't afford that $13,000 600mm lens, meaning even if you had the 5D III, you could still only use that 400mm lens. You're in a focal-length-limited scenario now. It is in these situations, where you have both a high density APS-C and a lower density FF body, that something like the 18mp 7D or a 24mp 7D II really shines. Even though their pixels aren't as good as the 5D III's (assuming there isn't some radical new technology that Canon brings to the table with the 7D II), you can get more of them on the subject. You don't need to crop as much on the high density APS-C as with the lower density FF. On a size-normalized basis, the noise of the APS-C should be similar to the FF, as the FF would be cropped more (by a factor of 1.6x), so the noise difference can be greatly reduced or eliminated by scaling down.

Well said! Might I add that even if one could afford the $13,000 600mm lens, for many of us who backpack it's just too large to bring along on a multi-day trek through the mountains. Sometimes bigger is not better.

That's a great point! Sometimes bigger is definitely not better...if I was hiking around Rocky Mountain National Park, I'd probably not want to bring anything larger than the 100-400mm L.
My Photography
Current Gear: Canon 5D III | Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: SBIG STT-8300M | Canon EF 300mm f/2.8 L II

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4731
  • POTATO
    • View Profile
    • Nature Photography
Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #29 on: February 28, 2013, 01:55:45 PM »
Deconvolution can shift MTF curves; however, for deconv algorithms to be most effective, as they would need to be at Rayleigh, and even more so at Dawes', you need to know something about the nature of the diffraction you are working with. Diffraction is the convolution, and for a given point light source at Dawes', you would need to know its nature - the PSF - to properly deconvolve it.
If you knew the precise PSF and had no noise, then you could in theory retrieve the unblurred "original" perfectly through linear deconvolution. In practice, it is impossible to know the precise PSF, there is noise, and the PSF might contain deep spectral zeros. This is where non-linear, blind deconvolution comes into the picture. Sadly, I don't know much about how those work, but I know some Wiener filtering, and I believe that serves as a starting point?

There are a whole lot of starting points. People are doing amazing things with advanced deconvolution algorithms these days: debanding, denoising, eliminating motion blur, recovering detail from a completely defocused image. The list of what we can do with deconvolution algorithms, particularly in the wavelet space, is long and growing. Whether it will help us really extract more resolution at contrast levels as low as, or lower than, 10%, I can't say. I guess if you could denoise extremely well and had a rough idea of the PSFs, then you could probably do some amazing things. I guess we'll see when amazing things start happening over the next decade. ;)
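
Since Wiener filtering came up: a minimal frequency-domain Wiener deconvolution, assuming the PSF is known and using a flat noise-to-signal estimate (illustrative only; real blind deconvolution is far more involved):

Code: [Select]
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    # nsr = assumed noise-to-signal power ratio (a tuning constant here).
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    # conj(H) / (|H|^2 + nsr): inverts the blur where the PSF kept signal,
    # backs off where the PSF crushed the signal into the noise floor.
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H)**2 + nsr)))

# Demo: two Dirac-like "stars", blurred by a Gaussian PSF, plus faint noise.
img = np.zeros((64, 64)); img[32, 30] = img[32, 34] = 1.0
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x**2 + y**2) / (2 * 1.5**2)); psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
blurred += np.random.default_rng(0).normal(0, 1e-3, img.shape)
restored = wiener_deconvolve(blurred, psf)   # the dip between the peaks deepens again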

Quote
The kind of deconv algorithms we have today, such as denoise and debanding and the like, are useful up to a degree. We would need some very, very advanced algorithms to deconvolve complex scenes full of near-infinite discrete point light sources at MTF0. I can see deconvolution being useful for extracting and enhancing detail at MTF10...I think we can do that today.
Doing anything up against the theoretical limit tends to be increasingly hard for vanishing benefits.

Certainly.

Quote
That said, what we can do with software to an image once it is captured was kind of beyond the scope of my original point...which was really simply to prove that higher resolution sensors really do offer benefits over lower resolution sensors, even in diffraction-limited scenarios.
I think that deconvolution strengthens your point, as it means that even higher-resolution sensors make some sense when operating in the "diffraction limited" regime. If the sensor is of lower resolution, then higher spatial frequencies are abruptly cut off (or folded into lower frequencies) and impossible to recover through deconvolution or other means.

That is a good point. It is along the same lines as the argument that, when you need a deep DOF, you use a very narrow aperture like f/22 or f/32 despite the softening it incurs, rather than opting for a wider aperture that won't produce the DOF you need. Correcting the uniform blurring caused by diffraction is a hell of a lot easier than correcting the non-linear blurring caused by a too-thin DOF. Deconvolution is definitely a powerful post-processing tool that can enhance the use of higher resolution sensors (among other things) and realize fine detail at low contrast levels that exists but is not readily apparent.
My Photography
Current Gear: Canon 5D III | Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: SBIG STT-8300M | Canon EF 300mm f/2.8 L II
