canon rumors FORUM

Rumors => EOS Bodies => Topic started by: jrista on February 27, 2013, 08:29:39 PM

Title: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on February 27, 2013, 08:29:39 PM
I'm starting this thread to continue a tangent from another. Rather than derail the other thread, and in order not to lose the discussion, I thought we could continue it in its own thread. I think there is important information to be gleaned from the discussion, which started when I responded to a comment by @rs:

Ps - I really hope Canon resist the temptation to take their 1.6x crop sensor up to 24mp. It'll suffer from softness due to diffraction from f6.0 onwards - mount an f5.6 lens on there and you've got little in the way of options. Even the legendary 300/2.8 II with a 2x TC III will underperform, and leave you with just one aperture option if you want to attempt to utilise all of those megapixels. Leave the MP lower, and let those lower processing overheads allow them to push the hardware of the small mirror and shutter to its limits.

Once again, this rhetoric keeps cropping up and it is completely incorrect! NEVER, in ANY CASE, is more megapixels bad because of diffraction!  :P That is so frequently quoted, and it is so frequently wrong.

You can follow the quote above to read the precursor comments on this topic. So, continuing on from the last reply by @rs:



Once again, this rhetoric keeps cropping up and it is completely incorrect! NEVER, in ANY CASE, is more megapixels bad because of diffraction!  :P That is so frequently quoted, and it is so frequently wrong.
I'm not saying its worse, its just the extra MP don't make any difference to the resolving power once diffraction has set in. Take another example - scan a photo which was a bit blurry - if a 600dpi scan looks blurry on screen at 100%, you wouldn't then think 'let's find out if anyone makes a 10,000dpi scanner so I can make this look sharper?' You'd know it would offer no advantages - at that point you're resolving more detail than is available - weakest link in the chain and all that...

I think you are generally misunderstanding resolution in a multi-component system. It is not the lowest common denominator that determines resolution...total system blur is the root sum of squares (quadrature sum) of the blurs of all the components. To keep things simple for this forum, and in general this is adequate for most discussion, we'll just factor in the lens resolution and sensor resolution, in terms of spatial resolution. The way I approach this is to determine the "system blur". Diffraction itself is what we call "blur" from the lens, assuming the lens is diffraction limited (and, for this discussion, we'll just assume the lens is always diffraction limited, as determining blur from optical aberrations is more complex), and it is caused by the physical nature of light. Blur from the lens changes depending on the aperture used, and as the aperture is stopped down, diffraction limits the maximum spatial resolution of the lens.

The sensor also introduces "blur", however this is a fixed, intrinsic factor determined by the size and spacing of the pixels, whether micro lenses are used, etc. For the purposes of discussion here, let's just assume that 100% of the pixel area is utilized thanks to "perfect" microlensing. That leaves us with a sensor blur equal to the pixel pitch (scalar size, horizontal or vertical, of each pixel) times two (to get us lp/mm, or line pairs per millimeter, rather than simply l/mm, or lines per millimeter).

[NOTE: I assume MTF50 as that is the standard that historically represents what we perceive as clear, crisp, sharp, with high microcontrast. MTF10, in contrast, is usually used to determine what might be considered the maximum resolution at the lowest level of contrast the human eye could detect...which might be useful for determining the resolution of barely perceptible features on the surface of the moon...assuming atmospheric conditions are perfect, but otherwise it is not really adequate for the discussion here. Maximum spatial resolution in MTF10 can be considerably higher than in MTF50, but there is no guarantee that the difference between one pixel and the next is detectable by the average person (Rayleigh Criterion, often described as the limit of human visual acuity for 20/20 vision)...it is more of the "true mathematical/theoretical" limit of resolution at very low, barely detectable levels of contrast. MTF0 would be spatial resolution where contrast approaches zero, which is largely useless for general photography, outside of the context of astronomy endeavors where minute changes in the shape and structure of an Airy disk for a star can be used to determine if it is a single, binary, or triple system...or other scientific endeavors where knowing the shape of an Airy disk at MTF0, or Dawes' limit (the theoretical absolute maximum resolving power of an optical system at near zero contrast level) is useful.]

For starters, let's assume we have a perfect (diffraction-limited) lens at f/8, on a 7D sensor which has a pixel pitch of 4.3 microns. The lens, at f/8, has a spatial resolution of 86 lp/mm at MTF50. The sensor has a raw spatial resolution of approximately 116 lp/mm (assuming the most ideal circumstances, and ignoring the difference between green and red or blue pixels.) Total system blur is derived by adding the blurs of each component in the system in quadrature (root sum of squares). The formula for this is:

Code: [Select]
tb = sqrt(lb^2 + sb^2)
Where tb is Total Blur, lb is Lens Blur, and sb is Sensor Blur. We can convert spatial resolution, from lp/mm, into a blur circle in mm, by simply taking the reciprocal of the spatial resolution:

Code: [Select]
blur = 1/sr
Where blur is the diameter of the blur circle, and sr is the spatial resolution. We get 0.01163mm for the blur size of the lens @ f/8, and 0.00863mm for the blur size of the sensor. From these, we can compute the total blur of the 7D with an f/8 lens:

Code: [Select]
tb = sqrt(0.01163mm^2 + 0.00863mm^2) = sqrt(0.0001352mm^2 + 0.0000745mm^2) = sqrt(0.0002097mm^2) = 0.01448mm
We can convert this back into lp/mm simply by taking the reciprocal again, which gives us a total system spatial resolution for the 7D of ~69lp/mm. Seems surprising, given the spatial resolution of the lens...but then again, that is for f/8. If we move up to f/4, the spatial resolution of the lens jumps from 86lp/mm to 173lp/mm. Refining our equation to stay in lp/mm:

Code: [Select]
tsr = 1/sqrt((1/lsr)^2 + (1/ssr)^2)
Where tsr is total spatial resolution, lsr is lens spatial resolution, and ssr is sensor spatial resolution, plugging in 173lp/mm and 116lp/mm for lens and sensor respectively gets us:

Code: [Select]
tsr = 1/sqrt((1/173)^2 + (1/116)^2) = 1/sqrt(0.0000334 + 0.0000743) = 1/sqrt(0.0001077) = 1/0.01038 = 96.3
With a diffraction limited f/4 lens, the 7D is capable of achieving ~96lp/mm spatial resolution.
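For anyone who wants to tinker with these numbers, here is a small Python sketch of the same calculation. The helper names are mine, and it assumes green light (~0.55µm) and the common approximation that a diffraction-limited lens reaches MTF50 at roughly 0.38/(λN):

Code: [Select]
# Rough sketch: system resolution as lens and sensor blur combined in quadrature.
# Assumes a perfect (diffraction-limited) lens and 0.55 um green light throughout.
from math import sqrt

WAVELENGTH_MM = 0.00055

def lens_lpmm_mtf50(f_number):
    """Approximate MTF50 resolution of a diffraction-limited lens, in lp/mm."""
    return 0.38 / (WAVELENGTH_MM * f_number)

def sensor_lpmm(pixel_pitch_um):
    """Nyquist-style sensor resolution in lp/mm: one line pair per two pixels."""
    return 1000.0 / (2.0 * pixel_pitch_um)

def system_lpmm(lens, sensor):
    """Combine lens and sensor blur in quadrature and convert back to lp/mm."""
    return 1.0 / sqrt((1.0 / lens) ** 2 + (1.0 / sensor) ** 2)

print(round(lens_lpmm_mtf50(8)))                                  # ~86 lp/mm
print(round(system_lpmm(lens_lpmm_mtf50(8), sensor_lpmm(4.3))))   # ~69 lp/mm, 7D at f/8
print(round(system_lpmm(lens_lpmm_mtf50(4), sensor_lpmm(4.3))))   # ~96 lp/mm, 7D at f/4
The 0.38 constant is only an approximation for an ideal lens's MTF50, so treat the outputs as ballpark figures rather than exact values.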

The debate at hand is whether a 24.1mp APS-C sensor is "worth it", and whether it will provide any kind of meaningful benefit over something like the 7D's 18mp APS-C sensor. My response is absolutely!! However, we can prove the case by applying the math above. A 24.1mp APS-C sensor (Canon-style, 22.3mmx14.9mm dimensions) would have a pixel pitch of 3.7µm, or ~135lp/mm:

Code: [Select]
(1/(pitch µm / 1000µm/mm)) / 2 l/lp = (1/(3.7µm / 1000µm/mm)) / 2 l/lp = (1/(0.0037mm)) / 2 l/lp = 270l/mm / 2 l/lp = 135 lp/mm
Plugging that, for an f/4 lens, into our formula from above:

Code: [Select]
tsr = 1/sqrt((1/173)^2 + (1/135)^2) = 1/sqrt(0.0000334 + 0.0000549) = 1/sqrt(0.0000883) = 1/0.0094 = 106.4

The 24.1mp sensor, with the same lens, produces a better result...we gained 10lp/mm, up to 106lp/mm from 96lp/mm on the 18mp sensor. That is an improvement of 10%! Certainly nothing to shake a stick at! But at f/8 the lens no longer outresolves the sensor...so there wouldn't be any difference there, right? Well...not quite. Because "total system blur" is a combination of all the components in the system, we will still see improved resolution at f/8. Here is the proof:

Code: [Select]
tsr = 1/sqrt((1/86)^2 + (1/135)^2) = 1/sqrt(0.0001352 + 0.0000549) = 1/sqrt(0.00019) = 1/0.0138 = 72.5
Despite the fact that the theoretical 24.1mp sensor from the hypothetical 7D II is DIFFRACTION LIMITED at f/8, it still resolves more! In fact, it resolves about 5% more than the 7D at f/8. So, according to the theory, even if the lens is not outresolving the sensor, even if the lens and sensor are both thoroughly diffraction limited, a higher resolution sensor will always produce better results. The improvements will certainly be smaller and smaller as the lens is stopped down, thus producing diminishing returns. If we run our calculations for both sensors at f/16, the difference between the two is less than at f/8:

18.0mp @ f/16 = 40lp/mm
24.1mp @ f/16 = 41lp/mm

The difference between the 24mp sensor and the 18mp sensor at f/16 has shrunk by half to 2.5%. By f/22, the difference is 29.95lp/mm vs. 30.21lp/mm, or an improvement of only 0.9%. Diminishing returns...however even at f/22, the 24mp is still producing better results...not that anyone would really notice...but it is still producing better results.
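A quick extension of the sketch above (same assumptions and helpers) sweeps both sensors across apertures and reproduces the diminishing-returns trend, give or take rounding:

Code: [Select]
# Sweep the two hypothetical sensors (4.3 um ~ 18 MP, 3.7 um ~ 24 MP APS-C) across apertures.
# Reuses lens_lpmm_mtf50 / sensor_lpmm / system_lpmm from the sketch above.
for f_number in (4, 5.6, 8, 11, 16, 22):
    lens = lens_lpmm_mtf50(f_number)
    r18 = system_lpmm(lens, sensor_lpmm(4.3))
    r24 = system_lpmm(lens, sensor_lpmm(3.7))
    print(f"f/{f_number}: {r18:.1f} vs {r24:.1f} lp/mm ({100 * (r24 / r18 - 1):.1f}% gain)")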

The aperture used was f/9, so diffraction has definitely "set in" and is visible given the 7D's f/6.9 DLA. The subject, in this case a Juvenile Baird's Sandpiper, comprised only the center 25% of the frame, and the 300 f/2.8 II w/ 2x TC STILL did a superb job resolving a LOT of detail:
You've got some great shots there, very impressive  ;) - and it clearly does show the difference between good glass and great glass. But the f9 300 II + 2x shot isn't 100% pixel sharp like your native 500/4 shot is. I'm not saying there's anything wrong with the shot - it's great, and the detail there is still great. Its just not 18MP of perfection great. A 15MP sensor wouldn't have resolved any less detail behind that lens, but that wouldn't have made a 15MP shot any better. This thread is clearly going off on a tangent here, as pixel peeping is rarely anything to do with what makes a great photo - its just we are debating whether the extra MP are worth it. And just to re-iterate, great shots jrista  :)

No, it certainly isn't 18mp of perfection great, because it is only a quarter of the frame. It is more like 4.5mp "great". :P My 100-400 wouldn't do as well, not because it doesn't resolve as much (at f/9 it would resolve roughly the same), but because it would produce lower contrast. Microcontrast from the 300mm f/2.8 II lens is beyond excellent....microcontrast from the 100-400 is bordering on piss-poor. There are also the advancements in IS technology to consider. I forgot to mention this before, but Canon has greatly improved the image stabilization of their new generation of lenses. Where we MAYBE got two stops of hand-holdability before, we easily get at least four stops now, and I've managed to get some good shots at five stops. As a matter of fact, the Sandpiper photo was hand held (with me squatting in an awkward manner on soggy, marshy ground that made the whole thing a real pain), at 600mm, on a 7D, and the BARE MINIMUM shutter speed to get a clear shot in that situation is 1/1000s.

So, I still stress...there are very good reasons to have higher resolution sensors, and with the significantly advanced new generation of lenses Canon is releasing, I believe we have the optical resolving power to not only handle a 24mp APS-C sensor, but up to 65-70mp FF sensors, if not more, in the future.

You've got some great shots there, very impressive  ;) -  /* ...clip... */ And just to re-iterate, great shots jrista  :)

Thanks!  ;D
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: dtaylor on February 27, 2013, 10:00:23 PM
Excellent post. Thank you for digging up and laying out the formulas. I remember where they're at, but I was being too lazy to dig out the book and copy them. You posted them along with a clear explanation.

I would only add that post processing can recover details <MTF50, giving more potential to the 24 MP sensor past its diffraction "limit". And that diffraction is not the same for all wavelengths, something sensor designers are aware of and will likely exploit in future very high resolution sensors with very high speed in camera processing. At that point you adjust the Bayer pattern to gain detail and process it all down to a file size smaller than the native sensor output, but with more detail than an image from a regular Bayer sensor.

Thanks again for the post!
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: wickidwombat on February 27, 2013, 10:46:41 PM
You should post your birdy pics again; they help explain. However, it would also be good to show the same lenses shot on a FF, say a 5Dmk3, for comparison.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: phoenix7 on February 27, 2013, 10:52:13 PM
http://www.anandtech.com/show/6777/understanding-camera-optics-smartphone-camera-trends (http://www.anandtech.com/show/6777/understanding-camera-optics-smartphone-camera-trends)

This seems to have some of the info about smaller photon sites/greater megapixels for a given sensor size, at least as regards the smaller sensors used in phones and such, but I think the math explanations about the light waves might help make things clear for those less mathematically inclined.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on February 27, 2013, 10:53:09 PM
Excellent post. Thank you for digging up and laying out the formulas. I remember where they're at, but I was being too lazy to dig out the book and copy them. You posted them along with a clear explanation.

No digging...that was straight out of my brain! :P (Honestly...I've written those formulas out so many times at this point, I remember them all by heart...and when I don't, it is just a matter of deriving them.) I just try to avoid the math when possible, as not everyone understands it. There was no real way to prove the notion that higher resolution sensors still offer benefits over lower resolution ones, even beyond the point of diffraction limitation, without the math, though.

I would only add that post processing can recover details <MTF50, giving more potential to the 24 MP sensor past its diffraction "limit". And that diffraction is not the same for all wavelengths, something sensor designers are aware of and will likely exploit in future very high resolution sensors with very high speed in camera processing.

It is true that diffraction differs depending on the wavelength, which is why I stated I'm generally ignoring the nature of Bayer sensors and the difference in resolution of red and blue pixels vs. green. Green light is easy, ~555nm wavelength, and it falls approximately mid-way between red light and blue light. Diffraction gives a slight disadvantage to red (its longer wavelength produces a larger Airy disk) and a slight advantage to blue, relative to their lower sampling density in the sensor. The math gets a lot more complex if you try to account for all three channels and cover spatial resolution for three wavelengths of light. I don't think that is quite appropriate for this kind of forum (and I don't have all of that memorized, either! :P)
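To put rough numbers on the wavelength dependence, here is a tiny sketch of the standard Airy disk formula (my own helper, using representative wavelengths for blue, green and red; nothing specific to any particular sensor):

Code: [Select]
# Airy disk diameter (first null to first null): d = 2.44 * wavelength * f-number.
# Longer wavelengths (red) diffract more than shorter ones (blue).
def airy_diameter_um(wavelength_nm, f_number):
    return 2.44 * (wavelength_nm / 1000.0) * f_number

for name, wl in (("blue", 470), ("green", 555), ("red", 640)):
    print(name, round(airy_diameter_um(wl, 8), 2), "um at f/8")
# blue ~9.2 um, green ~10.8 um, red ~12.5 um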

At that point you adjust the Bayer pattern to gain detail and process it all down to a file size smaller than the native sensor output, but with more detail than an image from a regular Bayer sensor.

That is an option, however I am not sure it is the best one. A couple of things here. For one, people who have never done much processing with say mRAW or sRAW from a Canon camera don't realize how limited it is compared to a true RAW file. Neither mRAW nor sRAW are true raw images...they are far more like a JPEG than a RAW, in that the camera "bins" pixels (via software) and produces a lossless compressed but high-precision 14-bit YCbCr image (JPEG is also a form of YCC, only it uses lossy compression). When I first got my 7D, I photographed in mRAW for a couple of weeks. I liked the quality of the output; it was sharp and clear...but after editing the images in LR for a while, I realized that editing latitude was far lower. I couldn't push exposure as far without experiencing undesirable and uncorrectable shifts in color, getting banding, etc. The same went for white balance, color tuning, vibrancy and saturation, etc. Without the original digital signal that could be reinterpolated as needed without ANY conversion and permanent loss of precision, you lose editing capabilities.

A 200mp sensor that uses hardware pixel binning sounds cool, and so long as you expose perfectly every time the results would probably be great. But if you need or want that post-process exposure latitude (which, as dynamic range has moved well beyond the 8-10 stops of a modern computer screen, is almost essential regardless of any other reasons you may want it), the only way to get the same editing latitude as a 50mp RAW would be to keep the 200mp image in a true RAW form as well. There is only one RAW, and any processing a camera does to bin or downsample will eliminate the kind of freedom we have all come to expect when using a DSLR these days.

Second, I guess I should also mention...there is an upper limit on how much you can resolve with a sensor, and still be reasonably priced. If we take a perfect f/4 lens, for example, you have an upper limit of 173lp/mm as far as the lens goes. That assumes optical aberrations contribute approximately nothing to blur, and that it is all caused by diffraction. I would say that Canon's 500mm f/4 II, 600mm f/4 II, as well as probably the 300mm f/2.8 II and 400mm f/2.8 II lenses fall into this category. In other words, not many lenses actually produce truly diffraction-limited results, or at least get close enough such that they might as well be perfect, at f/4.

The question is...what kind of sensor would it take to actually resolve all 173lp/mm from a total system spatial resolution standpoint? You mention a 200mp sensor as being the likely upper limit. From a cost standpoint ten years from now, that might be the case...but it would still be woefully insufficient to fully realize the potential of a perfect f/4 lens @ f/4. Theoretically speaking, system resolution only approaches the weakest component asymptotically...so you could never actually achieve 173lp/mm exactly. You can only approach it. Just to get within a few percent of it, you would need no less than a 650mp APS-C sensor!! In terms of FF, that would be a 1.6 GIGAPIXEL sensor, 49824 pixels wide by 33216 pixels tall!! That would be a roughly 10 GIGABYTE 16-bit TIFF file, assuming you could actually import the thing! :D
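Running the quadrature formula forward for that hypothetical 0.723µm pitch shows where those figures come from (a rough sketch with my own helper names, using the same 0.55µm green light / perfect f/4 lens assumptions as before):

Code: [Select]
# Forward check on the "absurd sensor" numbers: a hypothetical 0.723 um pitch sensor.
from math import sqrt

PITCH_UM = 0.723
sensor_lpmm = 1000.0 / (2.0 * PITCH_UM)                         # ~692 lp/mm
aps_c_mp = (22.3e3 / PITCH_UM) * (14.9e3 / PITCH_UM) / 1e6      # Canon APS-C dimensions
ff_mp = (36.0e3 / PITCH_UM) * (24.0e3 / PITCH_UM) / 1e6
system_res = 1.0 / sqrt((1.0 / 173.0) ** 2 + (1.0 / sensor_lpmm) ** 2)

print(round(aps_c_mp), "MP APS-C /", round(ff_mp), "MP FF")     # ~636 MP and ~1650 MP
print(round(system_res, 1), "lp/mm at f/4")                     # ~168 lp/mm, ~97% of the lens limit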

Such a sensor would really be pushing the limits, as well, and probably wouldn't even be physically possible. The pixel pitch of such a sensor would be around 723 nanometers (0.723µm)!! The physical size of the photodiode on a 180nm process would probably be around 350nm...which is well into the ultraviolet spectrum!! Perhaps, with subwavelength technology, we might be able to capture the light...I don't know all that much about that field...however I can't imagine it being cheap. And on top of the cost of making pixels that small in the first place! (That is nothing to say of the noise or dynamic range at that density...I can't imagine full-well charge capacity being high enough to be very useful at such a small pixel pitch.)
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on February 27, 2013, 10:54:44 PM
You should post your birdy pics again; they help explain. However, it would also be good to show the same lenses shot on a FF, say a 5Dmk3, for comparison.

Yeah...I'll post those images again. I may just have to rent the 5D III and a 600mm lens as well, and produce some examples with the 600mm on both the FF and APS-C (at the same distances.)
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: dr croubie on February 27, 2013, 11:18:00 PM
Very nice reading for us nerdly-types. I know not everyone around here is so-inclined, so for a real-world example just read this (http://www.lensrentals.com/blog/2013/01/a-24-70mm-system-comparison), maybe not even the whole thing, just the bit halfway down to the bottom.

The same Tamron lens scores better on the D800E than on the 5D3, because more pixels mean better MTF. (I know he didn't test at allegedly "diffraction limited" apertures, but at least the bit about the denser sensor works IRL).
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: wickidwombat on February 27, 2013, 11:36:00 PM
Here is my contribution. I still maintain FF delivers noticeably sharper images than 1.6 crop with current tech (who knows what future tech will bring, but apply the same tech advances to FF that you apply to crop and FF will still be ahead). However, I believe that the law of diminishing returns will apply soon and that in reality you won't be able to see the difference unless seriously pixel peeping.

Here are the comparison shots I did for another thread:
5Dmk3 + 300f4L IS + Canon 2X mk3
vs
EOS-M + 70-200 f2.8L IS II + Canon 2X mk3

both at f8 shot on tripod in Live view and manually focused

I think we can all agree the 70-200 is a sharp lens with lots of resolving power
the 300f4L IS is a much older optical design

however the FF combo is noticeably sharper

these are 100% center crops
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on February 27, 2013, 11:53:08 PM
Here is my contribution. I still maintain FF delivers noticeably sharper images than 1.6 crop with current tech (who knows what future tech will bring, but apply the same tech advances to FF that you apply to crop and FF will still be ahead). However, I believe that the law of diminishing returns will apply soon and that in reality you won't be able to see the difference unless seriously pixel peeping.

Here are the comparison shots I did for another thread:
5Dmk3 + 300f4L IS + Canon 2X mk3
vs
EOS-M + 70-200 f2.8L IS II + Canon 2X mk3

both at f8 shot on tripod in Live view and manually focused

I think we can all agree the 70-200 is a sharp lens with lots of resolving power
the 300f4L IS is a much older optical design

however the FF combo is noticeably sharper

these are 100% center crops

I don't disagree, actually. When it comes to the FF vs. APS-C argument, assuming you compose the scene the same (i.e. get closer or use a longer lens to achieve same subject size in the frame), the higher pixel COUNTS of most FF sensors, along with the better performance of each pixel thanks to their larger size, definitely result in better IQ. This argument is better made in terms of pixels on subject rather than pixel density. Regardless of how big (or small) the pixels are, if you get more on subject, and get more, better pixels on subject, then your subject overall will end up looking better. Assuming the same pixel performance between a 22.3mp FF and an 18mp APS-C (same amount of noise for any given exposure and ISO), if you frame the same subject the same size in the frame, the FF sensor should look about 24% better. It has 24% more pixels on the subject.

Now, assuming cost is no object, one can always pick up a 5D III, slap on a 600mm lens, and go zipping around photographing birds, wildlife, sports, air shows, whatever tickles your fancy and get better results than a 7D with a 400mm lens. You'll get roughly the same FF-effective FOV as the 600mm, but the larger, newer, better pixels of the 5D III, along with the fact that there are more of them in total, will just blow the 7D away.

The benefit of a high density APS-C really only comes into play when you can't afford that $13,000 600mm lens, meaning even if you had the 5D III, you could still only use that 400mm lens. You're in a focal-length-limited scenario now. It is in these situations, where you have both a high density APS-C and a lower density FF body, that something like the 18mp 7D or a 24mp 7D II really shines. Even though their pixels aren't as good as the 5D III (assuming there isn't some radical new technology that Canon brings to the table with the 7D II), you can get more of them on the subject. You don't need to crop as much on the high density APS-C as with the lower density FF. On a size-normalized basis, the noise of the APS-C should be similar to the FF, as the FF would be cropped more (by a factor of 1.6x), so the noise difference can be greatly reduced or eliminated by scaling down.
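To put rough numbers on both of those cases (a back-of-the-envelope sketch with nominal sensor sizes, a made-up subject framing, and ~6.25µm / ~4.3µm as the approximate 5D III and 7D pitches, not measurements from either camera):

Code: [Select]
# Case 1: same framing on FF vs APS-C -- pixels on subject scale with total pixel count.
def pixels_on_subject(total_mp, sensor_w_mm, sensor_h_mm, subject_fraction):
    # Treat the subject as a square patch whose side is subject_fraction of the frame height.
    pixels_per_mm2 = total_mp * 1e6 / (sensor_w_mm * sensor_h_mm)
    return pixels_per_mm2 * (subject_fraction * sensor_h_mm) ** 2

ff = pixels_on_subject(22.3, 36.0, 24.0, 0.5)      # 5D III, subject half the frame height
crop = pixels_on_subject(18.0, 22.3, 14.9, 0.5)    # 7D, same framing
print(round(ff / crop - 1.0, 2))                   # ~0.24, i.e. roughly 24% more pixels on the FF

# Case 2: focal-length limited -- same lens, same distance, so the subject is the same
# physical size on both sensors and only the pixel pitch matters.
print(round((6.25 / 4.3) ** 2, 2))                 # ~2.1x more pixels on subject for the 7D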

I challenge you to re-do your test with the 5D III and EOS-M. However this time, use the 300mm on both cameras, with the cameras at the same distance to the subject. The EOS-M should end up looking sharper on a size-normal basis (either scale its image down to a crop of the same area as the 5D III...or scale a matching crop of the 5D III up to the EOS-M image.) If you use the 300mm on both, and frame the subject in-camera the same for both, the 5D III again should end up being the winner because it gets more pixels on subject.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on February 28, 2013, 12:35:26 AM
Such a sensor would really be pushing the limits, as well, and probably wouldn't even be physically possible. The pixel pitch of such a sensor would be around 723 nanometers (0.723µm)!! The physical size of the photodiode on a 180nm process would probably be around 350nm...which is well into the ultraviolet spectrum!! Perhaps, with subwavelength technology, we might be able to capture the light...I don't know all that much about that field...however I can't imagine it being cheap. And on top of the cost of making pixels that small in the first place! (That is nothing to say of the noise or dynamic range at that density...I can't imagine full-well charge capacity being high enough to be very useful at such a small pixel pitch.)

Actually, we would be using BSI tech with such a sensor for sure. The fabrication process would probably be better than 180nm as well...maybe down to a 65nm process by then. So, assuming BSI, such a sensor could be barely viable...but there would still be the full-well capacity issues, and dynamic range issues, and noise issues, and read-out performance issues (1.6 GIGAPIXELS...assuming similar scaling in image processing chips...we might get...what...1fps?!?)

Cheers! :)
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on February 28, 2013, 04:06:05 AM
I believe that the "diffraction limit" or Rayleigh criterion is a somewhat arbitrary criterion where two impulses can be resolved at a certain contrast. There is nothing fundamental about this limit AFAIK.

True, it is. It is explicitly described as:

Quote
Rayleigh criterion: Imaging process is said to be diffraction-limited when the first diffraction minimum of the image of one source point coincides with the maximum of another.

I am trying to correlate the resulting image from a DSLR exposed at Rayleigh to how well a viewer of that image at its native print size could resolve detail at an appropriate viewing distance, hence the reference to vision. In and of itself, Rayleigh is not a constant or anything like that. The reason MTF 80, 50, 10 (really 9%, Rayleigh) and 0 (or really just barely above 0%, Dawes') are used is that they correlate to specific aspects of human perception regarding a print of said image. MTF 50 is generally referred to as the best measure of resolution that produces output we consider well resolved...sharp...good contrast and high acutance. MTF10 would be the limit of useful resolution, and does not directly correlate with sharpness or IQ...simply the finest level of detail resolvable such that each pixel can be differentiated (excluding any effects image noise may have, which can greatly impact the viability of MTF10, another reason it is not particularly useful for measuring photographic resolution.)


If you use your camera to shoot images of stars (impulse-like), I believe that you can resolve information far beyond the Rayleigh criterion. If your camera had very little noise and/or high-frequency details had very high contrast, then I believe that good deconvolution could resolve beyond Rayleigh for general images.

This is true. Since stars are bright points of light on a very deep, dark background, assuming low noise, you can resolve stars down to Dawes' limit...which is MTF 0. I touched on that in the first post of this thread. The only real use for assuming MTF0 that I know of is for the detection of multi-star stellar systems. At Dawes' limit, Airy discs are barely separated, much less so than at Rayleigh, but enough that they affect the presentation of the Airy disc. Study of star Airy discs with an understanding of how diffraction from multiple very close points of light interacts allows astronomers to detect binary, triple, and even planetary systems.

From an artistic photographic perspective, resolution at Dawes' limit is generally useless. Outside of astrophotography, where any meaningful light comes from otherwise widely spaced point sources, one could never see the difference between points so closely spaced, nor observe Airy disc shapes...it would all just be one big blur. :P

Instead of treating these rules-of-thumb as absolute limits, I think it makes more sense to treat them as rules-of-thumb: as your sensel density approach x and your aperture is y, it is going to be increasingly difficult to obtain substantially more details.

I'm not proposing these as absolutes that one should take into account in their day to day photography. I was simply trying to prove the case that, at least for the foreseeable future, continued increases in sensor resolution still provide value, and that even in diffraction-limited scenarios, a higher density sensor WILL produce better results than a lower density sensor.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Hillsilly on February 28, 2013, 04:46:28 AM
....here are the comparison shots I did.... the FF combo is noticeably sharper...
Yes, but could you convince anyone that they were real?

Jrista, nice post.  In an ideal world, we'd all be shooting with the world's best equipment.  But as mentioned in one example above, due to financial constraints, many people are focal length limited or shooting with crop-sensor cameras.  So it's interesting to read about the positives of increased megapixels.

If people like you are taking a real interest in how some aspects of the 7D II sensor might perform, I'm hoping that the Canon engineers are taking it seriously, too.  Wouldn't the world be an interesting place if the 7D II had a spectacularly good sensor that rivalled some FF cameras at low ISOs?
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Plainsman on February 28, 2013, 06:06:56 AM
Can jrista take account of the heavy AA filters on Canon sensors in his calcs or have I missed something?

On pentaxforums you will see comparisons between the K5-II and the filterless K5-IIs. Both have the same 16MP sensor, but the -IIs is a whole lot sharper and is claimed to be equivalent to a filtered 24MP camera.

Does this matter in the real world? Well yes maybe if you want to use 100pc crops.

Will Canon introduce something like the D7100 in their well-trailed upcoming 7D2, 70D, 700D series?

Somehow doubt it.

Excellent stuff jrista - thanks for posting.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: dtaylor on February 28, 2013, 06:19:04 AM
Here are the comparison shots I did for another thread:
5Dmk3 + 300f4L IS + Canon 2X mk3
vs
EOS-M + 70-200 f2.8L IS II + Canon 2X mk3

While I agree in principle that, out of camera and all other things equal, FF shots are sharper, you've got way too many differences in this test.

But even here, if you sharpen the crop sample they look identical. Sharpening is not an unlimited good, so as long as the gap between component A and B is below a certain amount, post processing can eliminate the gap. This is true FF v. crop at low ISO, but not at high ISO.

Quote
I think we can all agree the 70-200 is a sharp lens with lots of resolving power
the 300f4L IS is a much older optical design

For testing purposes you can't make those kinds of assumptions. How do you know that your copy of lens A is sharper than your copy of lens B when used with your copy of teleconverter C at aperture D? Eliminate all relevant differences in the test to isolate the variable you're testing for.

But again I agree with the point. Out of camera, all other things equal, FF is sharper.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: thewaywewalk on February 28, 2013, 06:41:08 AM
Can I make a little objection?
You're talking about resolution, but what about the light sensitivity?
Is it right that doubling the MP would halve the amount of light every pixel gets to see?

Or is the increase of Megapixels always connected to a higher amplification per pixel? (Which would increase noise)

This is what I'm worried about talking about a 24MP APS-C Sensor.

Sorry for Offtopic!
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: yyz on February 28, 2013, 08:56:30 AM
I need to support thewaywewalk:

Though I completely support jrista in arguing that increased pixel density is always a resolution advantage (Zeiss has also published two excellent papers on that subject), there are other disadvantages of smaller pixels.

Reduced energy to each pixel, leading to a necessary increase in signal amplification and in turn to increased noise, is one of these disadvantages. Fortunately technology is constantly improving in that area, but at present it is a practical tradeoff.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: RGF on February 28, 2013, 09:49:33 AM
Reduced energy to each pixel, leading to a necessary increase in signal amplification and in turn to increased noise, is one of these disadvantages. Fortunately technology is constantly improving in that area, but at present it is a practical tradeoff.
How much of a practical disadvantage does this give the Nikon D800 vs the D600?

http://www.dxomark.com/index.php/Cameras/Compare-Camera-Sensors/Compare-cameras-side-by-side/(appareil1)/834 (http://www.dxomark.com/index.php/Cameras/Compare-Camera-Sensors/Compare-cameras-side-by-side/(appareil1)/834)|0/(brand)/Nikon/(appareil2)/792|0/(brand2)/Nikon

Dynamic range->print

-h

This leads me to a question that I have not received a satisfactory answer to yet.

Consider an exposure at ISO 800: why is it that we get better results by setting the ISO to 800 (amplification within the camera via electronics - analog?) versus taking the same picture at ISO 100 and adjusting exposure in the computer?  Of course I am talking about a raw capture.

In both cases the amount of light hitting the sensor will be the same, so the signal and S/N will be the same(?), but amplifying the signal in the camera via electronics seems to give a cleaner image.

Thanks
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on February 28, 2013, 10:27:31 AM
Some interesting points here, though not all of them are 100% based on optical reality (which has a measurable match in optical theory to 99+% most of the time, as long as you stay within Fraunhofer and quantum limits)

Rayleigh is indeed the limit where MTF has sunk to 7% (or 13%, depending if your target is sine or sharp-edge, OTF) - a point where it is very hard to recover any detail by sharpening if your image contains any higher levels of noise. It's hard even at low levels of noise. And Rayleigh is defined as: "When the peak of one point maxima coincides with the first Airy disk null on the neighboring point".

Consider again what this means.
You have two points, at X distance p-p. The distance you're interested in finding is where they begin to merge enough to totally mask out the void in between. Rayleigh distance gives "barely discernible differentiation, you can JUST make out that there's two spots, not one wide spot".

But this does not make Rayleigh the p-p distance of pixels needed to register that resolution; to register the void in the first place you have to have one row of pixels between the two points. That means that Rayleigh is a line-pair distance, not a line distance. If Rayleigh is 1mm and the sensor 10mm, you need to have 20 pixels to resolve it.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on February 28, 2013, 10:39:51 AM
So, the real "optical limit in a perfect lens" is:
(For green light, 0.55µm - at F4.0)
1.22 x F4.0 x 0.55µm = 2.684µm is a line PAIR. >>> The pixels have to be 1.34µm

That gives 372lp/mm, or:
(24 x 16mm) / (0.00134mm)^2 = 213MP on an APS sensor (x2.25 = ~480MP FF).

This is for F4.0 - and that seems to be a reasonable point of aperture size to choose. Lower aperture values are EXTREMELY seldom diffraction limited in normal photographic lenses. Higher aperture values often are, at least in the center of the image field.

This is easily verifiable in reality, by using a camera like the small Pentax Q (which has 1.55µm pixels) with a Canon EF adapter. Many current Canon lenses outresolve the little Pentax sensor (give moire and aliasing effects) at F4.0. You can also verify it by putting the lens in a real traversing-slit MTF bench, some of which have upper image formation resolution limits well below 1µm.
...........

A couple of counter / reinforcing arguments for higher resolution (no noise involved, yet...!):
Reinforcing:
1) The actual resolution of a Bayer sensor, for a randomly oriented image detail, corresponds to an effective pitch about sqrt(2) larger than the physical pixel pitch. Not all lines or details line up perfectly with "pixel pairs". This increases the needed MP by a factor of two, since spatial resolution has to be sqrt(2) higher.
2) As long as RN + downstream electronic noise is kept low (or you have a hardware level binning scheme), image detail ACCURACY (not resolution) continues to increase measurably and visibly for about one twofold increase in MP past the theoretical limit. This is maybe the most important part - for me. You get more ACCURATE detail, not just "more" detail.

Counters:
1) Lenses are very rarely truly diffraction limited at - or below - F4.0. The best lens we've ever measured - that can be used on a FF sensor - has a "loss-factor" of about 1.1. This means that diffraction + aberration losses can be approximated by [diffraction(actual f/# * 1.1)]. This is valid for large apertures if the lens is still considered sharp (like an 85L at F2.8, very sharp, but definitely not "perfect"). Add in global contrast loss factors too.
2) Sharpness is also limited by shutter speed and vibration in various ways (at some shutter speeds with lighter lenses, the shutter actually induces more vibration into the image than the mirror-slap...!).
3) Sharpness is also limited by subject movement / shutter speed.
4) Sharpness is often limited by the NEED to have a deeper DoF, i.e smaller aperture - more diffraction.
.....................

From a purely practical PoV (well, I do quite a lot of actual shooting too...) I generally say that a factor of two times more MP than the largest presentation size you need is optimal. This includes both the electrical and the optical side of the equation.

If you NEVER use anything larger than 10MP output sizes, 20MP is good enough. Then there's very little actual - practical - gain to be had to go to 30 or 40MP (or higher...). This has to do with practical noise considerations.
(And then I'll immediately contradict myself - this isn't valid for FF bodies, if they're less than 16-18MP. At this point, so many lenses are so much sharper than what the camera can accurately resolve that you get aliasing and moire problems very easily...)
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: weixing on February 28, 2013, 11:21:15 AM
Can I make a little objection?
You're talking about resolution, but what about the light sensitivity?
Is it right that doubling the MP would halve the amount of light every pixel gets to see?

Or is the increase of Megapixels always connected to a higher amplification per pixel? (Which would increase noise)

This is what I'm worried about talking about a 24MP APS-C Sensor.

Sorry for Offtopic!
Hi,
    IMHO, I'm not worried about a new 24MP APS-C sensor... I think it should perform better than the current 60D 18MP APS-C sensor.

    I got a Canon G15... a 1/1.7" (7.44mm x 5.58mm) 12MP (4000 x 3000) sensor, and the result (noise performance) at ISO 1600 is not that far behind the 60D 18MP APS-C sensor. If an APS-C sensor were based on the G15 pixel size, it would be a 90++MP APS-C sensor, so a new 24MP APS-C should not be that bad.
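    A quick sketch of that extrapolation (just my arithmetic on the published G15 sensor numbers):

Code: [Select]
# Extrapolate the G15 pixel pitch (1/1.7", 4000 x 3000 over ~7.44 x 5.58 mm) to APS-C.
g15_pitch_mm = 7.44 / 4000                      # ~0.00186 mm, i.e. ~1.86 um
aps_c_mp = (22.3 / g15_pitch_mm) * (14.9 / g15_pitch_mm) / 1e6
print(round(aps_c_mp), "MP")                    # ~96 MP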

    The attached is the 100% crop shot using the G15 and 60D with the below settings:
ISO 1600, 1/6s and F2.8, NR Standard (G15 NR at high ISO cannot be turned off when shooting RAW). I processed both using the same settings in DPP (Faithful, sharpening all set to 0 and NR all set to 0) and exported to PS in 16-bit TIFF. Then I cropped both in PS, copied, pasted and saved as JPEG.

   Have a nice day.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: RGF on February 28, 2013, 11:44:54 AM
This leads me to a question that I have not received a satisfactory answer to yet.

Consider an exposure at ISO 800: why is it that we get better results by setting the ISO to 800 (amplification within the camera via electronics - analog?) versus taking the same picture at ISO 100 and adjusting exposure in the computer?  Of course I am talking about a raw capture.

In both cases the amount of light hitting the sensor will be the same, so the signal and S/N will be the same(?), but amplifying the signal in the camera via electronics seems to give a cleaner image.

Thanks
I believe that this differs for so-called "ISO-less" cameras and... "non-ISO-less" (sic) cameras. Canon generally belongs to the latter category.

Imagine a pipeline consisting of:
(noise injected)->(amplifier)->(noise injected)->(amplifier)

If the second injection of noise is significant, then you gain SNR by employing the first amplifier. If the first noise source is dominant, then it does not matter which amplifier you use.
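A toy simulation of that pipeline (with invented numbers: ~50 electrons of signal, 6 electrons of second-stage noise, 8x gain) illustrates the point:

Code: [Select]
# Toy model of one pixel: shot noise upstream, then analog gain, then downstream
# noise (ADC etc.). Compare gaining 8x in camera vs pushing 8x later in software.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
photons = rng.poisson(50, n).astype(float)           # dim exposure, ~50 e- per pixel
downstream = 6.0                                      # e--equivalent noise added after the gain stage

iso800 = photons * 8 + rng.normal(0, downstream, n)               # amplify, then add noise
iso100_pushed = (photons + rng.normal(0, downstream, n)) * 8      # add noise, then push in post

def snr(x):
    x = x / 8.0                                       # refer both back to the same scale
    return 50.0 / np.std(x - 50.0)

print("ISO 800 in camera :", round(snr(iso800), 2))          # ~7.0
print("ISO 100 pushed 8x :", round(snr(iso100_pushed), 2))   # ~5.4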


The high-DR@lowISO sensors used by Sony/Nikon seem to give similar quality if you do the gain in a raw editor as if you do it in-camera. There are still disadvantages to this method, though. You cannot use the auto-exposure, in-camera preview is useless, and the histogram is hard to interpret. You gain better highlights (DR) in low-light images, though.

-h

Thanks.  I think I understand most of what you are saying.  However, the amplification via the computer should not introduce any noise.  The A to D is reduced from 12 (or 14) bits to 9 (or 11) bits for a 3 stop gain.  Shadows may go from 6 bit to 3 bit.  Not noise, but posterization?
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on February 28, 2013, 11:56:58 AM
I am trying to correlate the resulting image from a DSLR exposed at Rayleigh to how well a viewer of that image at its native print size could resolve detail at an appropriate viewing distance, hence the reference to vision. In and of itself, Rayleigh is not a constant or anything like that. The reason MTF 80, 50, 10 (really 9%, Rayleigh) and 0 (or really just barely above 0%, Dawes') are used is that they correlate to specific aspects of human perception regarding a print of said image. MTF 50 is generally referred to as the best measure of resolution that produces output we consider well resolved...sharp...good contrast and high acutance. MTF10 would be the limit of useful resolution, and does not directly correlate with sharpness or IQ...simply the finest level of detail resolvable such that each pixel can be differentiated (excluding any effects image noise may have, which can greatly impact the viability of MTF10, another reason it is not particularly useful for measuring photographic resolution.)
But deconvolution can shift MTF curves, can it not? At MTF0, it is really hard to see how any future technology might dig up any details (a hypothetical image scaler might make good details based on lower resolution images + good models of the stuff that is in the picture, but I think that is beside the point here).

Deconvolution can shift MTF curves, however for deconv algorithms to be most effective, as they would need to be at Rayleigh, and even more so at Dawes', you need to know something about the nature of the diffraction you are working with. Diffraction is the convolution, and for a given point light source at Dawes', you would need to know the nature of it, the PSF, to properly deconvolve it. The kinds of deconv algorithms we have today, such as denoising and debanding and the like, are useful to a degree. We would need some very, very advanced algorithms to deconvolve complex scenes full of near-infinite discrete point light sources at MTF0. I can see deconvolution being useful for extracting and enhancing detail at MTF10...I think we can do that today.
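As a toy illustration of the idea (a minimal Wiener-style sketch in numpy with a made-up Gaussian PSF standing in for an Airy pattern, nothing like the advanced algorithms we're talking about):

Code: [Select]
# Toy Wiener deconvolution in 1-D: blur two nearby "stars" with a known PSF,
# add a little noise, then divide the PSF back out in the frequency domain.
import numpy as np

rng = np.random.default_rng(0)
signal = np.zeros(256)
signal[100], signal[105] = 1.0, 1.0                 # two closely spaced point sources

psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)  # Gaussian stand-in for an Airy pattern
psf /= psf.sum()
psf_padded = np.zeros_like(signal)
psf_padded[:psf.size] = psf
psf_padded = np.roll(psf_padded, -(psf.size // 2))  # centre the PSF at index 0

H = np.fft.fft(psf_padded)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
blurred += rng.normal(0, 0.002, signal.size)        # a touch of read-noise-like noise

wiener = np.conj(H) / (np.abs(H) ** 2 + 1e-3)       # regularised inverse filter
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))

print(blurred[97:109].round(2))                     # the two peaks are largely merged
print(restored[97:109].round(2))                    # they should come back apart as two peaks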

That said, what we can do with software to an image once it is captured was kind of beyond the scope of my original point...which was really simply to prove that higher resolution sensors really do offer benefits over lower resolution sensors, even in diffraction-limited scenarios.


The fundamental question (to my mind, at least) is "given the system MTF/noise behaviour, after good/optimal deconvolution and noise reduction, how perceptually accurate/pleasing is the end-result?". Of course, my question is a lot vaguer and harder to figure than yours.

I figure that in 10 years, deconvolution will be a lot better (and faster) than today. This has a (small) influence on my actions today, as my raw files are stored long-term.

Sure, I totally agree! I've seen some amazing things done using deconvolution research, and I'm pretty excited about these increasingly advanced algorithms finding their way into commercial software. For one, I really can't wait until high quality debanding finds its way into Lightroom. People complain a lot about the DR of Canon sensors...however Canon sensors still pack in the deep shadow pixels, full of detail, underneath all that banding. Remove the banding, and you can recover completely usable deep shadows and greatly expand your DR. I've never seen it get as good as what an Exmor offers out of the box, but it gains at least a stop, stop and a half beyond the 10-11 stops we get natively.

We can certainly look forward to improved digital signal processing and deconvolution. That again was a bit beyond the scope of the original points I was trying to make, hence the lack of any original discussion involving them.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Don Haines on February 28, 2013, 12:17:58 PM
The benefit of a high density APS-C really only comes into play when you can't afford that $13,000 600mm lens, meaning even if you had the 5D III, you could still only use that 400mm lens. You're in a focal-length-limited scenario now. It is in these situations, where you have both a high density APS-C and a lower density FF body, that something like the 18mp 7D or a 24mp 7D II really shines. Even though their pixels aren't as good as the 5D III (assuming there isn't some radical new technology that Canon brings to the table with the 7D II), you can get more of them on the subject. You don't need to crop as much on the high density APS-C as with the lower density FF. On a size-normalized basis, the noise of the APS-C should be similar to the FF, as the FF would be cropped more (by a factor of 1.6x), so the noise difference can be greatly reduced or eliminated by scaling down.

Well said! Might I add that even if one could afford the $13,000 600mm lens, for many of us who backpack it's just too large to bring along on a multi-day trek through the mountains. Sometimes bigger is not better.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on February 28, 2013, 12:26:36 PM
Can jrista take account of the heavy AA filters on Canon sensors in his calcs or have I missed something?

Well first, as I've mentioned in the past, I believe the idea that Canon uses overly strong AA filters is a bit overblown. I thought my 7D had an aggressive AA filter until I first put on the EF 300mm f/2.8 L II in August 2012. I'd been using a 16-35 L II and the 100-400 L. Both are "good" lenses, but neither is a truly "great" lens. Neither seems to have enough resolving power and/or contrast to really do the 7D justice. They get really close, but often enough they fall just a little short, which produces that well-known "soft" look to 7D images.

With the EF 300/2.8 II, 500/4 II, and 600/4 II, with and without 1.4x and 2x TCs, the IQ of the 7D is stellar. I've never seen any of the classic softness that I did with my other lenses. Based on the IQ from using Canon's better lenses, I do NOT believe they have aggressive AA filters...I think they actually have AA filters that are just right! :)

On pentaxforums you will see comparisons between the K5-II and the filterless K5-IIs. Both have the same 16MP sensor, but the -IIs is a whole lot sharper and is claimed to be equivalent to a filtered 24MP camera.

Does this matter in the real world? Well yes maybe if you want to use 100pc crops.

Will Canon introduce something like the D7100 in their well-trailed upcoming 7D2, 70D, 700D series?

Somehow doubt it.

Excellent stuff jrista - thanks for posting.

I'm curious about the difference with the Pentax K5 II. If there is that much of a difference, I'd presume that the AA filter WAS aggressive. From what I have heard, the D800 and D800E, when you use a good lens, do NOT exhibit that much of a difference. In many reviews I've read, the differences were sometimes barely perceptible, with the added cost on the D800E that if you shoot anything with repeating patterns, you can end up with aliasing and moire. There is definitely some improvement to shooting without an AA filter, but I am not sure it is really all it is cracked up to be.

Generally speaking, I would blame the lens for any general softness unless it is definitively proven to outresolve the sensor. For sensors with the densities they have today, lenses are generally only capable of outresolving the sensor in a fairly narrow band of aperture settings...from around f/3.5 to f/8 for FF sensors, and f/3.5 to f/5.6 for APS-C sensors. The higher the density of the sensor, the narrower the range....a 24mp APS-C can probably only be outresolved at around f/4, unless the lens is more diffraction-limited at wider apertures. Wider than f/3.5 in the majority of cases, optical aberrations cause softening, in many cases much more than you experience from diffraction even at f/22.
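One way to put rough numbers on that narrow band is to ask at which f-number a perfect lens's MTF50 figure drops below the sensor's Nyquist resolution. This is a sketch using the same 0.38/(λN) approximation as earlier; note it is a different criterion than the Airy-disk-based DLA mentioned earlier in the thread, so it lands near, but not exactly on, figures like the 7D's f/6.9:

Code: [Select]
# Rough f-number beyond which a perfect lens's MTF50 falls below the sensor's
# Nyquist lp/mm, i.e. where diffraction rather than the sensor becomes the bottleneck.
WAVELENGTH_MM = 0.00055                            # green light

def crossover_f_number(pixel_pitch_um):
    sensor_lpmm = 1000.0 / (2.0 * pixel_pitch_um)
    return 0.38 / (WAVELENGTH_MM * sensor_lpmm)

for pitch in (6.25, 4.3, 3.7):                     # 5D III, 7D, hypothetical 24 MP APS-C
    print(pitch, "um ->", "f/" + str(round(crossover_f_number(pitch), 1)))
# ~f/8.6, ~f/5.9 and ~f/5.1 respectively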

That said, so long as you pair a high quality lens with a high density sensor on a Canon camera, or for that matter a Nikon camera, I do not believe the AA filter will ever be a serious problem. When it comes to other brands, I don't really know enough. In the case of the K5 II, it really sounds more like the AA version DOES have an aggressive filter, which is why there is a large difference between the AA and non-AA versions.

On pentaxforums you will see comparisons between the K5-II and the filterless K5-IIs. Both have the same 16MP sensor, but the -IIs is a whole lot sharper and is claimed to be equivalent to a filtered 24MP camera.
If you check out luminous-landscape.com there is a nice thread comparing optimally sharpened D800 vs optimally sharpened D800E. I believe the conclusion is that for low-noise situations, the performance is virtually identical.

Aye, this is what I've heard as well. There is a small improvement with the D800E in the right circumstances, but overall it does not seem to be as great as it otherwise sounds on paper. Given the IQ I can get out of the 7D, which does have an AA filter, with top-shelf lenses...I really do not believe it has an aggressive AA filter, and I am quite thankful that the AA filter is there. Without it, I'd never be able to photograph birds...their feathers are moire hell!
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: 3kramd5 on February 28, 2013, 12:31:28 PM
Such a sensor would really be pushing the limits, as well, and probably wouldn't even be physically possible. The pixel pitch of such a sensor would be around 723 nanometers (0.723µm)!

Nokia got halfway there (1.4 micron) with a camera phone... :D
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on February 28, 2013, 12:34:22 PM
Some interesting points here, though not all of them are 100% based on optical reality (which has a measurable match in optical theory to 99+% most of the time, as long as you stay within Fraunhofer and quantum limits)

Rayleigh is indeed the limit where MTF has sunk to 7% (or 13%, depending if your target is sine or sharp-edge, OTF) - a point where it is very hard to recover any detail by sharpening if your image contains any higher levels of noise. It's hard even at low levels of noise. And Rayleigh is defined as: "When the peak of one point maxima coincides with the first Airy disk null on the neighboring point".

Consider again what this means.
You have two points, at X distance p-p. The distance you're interested in finding is where they begin to merge enough to totally mask out the void in between. Rayleigh distance gives "barely discernible differentiation, you can JUST make out that there's two spots, not one wide spot".

But this does not make Rayleigh the p-p distance of pixels needed to register that resolution; to register the void in the first place you have to have one row of pixels between the two points. That means that Rayleigh is a line-pair distance, not a line distance. If Rayleigh is 1mm and the sensor 10mm, you need to have 20 pixels to resolve it.

You've hit it on the head...measuring resolution from a digital sensor at Rayleigh is very difficult because of noise. The contrast level is so low that, when the image is combined with noise (both photon shot and read), there is really no good way to discern whether the difference between two pixels is caused by differences in the detail resolved from the scene, or differences caused by noise.

There is probably a better "sweet spot" MTF, lower than MTF 50 but higher than MTF @ Rayleigh, that would give us a better idea of how well digital image sensors can resolve detail. Given the complexities involved with using MTF @ Rayleigh, and the ubiquitous acceptance of MTF 50 as the best measure of resolution from a human perception standpoint, I prefer to use MTF 50.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on February 28, 2013, 12:42:28 PM
"Rayleigh distance gives "barely discernible differentiation, you can JUST make out that there's two spots, not one wide spot."

I guess this is the point of discussion. Like I said earlier, if you have some knowledge of the scene (such as knowing that it is Dirac-like stars on a dark sky), or if you have knowledge of the PSF/low noise/access to good deconvolution, you can challenge this limit.

I have no issues with the Rayleigh criterion being a practically important indicator of "diminishing returns". I do have an issue with people claiming that it is an absolute brickwall (nature seems to dislike brickwall filters - perhaps because it assumes acausality?).

From a theoretic perspective it would be interesting to know if there are true, fundamental limits to the information passed onto the sensor (a PSF will to some degree "scramble" the information, making it hard to sort out. That is not the same as removing it altogether). I have seen some hints of such a limit, but I never had much optical physics back in University, and I am too lazy to read up on the theory myself. There was a discussion on dpreview where they talked about a few hundred megapixels/gigapixel for an FF sensor before the blue sensels could not receive any more spatial information.

At the very least, I assume that we move into quantum trouble sooner or later. When a finite number of photons hits a sensor, there is only so much information to record. If you cannot simultaneously record the precise position and energy of a photon, then that is a limit. I assume that as a sensel approaches the wavelength of light, nastiness happens. That is one more area of physics that I do not master.

-h

I don't think I've claimed Rayleigh is a "brick wall". I'd call Dawes' the brick wall, as anything less than that and you have two unresolved points of light. The problem with Rayleigh, at least in the context of spatial resolution in the photographic context...is that detail becomes nearly inseparably mired in noise. At very low levels of contrast, even assuming you have extremely intelligent and effective deconvolution, detail at MTF 10 could never really be "certain"....is it detail...or is it noise? Even low levels of noise can have much higher contrast than detail at MTF 10. Dawes' limit is the brick wall; Rayleigh is the effective limit of resolution for all practical intents, and leaves a fairly significant degree of uncertainty in discussions like this.

MTF50 is widely accepted as being a moderately lower contrast level where detail is acceptably perceivable by the human eye in a print at native resolution. In the film days, the perception of a viewer was evaluated from contact prints, so what the film resolved is what the viewer could see. Given the broad use and recognition of MTF 50, it's what I use. Ultimately, it wouldn't really matter if I used MTF 50, MTF 10, or MTF 0...the math will work out roughly the same either way, and the relative benefits of a 24mp sensor over an 18mp sensor will still exist. The MTF is really just to provide a consistent frame of reference, nothing more.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on February 28, 2013, 12:54:48 PM
This leads me to a question that I have not received a satisfactory answer yet.

Consider an exposure at ISO 800: why is it that we get better results by setting the ISO to 800 (analog amplification within the camera via electronics?) versus taking the same picture at ISO 100 and adjusting exposure on the computer? Of course I am talking about a raw capture.

In both cases the amount of light hitting the sensor will be the same, so the signal and S/N will be the same(?), but amplifying the signal in the camera via electronics seems to give a cleaner image.

Thanks
I believe that this differs for so-called "ISO-less" cameras and... "non-ISO-less" (sic) cameras. Canon generally belongs to the latter category.

Imagine a pipeline consisting of:
(noise injected)->(amplifier)->(noise injected)->(amplifier)

If the second injection of noise is significant, then you gain SNR by employing the first amplifier. If the first noise source is dominant, then it does not matter which amplifier you use.


The high-DR@lowISO sensors used by Sony/Nikon seem to give similar quality whether you apply the gain in a raw editor or in-camera. There are still disadvantages to this method, though. You cannot use the auto-exposure, the in-camera preview is useless, and the histogram is hard to interpret. You gain better highlights (DR) in low-light images, though.

-h

Thanks. I think I understand most of what you are saying. However, the amplification via the computer should not introduce any noise. The A to D is reduced from 12 (or 14) bits to 9 (or 11) bits for a 3 stop gain. Shadows may go from 6 bit to 3 bit. Not noise, but posterization?

"Amplification" via software does not introduce noise...however it can enhance the noise present, because at that point, assuming noise exists in the digital signal, it is "baked in". When it comes to Exmor (Sony/Nikon high-DR sensor), the level of noise is extremely low, so pushing exposure around in post is "amplifying" pixels that have FAR less noise than the competition.

The Exmor sensor could be called ISO-LESS because it is primarily a DIGITAL pipeline.

In most sensors, when a pixel is read, analog CDS is applied, analog per-pixel amplification is applied, the columns of a row are read out, the signal is often sent off the sensor die via a bus, a downstream analog amplifier may be applied, and the pixels are finally converted to digital by a high frequency ADC. Along that whole pipeline there are many chances for noise to be introduced into the analog signal. Canon's sensors are like this, and some of the key sources of noise are the non-uniform response of the CDS circuits (which is the first source of banding noise), transmission of the signal along a high speed bus, downstream amplification (which amplifies all the noise in the signal prior to secondary amplification...which only occurs at the highest ISO settings), and finally large-bucket parallel ADC via high frequency converters (which is where the second source of banding noise comes from).

Unlike most sensors, the only analog stage in Exmor is the direct read of each pixel. Once a pixel is read, it is sent directly to an ON-DIE ADC, where pixels are converted directly to a digital form, where digital CDS is applied, where digital amplification is applied, and from which point on the entire signal remains digital. Once the signal is in a digital form, it is, for all intents and purposes, immune to contamination by analog sources of noise. Transmission along a bus, further image processing, etc. all work on bits rather than an analog signal. As such, Exmor IS effectively "ISO-less", since amplification occurs post-ADC. ISO 800 with Exmor is really the same as ISO 100 with a 3-stop exposure boost in post...there is really little difference.
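
As a toy illustration of the two pipelines described above - the numbers are invented, not measurements of any Canon or Sony sensor - the effect of applying gain before versus after the second noise injection can be simulated directly:

import numpy as np

rng = np.random.default_rng(0)

signal_e = 40.0          # assumed mean signal in electrons (a dim exposure)
upstream_noise = 3.0     # noise injected before the amplifier (e-)
downstream_noise = 12.0  # noise injected after the amplifier, e.g. off-die ADC (e- equivalent)
gain = 8.0               # "ISO 800 vs ISO 100" = 3 stops of gain
n = 100_000

photons = rng.poisson(signal_e, n)

# Case A: amplify early (in-camera ISO), then pick up the downstream noise
early = (photons + rng.normal(0, upstream_noise, n)) * gain \
        + rng.normal(0, downstream_noise, n)

# Case B: unity gain in camera, pick up downstream noise, then "push" digitally in post
late = ((photons + rng.normal(0, upstream_noise, n))
        + rng.normal(0, downstream_noise, n)) * gain

for name, x in [("gain before downstream noise", early),
                ("gain after downstream noise ", late)]:
    print(f"{name}: SNR = {x.mean() / x.std():.2f}")

When the downstream noise dominates (as assumed here), the early gain wins clearly; if the downstream term is made tiny, the two cases converge - which is exactly the "ISO-less" behaviour described above.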
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on February 28, 2013, 12:56:46 PM
The benefit of a high density APS-C really only comes into play when you can't afford that $13,000 600mm lens, meaning even if you had the 5D III, you could still only use that 400mm lens. You're in a focal-length-limited scenario now. It is in these situations, where you have both a high density APS-C and a lower density FF body, that something like the 18mp 7D or a 24mp 7D II really shines. Even though their pixels aren't as good as the 5D III's (assuming there isn't some radical new technology that Canon brings to the table with the 7D II), you can get more of them on the subject. You don't need to crop as much on the high density APS-C as with the lower density FF. On a size-normalized basis, the noise of the APS-C should be similar to the FF, as the FF would be cropped more (by a factor of 1.6x), so the noise difference can be greatly reduced or eliminated by scaling down.

Well said! Might I add that even if one could afford the $13,000 600mm lens, for many of us who backpack it's just too large to bring along on a multi-day trek through the mountains. Sometimes bigger is not better.

That's a great point! Sometimes bigger is definitely not better...if I was hiking around Rocky Mountain National Park, I'd probably not want to bring anything larger than the 100-400mm L.
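
To put rough numbers on the pixels-on-subject argument quoted above - a minimal sketch; the 18 MP and 22 MP figures are just the nominal 7D and 5D III pixel counts, used purely for illustration:

# Rough pixels-on-subject comparison for a focal-length-limited shooter.
# Assumed example bodies: 18 MP APS-C (7D-like) vs 22 MP full frame (5D III-like).
crop_factor = 1.6

apsc_mp = 18.0
ff_mp = 22.0

# With the same lens and subject distance, the FF frame must be cropped by 1.6x
# (linearly) to match the APS-C framing, i.e. by 1.6^2 in area/megapixels.
ff_mp_on_subject = ff_mp / crop_factor ** 2

print(f"APS-C pixels on subject: {apsc_mp:.1f} MP")
print(f"FF pixels on subject after the 1.6x crop: {ff_mp_on_subject:.1f} MP")  # ~8.6 MP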
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on February 28, 2013, 01:55:45 PM
Deconvolution can shift MTF curves; however, for deconvolution algorithms to be most effective, as they would need to be at Rayleigh and even more so at Dawes, you need to know something about the nature of the diffraction you are working with. Diffraction is the convolution, and for a given point light source at Dawes, you would need to know its nature, the PSF, to properly deconvolve it.
If you knew the precise PSF and had no noise, then you could in theory retrieve the unblurred "original" perfectly through linear deconvolution. In practice, it is impossible to know the precise PSF, there is noise, and the PSF might contain deep spectral zeros. This is where non-linear, blind deconvolution comes into the picture. Sadly, I don't know much about how they work, but I know some Wiener filtering, and I believe that serves as a starting point?

There are a whole lot of starting points. People are doing amazing things with advanced deconvolution algorithms these days. Debanding. Denoising. Eliminating motion blur. Recovering detail from a completely defocused image. The list of what we can do with deconvolution algorithms, particularly in the wavelet space, is long and growing. Whether it will help us really extract more resolution at contrast levels as low as or less than 10%, I can't say. I guess if you could denoise extremely well, and had a rough idea of the PSFs, then you could probably do some amazing things. I guess we'll see when amazing things start happening over the next decade. ;)
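
For readers curious what "Wiener filtering as a starting point" looks like in practice, here is a minimal 1-D sketch; the scene, the Gaussian PSF and the noise level are all assumed, and real blind-deconvolution tools are far more sophisticated than this:

import numpy as np

rng = np.random.default_rng(1)
n = 256

# A toy "scene": two bright bars on a dark background
scene = np.zeros(n)
scene[60:90] = 1.0
scene[150:158] = 1.0

# Assumed Gaussian PSF (stand-in for diffraction/defocus blur)
x = np.arange(n) - n // 2
psf = np.exp(-0.5 * (x / 2.0) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))    # transfer function of the blur

blurred = np.real(np.fft.ifft(np.fft.fft(scene) * H))
noisy = blurred + rng.normal(0, 0.002, n)   # assumed low noise level

# Wiener deconvolution: W = H* / (|H|^2 + NSR), with an assumed noise-to-signal ratio
nsr = 1e-4
W = np.conj(H) / (np.abs(H) ** 2 + nsr)
restored = np.real(np.fft.ifft(np.fft.fft(noisy) * W))

def rms(a):
    return np.sqrt(np.mean((a - scene) ** 2))

# For this low noise level the restored error should come out noticeably lower
print(f"RMS error, blurred+noisy vs scene: {rms(noisy):.3f}")
print(f"RMS error, Wiener restored:        {rms(restored):.3f}")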

Quote
The kind of deconv algorithms we have today, such as denoise and debanding and the like, are useful up to a degree. We would need some very, very advanced algorithms to deconvolve complex scenes full of near-infinite discrete point light sources at MTF0. I can see deconvolution being useful for extracting and enhancing detail at MTF10...I think we can do that today.
Doing anything up against the theoretical limit tends to be increasingly hard for vanishing benefits.

Certainly.

Quote
That said, what we can do with software to an image once it is captured was kind of beyond the scope of my original point...which was really simply to prove that higher resolution sensors really do offer benefits over lower resolution sensors, even in diffraction-limited scenarios.
I think that deconvolution strengthens your point, as it means that even higher resolution sensors make some sense even when operating in the "diffraction limited" regime. If the sensor is of lower resolution, then higher spatial frequencies are abruptly cut off (or folded into lower frequencies) and impossible to recover through deconvolution or other means.

That is a good point. It is kind of along the same lines as the argument that, when you need a deep DOF, you should use a very narrow aperture like f/22 or f/32 despite the softening it incurs, rather than opting for a wider aperture that won't necessarily produce the DOF you need. Correcting the uniform blurring caused by diffraction is a hell of a lot easier than correcting the non-linear blurring caused by a too-thin DOF. Deconvolution is definitely a powerful post-processing tool that can enhance the use of higher resolution sensors (among other things), and realize fine detail at low contrast levels that exists, but is not readily apparent.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: CarlTN on February 28, 2013, 02:44:04 PM
Jrista, "+1" on you starting a new thread with this.  Thanks for all the useful info!

You mention NR software like Topaz for debanding, but what software works best for the luminance noise reduction? 

I mentioned before that I noticed the luminance noise from my cousin's 5D3, such as at ISO 4000, had a very hard pebble-like grain structure that gets recorded at a size of maybe 5 or 6 pixels across. The luminance slider in ACR CS5 had very little effect on it until it got above 80%, so more detail was sacrificed. With my 50D's files, the luminance grain is much smaller relative to the pixels, so the luminance slider has a far greater effect in its lower range.

I have practiced the art of optimizing a file in ACR before I ever even open it in Photoshop, but is it possible that this isn't always the best approach for noise reduction?  I still think it is, but I'm trying to be open minded and learn new things!
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on February 28, 2013, 03:32:08 PM
I don't think I've claimed Rayleigh is a "brick wall".
I am sorry, this was a general rant, not targeted at you.
Quote
I'd call Dawe's the brick wall, as anything less than that and you have two unresolved points of light.
https://www.astronomics.com/Dawes-limit_t.aspx (https://www.astronomics.com/Dawes-limit_t.aspx)
"This “Dawes’ limit” (which he determined empirically simply by testing the resolving ability of many observers on white star pairs of equal magnitude 6 brightness) only applies to point sources of light (stars). Smaller separations can be resolved in extended objects, such as the planets. For example, Cassini’s Division in the rings of Saturn (0.5 arc seconds across), was discovered using a 2.5” telescope – which has a Dawes’ limit of 1.8 arc seconds!"
Quote
The problem with Rayleigh, at least in the context of spatial resolution in the photographic context...is that detail becomes nearly inseparably mired with noise. At very low levels of contrast, even assuming you have extremely intelligent and effective deconvolution, detail at MTF 10 could never really be "certain"....is it detail...or is it noise?
I guess you can never be "certain" at lower spatial frequencies either? As long as we are dealing with PDFs, it is a matter of probability? How wide is the tail of a Poisson distribution?

I see no theoretical problem at MTF10 as long as the resulting SNR is sufficient (which it, of course, usually is not).

-h

Well, I am not necessarily talking about frequencies...just contrast (which would really be amplitude rather than frequency.) It would be correct to assume that low MTF can affect amplitude regardless of frequency, however I think it has a more apparent impact at higher frequencies than lower.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on February 28, 2013, 05:49:48 PM
Well, I am not necessarily talking about frequencies...just contrast (which would really be amplitude rather than frequency.) It would be correct to assume that low MTF can affect amplitude regardless of frequency, however I think it has a more apparent impact at higher frequencies than lower.

Contrast is meaningless as a metric - until you have both amplitude contrast AND frequency. This is inherently implied in MTF, as it is defined as contrast over frequency.... Contrast is just a difference in brightness. It doesn't become "detail" until the contrast is present at a high spatial frequency.

In practice (I've written and also quantified many Bayer interpolation schemes) you need at least MTF20 - a Michelson contrast of 0.2 - to get better than 50% pixel estimation accuracy when interpolating a raw image (based on Bayer of course).

That 0.2 of contrast does, by physical necessity, INCLUDE noise. Even the best non-local schemes cannot accurately estimate a detail at the pixel level when that detail has a contrast lower than approximately twice the average noise power of the pixel surrounds.

The only way to get past this is true oversampling, and that does not occur in normal cameras until you're at F16-F22. In that case no interpolation estimation is needed - just pure interpolation. At that point you can be certain that no detail in the projected image will be small enough to "fall in between" two pixels of the same color on the sensor.
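
One simple way to put numbers on that F16-F22 range, under the assumption that "true oversampling" begins roughly where the diffraction MTF cutoff (1 / (lambda * N) cycles/mm) falls to the relevant Nyquist frequency of the sensor; the 4.3 micron pitch is an assumed 18 MP APS-C-like example, so treat the output as ballpark only:

# Rough estimate of the aperture where diffraction starts to truly oversample the sensor.
# Assumption: diffraction cutoff frequency = 1 / (lambda * N) cycles/mm, compared against
# the Nyquist frequency of the full pixel grid and of the sparser Bayer color samples.
wavelength_mm = 550e-6      # assumed green light
pitch_mm = 4.3e-3           # assumed 4.3 um pixel pitch (18 MP APS-C-like)

nyquist_luma = 1.0 / (2 * pitch_mm)                      # full pixel grid
nyquist_green = 1.0 / (2 * (2 ** 0.5) * pitch_mm)        # diagonal green Bayer sites
nyquist_red_blue = 1.0 / (4 * pitch_mm)                  # red/blue Bayer sites

for label, nyq in [("full grid", nyquist_luma),
                   ("green Bayer sites", nyquist_green),
                   ("red/blue Bayer sites", nyquist_red_blue)]:
    f_number = 1.0 / (wavelength_mm * nyq)
    print(f"Diffraction cutoff reaches the {label} Nyquist at ~f/{f_number:.0f}")

With these assumptions the full grid is reached around f/16 and the green Bayer sampling around f/22, which is consistent with the F16-F22 range given above.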
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 01, 2013, 12:24:36 AM
Well, I am not necessarily talking about frequencies...just contrast (which would really be amplitude rather than frequency.) It would be correct to assume that low MTF can affect amplitude regardless of frequency, however I think it has a more apparent impact at higher frequencies than lower.

Contrast is meaningless as a metric - until you have both amplitude contrast AND frequency. This is inherently implied in MTF, as it is defined as contrast over frequency.... Contrast is just a difference in brightness. It doesn't become "detail" until the contrast is present at a high spatial frequency.

In practice (I've written and also quantified many Bayer interpolation schemes) you need at least MTF20 - a Michelson contrast of 0.2 - to get better than 50% pixel estimation accuracy when interpolating a raw image (based on Bayer of course).

Those 0.2 in contrast does by physical necessity INCLUDE noise. Even the best non-local schemes cannot accurately estimate a detail on pixel level when that detail has a contrast lower than approximately twice the average noise power of the pixel surrounds.

The only way to get past this is true oversampling, and that does not occur in normal cameras until you're at F16-F22. In that case no interpolation estimation is needed - just pure interpolation. At that point you can be certain that no detail in the projected image will be small enough to "fall in between" two pixels of the same color on the sensor.

I don't think we are saying different things... I agree that having a certain minimum contrast is necessary for detail at high frequencies to be discernible as detail.

I am not sure I completely follow...some of the grammar is confusing. In an attempt to clarify for other readers, I think you are saying that because of the nature of a Bayer type sensor, MTF of no less than 20% is necessary to demosaic detail from a Bayer sensor's RAW data such that it could actually be perceived differently than noise in the rendered image. Again...I don't disagree with that in principle; however, with post-process sharpening, you CAN extract a lot of high frequency detail that is low in contrast. The moon is a superb example of this, where detail most certainly exists at contrast levels below 20%, as low as Rayleigh, and possibly even lower.

The only time when it doesn't matter is at very narrow apertures, where the sensor is thoroughly outresolving the lens, and the finest resolved element of detail is larger than a pixel.

(I believe that is what The Suede is saying...)
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 01, 2013, 12:32:10 PM
Well, I am not necessarily talking about frequencies...just contrast (which would really be amplitude rather than frequency.) It would be correct to assume that low MTF can affect amplitude regardless of frequency, however I think it has a more apparent impact at higher frequencies than lower.
I am talking about something like this:
(http://www.sce.carleton.ca/faculty/adler/elg7173/visual_system/contrast-spatial-freq.jpg)

Increasing (spatial) frequency from left to right, increasing contrast from top to bottom. As we are talking about light and imaging, I think that amplitude/phase are cumbersome properties.

Another way to put that would be Frequency (low to high) on the X axis, and Amplitude (high to low) on the Y axis. :) Contrast is simply the amplitude of the frequency wave. At the bottom, the amplitude is high, and constant across the whole length of the image, while frequency increases from left to right. At the top, the amplitude is nearly zero (the tone is essentially flat). In the middle, the amplitude is about 50%, while again frequency increases from left to right.

"Spatial" frequencies are exactly that...you don't have a waveform without frequency, amplitude, and phase. Technically speaking, the image above is also modulated in both frequency and amplitude, with a phase shift of zero.

If you convolve this image with a sensible PSF, you will get blurring, affecting the high frequencies the most. As convolution is a linear process, high-contrast and low-contrast parts will be affected equally. Now, if you add noise to the image, the SNR will be more affected in low-contrast than in high-contrast areas.

With the image above, you could convolve it with a sensible PSF, and deconvolve it perfectly. Noise, however, would actually affect both the low frequency parts of the image as well as the low contrast parts. The high frequency high contrast parts are actually the most resilient against noise...everything else is susceptible (which is why noise at high ISO tends to show up much more in OOF backgrounds than in a detailed subject.)

(http://i.imgur.com/KdQGdvS.jpg)

If the "ideal mathematical idea" of this image is recorded as a finite number of Poisson-distributed photons, you get some uncertainty. The uncertainty will be largest where there are a few photons representing a tiny feature, and smallest where there are many photons representing a smooth (large) feature. My point was simply that the uncertainty is there for the entire image. However unilkely, that image that seems to portray a Lion at the zoo _could_ really be of a soccer game, only shot noise altered it.

Assuming an image affected solely by Poisson-distributed photons, then theoretically, yes. However, the notion that an image of a soccer game might end up looking like a lion at the zoo would only really be probable at extremely low exposures. SNR would have to be near zero, such that the Poisson distribution of photon strikes left the majority of the sensor untouched, leaving more guesswork and less structure for figuring out what the image is. As the signal strength increases, the uncertainty shrinks, and the chances of a soccer game being misunderstood as a lion at the zoo diminish to near-zero. Within the first couple stops of EV, the uncertainty drops well below 1, and assuming you fully expose the sensor (i.e. ETTR) to maximize SNR, uncertainty should be well below 0.1. As uncertainty drops, the ease with which we can remove photon noise should increase.

However...noise is a disjoint factor from MTF. The two are not mutually exclusive, however they are distinct factors, and as such they can be affected independently with high precision deconvolution. You can, for example, eliminate banding noise while barely touching the spatial frequencies of the image.
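
A small sketch of the two effects being discussed - blur is linear and scales every contrast by the same factor, while shot noise buries low-contrast detail first. The stripe pattern, PSF width and photon count are all assumed example values:

import numpy as np

rng = np.random.default_rng(2)

n = 512
x = np.arange(n)
carrier = np.sin(2 * np.pi * x / 16)     # assumed stripe period: 16 pixels
mean_photons = 100.0                     # assumed exposure level (photons/pixel)

# Assumed Gaussian PSF (the blur)
k = np.exp(-0.5 * (np.arange(-15, 16) / 3.0) ** 2)
k /= k.sum()

for contrast in (0.8, 0.1):              # high- vs low-contrast stripes
    ideal = mean_photons * (1 + contrast * carrier)
    blurred = np.convolve(ideal, k, mode="same")
    noisy = rng.poisson(blurred).astype(float)

    # Blur reduces the stripe amplitude by the same factor at both contrasts...
    amp_ideal = 0.5 * (ideal[32:-32].max() - ideal[32:-32].min())
    amp_blur = 0.5 * (blurred[32:-32].max() - blurred[32:-32].min())
    # ...but shot noise (~sqrt(mean)) buries the low-contrast stripes much sooner.
    noise_rms = np.std(noisy - blurred)
    print(f"contrast {contrast}: blur keeps {amp_blur / amp_ideal:.2f} of the amplitude, "
          f"remaining amplitude / noise = {amp_blur / noise_rms:.1f}")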
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on March 01, 2013, 07:35:05 PM
I am talking about something like this:
(http://www.sce.carleton.ca/faculty/adler/elg7173/visual_system/contrast-spatial-freq.jpg)
Another way to put that would be Frequency (low to high) on the X axis, and Amplitude (high to low) on the Y axis. :) Contrast is simply the amplitude of the frequency wave.
.../cut/...

No. Using the word "amplitude" as a straight replacement for the word "contrast" (red-marked text) - is actually very misleading.

The amplitude is not equal to contrast in optics, and especially not when you're talking about visual contrast. Contrast, as normal people speak of it, is in most cases closely related to [amplitude divided by average level]. And so are MTF figures - this is not a coincidence.

An amplitude of +/-10 is a relatively large contrast if the average level is 20
-giving an absolute amplitude swing from 10 to 30 >> an MTF of 0.5
But if the average level is 100, then swing is 90-110 >> MTF is only 0.1. That's a very much lower contrast, and a lot harder to see or accurately reproduce.

Contrast is what we "see", not amplitude swing.
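
In code form this is just the Michelson definition; a tiny sketch using the numbers from the example above (the helper name is mine):

# Michelson contrast: (Imax - Imin) / (Imax + Imin) - amplitude relative to average level.
def michelson(i_max, i_min):
    return (i_max - i_min) / (i_max + i_min)

print(michelson(30, 10))    # +/-10 swing around a mean of 20  -> 0.5
print(michelson(110, 90))   # +/-10 swing around a mean of 100 -> 0.1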

And no, noise in general is not disjoint from MTF... Patterned noise is separable from image detail in an FFT, and you can eliminate most of it without disturbing the underlying material. Poisson noise, or any other non-patterned noise, on the other hand isn't separable by any known algorithm. And since the FFT of Poisson noise is basically a Gaussian bell curve, you remove Poisson noise by applying a Gaussian blur... Any attempt to reconstruct the actual underlying material will be - at worst - a wild guess, and - at best - an educated guess. The educated guess is still a guess, and the reliability of the result is highly dependent on the non-local surrounds.

The Gaussian blur radius you need to apply to dampen non-patterned noise by a factor "X" is (again, not by coincidence!) almost exactly the same as the amount of downwards shift in MTF that you get.

As noise suppression algorithms get smarter and smarter, the proportion of correct guesses/estimates in a given image with a given amount of noise present will continue to increase (correlation to reality will get better and better) - but they're still guesses. That's good enough for most commercial use, though. What we're doing today in commercial post-processing regarding noise reduction is mostly adapting to psycho-visuals. We find ways to make the viewer THINK: -"Ah... That looks good, that must be right" - by finding what types of noise patterns humans react strongly to, and then trying to avoid creating those patterns when blurring the image (all noise suppression is blurring!) and making/estimating new sharp edges.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 01, 2013, 09:26:31 PM
I am talking about something like this:
(http://www.sce.carleton.ca/faculty/adler/elg7173/visual_system/contrast-spatial-freq.jpg)
Another way to put that would be Frequency (low to high) on the X axis, and Amplitude (high to low) on the Y axis. :) Contrast is simply the amplitude of the frequency wave.
.../cut/...

No. Using the word "amplitude" as a straight replacement for the word "contrast" (red-marked text) - is actually very misleading.

The amplitude is not equal to contrast in optics, and especially not when you're talking about visual contrast. Contrast, as normal people speak of it, is in most cases closely related to [amplitude divided by average level]. And so are MTF figures - this is not a coincidence.

An amplitude of +/-10 is a relatively large contrast if the average level is 20
-giving an absolute amplitude swing from 10 to 30 >> an MTF of 0.5
But if the average level is 100, then swing is 90-110 >> MTF is only 0.1. That's a very much lower contrast, and a lot harder to see or accurately reproduce.

Contrast is what we "see", not amplitude swing.

And no, noise in general is not generally disjointed from MTF... Patterned noise is separable from image detail in an FFT, and you can eliminate most of it without disturbing underlying material. Poisson noise or any other non-patterned noise on the other hand isn't separable, by any known algorithm. And since the FFT of Poisson is basically a Gauss bell curve, you remove Poisson noise by applying a Gaussian blur... Any attempt to reconstruct the actual underlying material will be - at worst - a wild guess, and - at best - and educated guess. The educated guess is still a guess, and the reliability of the result is highly dependent on non-local surrounds.

The Gaussian blur radius you need to apply to dampen non-patterned noise by a factor "X" is (again, not by coincidence!) almost exactly the same as the amount of downwards shift in MTF that you get.

As noise suppression algorithms get smarter and smarter, the amount of correct guesses-estimates in a certain image with a certain noise amount present will continue to increase (correlation to reality will get better and better) - but they're still guesses. But that's good enough for most commercial use. What we're doing today in commercial post-processing regarding noise reduction is mostly adapting to psycho-visuals. We find ways to make the viewer THINK that:  -"Ah... That looks good, that must be right" - by finding what types of noise patterns that humans react strongly to, and then trying to avoid creating those patterns when blurring the image (all noise suppression is blurring!) and making/estimating new sharp edges.

Well, I can't speak directly to optics specifically.

I was thinking more in the context of the image itself, as recorded by the sensor. The image is a digital signal. There is more than one way to "think about" an image, and in one sense any image can be logically decomposed into discrete waves. Any row or column of pixels, any block of pixels, however you want to decompose it, could be treated as a Fourier series. The whole image can even be projected into a three dimensional surface shaped by a composition of waves in the X and Y axes, with amplitude defining the Z axis.

Performing such a decomposition is very complex, I won't deny that. Sure, a certain amount of guesswork is involved, and it is not perfect. Some algorithms are blind, and use multiple passes to guess the right functions for deconvolution, choosing the one that produces the best result. It is possible, however, to closely reproduce the inverse of the Poisson noise signal, apply it to the series, and largely eliminate that noise...with minimal impact to the rest of the image. Banding noise can be removed the same way. The process of doing so accurately is intense, and requires a considerable amount of computing power. And since a certain amount of guesswork IS involved, it can't be done perfectly without affecting the rest of the image at all. But it can be done fairly accurately with minimal blurring or other impact.

Assuming the image is just a digital signal, which in turn is just a composition of discrete waveforms, opens up a lot of possibilities. It would also mean that, assuming we generate a wave for just the bottom row of pixels in the sample image (the one without noise)...we have a modulated signal of high amplitude and decreasing frequency. The "contrast" of each line pair in that wave is fundamentally determined by the amplitude of the wavelet. The row half-way up the image would have half the amplitude...which leads to what we would perceive as less contrast.

Perhaps it is incorrect to say that amplitude itself IS contrast, I guess I wouldn't dispute that. A shrinking amplitude around the middle gray tone of the image as a whole does directly lead to less contrast as you move up from the bottom row of pixels to the top in that image. Amplitude divided by average level sounds like a good way to describe it then, so again, I don't disagree. I apologize for being misleading.

I'd also offer that there is contrast on multiple levels. There is the overall contrast of the image (or an area of the image), as well as "microcontrast". If we use the noisy version of the image I created, the bottom row could not be represented as a single smoothly modulated wave. It is the combination of the base waveform of increasing frequency, as well as a separate waveform that represents the noise. The noise increases contrast on a per-pixel level, without hugely affecting the contrast of the image overall.

Perhaps this is an incorrect way of thinking about real light passing through a real lens in analog form. I know far less about optics. I do believe Hjulenissen was talking about algorithms processing a digital image on a computer, in which case discussing spatial frequencies of a digital signal seemed more appropriate. And in that context, a white/black line pair's contrast is directly affected by amplitude (again, sorry for the misleading notion that amplitude IS contrast...I agree that is incorrect.)
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Radiating on March 01, 2013, 11:03:38 PM
Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.

If you look at a sharp lens at f11 like a super telephoto and a soft lens at f/11 the sharp lens looks sharper despite being at the diffraction limit.

What 24MP does is it allows the whole system to be sharper due to a weaker AA filter. Diffraction is the best AA filter on earth, current ones degrade the image by 20% which is a lot.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 02, 2013, 01:22:43 AM
Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.

If you look at a sharp lens at f11 like a super telephoto and a soft lens at f/11 the sharp lens looks sharper despite being at the diffraction limit.

What 24MP does is it allows the whole system to be sharper due to a weaker AA filter. Diffraction is the best AA filter on earth, current ones degrade the image by 20% which is a lot.

Where do you get that 20% figure? I can't say I've experienced that with anything other than the 100-400 @ 400mm f/5.6...however in that case, I presume the issue is the lens, not the AA filter...
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 02, 2013, 12:32:38 PM
Perhaps this is an incorrect way of thinking about real light passing through a real lens in analog form. I know far less about optics. I do believe Hjulenissen was talking about algorithms processing a digital image on a computer, in which case discussing spatial frequencies of a digital signal seemed more appropriate. And in that context, a white/black line pair's contrast is directly affected by amplitude (again, sorry for the misleading notion that amplitude IS contrast...I agree that is incorrect.)
"Light" is an electro-magnetic wave, or can at least be treated as one. Radio is also based on electro-magnetic waves. In radio, you have coherent transmitters and receivers, and properties of the waveform (amplitude, phase, frequency) can contain information, or be used to improve directional behaviour etc. In regular imaging applications, we tend to treat light in a somewhat different way. The sun (and all other illuminators except LASERs) are incoherent. Imaging sensors dont record the light "waveform" at a given spatial/temporal region, but rather records intensity within a frequency band. This treats light in a slightly more statistical manner, just like static noise on a radio. What is the frequency of filtered white noise? What is its phase? Amplitude? Such terms does not make sense, but its Spectral Power Density, its Variance does make sense.

Well, I understand the nature of light, its wave-particle duality, all of that. I am just not an optical engineer, so I am not sure if there is any knowledge in that field that would give a different understanding of exactly what happens to light in the context of optical imaging. That said, you are thinking about light as a particle. I am actually not thinking about light at all...but rather the spatial frequencies of an image, or in the context of a RAW image on a computer (well past the point where physical light is involved), a digital signal.

I'm not sure if I can describe it such that you'll understand or not...but think of each pixel as a sample of a wave. Relative to its neighboring pixels, it is either lighter, darker or the same tone. If we have a black pixel next to a white pixel, the black pixel is the trough of a "wave", and the white pixel is the crest. If we have white-black-white-black, we have two full "wavelengths" next to each other. The amplitude of a spatial wave is the difference between the average tone and its trough or crest. In the case of our white-black-white-black example, the average tone is middle gray. Spatial frequencies exist in two dimensions, along both the X and the Y axis. I'll see if I can find a way to plot one of the pixel rows as a wave.

In the image file that I attached, once printed and illuminated using a light bulb (or the sun), it is the intensity that is modulated (possibly through some nonlinear transform in the printer driver). The amount of black ink varies, and this means that more photons will be absorbed in "black" regions than in "white". The amplitude and phase properties of the resultant light are of little relevance. The frequency properties are also of little relevance, as the image (and hopefully the illuminant) should be flat spectrum. If you point your camera at such a print, it will record the number of photons (again, intensity).

When it comes to the modulation of the intensity in the figure, this was probably done with a sinusoid of sweeping frequency (left-right) and amplitude (up-down). The phase of the modulation does not matter much, as we are primarily interested in how the imaging system under test reduces the modulation at different spatial frequencies, and (more difficult) whether this behaviour is signal-level dependent (like USM sharpening would be). If you change the phase of the modulation by 180 degrees, you would still be able to learn the same things about the lens/sensor/... used to record the test image.

So, again, all that is thinking about light directly as a waveform or particle. That is entirely valid, however there are other ways of thinking about the image produced by the lens. The image itself is comprised of frequencies based on the intensity of a pixel. A grayscale image is much easier to demonstrate with than a  color image, so I'll use that to demonstrate:

(http://i.imgur.com/QaVy6CO.jpg)

The image above models the bottom row of pixels from your image as a spatial frequency. I've stretched that single row of pixels to be 10 pixels tall, simply so it can be seen better. I've plotted the spatial waveform below. The concept is abstract...it is not a physical property of light. It is simply a way of modeling the oscillations inherent to the pixels of an image based on their level. This is a very simplistic example...we have the luxury of an even-toned middle gray as our "zero energy" level. Assuming the image is an 8-bit image, we have 256 levels. Levels 0-127 are negative power, levels 128-255 are positive power. The intensity of each pixel in the image oscillates between levels 0 and 255, thus producing a wave...with frequency and amplitude. Phase exists too...if we shift the whole image to the left or right, and appropriately fill in the pixels to the opposite side, we have all of the properties that represent a wave.

Noise can be modeled the same way...only as a different wave with different characteristics. The noise channel from the full sized image below is shown at the bottom of the wave model above (although it is not modeled itself...I can't really do that well in Photoshop.) Thought of as a Fourier series, the noise wave and the image wave are composable and decomposable facets of the image.

MTF with Noise:
(http://i.imgur.com/5jgDUlY.jpg)

MTF Plot:
(http://i.imgur.com/YdWFtm1.jpg)

Noise channel:
(http://i.imgur.com/DoFT4RB.jpg)
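
For readers who want to see what "treating a pixel row as a Fourier series" looks like numerically, here is a minimal sketch; the row is synthetic (mid-gray plus one sine plus noise) since the raw data behind the images above isn't available:

import numpy as np

rng = np.random.default_rng(3)

# Assumed synthetic "pixel row": mid-gray + one spatial frequency + random noise
n = 256
x = np.arange(n)
row = 128 + 60 * np.sin(2 * np.pi * x / 32) + rng.normal(0, 8, n)

# Decompose the row into its spatial-frequency components
spectrum = np.fft.rfft(row - row.mean())
power = np.abs(spectrum) ** 2

peak = np.argmax(power)
print(f"Dominant spatial frequency bin: {peak} (expected {n // 32})")
print(f"Fraction of power in that single bin: {power[peak] / power.sum():.2f}")

The sinusoidal "detail" collapses into essentially one coefficient, while the random noise is smeared across all of the bins - which is the sense in which the two are composable and decomposable facets of the same row.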
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Radiating on March 02, 2013, 08:39:35 PM
Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.

If you look at a sharp lens at f11 like a super telephoto and a soft lens at f/11 the sharp lens looks sharper despite being at the diffraction limit.

What 24MP does is it allows the whole system to be sharper due to a weaker AA filter. Diffraction is the best AA filter on earth, current ones degrade the image by 20% which is a lot.

Where do you get that 20% figure? I can't say I've experienced that with anything other than the 100-400 @ 400mm f/5.6...however in that case, I presume the issue is the lens, not the AA filter...

MTF tests of the D800 and D800E back to back

Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.
If the AA filter is an expensive/complex component, increasing the sensel density until diffraction takes care of prefiltering is definitely one possible approach.
Quote
What 24MP does is it allows the whole system to be sharper due to a weaker AA filter. Diffraction is the best AA filter on earth, current ones degrade the image by 20% which is a lot.
Diffraction is dependent on aperture, and not a constant function. In practice, one never has perfect focus (and most of us don't shoot flat brick walls), so defocus affects the PSF. Lenses and motion further extend the effective PSF. The AA filter is one more component. I have seen compelling arguments that the total PSF might as well be modelled as a Gaussian, due to the many contributors that change with all kinds of parameters.

Claiming that the AA filter degrades "image quality" (?) by 20% is nonsense. Practical comparisons of the Nikon D800 vs D800E suggest that under some ideal conditions, the difference in detail is practically none once both are optimally sharpened. In other conditions (high noise), you may not be able to sharpen the D800 to the point where it offers detail comparable to the D800E. Manufacturers don't include AA filters because they _like_ throwing in more components, but because when the total, effective PSF is too small compared to the pixel pitch, you can get annoying aliasing that tends to look worse and is harder to remove than slight blurring.

-h

You can't compare an unsharpened D800E image to a sharpened D800 image, that's not how information processing works.

The AA filter destroys incoming information from the lens, irreversibly. Sharpening can trick MTF tests into scoring higher numbers, but that is besides the point.

Yes diffraction changes with aperture but if you always shoot below f/5.6 you can ditch the AA filter without consequence, and those images shot below f/5.6 would be sharper than those taken with the same camera with an AA filter.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 02, 2013, 10:13:02 PM
Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.

If you look at a sharp lens at f11 like a super telephoto and a soft lens at f/11 the sharp lens looks sharper despite being at the diffraction limit.

What 24MP does is it allows the whole system to be sharper due to a weaker AA filter. Diffraction is the best AA filter on earth, current ones degrade the image by 20% which is a lot.

Where do you get that 20% figure? I can't say I've experienced that with anything other than the 100-400 @ 400mm f/5.6...however in that case, I presume the issue is the lens, not the AA filter...

MTF tests of the D800 and D800E back to back

Do you have a link? Because 20% is insane, and I don't believe that figure. The most sharply focused images would look completely blurry if an AA filter imposed a 20% cost on IQ...it just isn't possible.

Canon WANTS diffraction to be a limiting factor so that they can remove the AA filter.
If the AA filter is an expensive/complex component, increasing the sensel density until diffraction takes care of prefiltering is definitely one possible approach.
Quote
What 24MP does is it allows the whole system to be sharper due to a weaker AA filter. Diffraction is the best AA filter on earth, current ones degrade the image by 20% which is a lot.
Diffraction is dependant on aperture, and not a constant function. In practice, one never have perfect focus (and most of us dont shoot flat brick-walls), so defocus affects the PSF. Lenses and motion further extent the effective PSF. The AA filter is one more component. I have seen compelling arguments that the total PSF might as well be modelled as a Gaussian, du to the many contributors that change with all kinds of parameters.

Claiming that the AA filter degrade "image quality" (?) by 20% is nonsense. Practical comparisions of the Nikon D800 vs D800E suggests that under some, ideal conditions, the difference in detail is practically none, once both are optimally sharpened. In other conditions (high noise), you may not be able to sharpen the D800 to the point where it offers details comparable to the D800E. Manufacturers dont include AA filters because they _like_ throwing in more component, but because when the total, effective PSF is too small compared to pixel pitch, you can have annoying aliasing that tends to look worse and is harder to remove than slight blurring.

-h

You can't compare an unsharpened D800E image to a sharpened D800 image, that's not how information processing works.

The AA filter destroys incoming information from the lens, irreversibly. Sharpening can trick MTF tests into scoring higher numbers, but that is besides the point.

No, it is not irreversible. The D800E is the perfect example of the fact that it is indeed REVERSIBLE. The AA filter is a convolution filter for certain frequencies. Convolution can be reversed with deconvolution. So long as you know the exact function of the AA filter, you can apply an inverse and restore the information. The D800E does exactly that...the first layer of the AA filter blurs high frequencies at Nyquist by a certain amount, and the second layer does the inverse to unblur those frequencies, restoring them to their original state.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 03, 2013, 12:25:20 PM
jrista:
My gripe was with your claims that seemingly everything can and should be described using Amplitude, phase and frequency.

A stochastic process (noise) has characteristics that vary from time to time and from realisation to realisation. This means that talking about the amplitude of noise tends to be counterproductive. What does not change (at least in stationary processes) are the statistical parameters: variance, PSD, etc.

What you want to learn from the response to a swept-frequency/amplitude sinusoid is probably the depth of the modulation. Sure, the sine has got a phase, but if it cannot tell us anything, why should we bother? If you do believe that it tells us anything, please tell me instead of explaining once more what a sine wave is or how to Fourier-transform anything.

-h

Not "should", but "can". I am not sure I can explain anymore, as I think your latching on to meaningless points. I don't know your background, so if I'm explaining things you already know, apologies.

The point isn't about amplitude, simply that noise can be described as a set of frequencies in a Fourier series. Eliminate the wavelets that most closely represent noise (not exactly an easy thing to do without affecting the rest of the image, but not impossible either), and you leave behind only the wavelets that represent the image.

Describing an image as a discrete set of points of light, which disperse with known point spread functions, is another way to describe an image. In that context, you can apply other transformations to sharpen, deblur, etc.

The point was not to state that images should only ever be described as a Fourier series. Simply that they "can". Just as they "can" be described in terms of PSF and PSD.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Plamen on March 03, 2013, 12:44:00 PM
I have the same formula, derived in a mathematical way, under some assumptions, here (http://plamen.emilstefanov.net/Resolution/index.htm). It is actually a formula that first appeared in some publications in optics but I cannot find the references.

Excellent post, BTW.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 03, 2013, 01:33:20 PM
Just to provide a better example than I can provide, here is a denoising algorithm that uses both FFT and wavelet inversion to deband as well as deconvolution of PSF to remove random noise:

http://lib.semi.ac.cn:8080/tsh/dzzy/wsqk/spie/vol6623/662316.pdf (http://lib.semi.ac.cn:8080/tsh/dzzy/wsqk/spie/vol6623/662316.pdf)

This really sums up my point...simply that an image can be processed in different contexts via different modeling to remove noise while preserving image data. I am not trying to say that modeling an image signal as a Fourier series is better or the only way, and that assuming it is a discrete set of point light sources described by a PSF is invalid. They are both valid.
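
This is not the algorithm from the linked paper, just a toy sketch of the underlying idea: row-correlated banding occupies a narrow, predictable set of Fourier coefficients, so it can be notched out with little effect on the rest of the image. The image, the noise levels and the simple zeroing step are all assumptions for illustration:

import numpy as np

rng = np.random.default_rng(4)

# Toy image: smooth horizontal gradient + per-row banding offsets + white noise
h, w = 128, 128
image = np.linspace(50, 200, w)[None, :].repeat(h, axis=0)
banding = rng.normal(0, 10, (h, 1)).repeat(w, axis=1)   # horizontal stripes
noisy = image + banding + rng.normal(0, 3, (h, w))

# Row-correlated banding lives in the horizontal-frequency-zero column of the FFT.
# Suppress those coefficients (except the DC term) and transform back.
F = np.fft.fft2(noisy)
F[1:, 0] = 0.0
debanded = np.real(np.fft.ifft2(F))

def rms(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print(f"RMS error vs clean gradient before: {rms(noisy, image):.2f}")
print(f"RMS error vs clean gradient after:  {rms(debanded, image):.2f}")

The banding term drops out almost entirely while the gradient and the random noise are barely touched, which is the FFT/wavelet debanding idea in miniature.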
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 03, 2013, 02:07:06 PM
Not "should", but "can". I am not sure I can explain anymore, as I think your latching on to meaningless points. I don't know your background, so if I'm explaining things you already know, apologies.

The point isn't about amplitude, simply that noise can be described as a set of frequencies in a Fourier series. Eliminate the wavelets that most closely represent noise (not exactly an easy thing to do without affecting the rest of the image, but not impossible either), and you leave behind only the wavelets that represent the image.

Describing an image as a discrete set of points of light, which disperse with known point spread functions, is another way to describe an image. In that context, you can apply other transformations to sharpen, deblur, etc.

The point was not to state that images should only ever be described as a Fourier series. Simply that they "can". Just as they "can" be described in terms of PSF and PSD.
Signal and noise are generally hard to separate. In a few cases, they may not be (as in "low-frequency signal, high-frequency noise" (->smoothing noise reduction) or "wide-band signal, narrow-band noise" (->notch filtering)). In any case, the DFT is information-preserving, meaning that anything that can be done to a chunk of data in the frequency-domain can be done to the data in the spatial-domain. It might be harder to do, but it can be done.

If you have additive noise, and know the exact values, it is trivial to subtract it from the signal in either the spatial domain or the frequency domain. I don't see any practical situations where you have this knowledge.

If you know the PDF of the noise for a given camera, for a given setting, spatial frequency, signal level etc., you can have some prior information about the noise, and how to mitigate it. If you treat noise as a deterministic phenomenon, it seems really hard to gain prior knowledge. You might have some insanely complex model of the world, try to fit the data to the model, and assume that the modelling error is noise. However, such nonlinear processes tend to have some nasty artifacts.

-h

It is true that you don't have precise knowledge about the exact specifications of a camera for a given setting, spatial frequency, signal level, etc. There is a certain amount of guesswork involved, but I think a lot of information can be derived by analyzing the signal you have. The link I provided in my last post demonstrates debanding using a Fourier transform and wavelet inversion, and further denoising (for your Poisson and Gaussian noise) with standard PSF deconvolution. I don't believe there is any specific prior knowledge...what knowledge is used is derived. Is it 100% perfect? No, of course not. I think it's good enough, though, and while there is some softening in the final images, the results are pretty astounding. I've used Topaz Denoise 5, which applies a lot of this kind of advanced denoising theory. Its random noise removal is ok...I actually wouldn't call it as good as LR4's standard NR. However, when it comes to debanding, Topaz Denoise 5 does a phenomenal job with very little impact to the rest of the image, and does so in wavelet space using a Fourier transform (sometimes you get softer light/dark bands as an artifact of the algorithm, but they are very hard to notice in most cases, and more acceptable than the much harsher banding noise).

I won't deny that non-linear processing can produce some nasty artifacts. Topaz In-Focus is a deblurring tool. Its intended use is to correct small inaccuracies in focus, however it can be used to deblur images that are highly defocused. It is an interesting demonstration of the power of deconvolution, and when an image is nearly entirely blurred, you can recover it well enough to see moderately fine detail, including text. Under such extreme circumstances, however, you do tend to get "wave halos" around primary objects...which is indeed one of those nasty artifacts. One would assume, though, that the artifact is a consequence of an inadequate algorithm in the first place with some kind of repeating error. If so, with further refinement, the error could be corrected...no?
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on March 03, 2013, 05:25:33 PM
Still quite a lot of misinformation in this thread, but before addressing any of that - some thread-title relevant material quite decoupled from the current discussions.

About pixel density, resolution and diffraction

Here's an image series that shows pretty graphically what jrista probably intended with this thread. Diffraction is NOT a problem. Lens sharpness is NOT a problem. And they won't be - for yet quite a long time... We have to double the number of MP's, and then double twice more before having trouble (with "better than mediocre" lenses of course...!)

Well, the Canon 400/5.6 is no slouch, but on the other hand it's no sharpness monster either. In the following image series, it was used wide open from a sturdy tripod - at 1/60s shutter speed for both cameras, on A) the 5Dmk2 and then on B) the Pentax Q.
The Q has a pixel pitch of ~1.55µm - giving an APS sensor of about 150MP!
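
As a quick sanity check on those equivalences (the sensor dimensions are assumed: Canon APS-C 22.3x14.9mm, Sony/Pentax APS-C 23.5x15.6mm, full frame 36x24mm):

# Megapixel count of various formats at a 1.55 um pixel pitch
pitch_um = 1.55

def megapixels(width_mm, height_mm):
    return (width_mm * 1000 / pitch_um) * (height_mm * 1000 / pitch_um) / 1e6

print(f"Canon APS-C (22.3 x 14.9 mm): {megapixels(22.3, 14.9):.0f} MP")        # ~138 MP
print(f"Sony/Pentax APS-C (23.5 x 15.6 mm): {megapixels(23.5, 15.6):.0f} MP")  # ~153 MP
print(f"Full frame (36 x 24 mm): {megapixels(36.0, 24.0):.0f} MP")             # ~360 MP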

I'll begin by explaining what the images are and what they show. Both original images were developed from raw with CaptureOne and sharpened individually. No noise reduction applied.

1 - 5D2, full frame scaled down.
2 - 5D2, 1:1 pixel scale, cropped to the roughly 6.2x4.55mm center of the 5D2 sensor - the same size as the Q sensor
3 - Pentax Q full frame, downsampled to same size
4 - Pentax Q, 1:1 pixel scale. This is like a 100% crop from a ~360MP FF camera
5 - What the 5D2 looks like when upsampled to the same presentation size.

1
(http://i306.photobucket.com/albums/nn242/Overflate/C02_full.jpg)
2
(http://i306.photobucket.com/albums/nn242/Overflate/C02_frame.jpg)
3
(http://i306.photobucket.com/albums/nn242/Overflate/Q02_full.jpg)
4
(http://i306.photobucket.com/albums/nn242/Overflate/Q02_crop.jpg)
5
(http://i306.photobucket.com/albums/nn242/Overflate/C02_interp.jpg)

So, at F5.6, you can see that the amount of red longitudinal CA in the 400/5.6 is a much bigger problem than diffraction - on a 150MP APS sensor.
And, in the last two images (4+5) you can clearly see that the 5Dmk2 isn't even close to scratching the surface of what the 400/5.6 is capable of. Note the difference in the feather pins, lower right of the image. Also note that the small-pixel image is a LOT less noisy than the 5D2 when scaled down to the same presentation size.

A.L., from whom I borrowed these images (with permission) is on assignment in South Korea at the moment, so I can't get at the raw files unfortunately.

I've done similar comparisons with the small Nikon 1-series and the FF D800. Same result there. The smaller pixels have a lot less noise at low ISOs, and the D800 isn't even close to resolving the same amount of detail as the smaller camera. Not even with cheap lenses like the 50/1.8 and the 85/1.8. But I thought a 5Dmk2 comparison would be more acceptable here... :)
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Plamen on March 03, 2013, 05:51:34 PM
Deconvolution is an unstable process and can be practically done to a small degree only (generally speaking, without going to details). This is textbook material.

Some software "solutions" do not recover detail, they create it.

There are other approaches to create pleasant-looking images - to basically sharpen what is left, but not to recover detail which is lost.

If you have prior knowledge of what type the object is, and if the blur is not so strong, it can be done more successfully. Some algorithms might do that, to look for edges, for example. The problem is - they can "find" edges even if there are none.
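
To see the instability being described here in one line of algebra: the blur's transfer function has near-zero values at high frequencies, so an exact inverse divides noise by almost nothing. A minimal sketch (the Gaussian PSF and the noise level are assumptions), contrasting a naive inverse with a damped, regularized one:

import numpy as np

rng = np.random.default_rng(5)
n = 256

scene = np.zeros(n)
scene[100:130] = 1.0

# Assumed Gaussian PSF; its transfer function is vanishingly small at high frequencies
x = np.arange(n) - n // 2
psf = np.exp(-0.5 * (x / 3.0) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))

blurred = np.real(np.fft.ifft(np.fft.fft(scene) * H))
noisy = blurred + rng.normal(0, 1e-3, n)    # tiny amount of noise

naive = np.real(np.fft.ifft(np.fft.fft(noisy) / H))                       # exact inverse
regularized = np.real(np.fft.ifft(np.fft.fft(noisy) * np.conj(H) /
                                  (np.abs(H) ** 2 + 1e-4)))               # damped inverse

print(f"naive inverse, max abs value:       {np.abs(naive).max():.1e}")   # blows up
print(f"regularized inverse, max abs value: {np.abs(regularized).max():.2f}")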
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 03, 2013, 06:46:30 PM
Deconvolution is an unstable process and can be practically done to a small degree only (generally speaking, without going to details). This is textbook material.

Some software "solutions" do not recover detail, they create it.

There are other approaches to create pleasantly looking images - to basically sharpen what is left but not to recover detail which is lost.

If you have prior knowledge of what type the object is, and if the blur is not so strong, it can be done more successfully. Some algorithms might do that, to look for edges, for example. The problem is - they can "find" edges even if there are none.

In a single-pass process, I'd agree, deconvolution is unstable. However, if we take deblur tools as an example...in a single primary pass they can recover the majority of an image, from what looks like a complete and total loss, to something that at the very least you can clearly identify and garner some small details from. Analysis of the final image of that first pass could identify primary edges, objects, and shapes, allowing secondary, tertiary, etc. passes to be "more informed" than the first, and avoid artifacts and phantom edge detection from the first pass.

Again, we can't know with 100% accuracy all of the information required to perfectly reproduce an original scene from an otherwise inaccurate photograph. I do believe, however, that we can derive a lot of information from an image by processing it multiple times, utilizing the "richer" information of each pass to better-inform subsequent passes. The process wouldn't be fast, possibly quite slow, but I think a lot of "lost" information can be recovered, to a usefully accurate precision.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 03, 2013, 06:49:34 PM
About pixel density, resolution and diffraction

Here's an image series that shows pretty graphically what jrista probably intended with this thread. Diffraction is NOT a problem. Lens sharpness is NOT a problem. And they won't be - for yet quite a long time... We have to double the number of MP's, and then double twice more before having trouble (with "better than mediocre" lenses of course...!)

Well, the Canon 400/5.6 is no slouch, but on the other hand it's no sharpness monster either. In the following image series, it was used wide open from a sturdy tripod - at 1/60s shutter speed for both cameras, on A) the 5Dmk2 and then on B) the Pentax Q.
The Q has a pixel pitch of ~1.55µm - giving an APS sensor of about 150MP!

I'll begin by explaining what the images are and what they show. Both original images were developed from raw with CaptureOne and sharpened individually. No noise reduction was applied.

1 - 5D2, full frame scaled down.
2 - 5D2, 1:1 pixel scale, a crop of about 6.2x4.55mm from the center of the 5D2 sensor - the same size as the Q sensor
3 - Pentax Q full frame, downsampled to same size
4 - Pentax Q, 1:1 pixel scale. This is like a 100% crop from a ~360MP FF camera
5 - What the 5D2 looks like when upsampled to the same presentation size.

1
(http://i306.photobucket.com/albums/nn242/Overflate/C02_full.jpg)
2
(http://i306.photobucket.com/albums/nn242/Overflate/C02_frame.jpg)
3
(http://i306.photobucket.com/albums/nn242/Overflate/Q02_full.jpg)
4
(http://i306.photobucket.com/albums/nn242/Overflate/Q02_crop.jpg)
5
(http://i306.photobucket.com/albums/nn242/Overflate/C02_interp.jpg)

So, at F5.6, you can see that the amount of red longitudinal CA in the 400/5.6 is a much bigger problem than diffraction - on a 150MP APS sensor.
And, in the last two images (4+5) you can clearly see that the 5Dmk2 isn't even close to scratching the surface of what the 400/5.6 is capable of. Note the difference in the feather pins, lower right of the image. Also note that the small-pixel image is a LOT less noisy than the 5D2 when scaled down to the same presentation size.

A.L., from whom I borrowed these images (with permission) is on assignment in South Korea at the moment, so I can't get at the raw files unfortunately.

I've done similar comparisons with the small Nikon 1-series and the FF D800. Same result there. The smaller pixels have a lot less noise at low ISOs, and the D800 isn't even close to resolving the same amount of detail as the smaller camera. Not even with cheap lenses like the 50/1.8 and the 85/1.8. But I thought a 5Dmk2 comparison would be more acceptable here... :)

Thanks for the examples! Great demonstration of what can be done, even with a 150mp APS-C equivalent (384mp FF equivalent) sensor. I could see a 150mp APS-C being possible...Canon has created a 120mp APS-H. I wonder if a 384mp FF is plausible...
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Skulker on March 03, 2013, 08:00:15 PM
That's a very interesting post from thesuede

It would be good to see the original files, and details of the adapter used to mount that lens on that camera. I was wondering if there is a bit of an extension tube effect?

Generally, if something seems too good to be true there is a clue that there may be a catch. Any result that indicates a camera like the Pentax Q can outperform a camera like the 5D2 is an interesting one.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on March 03, 2013, 08:13:41 PM
It's a standard PQ>>EF adapter. There's no extension (obviously, since that would offset focus...).

And there's no "catch". This is generally known - among anyone who's ever tried, or who has any kind of education within the field. In fact, the PQ is almost 50% better in light-efficiency per square mm.

At higher ISOs (lower photometric exposures) the PQ suffers from a much higher integrated sum of read noise (RN) per mm², so large pixels still win for high ISO applications.

Both were shot at 1/60s, F5.6 - so the exact same photometric exposure. No glass in the adapter of course, and this means that the cameras were fed the EXACT same amount of light per mm² of sensor real estate. Both were at ISO200, but required slightly different offsets - the Canon is a large camera, giving a lot more headroom in the raw files. The PQ is really an exchangeable-lens compact camera, with a small sensor - so as little headroom as technically possible is used to offset RN. The ISO 12232:2006 (and the CIPA DC-008) standards allow defining ISO according to the camera exposure for medium gray, so that's generally ok (even if the Canon overstates ISO by a bit too much...). I'm not sure Capture One treats any of them absolutely correctly.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 03, 2013, 09:57:02 PM
That's a very interesting post from thesuede

It would be good to see the original files, and details of the adapter used to mount that lens on that camera. I was wondering if there is a bit of an extension tube effect?

Generally, if something seems too good to be true there is a clue that there may be a catch. Any result that indicates a camera like the Pentax Q can outperform a camera like the 5D2 is an interesting one.

Outperform is a broad word without an appropriate context. The Pentax SENSOR, from a spatial standpoint, definitely outperforms the 5D II. That is a simple function of pixels per unit area, and the Pentax simply has more. There is nothing really "too good to be true" about that fact.

There are numerous areas where performance can be measured and compared, in addition to the sensor. For one, despite its greater spatial resolution, the Pentax will suffer at higher ISO due to its smaller pixel size (which is still much smaller, despite it being a BSI sensor). It just can't compare to the significantly greater surface area of the 5D II's pixels, which should perform well at high ISO.

The Pentax outperforms in terms of sheer spatial resolution, but I would say the 5D II outperforms in most other areas, such as ISO performance, image resolution, camera build and ergonomics, the use of a huge optical viewfinder, shutter speed range, etc.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Plamen on March 03, 2013, 11:42:24 PM
Deconvolution is an unstable process and can be practically done to a small degree only (generally speaking, without going to details). This is textbook material.

Some software "solutions" do not recover detail, they create it.

There are other approaches to create pleasantly looking images - to basically sharpen what is left but not to recover detail which is lost.

If you have prior knowledge of what type the object is, and if the blur is not so strong, it can be done more successfully. Some algorithms might do that, to look for edges, for example. The problem is - they can "find" edges even if there are none.

In a single-pass process, I'd agree, deconvolution is unstable. However, if we take deblur tools as an example...in a single primary pass they can recover the majority of an image, from what looks like a complete and total loss, to something that at the very least you can clearly identify and garner some small details from. Analysis of the final image of that first pass could identify primary edges, objects, and shapes, allowing secondary, tertiary, etc. passes to be "more informed" than the first, and avoid artifacts and phantom edge detection from the first pass.

Again, we can't know with 100% accuracy all of the information required to perfectly reproduce an original scene from an otherwise inaccurate photograph. I do believe, however, that we can derive a lot of information from an image by processing it multiple times, utilizing the "richer" information of each pass to better-inform subsequent passes. The process wouldn't be fast, possibly quite slow, but I think a lot of "lost" information can be recovered, to a usefully accurate precision.

Convolution with a Gaussian, for example, is unstable; it is a theorem. It does not matter what you do, you just cannot recover something which is lost in the noise and in the discretization process. Such problems are known as ill-posed ones. Google the backward heat equation for example. It is a standard example in the theory of PDEs of an exponentially unstable process. The heat equation actually describes convolution with the Gaussian, and the backward one is the deconvolution.

There are various deconvolution techniques to "solve" the problem anyway. They reverse to a small extent some of the blur and take a very wild guess about what is lost. In one way or another, those are known as regularization techniques. If you look carefully at what they "recover", those are not small details but rather large ones. Here (http://yuzhikov.com/articles/BlurredImagesRestoration1.htm) is one of the best examples I found. As impressive as this may be, you can easily see that small details are gone, but the process is still usable to read text, for example. There is a lot of fake "detail" as well, like all those rings, etc.


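As a rough illustration of the "fake detail" point (rings around edges), here is a small synthetic 1-D sketch - not taken from the linked article - where a regularized deconvolution is run with a Gaussian PSF estimate that is deliberately a bit too wide; the 1e-3 regularization constant and the noise level are arbitrary stand-ins:
Code: [Select]
%% Regularized deconvolution with a slightly wrong (too wide) Gaussian PSF estimate
N = 512; n = [0:N/2-1, -N/2:-1];                        % circular sample coordinates
x = zeros(1,N); x(200:260) = 1;                         % one broad, sharp-edged feature
gt = exp(-(n.^2)/(2*4^2));  Pt = fft(gt/sum(gt));       % true PSF, sigma = 4 samples
gg = exp(-(n.^2)/(2*5^2));  Pg = fft(gg/sum(gg));       % assumed PSF, sigma = 5 (too wide)
y  = real(ifft(fft(x).*Pt)) + 0.005*randn(1,N);         % blurred observation with a little noise
rec = real(ifft(fft(y).*conj(Pg)./(abs(Pg).^2 + 1e-3)));  % Tikhonov/Wiener-style regularized division
% plot(1:N, x, 1:N, rec): the edges come back steeper, but ringed by over- and
% undershoot on both sides - "detail" that was never in the scene.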
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 04, 2013, 12:50:46 AM
Deconvolution is an unstable process and can be practically done to a small degree only (generally speaking, without going to details). This is textbook material.

Some software "solutions" do not recover detail, they create it.

There are other approaches to create pleasantly looking images - to basically sharpen what is left but not to recover detail which is lost.

If you have prior knowledge of what type the object is, and if the blur is not so strong, it can be done more successfully. Some algorithms might do that, to look for edges, for example. The problem is - they can "find" edges even if there are none.

In a single-pass process, I'd agree, deconvolution is unstable. However, if we take deblur tools as an example...in a single primary pass they can recover the majority of an image, from what looks like a complete and total loss, to something that at the very least you can clearly identify and garner some small details from. Analysis of the final image of that first pass could identify primary edges, objects, and shapes, allowing secondary, tertiary, etc. passes to be "more informed" than the first, and avoid artifacts and phantom edge detection from the first pass.

Again, we can't know with 100% accuracy all of the information required to perfectly reproduce an original scene from an otherwise inaccurate photograph. I do believe, however, that we can derive a lot of information from an image by processing it multiple times, utilizing the "richer" information of each pass to better-inform subsequent passes. The process wouldn't be fast, possibly quite slow, but I think a lot of "lost" information can be recovered, to a usefully accurate precision.

Convolution with a Gaussian, for example, is unstable; it is a theorem. It does not matter what you do, you just cannot recover something which is lost in the noise and in the discretization process. Such problems are known as ill-posed ones. Google the backward heat equation for example. It is a standard example in the theory of PDEs of an exponentially unstable process. The heat equation actually describes convolution with the Gaussian, and the backward one is the deconvolution.

I am not proclaiming that we can 100% perfectly recover the original state of an image. Even with regularization, there are certainly limits. However I think we can recover a lot, and with some guess work and information fabrication, we can get very close, even if information remains lost.


There are various deconvolution techniques to "solve" the problem anyway. They reverse to a small extent some of the blur and take a very wild guess about what is lost. In one way or another, those are known as regularization techniques. If you look carefully at what they "recover", those are not small details but rather large ones. Here (http://yuzhikov.com/articles/BlurredImagesRestoration1.htm) is one of the best examples I found. As impressive as this may be, you can easily see that small details are gone, but the process is still usable to read text, for example. There is a lot of fake "detail" as well, like all those rings, etc.

That is a great article, and a good example of what deconvolution can do. I know it is not a perfect process...but you have to be somewhat amazed at what a little math and image processing can do. That guy's sample image was almost completely blurred, and he recovered a lot of it. Not everyone is going to be recovering completely defocused images...the far more frequent case is slightly defocused images, in which case the error rate and magnitude are far lower (usually invisible), and the process is quite effective. I really love my 7D, but it does have its AF quirks. There are too many times when I end up with something ever so slightly out of focus (usually a bird's eye), and a deblur tool is useful (Topaz In Focus produces nearly perfect results.)

I'd point out that in the further examples from that link, the halos (waveform halos, or rings) are pretty bad. Topaz In Focus has the same problem, although not as seriously. From the description, it seems as though his PSF (blur function as he put it) is fairly basic (simple gaussian, although I think he mentioned a Laplacian function, which would probably be better). If you've ever pointed your camera at a point light source in a dark environment, defocused it as much as possible, and looked at the results at 100%, you can see the PSF is quite complex. It is clearly a waveform, but usually with artifacts (I call the effect "Rocks in the Pond" given how they affect the diffraction pattern.) I don't know what the fspecial function of Matlab can do, however I'd imagine a laplacian function would be best to model the waveform of a point light source.

Is it not possible to further inform the algorithm with multiple passes, identifying kernels of what are likely improperly deconvolved pixels, and re-run the process from the original blurred image? Rinse, repeat, with better information each time...such as more insight into the PSF or noise function? I haven't tried writing my own deblur tool...so it's an honest question. The gap is information...we lack enough information about the original function that did the blurring in the first place. With further image analysis after each attempt to deblur, we could continually re-inform the algorithm with richer, more accurate information. I don't see why a multi-pass deblurring deconvolution process couldn't produce better results with fewer artifacts and finer details.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Plamen on March 04, 2013, 01:34:12 AM

That is a great article, and a good example of what deconvolution can do. I know it is not a perfect process...but you have to be somewhat amazed at what a little math and image processing can do.

I am. And I am a mathematician.  :)

Quote
Is it not possible to further inform the algorithm with multiple passes, identifying kernels of what are likely improperly deconvolved pixels, and re-run the process from the original blurred image? Rinse, repeat, with better information each time...such as more insight into the PSF or noise function?

Even if you know the blur kernel, this is highly unstable. The easiest way to understand it is to consider the Fourier transform. High frequencies are attenuated, and when they get close to the level of the noise and the other errors, they are gone forever. Whatever you do, they are gone.

BTW, if the blur is done with a "sharp" kernel, like a disk with a sharp edge, this is a much better behaved problem and allows better deconvolution.
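A quick way to see both points numerically - how much of the spectrum a smooth kernel pushes below a given noise floor compared to a sharp-edged one - is the following sketch; the kernel widths and the 1e-3 "noise floor" are arbitrary choices, not measurements of any real camera:
Code: [Select]
%% Fraction of the spectrum surviving a smooth kernel vs a sharp-edged one
N = 1024; f = (0:N/2)/N;                              % normalized frequency axis (0..0.5)
g = exp(-((-N/2:N/2-1).^2)/(2*4^2)); g = g/sum(g);    % smooth Gaussian kernel, sigma = 4 samples
b = double(abs(-N/2:N/2-1) <= 6);    b = b/sum(b);    % sharp-edged (boxcar/disk-like) kernel of similar width
G = abs(fft(ifftshift(g))); B = abs(fft(ifftshift(b)));
noise_floor = 1e-3;                                   % assumed relative noise level
keepG = mean(G(1:N/2+1) > noise_floor);               % fraction of frequencies still above the noise
keepB = mean(B(1:N/2+1) > noise_floor);
fprintf('above the noise floor: Gaussian %.0f%%, sharp-edged %.0f%%\n', 100*keepG, 100*keepB);
The Gaussian's spectrum drops below the floor early and stays there, while the sharp-edged kernel only dips below it in narrow notches around its isolated zeros - which is why the latter is the better-behaved deconvolution problem.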
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on March 04, 2013, 08:23:47 AM
Two things very often glossed over by "pure" mathematicians regarding deconvolution is that:

1) The Bayer reconstruction interpolation destroys quite a lot of the valid information. The resulting interpolated values (2 colors per pixel) often have a PSNR lower than 10 in detailed areas. This makes using those values in the deconvolution a highly unstable process.

2) The PSF is modulated in four (five, but curvature of field is often unimportant if small) major patterns as you go radially outwards from the optical center. The growth rate of those modulations can be higher-order functions, and the growth in one does not necessarily have the same order or even average growth as another.

Since having a good PSF model is the absolute base requirement of a good deconvolution, this gives quite a lot of problems when looking outside the central 15-20% of the image height. Deconvolution on actual images only works with really, really good lenses and images captured with high PSNR (low noise) - which kind of limits the use case options.

The only real application at the moment (for normal photography), outside sharpening already "almost" sharp images, is removing camera shake blur. Camera shake blur has - as long as you stay within reasonable limits - a very well-defined PSF that is easy to analyze and deconvolve.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on March 04, 2013, 10:46:43 AM
Deconvolution in the Bayer domain (before interpolation, "raw conversion") is actually counterproductive, and totally destructive to the underlying information.

The raw Bayer image is not continuous, it is sparsely sampled. This makes deconvolution impossible, even in continuous hue object areas containing "just" brightness changes. If the base signal is sparsely sampled and the underlying material is higher resolution than the sampling, you get an under-determined system (http://en.wikipedia.org/wiki/Underdetermined_system (http://en.wikipedia.org/wiki/Underdetermined_system)). This is numerically unstable, and hence impossible to deconvolve.

-UNLESS the AA filter OR the diffraction effect is so strong that the blurring induced is 2 pixel widths!

Movement sensors in mobile devices today are almost always MEMS, either capacitive or piezo based. The cheapest way in production today is to use high-speed filming and then combine the images back together after registration. There are systems on the market today capable of doing this. This method is actually computationally cheaper than deconvolution, but it increases electronic noise contamination.

Applying a center-based PSF to an entire image often does little "harm" to the edges, unless the lens has a lot of coma or very different sagittal / meridional focus planes (astigmatism) - which unfortunately is quite usual even in good lenses. Coma makes the PSF very asymmetrical; all scatter is spread in a fan-like shape out towards the edge of the image from the optical center. Deconvolving this with a rotationally symmetrical PSF gives a lot of overshoot, which in turn causes massive increases in noise and exaggerated radial contrast. This can look really strange, and often worse than the original.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 04, 2013, 12:52:48 PM
Deconvolution in the Bayer domain (before interpolation, "raw conversion") is actually counterproductive, and totally destructive to the underlying information.

I think that would only be the case if you were trying to remove blur. In my experience, removal of banding noise in RAW is more effective than removing it in post. That may simply be because of the nature of banding noise, which is non-image information. I would presume that bayer interpolation performed AFTER banding noise removal would produce a better image as a result, no?
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Plamen on March 04, 2013, 10:08:40 PM
If you do linear convolution, stability can be guaranteed (e.g. by using a finite impulse response).
I am not sure what that means. Let me make it more precise: convolution with a smooth (and fast decaying) function is a textbook example of an unstable transform. It is like 2+2=4.

EDIT: I mean, inverting it is unstable.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on March 04, 2013, 10:52:20 PM
I think that would only be the case if you were trying to remove blur. In my experience, removal of banding noise in RAW is more effective than removing it in post. That may simply be because of the nature of banding noise, which is non-image information. I would presume that bayer interpolation performed AFTER banding noise removal would produce a better image as a result, no?

Yes, of course. One exception I didn't mention, since we were only (or I thought we were only) discussing sharpening right now.

Banding has a weighting effect on the R vs B chroma/luminance mix in the interpolation stage. It also overstates the influence of the slightly stronger columns (mostly column errors in Canon cameras) on the total. This means that weak-contrast horizontal lines in the (ideal) image get less weight than they should if the banding is stronger. Interpolation schemes like the one that Adobe uses (which is highly directional) react very badly to this; it almost amplifies the initial error into the final image.

Since it's a linear function - banding is separated into black offset and amplification offset, and both seem to be very linear in most cases I've seen - the influence isn't disruptive, so some of it can be repaired after interpolation too;
-but not as well as if you do it before sending the raw into the interpolation engine.
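A toy sketch of the "repair it before the interpolation engine" idea, on a synthetic mosaic: the per-column black offset is estimated from a band of masked (optically black) rows and subtracted from the whole frame. The numbers, the 8 masked rows and the noise levels are all invented for illustration; real raw formats and banding models differ:
Code: [Select]
%% Removing a linear per-column offset (banding) from a synthetic raw before demosaicing
rows = 64; cols = 64;
scene   = 100 + 20*randn(rows, cols);                 % stand-in for the real mosaic signal
col_off = 2*randn(1, cols);                           % per-column black-level error (the banding)
raw     = scene + repmat(col_off, rows, 1);           % banded raw frame
black   = repmat(col_off, 8, 1) + 0.5*randn(8, cols); % masked rows see only the offset + read noise
est_off = mean(black, 1);                             % estimate the offset column by column
clean   = raw - repmat(est_off, rows, 1);             % subtract before any interpolation
fprintf('residual column offset (std): before %.2f, after %.2f\n', ...
        std(mean(raw,1) - mean(scene,1)), std(mean(clean,1) - mean(scene,1)));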
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 04, 2013, 11:23:16 PM
I think that would only be the case if you were trying to remove blur. In my experience, removal of banding noise in RAW is more effective than removing it in post. That may simply be because of the nature of banding noise, which is non-image information. I would presume that bayer interpolation performed AFTER banding noise removal would produce a better image as a result, no?

Yes, of course. One exception I didn't mention, since we were only (or I thought we were only) discussing sharpening right now.

Banding has a weighting effect on the R vs B chroma/luminance mix in the interpolation stage. It also overstates the influence of the slightly stronger columns (mostly column errors in Canon cameras) on the total. This means that weak-contrast horizontal lines in the (ideal) image get less weight than they should if the banding is stronger. Interpolation schemes like the one that Adobe uses (which is highly directional) react very badly to this; it almost amplifies the initial error into the final image.

Since it's a linear function - banding is separated into black offset and amplification offset, and both seem to be very linear in most cases I've seen - the influence isn't disruptive, so some of it can be repaired after interpolation too;
-but not as well as if you do it before sending the raw into the interpolation engine.

Aye, sorry. Wavelet deconvolution of a RAW was primarily used for debanding, which is why I mentioned it before. As for the rest, yes, we were talking about sharpening.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on March 04, 2013, 11:28:08 PM
Even if you know the blur kernel, this is highly unstable.
If you do linear convolution, stability can be guaranteed (e.g. by using a finite impulse response).
I am not sure what that means. Let me make it more precise: convolution with a smooth (and fast decaying) function is a textbook example of an unstable transform. It is like 2+2=4.

Well, even if you can get the convolution PROCESS stable (i.e. numerically stable), there's no guarantee that the result isn't resonant. I'm guessing this is where you're talking right past each other.

And many of the destabilizing problems in the base material (the raw image) are synthetic - they stem from results that are weakly determined (like noise) or totally undetermined (like interpolation errors from having too weak an AA-filter)... The totally undetermined errors are the worst, since they are totally unpredictable.

The only really valid and reasonable way we have to deal with this right now is to shoot with a camera that has "more MP than we really need", and then downsample the result. Doing deconvolution sharpening on an image that has been scaled down to 1:1.5 or 1:2.0 (9MP or 5.5MP from a 5D2) usually yields reasonably accurate results. You can choose much more aggressive parameters without risking resonance.

I wouldn't even care if the camera I bought next year had 60MP but only gave me 15MP raw files
- as long as the internal scaling algorithms were of reasonable quality. A 15MP [almost] pixel-perfect image is way more than most people need. And it's actually way more real detail than what you get from a 20-24MP camera.

One should not confuse technical resolution with image detail - since resolution is almost always linearly defined (along one linear axis) and most often not color-accurate. And the resolution-to-detail ratio in a well-engineered Bayer-based camera is about sqrt(2). You have to downsample to about half the MP of the original raw to get reasonably accurate pixel-level detail, even if the RESOLUTION of the raw might approach line-perfect.
Line-perfect = a sensor 5000 pixels wide that can resolve 5000 vertical lines - that is most cameras today.
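As a small sketch of that workflow - bin down 2x first, then sharpen harder than you would dare at the native scale - here is a toy example; the synthetic image, the 2x2 box binning and the plain unsharp mask (standing in for a real deconvolution sharpener) are all simplifications:
Code: [Select]
%% Downsample 2x (simple box binning), then sharpen the binned image aggressively
img = conv2(double(rand(512) > 0.995), ones(5)/25, 'same');    % synthetic soft-detail image
small = (img(1:2:end,1:2:end) + img(2:2:end,1:2:end) + ...
         img(1:2:end,2:2:end) + img(2:2:end,2:2:end)) / 4;     % 2x2 box binning to half size
blurk = ones(3)/9;                                             % small blur kernel for the unsharp mask
sharp = small + 1.5*(small - conv2(small, blurk, 'same'));     % aggressive sharpening at the binned scale
% The binning averages away much of the pixel-level error first, so the strong
% amount (1.5) produces far less visible ringing and noise amplification than the
% same setting applied to the full-size image would.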
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 05, 2013, 12:05:59 AM
Even if you know the blur kernel, this is highly unstable.
If you do linear convolution, stability can be guaranteed (e.g. by using a finite impulse response).
I am not sure what that means. Let me make it more precise: convolution with a smooth (and fast decaying) function is a textbook example of an unstable transform. It is like 2+2=4.

One should not confuse technical resolution with image detail - since resolution is almost always linearly defined (along one linear axis) and most often not color-accurate. And the resolution-to-detail ratio in a well-engineered Bayer-based camera is about sqrt(2). You have to downsample to about half the MP of the original raw to get reasonably accurate pixel-level detail, even if the RESOLUTION of the raw might approach line-perfect.
Line-perfect = a sensor 5000 pixels wide that can resolve 5000 vertical lines - that is most cameras today.

I am not so sure about the accurate pixel-level detail comment. I might extend that to "color-accurate pixel-level detail", given the spatial resolution of red and blue is half that of green. When I combine my 7D with the new 500 II, I get some truly amazing detail at 100% crop:

(http://i.imgur.com/VC3kIDp.jpg)

In the crop of the finch above, the per-pixel detail is exquisite. All the detail that is supposed to be there is there, it is accurately reproduced, it is very sharp (that image has had zero post-process sharpening applied), contrast is great, and noise performance is quite good. I've always liked my 7D, but there were definitely times when IQ disappointed. Until I popped on the 500mm f/4 L II. As I was saying originally in the conversation that started this thread...there is absolutely no substitute for high quality glass, and I think many of the complaints we have about Bayer sensor technology really boil down to bad glass.

Which, ironically, is NOT a bad thing. I completely agree with you, that the only way to truly preserve all the detail our lenses resolve is to use a sensor that far outresolves the lens and downsample....however that takes us right back to the point we started with: People bitch when their pixel-peeping shows up "soft" detail. (Oh, I so can't wait for the days of 300ppi desktop computer screens...then people won't even be able to SEE a pixel, let alone complain about IQ at pixel level. :))
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 05, 2013, 02:31:06 AM
Deconvolution in the Bayer domain (before interpolation, "raw conversion") is actually counterproductive, and totally destructive to the underlying information.

The raw Bayer image is not continuous, it is sparsely sampled. This makes deconvolution impossible, even in continuous hue object areas containing "just" brightness changes. If the base signal is sparsely sampled and the underlying material is higher resolution than the sampling, you get an under-determined system (http://en.wikipedia.org/wiki/Underdetermined_system (http://en.wikipedia.org/wiki/Underdetermined_system)). This is numerically unstable, and hence impossible to deconvolve.
There is no doubt that the CFA introduces uncertainty compared to sampling all colors at each site. I believe I was thinking about cases where we have some prior knowledge, or where an algorithm or photoshop-dude can make correct guesses afterwards. Perhaps what I am suggesting is that debayer and deconvolution ideally should be done jointly.
(http://ivrg.epfl.ch/files/content/sites/ivrg/files/supplementary_material/AlleyssonS04/images/cfafft.jpg)
If the scene is achromatic, then "demosaic" should amount to something like a global WB, and filtering might destroy recoverable detail - the CFA in itself does not reduce the amount of spatial information compared to a filterless sensor. If the channels are nicely separated in the 2-D DFT, you want to follow those segments when deconvolving?

-h

On a per-pixel level, a Bayer sensor is only receiving 30-40% of the information an achromatic sensor is getting. That implies a LOSS of information is occurring due to the filtering of the CFA. You have spatial information, for the same number of samples over the same area...but the information in each sample is anemic compared to what you get with an achromatic sensor. That is the very reason we need to demosaic and interpolate information at all...that can't be a meaningless factor.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Plamen on March 05, 2013, 08:06:35 AM
This is something different.

Deconvolution (with a smooth kernel) is a small denominator problem. In the Fourier domain, you divide by something very small for large frequencies (like a Gaussian). Noise and discretization errors make it impossible for large frequencies. As simple as that. Again, textbook material, nothing to discuss really.

Note, you do not even need to have zeros. Also, deconvolution is unstable before you sample (which makes it worse), so no need of discrete models.

If you do linear convolution, stability can be guaranteed (e.g. by using a finite impulse response).
I am not sure what that means. Let it make it more precise: convolution with a smooth (and fast decaying) function is a textbook example of a unstable transform. It is like 2+2=4.

EDIT: I mean, inverting it is unstable.
http://en.wikipedia.org/wiki/Z-transform (http://en.wikipedia.org/wiki/Z-transform)
http://en.wikipedia.org/wiki/Finite_impulse_response (http://en.wikipedia.org/wiki/Finite_impulse_response)
http://en.wikipedia.org/wiki/Autoregressive_model#Yule-Walker_equations (http://en.wikipedia.org/wiki/Autoregressive_model#Yule-Walker_equations)
http://dsp.rice.edu/sites/dsp.rice.edu/files/md/lec15.pdf (http://dsp.rice.edu/sites/dsp.rice.edu/files/md/lec15.pdf)
"Inverse systems
Many signal processing problems can be interpreted as trying to undo the action of some system. For example, echo cancellation, channel equalization, etc.
If our goal is to design a system HI that reverses the action of H, then we clearly need H(z)HI(z) = 1, i.e. HI(z) = 1/H(z). Thus, the zeros of H(z) become poles of HI(z), and the poles of H(z) become zeros of HI(z). Recall that H(z) being stable and causal implies that all poles are inside the unit circle. If we want H(z) to have a stable, causal inverse HI(z), then we must have all zeros inside the unit circle (since they become the poles of HI(z)). Combining these, H(z) is stable and causal with a stable and causal inverse if and only if all poles and zeros of H(z) are inside the unit circle. This type of system is called a minimum phase system."


For images you usually want a linear phase system. A pole can be approximated by a large number of zeros. For image processing, large delay may not be a problem (easily compensated), so a system function of "1" can be replaced by z^(-D)

MATLAB example:
Code: [Select]
%% setup an all-pole filter
order = 2;
a = [1 -0.5 0.1];
%% generate a vector of normally distributed noise
n = randn(1024,1);
%% apply the (allpole) filter to the noise
x = filter(1,a,n);
%% apply the inverse (allzero) filter
n_hat = filter(a,1,x);
%% see what happened
sum(abs(n - n_hat))
sum(abs(n))
%% get the first 10 samples of each impulse response (forward and inverse filter)
[h1,t1] = impz(1,a, 10);
[h2,t2] = impz(a,1, 10);
Output:
>>2.3512e-14
>>815.4913


-h
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 05, 2013, 11:22:00 AM
Deconvolution (with a smooth kernel) is a small denominator problem. In the Fourier domain, you divide by something very small for large frequencies (like a Gaussian). Noise and discretization errors make it impossible for large frequencies. As simple as that. Again, textbook material, nothing to discuss really.

Given that there are numerous, very effective deconvolution algorithms that operate both on the RAW bayer data as well as demosaiced RGB data, using algorithms in any number of domains including Fourier, which produce excellent results for denoising, debanding, deblurring, sharpening, etc., it would stand to reason that these problems are NOT "impossible" problems to solve.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Plamen on March 05, 2013, 02:23:14 PM
Deconvolution (with a smooth kernel) is a small denominator problem. In the Fourier domain, you divide by something very small for large frequencies (like a Gaussian). Noise and discretization errors make it impossible for large frequencies. As simple as that. Again, textbook material, nothing to discuss really.

Given that there are numerous, very effective deconvolution algorithms that operate both on the RAW bayer data as well as demosaiced RGB data, using algorithms in any number of domains including Fourier, which produce excellent results for denoising, debanding, deblurring, sharpening, etc., it would stand to reason that these problems are NOT "impossible" problems to solve.

If you say so... I am just telling you how it is. What you wrote is not related to my post.

Let us try it. I will post a blurred landscape, you will restore the detail. Deal?
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Plamen on March 05, 2013, 02:27:09 PM

Sampling is a nonlinear process, and having denser sampling would probably make it easier to treat it as a linear problem (in lower spatial frequencies), if it comes at no other cost (e.g. Q.E.). Here's to future cameras that "outresolve" their lenses.

I will try one more time. There is no sampling, no pixels. There is a continuous image projected on a piece of paper, blurred by a defocused lens. Sample it, or study it under an electron microscope, whatever. "High enough" frequencies cannot be restored.

You are stuck in the discrete model. Forget about it. The loss happens BEFORE the light even hits the sensor.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on March 05, 2013, 05:58:47 PM
Well, even if you can get the convolution PROCESS stable (i.e. numerically stable), there's no guarantee that the result isn't resonant. I'm guessing this is where you're talking right past each other.
Not sure that I understand what you are saying. If you apply the exact inverse to a noiseless signal, I would not expect ringing in the result. If the inverse is only approximate, it really depends on the measurement and the approximation. If you have a large, "hairy" estimated PSF, you might like its approximate inverse to be too compact and smooth, rather than too large and irregular. If the SNR is really poor, there is perhaps no point in doing deconvolution (good denoising may be more important).

Very deep spectral nulls are a problem that I acknowledged earlier. For anyone but (perhaps) textbook authors, they spell locally poor SNR. The OLPF tends to add a shifted version of the signal to itself, causing a comb-filter response with (in principle) zero gain at the nulls.

Is this a problem in practice? That depends. Is the first zero within the band that we want to deconvolve? (I would think that they place the first zero close to Nyquist to make a crude lowpass filter, so no.) Is the total PSF dominated by the OLPF, or do other contributors cause it to "blur" into something more Gaussian-like?

-h

Actually, Mr Nyquist was recently taken out behind a barn and shot... The main suspect is Mr Bayer. He made the linear sampling theorem totally irrelevant at pixel scale magnifications by adding sparse sampling into the image data. :)

To make this even more clear, imagine a strongly saturated color detail in an image. It may be a thin hairline, or a single point - doesn't matter. If that detail is smaller than two pixel widths, the information it leaves on the sensor is totally dependent on spatial relations, where the detail lands on the CFA pattern of the sensor.

**If the detail is centered on a "void", a different CFA color pixel, it will leave no trace in the raw file. We literally won't know it was there, just by looking at the raw file.
**If the detail is centered on a same-color pixel in the CFA on the other hand, it will have 100% effect.

The reason you cannot apply deconvolution to raw data (and actually not to interpolated data with low statistical reliability either...) is rather easy to see... Look at the image to the far right. It is the red channel result of letting the lens project pattern "A" on to the Bayer red pattern "B".

Can you, just by looking at result "C" get back to image "A"? Deconvolution is of no help here, since the data is sparsely sampled - 75% of the data is missing.
(https://lh4.googleusercontent.com/-KklrTfUSJn0/UTZ38BKMmLI/AAAAAAAAFBI/grLti-dt6vo/s469/NondeterminedConvolution.png)
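The "void" case is easy to reproduce numerically. Below is a toy sketch of a one-pixel-wide, fully saturated red line sampled by only the red sites of an (assumed RGGB) CFA - the layout, the 8x8 size and the line positions are arbitrary:
Code: [Select]
%% A thin saturated red detail sampled by the red sites of a Bayer CFA
red_scene = zeros(8);  red_scene(:,4) = 1;        % thin vertical red line in column 4
[cc, rr]  = meshgrid(1:8, 1:8);
red_sites = mod(rr,2)==1 & mod(cc,2)==1;          % assumed RGGB layout: red on odd rows / odd columns
sampled   = red_scene .* red_sites;               % what actually lands in the raw red channel
% Column 4 has no red photosites, so the line leaves no trace at all in the raw
% red data; shift it one column over and half of its length is recorded.
fprintf('red samples seen on column 4: %d, on column 3: %d\n', ...
        sum(sampled(:,4)), sum(red_sites(:,3)));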
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Plamen on March 05, 2013, 07:18:30 PM
I will try one more time. There is no sampling, no pixels. There is a continuous image projected on a piece of paper, blurred by a defocused lens. Sample it, or study it under an electron microscope, whatever. "High enough" frequencies cannot be restored.

You are stuck in the discrete model. Forget about it. The loss happens BEFORE the light even hits the sensor.
While it is interesting chatting about what we would do if we had access to a continuous-space, general convolver that we could insert in front of our camera sensors, we don't have one, do we?
Quote

That is the problem, we do not. Blurring happens before the discretization. Even if you had your ideal sensor, you still have a problem.
What we have is a discrete sampling sensor, and a very discrete dsp that can do operations on discrete data.

I have no idea what you mean by "high enough" frequency. Either SNR or Nyquist should put a limit on recoverable high-frequency components. The question, I believe, is what can be done up to that point, and how it will look to humans.

Like I said, I believe that I have proven you wrong in one instance. As you offer only claims, not references, how are we to believe your other claims?

I already said what "high enough" means. I can formulate it precisely but you will not be able to understand it. Yes, noise has something to do with it, but also how fast the Fourier transform decays.

I do not make "claims". If you have the background, you will understand what I mean. If not, nothing helps.

BTW, I do research in related areas and know what I am talking about.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 05, 2013, 08:16:55 PM
I will try one more time. There is no sampling, no pixels. There is a continuous image projected on a piece of paper, blurred by a defocused lens. Sample it, or study it under an electron microscope, whatever. "High enough" frequencies cannot be restored.

You are stuck in the discrete model. Forget about it. The loss happens BEFORE the light even hits the sensor.
While it is interesting chatting about what we would do if we had access to a continuous-space, general convolver that we could insert in front of our camera sensors, we don't have one, do we?
Quote

That is the problem, we do not. Blurring happens before the discretization. Even if you had your ideal sensor, you still have a problem.
What we have is a discrete sampling sensor, and a very discrete dsp that can do operations on discrete data.

I have no idea what you mean by "high enough" frequency. Either SNR or Nyquist should put a limit on recoverable high-frequency components. The question, I believe, is what can be done up to that point, and how it will look to humans.

Like I said, I believe that I have proven you wrong in one instance. As you offer only claims, not references, how are we to believe your other claims?

I already said what "high enough" means. I can formulate it precisely but you will not be able to understand it. Yes, noise has something to do with it, but also how fast the Fourier transform decays.

I do not make "claims". If you have the background, you will understand what I mean. If not, nothing helps.

BTW, I do research in related areas and know what I am talking about.

Simply proclaiming that you are smarter than everyone else doesn't help the discussion. It's cheap and childish. Why not enlighten us in a way that doesn't require a Ph.D. to understand what you are attempting to explain, so the discussion can continue? I don't know as much as the three of you, however I would like to learn. Hjulenissen and TheSuede are very helpful in their part of the debate...however you, in your more recent posts, have just become snide, snarky and egotistical. Grow up a little and contribute to the discussion, rather than try to shut it down.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Plamen on March 05, 2013, 09:05:38 PM
Simply proclaiming that you are smarter than everyone else doesn't help the discussion. It's cheap and childish. Why not enlighten us in a way that doesn't require a Ph.D. to understand what you are attempting to explain, so the discussion can continue? I don't know as much as the three of you, however I would like to learn. Hjulenissen and TheSuede are very helpful in their part of the debate...however you, in your more recent posts, have just become snide, snarky and egotistical. Grow up a little and contribute to the discussion, rather than try to shut it down.

I did already. And I was not the first to try to shut it down. OK, last attempt.

The image projected on the sensor is blurred with some kernel (a.k.a. PSF), most often smooth (has many derivatives). One notable exception is motion blur when the kernel can be an arc of a curve - things change there. The blur is modeled by a convolution. (http://en.wikipedia.org/wiki/Optical_resolution#System_resolution) Note: no pixels here. Imagine a sensor painted over with smooth paint.

Take the Fourier transform of the convolution. You get a product of FTs (http://en.wikipedia.org/wiki/Optical_resolution#System_resolution) (next paragraph). Now, since the PSF is smooth, say something like a Gaussian, its FT decays rapidly. The effect of the blur is to multiply the high frequencies by a function which is very small there. You could just divide to get a reconstruction - right? But you divide something small + noise/errors by something small again. This is the well-known small denominator problem, google it. Beyond some frequency, determined by the noise level and by how fast the FT of the PSF decays, you have more noise than signal. That's it, basically. The usual techniques basically cut near that frequency in one way or another.

The errors that I mentioned can have many causes. For example, not knowing the exact PSF or errors in its approximation/discretization, even if we somehow knew it. Then usual noise, etc.

Problems of this class are known as ill-posed. There are people spending their lives and careers on them. There are journals devoted to them. Deconvolution is perhaps the simplest example; it is equivalent to the backward solution of the heat equation (http://people.maths.ox.ac.uk/trefethen/pdectb/backward2.pdf) (for Gaussian kernels).

Here (http://www.math.nsc.ru/LBRT/u2/Survey%20paper.pdf) is a reference from a math paper; see the example in the middle of the page. I know the author, he is a very well respected specialist. Do not expect to read more there than I told you, because this is such a simple problem that it can only serve as an introductory example to the theory.

Again - no need to invoke sensors and pixels at all. They can only make things worse by introducing more errors.

The main mistake so many people here make - they are so deeply "spoiled" by numerics and discrete models that they automatically assume that we have discrete convolution (http://www.ece.unm.edu/signals/signals/Discrete_Convolution/discrete_convolution.html). Well, we do not. We sample an image which is already convolved. This makes the problem even more ill-posed.
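For anyone who wants to see the small-denominator effect rather than take it on faith, here is a minimal 1-D sketch: the same blurred, noisy signal is "deconvolved" once by naive division with the PSF spectrum and once with a regularized division that effectively cuts the frequencies where the denominator is tiny. The PSF width, noise level and the 1e-3 regularization constant are arbitrary:
Code: [Select]
%% Naive inverse filtering vs regularized division (small denominator problem)
N = 512; n = [0:N/2-1, -N/2:-1];                     % circular sample coordinates
x = double(rand(1,N) > 0.99);                        % sparse synthetic signal
psf = exp(-(n.^2)/(2*3^2)); psf = psf/sum(psf);      % smooth Gaussian PSF, sigma = 3 samples
P = fft(psf);                                        % its spectrum: tiny at high frequencies
y = real(ifft(fft(x).*P)) + 0.01*randn(1,N);         % blur, then add a little noise
naive  = real(ifft(fft(y)./P));                      % divide noise by something tiny -> blows up
wiener = real(ifft(fft(y).*conj(P)./(abs(P).^2 + 1e-3)));   % regularized division cuts those frequencies
fprintf('RMS error: naive %.3g, regularized %.3g\n', ...
        sqrt(mean((naive-x).^2)), sqrt(mean((wiener-x).^2)));
The naive result is destroyed by the amplified noise even though the PSF is known exactly, while the regularized version recovers the large-scale structure and simply gives up on the frequencies that were pushed below the noise - which is the point being made above.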
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 06, 2013, 12:01:46 AM
Simply proclaiming that you are smarter than everyone else doesn't help the discussion. Its cheap and childish. Why not enlighten us in a way that doesn't require a Ph.D. to understand what you are attempting to explain, so the discussion can continue? I don't know as much as the three of you, however I would like to learn. Hjulenissen and TheSuede are very helpful in their part of the debate...however you, in your more recent posts, have just become snide, snarky and egotistical. Grow up a little and contribute to the discussion, rather than try to shut it down.

I did already. And I was not the first to try to shut it down. OK, last attempt.

The image projected on the sensor is blurred with some kernel (a.k.a. PSF), most often smooth (has many derivatives). One notable exception is motion blur when the kernel can be an arc of a curve - things change there. The blur is modeled by a convolution. (http://en.wikipedia.org/wiki/Optical_resolution#System_resolution) Note: no pixels here. Imagine a sensor painted over with smooth paint.

Take the Fourier transform of the convolution. You get a product of FTs (http://en.wikipedia.org/wiki/Optical_resolution#System_resolution) (next paragraph). Now, since the PSF is smooth, say something like a Gaussian, its FT decays rapidly. The effect of the blur is to multiply the high frequencies by a function which is very small there. You could just divide to get a reconstruction - right? But you divide something small + noise/errors by something small again. This is the well-known small denominator problem, google it. Beyond some frequency, determined by the noise level and by how fast the FT of the PSF decays, you have more noise than signal. That's it, basically. The usual techniques basically cut near that frequency in one way or another.

The errors that I mentioned can have many causes. For example, not knowing the exact PSF or errors in its approximation/discretization, even if we somehow knew it. Then usual noise, etc.

Problems of this class are known as ill-posed. There are people spending their lives and careers on them. There are journals devoted to them. Deconvolution is perhaps the simplest example; it is equivalent to the backward solution of the heat equation (http://people.maths.ox.ac.uk/trefethen/pdectb/backward2.pdf) (for Gaussian kernels).

Here (http://www.math.nsc.ru/LBRT/u2/Survey%20paper.pdf) is a reference from a math paper; see the example in the middle of the page. I know the author, he is a very well respected specialist. Do not expect to read more there than I told you, because this is such a simple problem that it can only serve as an introductory example to the theory.

Again - no need to invoke sensors and pixels at all. They can only make things worse by introducing more errors.

The main mistake so many people here make - they are so deeply "spoiled" by numerics and discrete models that they automatically assume that we have discrete convolution (http://www.ece.unm.edu/signals/signals/Discrete_Convolution/discrete_convolution.html). Well, we do not. We sample an image which is already convolved. This makes the problem even more ill-posed.

Thanks for the links. Let me read before I respond.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Plamen on March 06, 2013, 12:49:54 AM
Forgot to say which example in the last link - Example 3.18.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: Plamen on March 06, 2013, 07:07:22 AM
Everything that you say here has been mentioned (in some form) several times in the thread, I believe. If that is the ground-breaking flaw in prior posts that you wanted to point out, you must not have seen those posts?
Why did you object to my previous posts then? So all this has been mentioned before, I mentioned it again, and yet you felt the need to deny it?
Quote
I think that the main mistake you are making is using a patronizing tone without actually reading or comprehending what they are writing. That makes it difficult to turn this into the interesting discussion it should have been. If you really are a scientist working on deconvolution, I am sure that we could learn something from you, but learning is a lot easier when words are chosen more wisely than yours.

The discussion was civil until you decided to proclaim superiority and become patronizing. Read your own posts again.

I am done with this.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: 3kramd5 on March 06, 2013, 08:35:58 AM
Now, if all thread contributors agree that noise and hard-to-characterize PSF kernels are the main practical obstacles to deconvolution (along with the sampling process and color filtering), this thread can be more valuable to the reader.

If that's what you're going for, perhaps a definition of all acronyms used would be of use. This thread rapidly went from fairly straightforward to deeply convoluted (pun intended) and jargon-heavy.

I think I have an approximate understanding of the current line of discussion, however I'm not sure how it relates back to the OP (it seems to have shifted from whether it makes sense to have a higher spatial resolution sensor in a diffraction-limited case to whether one can always algorithmically fix images which may have a host of problems).
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: 3kramd5 on March 06, 2013, 11:14:59 AM
I believe that an "oversampling" sensor, i.e. one in which sensel density is higher than what is mandated by the expected end-to-end system resolution, would make problems easier if we could have it at no other cost.

That's essentially what we have with 2MP video from these 20+MP still sensors, right?
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 06, 2013, 12:48:15 PM
Why did you object to my previous posts then?
I objected to your claim that convolution with a smooth (and fast decaying) function did not have a stable inverse. I believe that I have shown an example of the opposite. If what you tried to say was that SNR and an unknown PSF are the problem, I wish you had said so.
http://www.canonrumors.com/forum/index.php?topic=13249.msg239997#msg239997 (http://www.canonrumors.com/forum/index.php?topic=13249.msg239997#msg239997)

Now, if all thread contributors agree that noise and hard-to-characterize PSF kernels are the main practical obstacles to deconvolution (along with the sampling process and color filtering), this thread can be more valuable to the reader.

I think Plamen's point is that noise and PSF kernels ARE hard to characterize. We can't know the effect atmosphere has on the image the lens is resolving. Neither can we truly know enough about the characteristics of noise (of which there are many varieties, not just photon shot noise that follows a Poisson distribution). We can't know how the imperfections in the elements of a lens affect the PSF, etc.

Based on one of the articles he linked, the notion of an ill-posed problem is fundamentally based on how much knowledge we "can" have about all the potential sources of error, and the notion that small amounts of error in source data can result in large amounts of error in the final solution. Theoretically, assuming we have the capacity for infinite knowledge, along with the capability of infinite measurement and infinite processing power, I don't see why the notion of an ill-posed problem would even exist. However given the simple fact that we can't know all the factors that may lead to error in the fully convolved image projected by a lens (even before it is resolved and converted into a signal by an imaging sensor), we cannot PERFECTLY deconvolve that image.

To Hjulenissen's point, I don't think anyone here is actually claiming we can perfectly deconvolve any image. The argument is that we can use deconvolution to closely approximate, to a level "good enough", the original image such that it satisfies viewers....in most circumstances. Can we perfectly and completely deconvolve a totally blurred image? No. With further research, the gathering of further knowledge, better estimations, more advanced algorithms, and more/faster compute cycles, I think we could deconvolve an image that is unusably blurred into something that is more usable, if not completely usable. That image would not be 100% exactly what actually existed in the real-world 3D scene...but it could be good enough. There will always be limits to how far we can push deconvolution...beyond a certain degree, the error in the solution to any of the problems we try to solve with deconvolution will eventually become too large to be acceptable.

Finally, to the original point that started this tangent of the discussion...why higher resolution sensors are more valuable. TheSuede pointed out that, because of the sparse sampling of a Bayer sensor, this only poses a problem for the final output so long as the highest input frequencies are as high as or higher than the sampling frequency. When the sampling frequency outresolves the highest spatial frequencies of the image, preferably by a factor of 2x or more, the potential for rogue error introduced by the sampling process itself (i.e. moiré) approaches zero. That is basically what I was stating with my original post, and ignoring the potential that post-process deconvolution may offer, assuming we eventually do end up with sensors that outresolve the lenses by more than a factor of two (i.e. the system is always operating in a diffraction or aberration limited state), image quality should be BETTER than when the system has the potential to operate at a state of normalized frequencies. Additionally, a sensor that outresolves, or oversamples, should make it easier to deconvolve....sharpen, denoise, deband, deblur, correct defocus, etc.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 06, 2013, 04:24:48 PM
To Hjulenissen's point, I don't think anyone here is actually claiming we can perfectly deconvolve any image. The argument is that we can use deconvolution to closely approximate, to a level "good enough", the original image such that it satisfies viewers....in most circumstances. Can we perfectly and completely deconvolve a totally blurred image? No. With further research, the gathering of further knowledge, better estimations, more advanced algorithms, and more/faster compute cycles, I think we could deconvolve an image that is unusably blurred into something that is more usable, if not completely usable. That image would not be 100% exactly what actually existed in the real-world 3D scene...but it could be good enough. There will always be limits to how far we can push deconvolution...beyond a certain degree, the error in the solution to any of the problems we try to solve with deconvolution will eventually become too large to be acceptable.
Is it not fundamentally a problem of information?

If you have perfect knowledge of the "scrambling" carried out by a _pure LTI process_, you can in principle invert it (possibly with a delay that can be compensated) or as closely as you care, and in practice you usually can come close.

Yes, it is fundamentally a problem of information. This is where everyone is on the same page; I just think the page is interpreted a bit differently. Plamen's point is that we simply can't have all the information necessary to deconvolve an image beyond certain limits, and that those limits are fairly restrictive. We can't have perfect knowledge, and I don't think any of the processes that convolve the image are "pure" in any sense. Hence the notion that the problem is ill-posed.


Even with perfect knowledge of the statistics of an (e.g additive) noise corruption, you cannot usually recreate the original data perfectly. You would need deterministic knowledge of the actual noise sequence, something that is unrealistic (mother nature tends to not tell us beforehand how the dice will turn out).

Aye, again to Plamen's point...because we cannot know deterministically what the actual noise is, the problem is ill-posed. That does not mean we cannot solve the problem, it just means we cannot arrive at a perfect solution. We have to approximate, cheat, hack, fabricate, etc. to get a reasonable result...which again is subject to certain limitations. Even with a lot more knowledge and information than we have today, it is unlikely a completely blurred image from a totally defocused lens could ever be restored to artistic usefulness. We might be able to restore such an image well enough that it could be used for, say, a police investigation of a stolen car. Conversely, we could probably never fully restore the image to a degree that it would satisfy the need for near-perfect reproduction of a scene that could be printed large.

Even with good estimation of the corrupting PSF, practical systems tend to have a variable PSF. If you try to apply 30dB of gain at an assumed spectral null, and that null has moved slightly so the corruption gain is no longer -30dB but -5dB, you are in trouble.

Real systems have both variable/unknown PSF and noise. Simple theory and back-of-envelope estimates are nice to have, but good and robust solutions might be expected to have all kinds of inelegant band-aids and perceptually motivated hacks to make it actually work.

Completely agree here. All we need is "good enough" to trick the mind into thinking we have what we want. For a policeman investigating a car theft, that point may be reached when he can read a license plate from a blurry photo. For a fine art nature photographer, that point could be reached when the expected detail resolves...even if some of it is fabricated.
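
As a concrete illustration of the "gain at a spectral null" trouble and the band-aids, here is a rough 1-D sketch (the blur, the noise level, and the regularization constant are all assumed values):

import numpy as np

rng = np.random.default_rng(0)
n = 512
x = np.zeros(n)
x[200:312] = 1.0                                   # a simple "scene": one bright bar
psf = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0 ** 2)
psf /= psf.sum()                                   # Gaussian blur, sigma = 4 samples

H = np.fft.rfft(np.fft.ifftshift(psf))             # blur transfer function
blurred = np.fft.irfft(np.fft.rfft(x) * H, n)
noisy = blurred + rng.normal(0, 0.01, n)           # a little additive noise

naive = np.fft.irfft(np.fft.rfft(noisy) / H, n)    # pure inversion of the known blur
k = 1e-3                                           # regularization strength, tuned by hand
wiener = np.fft.irfft(np.fft.rfft(noisy) * np.conj(H) / (np.abs(H) ** 2 + k), n)

def rms_error(estimate):
    return np.sqrt(np.mean((estimate - x) ** 2))

print("RMS error, naive inverse      :", rms_error(naive))
print("RMS error, regularized inverse:", rms_error(wiener))

The naive inverse applies enormous gain where the blur response is nearly zero and the result is swamped by amplified noise; the regularized version simply gives up on those frequencies, which is one of the "inelegant band-aids" that makes deconvolution usable in practice.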

assuming we eventually do end up with sensors that outresolve the lenses by more than a factor of two (i.e. the system is always operating in a diffraction- or aberration-limited state), image quality should be BETTER than when the sensor and the lens resolve at similar spatial frequencies. Additionally, a sensor that outresolves, or oversamples, should make it easier to deconvolve....sharpen, denoise, deband, deblur, correct defocus, etc.
I agree. As density approaches infinity, the discretely sampled sensor approaches an ideal "analog" sensor. As we move towards a single/few-bit photon-counting device, the information present in a raw file would seem to be somewhat different. Perhaps we simple linear-system people would have to educate ourselves in quantum physics to understand how the information should be interpreted?

Yet, people seem very fixated on questions like "when will Nikon deliver lenses that outresolve the D800 36 MP FF sensor?" Perhaps it is only human to long for a flat passband up to the Nyquist frequency, no matter how big you would have to print in order to appreciate it?

I think people just want sharp results straight out of the camera. It is one thing to understand the value behind a "soft" image that has been highly oversampled, with the expectation that you will always downsample for any kind of output...including print.

Most people don't think that way. They look at the pixels they have and think: This doesn't look sharp! That's pretty much all it really boils down to, and probably all it will ever boil down to.  :P

http://ericfossum.com/Presentations/2011%20December%20Photons%20to%20Bits%20and%20Beyond%20r7%20web.pdf (http://ericfossum.com/Presentations/2011%20December%20Photons%20to%20Bits%20and%20Beyond%20r7%20web.pdf) (slide 39 onwards)

Quanta Image Sensor

Does anyone understand why the 3D convolution of the "jots" in X,Y,t is claimed to be a non-linear convolution?

That link didn't load. However, given the mention of "jots", it reminds me of this paper:

Gigapixel Digital Film Sensor Proposal (http://ericfossum.com/Publications/Papers/Gigapixel%20Digital%20Film%20Sensor%20Proposal.pdf)
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on March 06, 2013, 10:03:02 PM
Well, with that (mostly) out of the way, I guess one could set the practical, production-related problems as being:


As long as no disruptive technology surfaces in production quantities (millions of samples per year), like production-scale manufacture of nano-dot technology or an angle-invariant version of the symmetric-deflector color splitter in Panasonic's latest patents, we're stuck with Bayer, like it or not. And for sharpening (deconvolution) and noise reduction (pattern recognition), the much improved per-pixel statistical quality you get by downsampling an image that originally contains more resolution than you need is actually cost- and energy-efficient compared to pouring computational power on insufficient base material.

But what we want in the end is to find something other than Bayer - something that actually uses more of the energy the lens sends through to the sensor. As I mentioned earlier, we're only integrating about 10-15% of the light projection into electric current today. What we'd need is a GOOD implementation of a "Foveon-type" sensor that can use all the visible wavelengths, over the entire surface, without first sifting away more than 65% of the light in a color filter array. This would also solve many of the problems with deconvolution, since it would make the digital image continuous in information again - not sparsely sampled.

Foveon though is a dead end, a unique player in the field with very good - but limited - uses. That they managed to do as well as they did in the last generation is really impressive, but the principle in itself has serious shortcomings. Not only in the low overall efficiency of the operation principle, but also things like the very limited color accuracy.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 06, 2013, 10:10:41 PM
But what we want in the end is to find something other than Bayer - something that actually uses more of the energy the lens sends through to the sensor.

I think that is the statement of the year right there. The amount of energy we waste in digital camera systems is mind blowing. The things we could do if we actually integrated 30%, 50%, 60% of the light that passed through the aperture....it's probably the next revolution in digital photography. Canon had/has a patent for a layered Foveon-type sensor. I wonder if they will ever develop it into something like you described...
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on March 07, 2013, 08:02:27 AM
Computational cost still tends to fall according to Moore's law. Sensor development seems much slower. I think it is realistic to assume that we will be more dependent on fancy processing in the future than we are now.
Already today, the "average consumer" is well served by the 6-24MP in the average appliance. Normal HD, but in 3:2 format, is about 2.5MP. In 4:3 format about 2.8MP. And if the resulting image in a "Full HD" presentation size is sharp, the image quality is considered good. The average image use seems to be about 1024px width...

So we're already oversampling the images, in practice. It's slightly different for the photo enthusiast and the more discerning customer - where more resolution is often better resolution. To put this into context, a full spread ad in a normal-to-good quality offset magazine takes a reasonable 240 input dpi to create a good rip into the print raster. That's about 10MP (add bleed, 12MP) (*1).
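
Rough numbers behind those figures (the spread dimensions below are an assumption, roughly two magazine pages plus margins):

def megapixels(width_px, height_px):
    return width_px * height_px / 1e6

# "Full HD" width rendered in stills aspect ratios
print("3:2 at 1920 px wide :", megapixels(1920, 1280), "MP")   # ~2.5 MP
print("4:3 at 1920 px wide :", megapixels(1920, 1440), "MP")   # ~2.8 MP

# Full-spread magazine ad at 240 input dpi (assumed ~17 x 11 inch spread)
spread_w_in, spread_h_in, dpi = 17.0, 11.0, 240
print("Full spread at 240 dpi:",
      megapixels(spread_w_in * dpi, spread_h_in * dpi), "MP")  # ~10-11 MP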

The added cost of a 40MP sensor isn't so much in the manufacture of the sensor plate as in the peripheral equipment. The sensor may get 10% more expensive when production has stabilized, but the ancillaries still have to be twice as fast as before to get the same fps - meaning twice the buffer memory and off-sensor bandwidth, twice the number of cores in the ASIC/PIC, and so on. That adds up to a lot more than the sensor cost increase.

Do you know the corresponding number for "3CCD" video cameras? How are they for color accuracy (the low-level sensor/optics, not the processed compressed video output)?
The trichroic prisms they use are very efficient, but to get reasonable color accuracy (actually "resistance to metamerism failures") an additional thin-film color filter is often applied at the prism endpoints, before each sensor. With that, you can approach about 75-80% light energy bandwidth preservation - visible light delivered to the sensors. (This is where the Foveon inherently fails - it has no mechanism for increasing SML separation, it HAS to use all incoming energy. It has no way to use additional filtering.) Then you can multiply that by the average efficiency of energy conversion in the 500-600nm range, and get an end result of about 40% full-bandwidth QE. About three times higher than a normal Bayer, as expected.

The reason why you HAVE to use additional filtering to get good correlation between recorded color and human-perceived color is that you have to find an LTI-stable way (preferably a simple matrix multiplication) to make the sensor input correspond to the biochemical light response of the human eye (SML response).
http://en.wikipedia.org/wiki/Cone_cell (http://en.wikipedia.org/wiki/Cone_cell)

The main problems with prismatic solutions aren't efficiency or color - they're the production cost (plus a much higher cost for lenses) and the angle sensitivity.
Minimum BFD (back focal distance) is about 2.2x image height, increasing the need for retrofocal wide angles to almost 10mm longer register distances than in an SLR-type camera (about 55mm from sensor to last lens vertex for an FF camera!). This means that anything shorter than an 85mm lens would have to be constructed basically like the 24/1.4's and 35/1.4's. And that's expensive.
Large apertures bring color problems. The dichroic mirror surfaces vary in separation bandwidth depending on the angle of the incident light. An F1.4 lens has an absolute minimum 65º ray angle from edge to edge of the exit pupil....

The number you quoted on CFA earlier was 30-40%, so I guess that is the loss that can be attributed to Bayer alone?

I find it surprising that we still use the same basic CFA as was suggested in the 70s. Various alternative CFAs have been suggested, but have never really "caught on". I don't know if this is because Bryce got it right the first time, or because the cost of doing anything out-of-the-ordinary is too high (see e.g. X-Trans vs Adobe raw development).

Yes, 30-40% average channel response, multiplied by the average surface bandwidth - which is also around 30-40%. >>> About 10-15% overall system efficiency (compared to a maximum of about 75-80%, not 100%, if you want "human perception color response").

Mr Bayer got it right, because he didn't complicate a very easily defined problem. System limitations:

Symmetrical layout: 2x2 or 3x3 (4x4 too much?) groups with square cells, or a triangular layout with hexagonal cells.
Luminance resolution: have more green than blue or red input area. The green cell layout has to be symmetric.
Noise considerations: have approximately twice the amount of green as either red or blue input area.

There aren't too many layouts to consider...
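
For reference, here is a minimal sketch of the layout that satisfies those constraints - the standard RGGB tile - together with the efficiency product mentioned above (the percentages are the rough figures from this thread, not measurements):

import numpy as np

def bayer_mask(height, width):
    """Return an (height, width) array of 'R'/'G'/'B' labels in an RGGB layout."""
    tile = np.array([['R', 'G'],
                     ['G', 'B']])
    reps = (height + 1) // 2, (width + 1) // 2
    return np.tile(tile, reps)[:height, :width]

mask = bayer_mask(4, 6)
print(mask)
# Green occupies half the area, red and blue a quarter each:
print({c: np.mean(mask == c) for c in 'RGB'})

# Quoted figures: ~35% average channel transmission x ~35% usable bandwidth
channel_response, surface_bandwidth = 0.35, 0.35
print("rough overall efficiency: %.0f%%" % (100 * channel_response * surface_bandwidth))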

(*1)
At National Geographic (for whom I was part of designing their first in-line print quality inspection cameras, now many, many moons ago... :) ) they generally accept that their 300 dpi input recommendation for advertisement and art input is way over the top. The ABX blind test screens (with loupe!) top out at about a 175 lpi raster frequency on good quality paper. That's where the blind testers start failing to recognize the higher resolution image in statistical ABX comparisons in more than 50% of the samples. As software and algorithms have improved, we now use 1.33x lpi to get the needed input dpi, where we had to use almost 2x before (the old "you need twice the resolution on the original to get maximum print quality" dogma).
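
Tying that footnote back to the ~240 dpi figure above (straight arithmetic on the quoted factors):

raster_lpi = 175
print("old rule of thumb, 2.0 x lpi :", 2.0 * raster_lpi, "dpi")
print("current factor,    1.33 x lpi:", round(1.33 * raster_lpi), "dpi")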
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on March 07, 2013, 06:54:09 PM
My point was that I expect more of the quality to be based on fancy DSP relative to the physical components (lens, sensor, electronics) - simply because DSP seems to have faster development (both algorithmically and in multiply-accumulates per dollar). Whatever the labour divide between those two is today, we might expect that DSP can do more in 2 years for less money, while lenses will do nearly the same as today for the same (or higher) prices.

Well, as I said earlier, that might be true from a purely theoretical PoV. But the camera is a system that isn't composed of just the perfectly AA-filtered image and the Bayer pattern... And camera users aren't "only" the crowd who are pleased with something that just looks good - some actually want the result to depict the world in front of the lens as accurately as possible.

Several practical considerations have to be made. Maybe the most important of them are the aliasing problems (due to the sparse sampling, if you excuse my nagging...) and the unnecessary blurring we have to introduce via the AA-filter to make the risky assumptions we make in the raw conversion less risky.
Less risky = we can deconvolve with good stability. Noise not included at this point.

And if you look at the total system use case of a higher resolution sensor, you'll see that several things improve automatically due to overall system optimization.

Firstly, the user-induced blur PSF and the lens aberration blur PSF (including diffraction) become a larger part of the Bayer group's width - making the luma/chroma support choice the raw converter has to make a lot easier in most cases, even before considering the AA-filter.
Secondly, after including this increase in stability due to point 1, we can decrease the AA-filter strength (thickness) by a factor bigger than the resolution increase!

This gives end image result detail a double whammy towards the better. And it doesn't stop there...

The thinner AA-filter we can now use has the additional positive effect that image corners are less affected by additional SA and astigmatism, and also suffer less from internal reflections in the filter. So the corners get a triple whammy of goodness, and large apertures lose less contrast due to internal reflections in the filter package.

And at some resolution point you can get rid of the filter completely, giving a cheaper filter package with fewer layers and better optical parameters.

So, in the corners of the image, detail resolution can actually improve by MORE than the resolution increase - just due to systematic changes.
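
To put rough numbers on the first point (the wavelength, aperture, and pixel pitches below are assumed example values):

wavelength_um = 0.55          # green light
f_number = 5.6

airy_diameter_um = 2.44 * wavelength_um * f_number   # diffraction spot, first-minimum diameter

for pitch_um in (6.4, 4.1, 2.0):                     # assumed example pixel pitches
    bayer_group_um = 2 * pitch_um
    print("pitch %.1f um: Airy disk = %.1f um = %.2f Bayer groups"
          % (pitch_um, airy_diameter_um, airy_diameter_um / bayer_group_um))

At the smallest pitch the diffraction spot alone already spans nearly two Bayer groups, which is the sense in which the PSF becomes "larger" relative to the CFA and the raw converter's luma/chroma decisions get safer.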
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 08, 2013, 01:28:22 PM
One way to achieve some of those goals would be to move towards smaller, higher-resolution sensors, accept the imperfections of small/inexpensive lenses, and try to keep the IQ constant by improving processing.

The PSF would tend to be larger (relative to the CFA periodicity), and deconvolution and denoising would be more important (either in-camera or outside).

-h

One thing I'd point out is the loss in editing latitude with in-camera processing. Obviously you lose a lot with JPEG. When I first got my 7D, I used mRAW for about a week or so. At first I liked the small file size and what seemed like better IQ. The reason I switched back to full RAW, though, was the loss in editing latitude. I can push a real RAW file REALLY, REALLY FAR. I can do radical white balance correction, aggressive exposure correction (lifting shadows by stops, pulling highlights by stops, etc.), and so on. When I needed to push some of my mRAW files a lot, I realized that you just plain and simply don't have the ability to correct blown or overexposed highlights, pull up shadows, fix incorrect white balance, etc. to anywhere close to the same degree as with a native RAW.

Assuming we do ever reach 200mp sensors, I would still rather have the RAW, even if it is huge (and, hopefully have the computing power to transfer those files quickly and process them without crashing my system). I would just never be happy with the limited editing latitude that a post-demosaiced image offered, even if it looked slightly better in the end. And, in the end, I would still be able to downscale my 200mp image to 50mp, 30mp, 10mp, whatever I needed, to print it at an exquisite level of quality.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 08, 2013, 03:43:47 PM
Doing "more" with processing relative to physical components does not mean that is has to be done in-camera or stored to JPEG.

True, I understand that. I guess my point was that at 200mp, data files, especially in terms of in-memory data size, are going to be HUGE. Processing of such files on current computers would be fairly slow.

Especially if bit depth reaches a full 16 bits in the future (as currently rumored about the Canon big mp DSLR). The memory load of a single 200mp RAW image (factoring in ONLY the exposed RGB pixels, no masked border pixels, metadata, or anything else) would be 400MB (16 * 200,000,000 / 8)! The memory load for interpolated pixels (TIFF) would be 1.2GB (48 * 200,000,000 / 8). In contrast, the 18mp images from my 7D have a 32MB RAW memory load or 108MB for a TIFF.
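
The same back-of-envelope arithmetic as a tiny helper (uncompressed, in-memory sizes only; masked border pixels and metadata ignored):

def raw_megabytes(megapixels, bits_per_pixel):
    return megapixels * 1e6 * bits_per_pixel / 8 / 1e6

print(raw_megabytes(200, 16))      # 16-bit Bayer raw, 200 MP      -> 400 MB
print(raw_megabytes(200, 48))      # 3 x 16-bit interpolated       -> 1200 MB (1.2 GB)
print(raw_megabytes(18, 14))       # 14-bit 18 MP (7D-class)       -> ~31.5 MB (the ~32MB above)
print(raw_megabytes(18, 48))       # 16-bit-per-channel TIFF, 18 MP -> 108 MB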

I've done some 2x and 3x enlargements of my processed TIFF images for print. I think the largest image I ever had was about 650MB in terms of memory load in Photoshop, for a 30x40" print at 300ppi (which was really more along the lines of an experiment in enlargement...that particular image was never printed). The largest image I ever printed was about 450MB. Working with images that large is pretty slow, even on a high powered computer. I couldn't imagine working with a 1.2GB image on my current computer.

Now, my CPU is a little older...it's an i7 920 2.6GHz overclocked to 3.4GHz. Memory is overclocked a little to around 1700MHz. I don't have a full SSD setup, I have only 12GB of memory, and my page file and working space for both Photoshop and Lightroom are actually on standard platter-based hard drives. I imagine that if I had more memory, a full load of SSD drives with a data RAID built out of SSDs, a brand spanking new 6 or 8 core processor at around 4GHz, and some new memory running at 2133MHz, then processing such images might not be all that bad...just really expensive. :)
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on March 08, 2013, 05:55:37 PM
16 bits is totally useless for digital imaging. There are a few large-cell sensors in the scientific field that can use 15 bits fully; they are usually actively cooled and have cells larger than 10x10µm.

This is another part of the digital image pipeline that is sorely misunderstood... Just getting more bits of data does not in any way mean that the image contains more actual information... No Canon camera today can actually use more than 12 bits fully - the last two bits are just A/D conversion "slop" margin and noise dither.

Actually the most reasonable type of image data is gamma-corrected, but not as steeply as, for example, sRGB. sRGB has an upper part of the slope reaching gamma=2.35 - this limits tonal resolution in the brighter end of the image. There is nothing inherently "good" about linear data, it's just a convenience when doing some types of operations - it's not a particularly good storage or transfer format.

IF someone were to implement a 10-bit image format with a (very low!) gamma of about 1.2-1.4, those ten bits would cover the entire 1Dx tonal range at base ISO with a lot of resolution to spare. The data format would have more than two to three times as much tonal resolution as the sensor information.
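
A quick sketch of why the mild gamma helps (the 10/14-bit depths and the 1.3 gamma are the values under discussion; everything else is plain arithmetic):

import math

def lowest_code_linear_value(bits, gamma):
    """Linear signal level at which a gamma-encoded format places its first code."""
    return (1.0 / (2 ** bits - 1)) ** gamma

lin10 = lowest_code_linear_value(10, 1.0)   # 10-bit linear
g13   = lowest_code_linear_value(10, 1.3)   # 10-bit, gamma 1.3
lin14 = lowest_code_linear_value(14, 1.0)   # 14-bit linear, for comparison

print("10-bit linear    : first code at %.2e of full scale" % lin10)
print("10-bit gamma 1.3 : first code at %.2e of full scale" % g13)
print("14-bit linear    : first code at %.2e of full scale" % lin14)
print("extra shadow range of gamma 1.3 over 10-bit linear: %.1f stops"
      % math.log2(lin10 / g13))

The gamma-encoded 10-bit container reaches about three stops deeper into the shadows than 10-bit linear before running out of codes, which is the sense in which a low-gamma format can cover the sensor's whole tonal range with fewer bits.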
......

The same goes for the pixel count in itself... There's no use increasing the OUTPUT format size if there's no need... The reason that higher sensor resolutions are a very good idea right now is that the input side, and the conversion to fully populated (three colors per pixel) image data, is what limits us. As long as we're stuck in the Bayer format, raw data will always have a larger resolution than the actual image content that raw data can convey.

Having a 20MP image where every pixel is PERFECT is in most cases worth more than a 40MP raw image where there's quite a lot of uncertainty.

Neither the tonal resolution NOR the actual image detail resolution has to take a hit from de-Bayering and compression in the camera - as long as you don't use formats that limit tonal resolution, or compression that is too lossy...

The biggest problem right now is JPEG. It tends to be "either JPEG or TIFF" when saving intermediate images, and neither format is what I'd call flexible or well thought out from a multi-use PoV. No one uses the obscure 12-bit JPEG that is actually part of the JPEG standard (outside medical imaging and geophysics). TIFF can be compressed with lossless JPEG (as DNG files are) for lossless compression - but few use that option either.

So right now it's either 16-bit uncompressed or 8-bit compressed - and neither format actually suits digital images of intermediate-format quality. One is too bulky and unnecessarily big, and the other is limited in tonal resolution.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 08, 2013, 07:01:57 PM
16 bits is totally useless for digital imaging. There are a few large-cell sensors in the scientific field that can use 15 bits fully; they are usually actively cooled and have cells larger than 10x10µm.

This is another part of the digital image pipeline that is sorely misunderstood... Just getting more bits of data does not in any way mean that the image contains more actual information... No Canon camera today can actually use more than 12 bits fully - the last two bits are just A/D conversion "slop" margin and noise dither.

I guess I'd dispute that. The bit depth puts an intrinsic cap on the photographic dynamic range of the digital image. DXO "Screen DR" numbers are basically the "hardware" dynamic range numbers for the cameras they test. The D800 and D600 get something around 13.5 stops, thanks to the fact that they don't have nearly as much "AD conversion slop" as Canon sensors. Canon sensors definitely have a crapload of "AD conversion slop", which increases at lower ISO settings (ISO 100, 200, and usually 400 all have much more read noise than higher ISO settings on Canon cameras), which is why they have been unable to break the 12-stop DR barrier. Assuming Canon can flatten their read noise curve like Nikon and Sony have with Exmor, additional bit depth raises the ceiling on photographic DR in the RAW files.

I would also dispute that Canon sensors can't get more than 12 bits of information. If you run Topaz DeNoise 5 on a Canon RAW file, the most heinous noise, horizontal and vertical banding, can be nearly eliminated. Before debanding, a Canon RAW usually has less than 11 stops, in some cases less than 10 stops, of DR ("Screen DR"-type DR, for correlating with DXO.) AFTER debanding with Topaz, a lot of information that would otherwise be "unrecoverable" because it was riddled with banding noise is now recoverable! I wouldn't say you have around 13.5 stops like a D800, but you definitely have a stop, maybe a stop and a half, more shadow recoverability than you did before...which might put you as high as 12.5 stops of DR.

If we had a 16-bit ADC, we could, theoretically, have over 15 stops of dynamic range. With Exmor technology, I don't doubt that a camera with a 16-bit ADC could achieve 15.3-15.5 stops of "Screen DR" on a DXO test. If Canon did such a thing, assuming they don't fix their horrid "AD conversion slop"...well, at least we might get 14 stops of DR out of a Canon camera, while the last two bits of information are riddled with noise. With some quality post-process debanding, we might get 15 stops of DR.

While most of what I do is bird and wildlife photography, and dynamic range is usually limited to 9 stops or less anyway...I do some landscape work. I'd probably do more landscapes if I had 30-50mp and 15 stops of DR, though. I could certainly see the benefits of having a high resolution 16-bit camera for landscape photography work, and it is the sole reason I would like to see full 16-bit ADC in the near future (hopefully with the big megapixel Canon camera that is forthcoming!)
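
As a rough sanity check on the "bit depth caps DR" argument, here is a sketch with assumed round numbers (these are not measurements of any particular camera):

import math

def usable_dr_stops(full_well_e, read_noise_e, adc_bits):
    sensor_dr = math.log2(full_well_e / read_noise_e)  # what the sensor + readout can deliver
    adc_cap = adc_bits                                 # quantization-limited ceiling, in stops
    return min(sensor_dr, adc_cap), sensor_dr

for bits in (14, 16):
    for read_noise_e in (25.0, 3.0):                   # "sloppy" vs very clean readout
        usable, sensor_dr = usable_dr_stops(60000, read_noise_e, bits)
        print("%d-bit ADC, %4.1f e- read noise: sensor %.1f stops, ADC cap %d, usable ~%.1f"
              % (bits, read_noise_e, sensor_dr, bits, usable))

With a noisy readout the extra ADC bits change nothing; only once the read noise comes down does the 14-bit ceiling start to bite and a 16-bit ADC buy real dynamic range.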
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 12, 2013, 03:31:01 PM
The question is, do those LSBs contain any image information? If they are essentially just random numbers, then there is no reason to record, store and process them: the same (effective) result could be achieved by "extending" e.g. 12-bit raw files to 14 bits by injecting suitably chosen random numbers in e.g. Lightroom.

Well, assuming Canon can get rid of their banding noise, I do believe the least significant of those bits DO contain information. It is not highly accurate information, but it is meaningful information. When you take an underexposed photo with a Canon sensor, then push it in post to compensate for the lack of exposure, you get visible banding. On the 7D, you primarily get vertical banding, usually red lines. BETWEEN those bands, however, is fairly rich detail that goes into darker levels than the banding itself. Eliminate the banding, and Canon cameras already have more DR than current testing would indicate, because the tests factor IN the banding noise.

Even assuming Canon does not eliminate banding at a hardware level...the fact that you can wavelet deconvolve them in post and recover the rest of the meaningful detail between them indicates to me that if we could move up to 16 bit ADC, we COULD benefit from the extra DR with adequate debanding.

The question is not whether the extra four bits over 12 are purely random or purely not random. The question is whether they can be useful, even if they do not perfectly replicate the real-world data they are supposed to represent. Banding is a pain in the arse because it's FUGLY AS HELL. Random noise, however, is something that can be dealt with, and if the noise in those deeper shadows is relatively band-free...even if it has inaccurate chroma, it can be cleaned up in post and those details, perfectly accurate or not, can be recovered to some degree. I think 16 bits and two extra stops of DR could be very useful in that context.
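
For illustration, here is a minimal sketch of one debanding approach on simulated data (this is not Topaz's actual algorithm, which isn't public; the offsets and noise levels are made up):

import numpy as np

rng = np.random.default_rng(1)
height, width, masked_rows = 200, 300, 16
scene = np.tile(np.linspace(5, 40, width), (height, 1))      # a faint shadow gradient
column_offsets = rng.normal(0, 4, width)                     # fixed per-column banding

exposed = scene + column_offsets + rng.normal(0, 2, (height, width))
optical_black = column_offsets + rng.normal(0, 2, (masked_rows, width))  # masked rows: no light

estimated = optical_black.mean(axis=0)                       # per-column offset estimate
debanded = exposed - estimated

def column_pattern_rms(img):
    """RMS of the residual column-to-column pattern, with the scene removed."""
    return np.std(img.mean(axis=0) - scene.mean(axis=0))

print("column pattern before debanding: %.2f" % column_pattern_rms(exposed))
print("column pattern after debanding : %.2f" % column_pattern_rms(debanded))

Because the banding is (roughly) a fixed per-column offset while the rest of the noise is random, even a crude estimate of that offset removes most of the pattern and leaves the underlying shadow detail behind.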

My impression is that keeping total noise down is really hard. Keeping the saturation point high is really hard. Throwing in a larger number of bits is comparatively cheap. I.e. whenever the sensor and analog front-end people achieve improvements, the "ADC-people" are ready to bump up the number of bits.

If "the number of steps" was really the limitation, one would expect to be able to take a shot of a perfectly smooth wall/camera cap/... and see a peaky histogram (ideally only a single code).

In practice, I assume that the sensor, analog front-end and ADC are becoming more and more integrated (blurred?), and the distinction may be counter-productive. An oversampled ADC might introduce "noise" on its own in order to encode more apparent level information. Perhaps we just have to estimate "black-box" camera performance, and trust the engineers that they did a reasonable cost/benefit analysis of all components?

The bit depth puts an intrinsic cap on the photographic dynamic range of the digital image.
A cap, but not a lower limit.

Camera shake puts a cap on image sharpness, but there is little reason to believe that a camera stand made out of concrete would make my wide-angle images significantly sharper than my standard Benro stand.

Quote
I would also dispute that Canon sensors can't get more than 12 bits of information. If you run Topaz DeNoise 5 on a Canon RAW file, the most heinous noise, horizontal and vertical banding, can be nearly eliminated. Before debanding, a Canon RAW usually has less than 11 stops, in some cases less than 10 stops, of DR ("Screen DR"-type DR, for correlating with DXO.) AFTER debanding with Topaz, a lot of information that would otherwise be "unrecoverable" because it was riddled with banding noise is now recoverable!
Just like DXOMark can show more DR than the number of bits for images that are downsampled to 8 MP, noise reduction can potentially increase "DR" at lower spatial frequencies. Dithering moves level information into spatial noise when (re-)quantizing. When you low-pass filter using a higher number of bits, you can have codes in between the input codes, at the cost of a loss of detail.
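
The arithmetic behind that normalization, for reference (assuming DXO's 8 MP print size and ideal averaging of uncorrelated per-pixel noise):

import math

def downsample_dr_gain_stops(native_mp, target_mp=8.0):
    # Averaging N pixels into one improves SNR by sqrt(N), i.e. 0.5 * log2(N) stops.
    return 0.5 * math.log2(native_mp / target_mp)

for mp in (18, 22, 36):
    print("%d MP -> 8 MP: about +%.1f stops of low-frequency DR"
          % (mp, downsample_dr_gain_stops(mp)))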

The important question is: if the raw file was quantized to 12 bits, would the results be any worse (assuming that your denoise applications were equally optimized for that scenario)?

-h
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 13, 2013, 12:01:34 PM
I think 16 bits and two extra stops of DR could be very useful in that context.
I think that two extra stops of DR could be very useful, no matter how it is accomplished. I think that 16 bits at the current DR would have little value.

Oh, I generally agree. If low-ISO noise on a Canon sensor is not reduced, those extra bits would indeed be meaningless. No amount of NR would recover much useful detail. In the context of a future Canon sensor that does produce less noise, such as one with active cooling and a better readout system (perhaps a digital readout system), I can definitely see a move up to 16 bits being useful (which is generally the context I am talking about.) A rumor from a while back here on CR mentioned that Canon had prototypes of a 40mp+ camera out in the field that used active cooling of some kind, and was 16 bits.

Quote
Even assuming Canon does not eliminate banding at a hardware level...the fact that you can wavelet deconvolve them in post and recover the rest of the meaningful detail between them indicates to me that if we could move up to 16 bit ADC, we COULD benefit from the extra DR with adequate debanding.
Did you try these operations on a raw file that was artificially limited to 13 bits?

That I have not done. Out of curiosity, why? I would assume that the improvement in DR would still be real, since my Canon cameras don't even get 11.5 stops of DR according to DXO's Screen DR measure. At 13 bits, assuming I could eliminate banding and reduce noise to a lower standard deviation, DR should improve...potentially as high as almost 13 stops.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 13, 2013, 12:03:41 PM
Relevant to this topic, but perhaps not the latest posts:
http://www.maxmax.com/olpf_study.htm (http://www.maxmax.com/olpf_study.htm)
(http://www.maxmax.com/5DII_Sensor400X04a.jpg)
"In the 400X zoom pictire, you can better see the CFA, the amount of blur and the 10 micron scale.  For the Canon 5D II sensor, it appears that they displace the image approximately by one pixel.  The complete OLPF has two layers.  These pictures show 1 layer or 1/2 of the blur filter.  The 2nd part blurs the image 1 pixel in the vertical direction.  This means that for any one point of light, you end up with 4 points separated by 1 pixel or the same size as one R-G-G-B CFA square.  You have 4 points because the 1st layer gives you 2 points, and then the 2nd layer doubles those to 4 points. 

For another camera, the manufacturer might choose to displace the light differently. For many 4/3 cameras, we see more blur than for APS and full frame sensors. Sometimes manufacturers make odd choices in the amount of blur. For example, the APS Nikon D70 sensor had much less physical blur than the APS Nikon D200 sensor, despite the D70 having a pixel pitch of 7.8 microns and the D200 having a pixel pitch of 5.8 microns."

Very interesting. I wonder what kind of OLPF the 7D II will have...with such small pixels, I imagine it wouldn't need as strong a filter as the 7D or any FF sensor. What is also interesting is how much surface area on a sensor is still wasted, despite the use of microlenses. I always thought the microlenses were square...being round, they leave gaps of "unused surface area" at the intersection of every 2x2 set of pixels.
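
For what it's worth, the four-point split described in that quote can be sketched, to first order, as a simple 2x2 convolution (this treats the four displaced copies as equal in strength, which is a simplification):

import numpy as np
from scipy.signal import convolve2d

olpf_kernel = np.full((2, 2), 0.25)      # four equal copies, displaced 1 px in x and y

point = np.zeros((7, 7))
point[3, 3] = 1.0                        # a single point of light

spread = convolve2d(point, olpf_kernel)  # the point becomes a 2x2 spot, one R-G-G-B square
print(spread)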
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: jrista on March 14, 2013, 04:25:13 PM
16 bits is totally useless for digital imaging. There are a few large-cell sensors in the scientific field that can use 15 bits fully; they are usually actively cooled and have cells larger than 10x10µm.

This is another part of the digital image pipeline that is sorely misunderstood... Just getting more bits of data does not in any way mean that the image contains more actual information... No Canon camera today can actually use more than 12 bits fully - the last two bits are just A/D conversion "slop" margin and noise dither.

I guess I'd dispute that. The bit depth puts an intrinsic cap on the photographic dynamic range of the digital image. DXO "Screen DR" numbers are basically the "hardware" dynamic range numbers for the cameras they test. The D800 and D600 get something around 13.5 stops, thanks to the fact that they don't have nearly as much "AD conversion slop" as Canon sensors. Canon sensors definitely have a crapload of "AD conversion slop", which increases at lower ISO settings (ISO 100, 200, and usually 400 all have much more read noise than higher ISO settings on Canon cameras), which is why they have been unable to break the 12-stop DR barrier. Assuming Canon can flatten their read noise curve like Nikon and Sony have with Exmor, additional bit depth raises the ceiling on photographic DR in the RAW files.

I would also dispute that Canon sensors can't get more than 12 bits of information. If you run Topaz DeNoise 5 on a Canon RAW file, the most heinous noise, horizontal and vertical banding, can be nearly eliminated. Before debanding, a Canon RAW usually has less than 11 stops, in some cases less than 10 stops, of DR ("Screen DR"-type DR, for correlating with DXO.) AFTER debanding with Topaz, a lot of information that would otherwise be "unrecoverable" because it was riddled with banding noise is now recoverable! I wouldn't say you have around 13.5 stops like a D800, but you definitely have a stop, maybe a stop and a half, more shadow recoverability than you did before...which might put you as high as 12.5 stops of DR.

If we had a 16-bit ADC, we could, theoretically, have over 15 stops of dynamic range. With Exmor technology, I don't doubt that a camera with a 16-bit ADC could achieve 15.3-15.5 stops of "Screen DR" on a DXO test. If Canon did such a thing, assuming they don't fix their horrid "AD conversion slop"...well, at least we might get 14 stops of DR out of a Canon camera, while the last two bits of information are riddled with noise. With some quality post-process debanding, we might get 15 stops of DR.

While most of what I do is bird and wildlife photography, and dynamic range is usually limited to 9 stops or less anyway...I do some landscape work. I'd probably do more landscapes if I had 30-50mp and 15 stops of DR, though. I could certainly see the benefits of having a high resolution 16-bit camera for landscape photography work, and it is the sole reason I would like to see full 16-bit ADC in the near future (hopefully with the big megapixel Canon camera that is forthcoming!)

This was written by John Sheehy, and like TheSuede, Emil Martinec and BobN2, John has no emotional ties to his own camera brand, Canon.

Noise isn't monolithic. It comes in various types and sources.

The most universal noise is photon shot noise, which really isn't noise, per se, but is actually the texture of the signal, as light is a finite number of randomly timed events. The more light the sensor collects, the less grainy the capture and the closer it comes to a smooth thing, like you "see" in the real world (even though that smoothness is an illusion created by the brain). This type of noise will always be cleaner at ISO 100 than ISO 160, by 1/3 stop. Every stop of increased exposure increases the signal-to-noise ratio of photon noise by a half stop. This noise is only related to the sensor exposure, and has nothing directly to do with ISO settings.

Then, there is noise that is generated at the photosite while reading it. Again, this noise is independent of ISO setting, and related only to exposure. The difference between this read noise and shot noise is that it can have blotchier character and line noise or banding, usually only becoming an issue at high ISOs where it is amplified more. Also, unlike the shot noise, the SNR of read noise increases by a full stop when the sensor exposure is increased by one stop.

Then, there is late-stage noise, which occurs after amplification of the photo-site readout. This is where the camera creates its greatest anomalies. Since it occurs after amplification, it is the same strength at all analog sensor amplifications, and exists relative to the digitized values, rather than the absolute sensor signal. It is what gives Canon DSLRs the lowest DR in the industry. Canon, rather than amplifying in 1/3 stop steps at the photosite, uses a very cheesy method to get 1/3 stop ISOs; it simply under-exposes or over-exposes the full-stop ISOs by 1/3 stop, and then multiplies the RAW data by 0.8 or 1.25 to make it look like normal RAW data. The problem with this is that the total read noise for ISOs 100, 200, and 400 are about the same, so when ISO 100 gain is used for ISO 125, the read noise of ISO 125 is actually greater than the read noise of ISO 400, and closer to the read noise of ISO 640 on most Canons! Conversely, ISO 160 is ISO 200 gain multiplied by 0.8, so the read noise is about 80% of that of ISO 100.

So basically, ISO 160 is cleaner in the deep shadows than ISO 100, by about 1/3 stop. In the highlights, however, which are dominated by photon shot noise, ISO 100 is actually 1/3 stop cleaner. Chances are, however, that you would not fully appreciate the benefits of ISO 100, compared to the benefits of ISO 160 in the shadows, as photon shot noise is very aesthetic noise, and does little to obscure image detail, as opposed to read noise which is often more like a cheese grater across your eyes.

However, if you are shooting RAW and "exposing to the right", you are already creating ISOs of 160, 200, 180, whatever, out of ISO 100 gain, and are moving the read noise floor down anyway. If you are shooting JPEGs, or movies, then ISO 160 is the way to go for reduced noise, as the camera was going to discard the 1/3 stop of extra DR that ISO 100 has in the highlights (which ISO 160 moves to the shadows) anyway, so there is no loss.
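
Putting numbers on the half-stop versus full-stop scaling described above (a sketch; the electron counts and read noise are made-up round figures):

import math

read_noise_e = 8.0
print("signal (e-)   SNR vs shot noise   SNR vs read noise")
for signal_e in (100, 200, 400, 800):          # each row is +1 stop of exposure
    shot_snr = math.sqrt(signal_e)             # shot noise grows as sqrt(signal)
    read_snr = signal_e / read_noise_e         # read noise is a fixed floor
    print("%8d        %6.1f              %6.1f" % (signal_e, shot_snr, read_snr))
# SNR against shot noise grows by sqrt(2) per stop (half a stop); against read noise it doubles (a full stop).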

I'm not sure what this has to do with the post of mine that you quoted. When it comes to ADC bit depth and "ADC slop", the kind of noise we are talking about is quite specifically read noise. The mechanism that Canon uses to achieve 1/3rd stop ISO settings doesn't matter, especially in a hypothetical context where Canon is using new sensor technology and active cooling.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: TheSuede on March 15, 2013, 10:42:17 PM
Yes, point - not line! - resolution is ultimately noise limited, not optically limited. Until you get to astro-size fixed installations, that is...

Canon's effective pixel area has never been 100%. In the 5D classic, I think we measured 47%, with the newest "100% coverage microlenses" I'd guess they reach maybe 80%. The collimated angle light efficiency depends on how strong you make the micro-lens, and how far above the sensor surface proper it is situated. The actual light-sensitive area on the sensor is smaller than 50% in most CMOS cameras (excepting back-lit of course...). The microlens has to be absolutely centered above this area, with an angle compensation for an estimated average main lens exit pupil distance (usually around 70-80mm) in the corners.

As soon as the ray angles stray outside of the optimal, the microlens starts to both reflect (due to a very high incident light angle) on the far side of the dome, and project outside the sensor's active surface on the near side of the dome. In Canon sensors, this starts at about F2.4. In the F1.6 down to F1.2 region of angles, less than half of the original light amount reaches the active pixel surface. There's a built-in compensation for this in firmware, which you can trace by looking at the gaps in the raw file caused by integer multiplications...

BTW, the MaxMax microscopy images show a line spread of about 0.4 pixels, not one full pixel... The full spread is about 0.8-0.9px, giving a +/-0.3px line spread after subtracting the birefringence loss (the ghost image is almost 2Ev down from the non-refracted image). That is usually enough to give the interpolation engine some neighboring-area support to work with. This small increase in support can increase the interpolation accuracy by several hundred percent.
Title: Re: Pixel density, resolution, and diffraction in cameras like the 7D II
Post by: 9VIII on March 18, 2013, 05:03:03 PM
Deconvolution in the Bayer domain (before interpolation, "raw conversion") is actually counterproductive, and totally destructive to the underlying information.

The raw Bayer image is not continuous, it is sparsely sampled. This makes deconvolution impossible, even in continuous hue object areas containing "just" brightness changes. If the base signal is sparsely sampled and the underlying material is higher resolution than the sampling, you get an under-determined system (http://en.wikipedia.org/wiki/Underdetermined_system (http://en.wikipedia.org/wiki/Underdetermined_system)). This is numerically unstable, and hence = impossible to deconvolve.
There is no doubt that the CFA introduce uncertainty compared to sampling all colors at each site. I believe I was thinking about cases where we have some prior knowledge, or where an algorithm or photoshop-dude can make correct guesses afterwards. Perhaps what I am suggesting is that debayer and deconvolution ideally should be done jointly.
(http://ivrg.epfl.ch/files/content/sites/ivrg/files/supplementary_material/AlleyssonS04/images/cfafft.jpg)
If the scene is achromatic, then "demosaic" should amount to something like a global WB, and filtering might destroy recoverable detail - the CFA in itself does not reduce the amount of spatial information compared to a filterless sensor. If the channels are nicely separated in the 2-D DFT, you want to follow those segments when deconvolving?

-h

On a per-pixel level, a Bayer sensor is only receiving 30-40% of the information an achromatic sensor is getting. That implies a LOSS of information is occurring due to the filtering of the CFA. You have spatial information, for the same number of samples over the same area...but the information in each sample is anemic compared to what you get with an achromatic sensor. That is the very reason we need to demosaic and interpolate information at all...that can't be a meaningless factor.

Off topic, sorry, but I just had to scratch the itch. We know the negatives (more processing, incompatibility with current standards, more focus should be spent on different sensor types), but every time I read something like this I can't help but think an RGBW array would be awesome.