Pixel density, resolution, and diffraction in cameras like the 7D II

Well, with that (mostly) out of the way, I guess one could sum up the practical, production-related problems as:

  • Increasing pixel resolution - without increasing efficiency loss or making angle sensitivity worse
  • Presenting this to the uninformed average "user" as a good thing
  • Implementing better versions of the mRAW- and sRAW-type "smaller than original" raw files for those who want them

As long as no disruptive technology surfaces in millions-per-year production quantities - production-scale manufacture of nano-dot sensors, or an angle-invariant version of the symmetric-deflector color splitter in Panasonic's latest patents - we're stuck with Bayer, like it or not. And for sharpening (deconvolution) and noise reduction (pattern recognition), the much improved per-pixel statistical quality you get by downsampling an image that originally contains more resolution than you need is actually cost- and energy-efficient compared to pouring computational power onto insufficient base material.

But what we want in the end is to find something other than Bayer, something that uses more of the energy the lens actually sends through to the sensor. As I mentioned earlier, we're only integrating about 10-15% of the light projection into electric current today. A GOOD implementation of a "Foveon-type" sensor could use all the visible wavelengths over the entire surface, without first sifting away more than 65% of the light in a color filter array. This would also solve many of the problems with deconvolution, since it would make the digital image continuous in information again - not sparsely sampled.

Foveon, though, is a dead end: a unique player in the field with very good - but limited - uses. That they managed to do as well as they did in the last generation is really impressive, but the principle itself has serious shortcomings, not only the low overall efficiency of the operating principle but also things like the very limited color accuracy.
 

jrista

TheSuede said:
But what we want in the end is to find something other than Bayer, something that uses more of the energy the lens actually sends through to the sensor.

I think that is the statement of the year right there. The amount of energy we waste in digital camera systems is mind-blowing. The things we could do if we actually integrated 30%, 50%, 60% of the light that passed through the aperture...it's probably the next revolution in digital photography. Canon had/has a patent for a layered Foveon-type sensor. I wonder if they will ever develop it into something like you described...
 
hjulenissen said:
Computational cost still tends to dive according to Moore's law. Sensor development seems much slower. I think it is realistic to assume that we will be more dependent on fancy processing in the future than we are now.
Already today, the "average consumer" is well served by the 6-24MP in the average appliance. Normal HD, but in 3:2 format, is about 2.5MP; in 4:3 format, about 2.8MP. And if the resulting image at "Full HD" presentation size is sharp, the image quality is considered good. The average image use seems to be about 1024px wide...

So we're already oversampling the images, in practice. It's slightly different for the photo enthusiast and the more discerning customer, where more resolution is often better resolution. To put this into context, a full-spread ad in a normal-to-good quality offset magazine takes a reasonable 240 input dpi to create a good RIP into the print raster. That's about 10MP (add bleed, 12MP) (*1).
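
A quick sanity check of that figure (a minimal sketch; the A4 double-page trim size is my assumption, not from the post):

[code]
# Rough check of the "240 dpi full spread ~ 10MP" figure.
width_in, height_in = 16.5, 11.7   # assumed A4 double-page spread, inches
dpi = 240
megapixels = (width_in * dpi) * (height_in * dpi) / 1e6
print(megapixels)                  # ~11.1 MP, in the 10-12MP range quoted
[/code]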

The added cost of a 40MP sensor isn't so much in the manufacture of the sensor plate as in the peripheral equipment. The sensor may get 10% more expensive once production has stabilized, but the ancillaries still have to be twice as fast as before to get the same fps - meaning twice the buffer memory and off-sensor bandwidth, twice the number of cores in the image-processing ASIC, and so on. That adds up to a lot more than the sensor cost increase.

hjulenissen said:
Do you know the corresponding number for "3CCD" video cameras? How are they for color accuracy (the low-level sensor/optics, not the processed compressed video output)?
The trichroic prisms they use are very efficient, but to get reasonable color accuracy (actually, resistance to metamerism failures) a thin-film color filter is often added at the prism endpoints, before each sensor. With that, you can still approach about 75-80% light-energy bandwidth preservation - visible light delivered to the sensors. (This is where the Foveon inherently fails: it has no mechanism for increasing SML separation, it HAS to use all incoming energy, and it has no way to apply additional filtering.) Then you can multiply that by the average efficiency of energy conversion in the 500-600nm band, and get an end result of about 40% full-bandwidth QE. About three times higher than a normal Bayer, as expected.
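
Spelled out, the arithmetic is simply (midpoint values assumed by me):

[code]
# Back-of-envelope for the ~40% full-bandwidth QE figure above.
bandwidth_preserved = 0.78   # 75-80% after prisms + trim filters (midpoint)
conversion_qe = 0.52         # assumed average conversion QE, 500-600nm band
print(bandwidth_preserved * conversion_qe)   # ~0.41, i.e. about 40%
[/code]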

The reason you HAVE to use additional filtering to get a good correlation between recorded color and human-perceived color is that you need an LTI-stable way (preferably a simple matrix multiplication) to make the sensor input correspond to the biochemical light response of the human eye (the SML response).
http://en.wikipedia.org/wiki/Cone_cell
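
In practice that LTI correction is a per-pixel 3x3 matrix multiplication; a minimal sketch (the matrix coefficients are placeholders, not a real camera profile):

[code]
import numpy as np

# Hypothetical 3x3 color matrix mapping sensor RGB to an SML/XYZ-like
# colorimetric space. The coefficients are illustrative only.
M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [ 0.0, -0.5,  1.5]])

sensor_rgb = np.array([0.30, 0.55, 0.20])  # one pixel, linear raw values
print(M @ sensor_rgb)                      # corrected tristimulus values
[/code]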

The main problems with prismatic solutions aren't efficiency or color; they're the production cost (and a much higher cost for lenses) and the angle sensitivity.
The minimum BFD (back focal distance) is about 2.2x the image height, pushing retrofocal wide angles to register distances almost 10mm longer than in an SLR-type camera (about 55mm from sensor to last lens vertex for an FF camera!). This means that anything shorter than an 85mm lens would have to be constructed basically like the 24/1.4s and 35/1.4s. And that's expensive.
Then there are large-aperture color problems: the dichroic mirror surfaces vary in separation bandwidth depending on the angle of the incident light. An F1.4 lens has an absolute minimum 65º ray-angle spread from edge to edge of the exit pupil...

hjulenissen said:
The number you quoted on CFA earlier was 30-40%, so I guess that is the loss that can be attributed to Bayer alone?

I find it surprising that we still use the same basic CFA as was suggested in the 70s. Various alternative CFAs have been suggested, but none have really "caught on". I don't know if this is because Bryce got it right the first time, or because the cost of doing anything out of the ordinary is too high (see e.g. X-Trans vs Adobe raw development).

Yes: 30-40% average channel response, multiplied by the average surface bandwidth, which is also around 30-40%. That gives about 10-15% overall system efficiency (compared not to 100%, but to the roughly 75-80% maximum achievable if you want "human perception color response").
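
The same multiplication as for the prism case, with the Bayer numbers (midpoints assumed):

[code]
# Why Bayer lands at 10-15% overall system efficiency.
channel_response = 0.35    # 30-40% average per-channel response
surface_bandwidth = 0.35   # 30-40% of incident energy passed per cell
print(channel_response * surface_bandwidth)  # ~0.12 -> the 10-15% range
[/code]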

Mr Bayer got it right, because he didn't complicate a very easily defined problem. System limitations:
[list type=decimal]
[*]Use a production practical layout for photocells; that's square or hexagonal cells. Square/octagonal combinations have been found to be counterproductive.
[*]Maximize the luminance resolution - which is mostly based on the green spectrum (M-cones at ~550nm, perceptually achromatic rod cells at ~500nm).
[*]Make it rotationally invariant and preferably in symmetric layout schemes.
[*]Make the system balanced between luminance and chrominance statistical accuracy (noise types)
[/list]

Symmetrical layout: 2x2 or 3x3 (is 4x4 too much?) groups with square cells, or a triangle layout with hexagonal cells.
Luminance resolution: have more green than blue or red input area. The green cell layout has to be symmetric.
Noise considerations: have approximately twice the amount of green as either red or blue input area.

There aren't too many layouts to consider... (the classic 2x2 RGGB solution is sketched below).
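
A minimal sketch of that classic answer, just to illustrate the constraints above:

[code]
import numpy as np

# The 2x2 RGGB Bayer tile: symmetric, rotationally well-behaved, and
# with twice as much green area as red or blue.
tile = np.array([['R', 'G'],
                 ['G', 'B']])
print(np.tile(tile, (4, 4)))   # an 8x8 corner of the mosaic
[/code]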

(*1)
At National Geographic (where I helped design their first in-line print-quality inspection cameras, now many, many moons ago... :) ) they generally accept that their 300 dpi input recommendation for advertisement and art input is way over the top. The ABX blind tests (with loupe!) top out at about 175 lpi raster frequency on good-quality paper. That's where the blind testers stop being able to pick out the higher-resolution image in more than 50% of statistical ABX comparisons. As software and algorithms have improved, we now use 1.33x the lpi to get the needed input dpi, where we had to use almost 2x before (the old "you need twice the resolution in the original to get maximum print quality" dogma).
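
In numbers, straight from the figures above:

[code]
# The lpi -> input-dpi rules of thumb, as arithmetic.
lpi = 175                 # raster frequency where the ABX testers top out
print(2.0 * lpi)          # 350 dpi: the old "2x" dogma
print(1.33 * lpi)         # ~233 dpi: the modern factor; 240 dpi fits here
[/code]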
 
hjulenissen said:
My point was that I expect more of the quality to come from fancy DSP relative to the physical components (lens, sensor, electronics), simply because DSP seems to develop faster (both algorithmically and in multiply-accumulates per dollar). Whatever the labour split between those two is today, we might expect that DSP can do more in two years for less money, while lenses will do nearly the same as today at the same (or higher) prices.

Well, as I said earlier, that might be true from a purely theoretical PoV. But the camera is a system that isn't composed of just the perfectly AA-filtered image and the Bayer pattern... And camera users aren't "only" the crowd who are pleased with something that just looks good - some actually want the result to depict the world in front of the lens as accurately as possible.

Several practical considerations have to be made. Maybe the most important are the aliasing problems (due to the sparse sampling, if you'll excuse my nagging...) and the unnecessary blurring we have to introduce via the AA-filter to make the risky assumptions we make in the raw conversion less risky.
Less risky = we can deconvolve with good stability. Noise not included at this point.

And if you look at the total system use case of a higher resolution sensor, you'll see that several things improve automatically due to overall system optimization.

Firstly, the user-induced blur PSF and the lens-aberration blur PSF (including diffraction) become a larger part of the Bayer group's width, making the luma/chroma support choice the raw converter has to make a lot easier in most cases, even before considering the AA-filter.
Secondly, with the increase in stability from point 1, we can decrease the AA-filter strength (thickness) by a factor bigger than the resolution increase!
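
To put numbers on the first point, compare the diffraction PSF (Airy disk) with the pixel pitch at two resolutions (a minimal sketch; the f-number and example pitches are my assumptions):

[code]
# Airy-disk diameter vs pixel pitch on full frame (illustrative numbers).
wavelength_um = 0.55                 # green light
f_number = 8.0
airy_um = 2.44 * wavelength_um * f_number   # ~10.7 um to first minimum

for mp, pitch_um in [(20, 6.5), (46, 4.3)]:
    print(f"{mp}MP: PSF spans {airy_um / pitch_um:.1f} pixels")
# The denser sensor sees the same PSF across more pixels, which is what
# makes the raw converter's luma/chroma support choice more stable.
[/code]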

This gives the end-image detail a double whammy for the better. And it doesn't stop there...

The thinner AA-filter we can now use has the additional positive effect that image corners are less affected by added SA and astigmatism, and suffer fewer internal reflections in the filter. So the corners get a triple whammy of goodness, and large apertures lose less contrast to internal reflections in the filter package.

And at some resolution point you can get rid of the filter completely, giving a cheaper filter package with fewer layers and better optical parameters.

So, in the corners of the image, detail resolution can actually improve by MORE than the resolution increase - just due to systematic changes.
 

jrista

hjulenissen said:
One way to achieve some of those goals would be to move towards smaller, higher-resolution sensors, accept the imperfections of small/inexpensive lenses, and try to keep the IQ constant by improving processing.

The PSF would tend to be larger (relative to the CFA periodicity), and deconvolution and denoising would be more important (either in-camera or outside).

-h

One thing I'd point out is the loss in editing latitude with in-camera processing. Obviously you lose a lot with JPEG. When I first got my 7D, I used mRAW for about a week or so. At first I liked the small file size and what seemed like better IQ. The reason I switched back to full RAW, though, was the loss in editing latitude. I can push a real RAW file REALLY, REALLY FAR: radical white-balance correction, heavy exposure correction (lifting shadows by stops, pulling highlights by stops), and so on. When I needed to push some of my mRAW files hard, I realized that you plainly don't have the ability to correct blown highlights, pull up shadows, or fix an incorrect white balance to anywhere near the same degree as with a native RAW.

Assuming we do ever reach 200mp sensors, I would still rather have the RAW, even if it is huge (and, hopefully have the computing power to transfer those files quickly and process them without crashing my system). I would just never be happy with the limited editing latitude that a post-demosaiced image offered, even if it looked slightly better in the end. And, in the end, I would still be able to downscale my 200mp image to 50mp, 30mp, 10mp, whatever I needed, to print it at an exquisite level of quality.
 

jrista

hjulenissen said:
Doing "more" with processing relative to physical components does not mean that is has to be done in-camera or stored to JPEG.

True, I understand that. I guess my point was that at 200mp, data files, especially their in-memory size, are going to be HUGE. Processing such files on current computers would be fairly slow.

Especially if bit depth reaches a full 16 bits in the future (as currently rumored about the Canon big mp DSLR). The memory load of a single 200mp RAW image (factoring in ONLY the exposed RGB pixels, no masked border pixels, metadata, or anything else) would be 400MB (16 * 200,000,000 / 8)! The memory load for interpolated pixels (TIFF) would be 1.2GB (48 * 200,000,000 / 8). In contrast, the 18mp images from my 7D have a 32MB RAW memory load or 108MB for a TIFF.
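
The arithmetic, generalized (nothing camera-specific here, just the bits-per-pixel math from the paragraph above):

[code]
# Memory load of an uncompressed image in MB.
def image_mb(megapixels, bits_per_pixel):
    return megapixels * 1e6 * bits_per_pixel / 8 / 1e6

print(image_mb(200, 16))   # 400 MB: 200MP raw, 16 bits/pixel
print(image_mb(200, 48))   # 1200 MB: 200MP demosaiced, 3 x 16 bits
print(image_mb(18, 14))    # ~32 MB: 18MP raw at the 7D's 14 bits/pixel
[/code]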

I've done some 2x and 3x enlargements of my processed TIFF images for print. I think the largest image I ever had was about 650MB of memory load in Photoshop, for a 30x40" print at 300ppi (which was really more an experiment in enlargement...that particular image was never printed). The largest image I ever printed was about 450MB. Working with images that large is pretty slow, even on a high-powered computer. I couldn't imagine working with a 1.2GB image on my current machine.

Now, my CPU is a little older...it's an i7 920 at 2.6GHz overclocked to 3.4GHz. Memory is overclocked a little, to around 1700MHz. I don't have a full SSD setup, I have only 12GB of memory, and my page file and working space for both Photoshop and Lightroom are actually on standard platter-based hard drives. I imagine that if I had more memory, a data RAID built out of SSDs, a brand-spanking-new 6- or 8-core processor at around 4GHz, and new memory running at 2133MHz, then processing such images might not be all that bad...just really expensive. :)
 
16 bits is totally useless for digital imaging today; there are only a few large-cell sensors in the scientific field that can use even 15 bits fully. They are usually actively cooled and have cells larger than 10x10µm.

This is another part of the digital image pipeline that is sorely misunderstood... More bits of data do not in any way mean that the image contains more actual information. No Canon camera today can actually use more than 12 bits fully - the last two bits are just A/D-conversion "slop" margin and noise dither.

Actually, the most reasonable type of image data is gamma-corrected, but not as steeply as sRGB. sRGB's upper slope reaches gamma = 2.35, which limits tonal resolution at the brighter end of the image. There is nothing inherently "good" about linear data; it's a convenience when doing some types of operations, not a particularly good storage or transfer format.

IF someone were to implement a 10-bit image format with a (very low!) gamma of about 1.2-1.4, those ten bits would cover the entire 1Dx tonal range at base ISO with a lot of resolution to spare. The data format would have more than two to three times as much tonal resolution as the sensor information.
......
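
A minimal sketch of what such an encoding would look like (gamma 1.3 and 10 bits assumed; this illustrates the idea, it is not an existing format):

[code]
import numpy as np

GAMMA, LEVELS = 1.3, 2**10          # hypothetical 10-bit, gamma-1.3 format

def encode(linear):                  # linear scene values in [0, 1]
    return np.round(np.asarray(linear) ** (1 / GAMMA) * (LEVELS - 1))

def decode(code):
    return (np.asarray(code) / (LEVELS - 1)) ** GAMMA

print(encode(0.18))                  # mid-gray lands around code 274
# A mild gamma spends codes far more evenly across the tonal range than
# linear encoding: compare step sizes near black and near white.
print(decode(1) - decode(0), decode(1023) - decode(1022))
[/code]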

The same goes for the pixel count itself... There's no use increasing the OUTPUT format size if there's no need... The reason higher sensor resolutions are a very good idea right now is that the input side, and the conversion to fully populated (three colors per pixel) image data, is what limits us. As long as we're stuck with the Bayer format, raw data will always have a larger resolution than the actual image content that raw data can convey.

Having a 20MP image where every pixel is PERFECT is in most cases worth more than a 40MP raw image where there's quite a lot of uncertainty.

Neither the tonal resolution nor the actual image-detail resolution has to take a hit from de-Bayering and compression in the camera - as long as you don't use formats that limit tonal resolution, or compression that is too lossy...

The biggest problem right now is JPEG. It tends to be "either JPEG or TIFF" when saving intermediate images, and neither format is what I'd call flexible or well thought out from a multi-use PoV. No one uses the obscure 12-bit JPEG that is actually part of the JPEG standard (outside medical imaging and geophysics). TIFF can be compressed with lossless JPEG (as DNG files are) - but few use that option either.

So right now it's either 16-bit uncompressed or 8-bit compressed - and neither actually suits digital images of intermediate-format quality. One is too bulky and unnecessarily big, and the other is limited in tonal resolution.
 

jrista

TheSuede said:
16 bits is totally useless for digital imaging today; there are only a few large-cell sensors in the scientific field that can use even 15 bits fully. They are usually actively cooled and have cells larger than 10x10µm.

This is another part of the digital image pipeline that is sorely misunderstood... More bits of data do not in any way mean that the image contains more actual information. No Canon camera today can actually use more than 12 bits fully - the last two bits are just A/D-conversion "slop" margin and noise dither.

I guess I'd dispute that. The bit depth puts an intrinsic cap on the photographic dynamic range of the digital image. DXO "Screen DR" numbers are basically the "hardware" dynamic range numbers for the cameras they test. The D800 and D600 get something around 13.5 stops, thanks to the fact that they don't have nearly as much "AD conversion slop" as Canon sensors. Canon sensors definitely have a crapload of "AD conversion slop", which increases at lower ISO settings (ISO 100, 200, and usually 400 all have much more read noise than higher ISO settings on Canon cameras), which is why they have been unable to break the 12-stop DR barrier. Assuming Canon can flatten their read noise curve like Nikon and Sony have with Exmor, additional bit depth raises the ceiling on photographic DR in the RAW files.
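
For reference, the cap and the sensor-side number being compared work out like this (a worked sketch; the electron counts are illustrative, not measured values):

[code]
import math

# Sensor DR in stops = log2(full-well capacity / read noise), capped by
# the ADC at (number of bits) stops.
def dr_stops(full_well_e, read_noise_e, adc_bits):
    return min(math.log2(full_well_e / read_noise_e), adc_bits)

print(dr_stops(76000, 5, 14))    # ~13.9 stops: Exmor-class read noise
print(dr_stops(76000, 33, 14))   # ~11.2 stops: high low-ISO read noise
[/code]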

I would also dispute that Canon sensors can't get more than 12 bits of information. If you run Topaz DeNoise 5 on a Canon RAW file, the most heinous noise, horizontal and vertical banding, can be nearly eliminated. Before debanding, a Canon RAW usually has less than 11 stops, in some cases less than 10 stops, of DR ("Screen DR"-type DR, to correlate with DXO). AFTER debanding with Topaz, a lot of information that would otherwise be "unrecoverable" because it was riddled with banding noise becomes recoverable! I wouldn't say you have around 13.5 stops like a D800, but you definitely have a stop, maybe a stop and a half, more shadow recoverability than before...which might put you as high as 12.5 stops of DR.

If we had a 16-bit ADC, we could, theoretically, have over 15 stops of dynamic range. With Exmor technology, I don't doubt that a camera with a 16-bit ADC could achieve 15.3-15.5 stops of "Screen DR" in a DXO test. If Canon did such a thing without fixing their horrid "AD conversion slop"...well, at least we might get 14 stops of DR out of a Canon camera, with the last two bits riddled with noise. With some quality post-process debanding, we might get 15 stops of DR.

While most of what I do is bird and wildlife photography, and dynamic range is usually limited to 9 stops or less anyway...I do some landscape work. I'd probably do more landscapes if I had 30-50mp and 15 stops of DR, though. I could certainly see the benefits of having a high resolution 16-bit camera for landscape photography work, and it is the sole reason I would like to see full 16-bit ADC in the near future (hopefully with the big megapixel Canon camera that is forthcoming!)
 
hjulenissen said:
The question is: do those LSBs contain any image information? If they are essentially just random numbers, then there is no reason to record, store and process them; the same (effective) result could be achieved by "extending" e.g. 12-bit raw files to 14 bits by injecting suitably chosen random numbers in e.g. Lightroom.

Well, assuming Canon can get rid of their banding noise, I do believe the least significant of those bits DO contain information. It is not highly accurate information, but it is meaningful information. When you take an underexposed photo with a Canon sensor, then push it in post to compensate for the lack of exposure, you get visible banding. On the 7D you primarily get vertical banding, usually red lines. BETWEEN those bands, however, is fairly rich detail that goes into darker levels than the banding itself. Eliminate the banding, and Canon cameras already have more DR than current testing would indicate, because the tests factor IN the banding noise.

Even assuming Canon does not eliminate banding at a hardware level...the fact that you can wavelet deconvolve them in post and recover the rest of the meaningful detail between them indicates to me that if we could move up to 16 bit ADC, we COULD benefit from the extra DR with adequate debanding.

The question is not whether the extra four bits over 12 are purely random or not. The question is whether they can be useful, even if they do not perfectly replicate the real-world data they are supposed to represent. Banding is a pain in the arse because it's FUGLY AS HELL. Random noise, however, can be dealt with, and if the noise in those deeper shadows is relatively band-free...even if it has inaccurate chroma, it can be cleaned up in post and those details, perfectly accurate or not, can be recovered to some degree. I think 16 bits and two extra stops of DR could be very useful in that context.

hjulenissen said:
My impression is that keeping total noise down is really hard. Keeping the saturation point high is really hard. Throwing in a larger number of bits is comparatively cheap. I.e. whenever the sensor and analog front-end people achieve improvements, the "ADC-people" are ready to bump up the number of bits.

If "the number of steps" was really the limitation, one would expect to be able to take a shot of a perfectly smooth wall/camera cap/... and see a peaky histogram (ideally only a single code).

In practice, I assume that sensor, analog front-end and ADC is becoming more and more integrated (blurred?), and the distinction may be counter-productive. An oversampled ADC might introduce "noise" on its own in order to encode more apparent level information. Perhaps we just have to estimate "black-box" camera performance, and trust the engineers that they did a reasonable cost/benefit analysis of all components?

jrista said:
The bit depth puts an intrinsic cap on the photographic dynamic range of the digital image.
A cap, but not a lower limit.

Camera shake puts a cap on image sharpness, but there is little reason to believe that a camera stand made out of concrete would make my wide-angle images significantly sharper than my standard Benro stand.

jrista said:
I would also dispute that Canon sensors can't get more than 12 bits of information. If you run Topaz DeNoise 5 on a Canon RAW file, the most heinous noise, horizontal and vertical banding, can be nearly eliminated. [...] AFTER debanding with Topaz, a lot of information that would otherwise be "unrecoverable" because it was riddled with banding noise becomes recoverable!

Just like DXOMark can show more DR than the number of bits for images downsampled to 8MP, noise reduction can potentially increase "DR" at lower spatial frequencies. Dithering moves level information into spatial noise when (re-)quantizing. When you lowpass-filter using a higher number of bits, you can have codes in between the input codes, at the cost of a loss of detail.
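
The low-frequency gain is easy to demonstrate: averaging N noisy samples cuts random noise by sqrt(N) (a minimal sketch):

[code]
import numpy as np

rng = np.random.default_rng(0)
noisy = 0.5 + rng.normal(0, 0.1, size=(1000, 1000))  # flat gray + noise

# 2x2 downsample = mean of 4 samples -> noise drops by sqrt(4) = 2,
# i.e. one extra stop of apparent "DR" at the lower spatial frequency.
down = noisy.reshape(500, 2, 500, 2).mean(axis=(1, 3))
print(noisy.std(), down.std())   # ~0.100 vs ~0.050
[/code]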

The important question is: if the raw file was quantized to 12 bits, would the results be any worse (assuming that your denoise applications were equally optimized for that scenario)?

-h
 
hjulenissen said:
jrista said:
I think 16 bits and two extra stops of DR could be very useful in that context.
I think that two extra stops of DR could be very useful, no matter how it is accomplished. I think that 16 bits at the current DR would have little value.

Oh, I generally agree. If low-ISO noise on a Canon sensor is not reduced, those extra bits would indeed be meaningless. No amount of NR would recover much useful detail. In the context of a future Canon sensor that does produce less noise, such as one with active cooling and a better readout system (perhaps a digital readout system), I can definitely see a move up to 16 bits being useful (which is generally the context I am talking about.) A rumor from a while back here on CR mentioned that Canon had prototypes of a 40mp+ camera out in the field that used active cooling of some kind, and was 16 bits.

hjulenissen said:
jrista said:
Even assuming Canon does not eliminate banding at a hardware level...the fact that you can wavelet deconvolve them in post and recover the rest of the meaningful detail between them indicates to me that if we could move up to 16 bit ADC, we COULD benefit from the extra DR with adequate debanding.
Did you try these operations on a raw file that was artificially limited to 13 bits?

That I have not done. Out of curiosity, why? I would assume the improvement in DR would still be real, since my Canon cameras don't even get 11.5 stops of DR according to DXO's Screen DR measure. At 13 bits, assuming I could eliminate banding and reduce noise to a lower standard deviation, DR should improve...potentially to almost 13 stops.
 
hjulenissen said:
Relevant to this topic, but perhaps not the latest posts:
http://www.maxmax.com/olpf_study.htm
[image: 5DII_Sensor400X04a.jpg - 400X microscope view of the 5D II sensor]

"In the 400X zoom pictire, you can better see the CFA, the amount of blur and the 10 micron scale. For the Canon 5D II sensor, it appears that they displace the image approximately by one pixel. The complete OLPF has two layers. These pictures show 1 layer or 1/2 of the blur filter. The 2nd part blurs the image 1 pixel in the vertical direction. This means that for any one point of light, you end up with 4 points separated by 1 pixel or the same size as one R-G-G-B CFA square. You have 4 points because the 1st layer gives you 2 points, and then the 2nd layer doubles those to 4 points.

For another camera, the manufacturer might choose to displace the light differently. For many 4/3 cameras, we see more blur than for APS and full-frame sensors. Sometimes manufacturers make odd choices in the amount of blur. For example, the APS Nikon D70 sensor had much less physical blur than the APS Nikon D200 sensor, despite the D70 having a pixel pitch of 7.8 microns and the D200 a pixel pitch of 5.8 microns."

Very interesting. I wonder what kind of OLPF the 7D II will have...with such small pixels, I imagine it wouldn't need as strong a filter as the 7D or any FF sensor. What is also interesting is how much surface area on a sensor is still wasted, despite the use of microlenses. I always thought the microlenses were square; being round, they leave gaps of "unused surface area" at the intersection of every 2x2 set of pixels.
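
The quoted four-point split is equivalent to convolving the image with a 2x2 kernel before sampling; a minimal sketch of that model (equal-weight split assumed, ignoring the ghost-image attenuation discussed later in the thread):

[code]
import numpy as np
from scipy.ndimage import convolve

# Two-layer birefringent OLPF model: one point of light becomes four
# points one pixel apart, i.e. a 2x2 box convolution.
olpf = np.full((2, 2), 0.25)

img = np.zeros((7, 7))
img[3, 3] = 1.0                        # a single point source
print(convolve(img, olpf, mode='constant')[2:5, 2:5])  # 2x2 energy spread
[/code]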
 
ankorwatt said:
jrista said:
I guess I'd dispute that. The bit depth puts an intrinsic cap on the photographic dynamic range of the digital image. [...] If we had a 16-bit ADC, we could, theoretically, have over 15 stops of dynamic range.

The following was written by John Sheehy who, like TheSuede, Emil Martinec, and BobN2, has no emotional ties to Canon, his own camera brand.

Noise isn't monolithic. It comes in various types and sources.

The most universal noise is photon shot noise, which really isn't noise, per se, but is actually the texture of the signal, as light is a finite number of randomly timed events. The more light the sensor collects, the less grainy the capture and the closer it comes to a smooth thing, like you "see" in the real world (even though that smoothness is an illusion created by the brain). This type of noise will always be cleaner at ISO 100 than ISO 160, by 1/3 stop. Every stop of increased exposure increases the signal-to-noise ratio of photon noise by a half stop. This noise is only related to the sensor exposure, and has nothing directly to do with ISO settings.

Then, there is noise that is generated at the photosite while reading it. Again, this noise is independent of ISO setting, and related only to exposure. The difference between this read noise and shot noise is that it can have blotchier character and line noise or banding, usually only becoming an issue at high ISOs where it is amplified more. Also, unlike the shot noise, the SNR of read noise increases by a full stop when the sensor exposure is increased by one stop.

Then, there is late-stage noise, which occurs after amplification of the photo-site readout. This is where the camera creates its greatest anomalies. Since it occurs after amplification, it is the same strength at all analog sensor amplifications, and exists relative to the digitized values, rather than the absolute sensor signal. It is what gives Canon DSLRs the lowest DR in the industry. Canon, rather than amplifying in 1/3 stop steps at the photosite, uses a very cheesy method to get 1/3 stop ISOs; it simply under-exposes or over-exposes the full-stop ISOs by 1/3 stop, and then multiplies the RAW data by 0.8 or 1.25 to make it look like normal RAW data. The problem with this is that the total read noise for ISOs 100, 200, and 400 are about the same, so when ISO 100 gain is used for ISO 125, the read noise of ISO 125 is actually greater than the read noise of ISO 400, and closer to the read noise of ISO 640 on most Canons! Conversely, ISO 160 is ISO 200 gain multiplied by 0.8, so the read noise is about 80% of that of ISO 100.

So basically, ISO 160 is cleaner in the deep shadows than ISO 100, by about 1/3 stop. In the highlights, however, which are dominated by photon shot noise, ISO 100 is actually 1/3 stop cleaner. Chances are, however, that you would not fully appreciate the benefits of ISO 100, compared to the benefits of ISO 160 in the shadows, as photon shot noise is very aesthetic noise, and does little to obscure image detail, as opposed to read noise which is often more like a cheese grater across your eyes.

However, if you are shooting RAW and "exposing to the right", you are already creating ISOs of 160, 180, 200, whatever, out of ISO 100 gain, and are moving the read-noise floor down anyway. If you are shooting JPEGs or movies, then ISO 160 is the way to go for reduced noise, as the camera was going to discard the 1/3 stop of extra DR that ISO 100 has in the highlights (which ISO 160 moves to the shadows) anyway, so there is no loss.
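
The two scaling laws quoted above fall straight out of the statistics: shot-noise SNR goes as the square root of the photon count, read-noise SNR linearly with it (a minimal sketch; the read-noise figure is assumed):

[code]
import math

read_noise_e = 5.0                     # assumed read noise, electrons RMS

for photons in (100, 200, 400):        # one stop more exposure per step
    shot_snr = math.sqrt(photons)      # shot noise: +0.5 stop SNR per stop
    read_snr = photons / read_noise_e  # read noise: +1 stop SNR per stop
    print(photons, round(shot_snr, 1), round(read_snr, 1))
[/code]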

I'm not sure what this has to do with the post of mine that you quoted. When it comes to ADC bit depth and "ADC slop", the kind of noise we are talking about is quite specifically read noise. The mechanism Canon uses to achieve 1/3-stop ISO settings doesn't matter, especially in a hypothetical context where Canon is using new sensor technology and active cooling.
 
Yes, point - not line! - resolution is ultimately noise-limited, not optically limited. Until you get to astro-sized fixed installations, that is...

Canon's effective pixel area has never been 100%. In the 5D classic I think we measured 47%; with the newest "100% coverage" microlenses I'd guess they reach maybe 80%. The collimated-angle light efficiency depends on how strong you make the microlens and how far above the sensor surface proper it sits. The actual light-sensitive area on the sensor is smaller than 50% in most CMOS cameras (excepting back-illuminated ones, of course...). The microlens has to be absolutely centered above this area, with an angle compensation in the corners for an estimated average main-lens exit-pupil distance (usually around 70-80mm).

As soon as the ray angles stray outside the optimum, the microlens starts to both reflect light (due to a very high incident angle) on the far side of the dome and project it outside the active sensor surface on the near side of the dome. In Canon sensors this sets in at about F2.4. In the F1.6-to-F1.2 range of angles, less than half of the original light reaches the active pixel surface. There's a built-in firmware compensation for this, which you can trace by looking at the gaps in the raw data caused by integer multiplications...

BTW, the MaxMax microscopy images show a line spread of about 0.4 pixels, not one full pixel... The full spread is about 0.8-0.9px, giving a +/-0.3px line spread after subtracting the birefringence loss (the ghost image is almost 2EV down from the non-refracted image). That is usually enough to give the interpolation engine some neighboring-area support to work with. This small increase in support can improve interpolation accuracy by several hundred percent.
 
jrista said:
hjulenissen said:
TheSuede said:
Deconvolution in the Bayer domain (before interpolation, "raw conversion") is actually counterproductive, and totally destructive to the underlying information.

The raw Bayer image is not continuous; it is sparsely sampled. This makes deconvolution impossible, even in continuous-hue object areas containing "just" brightness changes. If the base signal is sparsely sampled and the underlying material is higher resolution than the sampling, you get an underdetermined system (http://en.wikipedia.org/wiki/Underdetermined_system). This is numerically unstable, and hence impossible to deconvolve.
There is no doubt that the CFA introduce uncertainty compared to sampling all colors at each site. I believe I was thinking about cases where we have some prior knowledge, or where an algorithm or photoshop-dude can make correct guesses afterwards. Perhaps what I am suggesting is that debayer and deconvolution ideally should be done jointly.
[image: cfafft.jpg - 2-D DFT of a raw CFA image]

If the scene is achromatic, then "demosaic" should amount to something like a global WB, and filtering might destroy recoverable detail - the CFA in itself does not reduce the amount of spatial information compared to a filterless sensor. If the channels are nicely separated in the 2-D DFT, you want to follow those segments when deconvolving?

-h

On a per-pixel level, a Bayer sensor is only receiving 30-40% of the information an achromatic sensor is getting. That implies a LOSS of information occurring due to the filtering of the CFA. You have spatial information for the same number of samples over the same area...but the information in each sample is anemic compared to what you get with an achromatic sensor. That is the very reason we need to demosaic and interpolate information at all...that can't be a meaningless factor.

Off topic, sorry, but I just had to scratch the itch. We know the negatives (more processing, incompatibility with current standards, more focus should be spent on other sensor types), but every time I read something like this I can't help but think an RGBW array would be awesome.
 