
Author Topic: Pixel density, resolution, and diffraction in cameras like the 7D II  (Read 27381 times)

Plamen

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #75 on: March 06, 2013, 07:07:22 AM »
Everything that you say here has been mentioned (in some form) several times in the thread, I believe. If that is the ground-breaking flaw in prior posts that you wanted to point out, you must not have seen those posts?
Why did you object to my previous posts then? So all this has been mentioned before, and I mentioned it again, but you felt the need to deny it?
Quote
I think that the main mistake you are making is using a patronizing tone without actually reading or comprehending what they are writing. That makes it difficult to turn this discussion into the interesting discussion it should have been. If you really are a scientist working on deconvolution, I am sure that we could learn something from you, but learning is a lot easier when words are chosen more wisely than yours.

The discussion was civil until you decided to proclaim superiority and become patronizing. Read your own posts again.

I am done with this.
« Last Edit: March 06, 2013, 07:08:58 AM by Plamen »

3kramd5

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #76 on: March 06, 2013, 08:35:58 AM »
Now, if all thread contributors agree that noise and hard-to-characterize PSF kernels are the main practical obstacles to deconvolution (along with the sampling process and color filtering), this thread can be more valuable to the reader.

If that's what you're going for, perhaps a definition of all acronyms used would be of use. This thread rapidly went from fairly straightforward to deeply convoluted (pun intended) and jargon-heavy.

I think I have an approximate understanding of the current line of discussion; however, I'm not sure how it relates back to the OP (it seems to have shifted from whether it makes sense to have a higher spatial resolution sensor in a diffraction-limited case to whether one can always algorithmically fix images which may have a host of problems).
5D3, 5D2, 40D; Various lenses

3kramd5

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #77 on: March 06, 2013, 11:14:59 AM »
I believe that an "oversampling" sensor, i.e. one in which sensel density is higher than what is mandated by the expected end-to-end system resolution, would make these problems easier if we could have it at no other cost.

That's essentially what we have with 2MP video from these 20+MP still sensors, right?
5D3, 5D2, 40D; Various lenses

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #78 on: March 06, 2013, 12:48:15 PM »
Why did you object to my previous posts then? So all this has been mentioned before, and I mentioned it again, but you felt the need to deny it?
I objected to your claim that convolution with a smooth (and fast-decaying) function did not have a stable inverse. I believe that I have shown an example of the opposite. If what you tried to say was that SNR and an unknown PSF are the problem, I wish you had said so.
http://www.canonrumors.com/forum/index.php?topic=13249.msg239997#msg239997

Now, if all thread contributors agree that noise and hard-to-characterize PSF kernels are the main practical obstacles to deconvolution (along with the sampling process and color filtering), this thread can be more valuable to the reader.

I think Plamen's point is that noise and PSF kernels ARE hard to characterize. We can't know the effect the atmosphere has on the image the lens is resolving. Neither can we truly know enough about the characteristics of noise (of which there are many varieties, not just the photon shot noise that follows a Poisson distribution). We can't know how the imperfections in the elements of a lens affect the PSF, etc.

Based on one of the articles he linked, the notion of an ill-posed problem rests on how much knowledge we "can" have about all the potential sources of error, and on the fact that small amounts of error in the source data can result in large amounts of error in the final solution. Theoretically, assuming we had the capacity for infinite knowledge, along with the capability of infinite measurement and infinite processing power, I don't see why the notion of an ill-posed problem would even exist. However, given the simple fact that we can't know all the factors that may lead to error in the fully convolved image projected by a lens (even before it is resolved and converted into a signal by an imaging sensor), we cannot PERFECTLY deconvolve that image.

To Hjulenissen's point, I don't think anyone here is actually claiming we can perfectly deconvolve any image. The argument is that we can use deconvolution to closely approximate, to a level "good enough", the original image such that it satisfies viewers....in most circumstances. Can we perfectly and completely deconvolve a totally blurred image? No. With further research, the gathering of further knowledge, better estimations, more advanced algorithms, and more/faster compute cycles, I think we could deconvolve an image that is unusably blurred into something that is more usable, if not completely usable. That image would not be 100% exactly what actually existed in the real-world 3D scene...but it could be good enough. There will always be limits to how far we can push deconvolution...beyond a certain degree, the error in the solution to any of the problems we try to solve with deconvolution will eventually become too large to be acceptable.

Finally, to the original point that started this tangent of discussion...why higher resolution sensors are more valuable. TheSuede pointed out that, because sampling on a Bayer sensor is sparse, that sparseness only poses a problem for the final output so long as the highest input frequencies are as high as or higher than the sampling frequency. When the sampling frequency outresolves the highest spatial frequencies of the image, preferably by a factor of 2x or more, the potential for rogue error introduced by the sampling process itself (i.e. moiré) approaches zero. That is basically what I was stating with my original post: ignoring the potential that post-process deconvolution may offer, and assuming we eventually do end up with sensors that outresolve the lenses by more than a factor of two (i.e. the system is always operating in a diffraction- or aberration-limited state), image quality should be BETTER than when the system has the potential to operate at a state of normalized frequencies. Additionally, a sensor that outresolves, or oversamples, should make it easier to deconvolve....sharpen, denoise, deband, deblur, correct defocus, etc.
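
A minimal numpy sketch of that sampling argument (the 9 cycles/mm detail and the 10 vs 24 samples/mm rates are made-up illustration values, not figures from the thread): a detail finer than half the sampling rate shows up as a false low-frequency pattern, while sampling it at better than 2x keeps it intact.
Code:
import numpy as np

f_detail = 9.0   # spatial frequency of the finest detail the lens delivers, cycles/mm

def dominant_frequency(samples_per_mm):
    """Sample 2mm of the detail and report the strongest frequency in the result."""
    n = int(2 * samples_per_mm)
    x = np.arange(n) / samples_per_mm
    samples = np.cos(2 * np.pi * f_detail * x)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(n, d=1.0 / samples_per_mm)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

print(dominant_frequency(10.0))   # under-sampled: ~1 cycle/mm, a false (aliased/moire) pattern
print(dominant_frequency(24.0))   # oversampled by >2x: ~9 cycles/mm, the real detail survives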

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #79 on: March 06, 2013, 04:24:48 PM »
To Hjulenissen's point, I don't think anyone here is actually claiming we can perfectly deconvolve any image. The argument is that we can use deconvolution to closely approximate, to a level "good enough", the original image such that it satisfies viewers....in most circumstances. Can we perfectly and completely deconvolve a totally blurred image? No. With further research, the gathering of further knowledge, better estimations, more advanced algorithms, and more/faster compute cycles, I think we could deconvolve an image that is unusably blurred into something that is more usable, if not completely usable. That image would not be 100% exactly what actually existed in the real-world 3D scene...but it could be good enough. There will always be limits to how far we can push deconvolution...beyond a certain degree, the error in the solution to any of the problems we try to solve with deconvolution will eventually become too large to be acceptable.
Is it not fundamentally a problem of information?

If you have perfect knowledge of the "scrambling" carried out by a _pure LTI process_, you can in principle invert it (possibly with a delay that can be compensated for), or at least as closely as you care to, and in practice you usually can come close.

Yes, it is fundamentally a problem of information. This is where everyone is on the same page; I just think the page is interpreted a bit differently. Plamen's point is that we simply can't have all the information necessary to deconvolve an image beyond certain limits, and that those limits are fairly restrictive. We can't have perfect knowledge, and I don't think that any of the processes that convolve are "pure" in any sense. Hence the notion that the problem is ill-posed.


Even with perfect knowledge of the statistics of an (e.g. additive) noise corruption, you cannot usually recreate the original data perfectly. You would need deterministic knowledge of the actual noise sequence, something that is unrealistic (mother nature tends not to tell us beforehand how the dice will turn out).

Aye, again to Plamen's point...because we cannot know deterministically what the actual noise is, the problem is ill-posed. That does not mean we cannot solve the problem, it just means we cannot arrive at a perfect solution. We have to approximate, cheat, hack, fabricate, etc. to get a reasonable result...which again is subject to certain limitations. Even with a lot more knowledge and information than we have today, it is unlikely a completely blurred image from a totally defocused lens could ever be restored to artistic usefulness. We might be able to restore such an image well enough that it could be used in, say, a police investigation of a stolen car. Conversely, we could probably never fully restore the image to a degree that would satisfy the need for near-perfect reproduction of a scene that could be printed large.

Even with a good estimate of the corrupting PSF, practical systems tend to have a variable PSF. If you try to apply 30dB of gain at an assumed spectral null, and that null has moved slightly so the corruption gain is no longer -30dB but -5dB, you are in trouble.

Real systems have both a variable/unknown PSF and noise. Simple theory and back-of-the-envelope estimates are nice to have, but good and robust solutions can be expected to need all kinds of inelegant band-aids and perceptually motivated hacks to actually work.

Completely agree here. All we need is a result good enough to trick the mind into thinking we have what we want. For a policeman investigating a car theft, that point may be reached when he can read a license plate from a blurry photo. For a fine art nature photographer, that point could be reached when the expected detail resolves...even if some of it is fabricated.
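
A toy 1-D deconvolution sketch of the noise point being made here (my own illustration; the Gaussian PSF, noise level and regularization constant are arbitrary assumptions): with a perfectly known PSF a plain inverse exists, but even a tiny amount of noise blows up a naive inverse filter, while a Wiener-style regularized inverse stays usable at the cost of not recovering everything.
Code:
import numpy as np

rng = np.random.default_rng(0)

n = 256
scene = np.zeros(n); scene[n // 2:] = 1.0                 # ideal step edge
x = np.arange(n) - n // 2
psf = np.exp(-x**2 / (2 * 3.0**2)); psf /= psf.sum()      # assumed, perfectly known blur

H = np.fft.fft(np.fft.ifftshift(psf))                     # transfer function of the blur
blurred = np.real(np.fft.ifft(np.fft.fft(scene) * H))
noisy = blurred + rng.normal(0, 1e-3, n)                  # a tiny amount of sensor noise

naive = np.real(np.fft.ifft(np.fft.fft(noisy) / H))       # plain inverse: divides by ~0 at high freqs
k = 1e-4                                                  # regularization (noise-to-signal guess)
wiener = np.real(np.fft.ifft(np.fft.fft(noisy) * np.conj(H) / (np.abs(H)**2 + k)))

print("naive inverse, max error:      ", np.abs(naive - scene).max())   # astronomically large
print("regularized inverse, max error:", np.abs(wiener - scene).max())  # modest residual error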

assuming we eventually do end up with sensors that outresolve the lenses by more than a factor of two (i.e. the system is always operating in a diffraction- or aberration-limited state), image quality should be BETTER than when the system has the potential to operate at a state of normalized frequencies. Additionally, a sensor that outresolves, or oversamples, should make it easier to deconvolve....sharpen, denoise, deband, deblur, correct defocus, etc.
I agree. As density approaches infinity, the discretely sampled sensor approaches an ideal "analog" sensor. As we move towards a single/few-bit photon-counting device, the information present in a raw file would seem to be somewhat different. Perhaps we simple linear-system people would have to educate ourselves in quantum physics to understand how the information should be interpreted?

Yet, people seem very fixated on questions like "when will Nikon deliver lenses that outresolve the D800 36 MP FF sensor?" Perhaps it is only human to long for a flat passband up to the Nyquist frequency, no matter how big you would have to print in order to appreciate it?

I think people just want sharp results straight out of the camera. It is one thing to understand the value of a "soft" image that has been highly oversampled, with the expectation that you will always downsample for any kind of output...including print.

Most people don't think that way. They look at the pixels they have and think: This doesn't look sharp! That's pretty much all it really boils down to, and probably all it will ever boil down to.  :P

http://ericfossum.com/Presentations/2011%20December%20Photons%20to%20Bits%20and%20Beyond%20r7%20web.pdf (slide 39 onwards)

Quanta Imaging Sensor

Does anyone understand why the 3D convolution of the "jots" in X,Y,t is claimed to be a non-linear convolution?

That link didn't load. However, given the mention of "jots", it reminds me of this paper:

Gigapixel Digital Film Sensor Proposal

TheSuede

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #80 on: March 06, 2013, 10:03:02 PM »
Well, with that (mostly) out of the way, I guess one could set the practical, production-related problems as being:

  • Increasing pixel resolution - without increasing efficiency loss or making angle sensitivity worse
  • Presenting this to the unknowing average "user" as a good thing
  • Implementing better versions of the m- and sRaw type "smaller than original" raw files for those who want it

As long as no disruptive technology surfaces - like production-scale manufacture of nano-dot technology, or an angle-invariant version of the symmetric deflector color-splitter in Panasonic's latest patents - in millions-per-year quantities, we're stuck with Bayer, like it or not. And for sharpening (deconvolution) and noise reduction (pattern recognition), the much improved per-pixel statistical quality you get by downsampling an image that originally contains more resolution than you need is actually cost- and energy-efficient compared to pouring computational power onto insufficient base material.
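
A rough numpy sketch of that "per-pixel statistical quality" point (the flat-field level and noise figure are arbitrary): averaging 2x2 blocks of an oversampled, noisy frame cuts the per-pixel noise roughly in half (the square root of the 4 samples combined).
Code:
import numpy as np

rng = np.random.default_rng(1)

flat = 1000.0 + rng.normal(0, 30.0, (1024, 1024))       # flat field plus random noise
down = flat.reshape(512, 2, 512, 2).mean(axis=(1, 3))   # 2x2 block average = downsample to half size

print("noise std before:", flat.std())   # ~30
print("noise std after: ", down.std())   # ~15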

But what we want in the end is to find something other than Bayer - something that actually uses more of the energy the lens sends through to the sensor. As I mentioned earlier, we're only integrating about 10-15% of the light projection into electric current today. What we want is a GOOD implementation of a "Foveon-type" sensor that can use all the visible wavelengths over the entire surface - without first sifting away more than 65% of the light in a color filter array. This would also solve many of the problems with deconvolution, since it would make the digital image continuous in information again - not sparsely sampled.

Foveon itself, though, is a dead end - a unique player in the field with very good, but limited, uses. That they managed to do as well as they did in the last generation is really impressive, but the principle has serious shortcomings: not only the low overall efficiency of the operating principle, but also things like the very limited color accuracy.

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #81 on: March 06, 2013, 10:10:41 PM »
But what we want in the end is to find something other than Bayer - something that actually uses more of the energy the lens sends through to the sensor.

I think that is the statement of the year right there. The amount of energy we waste in digital camera systems is mind-blowing. The things we could do if we actually integrated 30%, 50%, 60% of the light that passed through the aperture....it's probably the next revolution in digital photography. Canon had/has a patent for a layered Foveon-type sensor. I wonder if they will ever develop it into something like you described...

TheSuede

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #82 on: March 07, 2013, 08:02:27 AM »
Computational cost still tends to fall according to Moore's law. Sensor development seems much slower. I think it is realistic to assume that we will be more dependent on fancy processing in the future than we are now.
Already today, the "average consumer" is well served by the 6-24MP in the average appliance. Normal HD width, but in 3:2 format, is about 2.5MP; in 4:3 format, about 2.8MP. And if the resulting image at "Full HD" presentation size is sharp, the image quality is considered good. The average image use seems to be about 1024px width...

So we're already oversampling the images, in practice. It's slightly different for the photo enthusiast and the more discerning customer, for whom more resolution is often better resolution. To put this into context, a full-spread ad in a normal-to-good quality offset magazine takes a reasonable 240 input dpi to create a good rip into the print raster. That's about 10MP (add bleed, 12MP) (*1).
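
A back-of-the-envelope check of the "240 dpi full spread ~ 10MP" figure (the 17x11 inch spread trim size is my assumption for illustration):
Code:
width_in, height_in, dpi = 17.0, 11.0, 240      # assumed full-spread trim size
pixels = (width_in * dpi) * (height_in * dpi)
print(pixels / 1e6)                             # ~10.8 megapixels, before adding bleed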

The added cost of a 40MP sensor isn't so much in the manufacture of the sensor plate as in the peripheral equipment. The sensor may get 10% more expensive once production has stabilized, but the ancillaries still have to be twice as fast as before to get the same fps - meaning twice the buffer memory and off-sensor bandwidth, twice the number of cores in the ASIC/PIC and so on. That adds up to a lot more than the sensor cost increase.

Do you know the corresponding number for "3CCD" video cameras? How are they for color accuracy (the low-level sensor/optics, not the processed compressed video output)?
The trichroic prisms they use are very efficient, but to get reasonable color accuracy an additional thin-film color filter is often applied at the prism endpoints, before each sensor. With that filtering (which is really about resistance to metamerism failures) you can approach about 75-80% light-energy bandwidth preservation - visible light delivered to the sensors. (This is where the Foveon inherently fails - it has no mechanism for increasing SML separation, it HAS to use all incoming energy. It has no way to use additional filtering.) Then you can multiply that by the average efficiency of energy conversion in the 500-600nm spectrum, and get an end result of about 40% full-bandwidth QE. About three times higher than a normal Bayer, as expected.

The reason why you HAVE to use additional filtering to get a good correlation between recorded color and human-perceived color is that you have to find an LTI-stable way (preferably a simple matrix multiplication) to make the sensory input correspond to the biochemical light response of the human eye (SML response).
http://en.wikipedia.org/wiki/Cone_cell

The main problems with prismatic solutions aren't efficiency or color. They're the production cost (and a very much higher cost for lenses) and the angle sensitivity.
The minimum BFD (back focal distance) is about 2.2x the image height, pushing retrofocal wide angles to almost 10mm longer register distances than in an SLR-type camera (about 55mm from the last lens vertex to the sensor for an FF camera!). This means that anything shorter than an 85mm lens would have to be constructed basically like the 24/1.4's and 35/1.4's. And that's expensive.
Large-aperture color problems: the dichroic mirror surfaces vary in separation bandwidth depending on the angle of the incident light. An F1.4 lens has an absolute minimum 65º ray angle from edge to edge of the exit pupil....

The number you quoted on CFA earlier was 30-40%, so I guess that is the loss that can be attributed to Bayer alone?

I find it surprising that we still use the same basic CFA as was suggested in the 70s. Various alternative CFAs have been suggested, but have never really "caught on". I don't know if this is because Bryce got it right the first time, or because the cost of doing anything out of the ordinary is too high (see e.g. X-Trans vs Adobe raw development).

Yes: 30-40% average channel response, multiplied by the average surface bandwidth - which is also around 30-40% - gives about 10-15% overall system efficiency (compared to a maximum of not 100%, but about 75-80%, if you want "human perception color response").
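
The same bookkeeping written out (the factors below are the rough percentages quoted in these posts, with the 500-600nm conversion efficiency picked as an assumption so the product lands near the ~40% stated; none of them are measurements of mine):
Code:
bayer_channel_response = 0.35     # ~30-40% average per-channel response
bayer_surface_bandwidth = 0.35    # ~30-40% of the band passed per cell
print("Bayer system efficiency:", bayer_channel_response * bayer_surface_bandwidth)  # ~0.12, i.e. 10-15%

prism_bandwidth = 0.775           # ~75-80% bandwidth preserved after the trimming filters
conversion_efficiency = 0.52      # assumed average conversion efficiency, 500-600nm
print("3CCD full-bandwidth QE: ", prism_bandwidth * conversion_efficiency)           # ~0.40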

Mr Bayer got it right, because he didn't complicate a very easily defined problem. System limitations:
  • Use a production-practical layout for photocells; that's square or hexagonal cells. Square/octagonal combinations have been found to be counterproductive.
  • Maximize the luminance resolution - which is mostly based on the green spectrum (M-cones at ~550nm, perceptually achromatic rod cells at ~500nm).
  • Make it rotationally invariant and preferably in symmetric layout schemes.
  • Make the system balanced between luminance and chrominance statistical accuracy (noise types)

Symmetrical layout: 2x2 or 3x3 (4x4 too much?) groups with square cells, or a triangle layout with hexagonal cells.
Luminance resolution: have more green than blue or red input area. The green cell layout has to be symmetric.
Noise considerations: have approximately twice the amount of green as either red or blue input area.

There aren't too many layouts to consider...
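
For reference, a small sketch of the 2x2 tile those constraints lead to - the classic RGGB Bayer layout, with half the area green and a quarter each red and blue:
Code:
import numpy as np

def bayer_mask(h, w):
    """Return an h x w array of channel labels: 'R', 'G' or 'B' (RGGB tiling)."""
    mask = np.empty((h, w), dtype="<U1")
    mask[0::2, 0::2] = "R"   # red on even rows / even columns
    mask[0::2, 1::2] = "G"   # green on the two diagonal positions of each 2x2 group
    mask[1::2, 0::2] = "G"
    mask[1::2, 1::2] = "B"   # blue on odd rows / odd columns
    return mask

print(bayer_mask(4, 4))      # green covers 50% of the area, red and blue 25% each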

(*1)
At National Geographic (for whom I helped design their first in-line print-quality inspection cameras, now many, many moons ago... :) ) they generally accept that their 300 dpi recommendation for advertisement and art input is way over the top. The ABX blind tests (with loupe!) top out at about 175 lpi raster frequency on good-quality paper - that's where the blind testers start to fail to pick out the higher-resolution image in more than 50% of the statistical ABX comparisons. As software and algorithms have improved, we now use 1.33x the lpi to get the needed input dpi, where we had to use almost 2x before (the old "you need twice the resolution on the original to get maximum print quality" dogma).
« Last Edit: March 07, 2013, 08:05:58 AM by TheSuede »

TheSuede

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #83 on: March 07, 2013, 06:54:09 PM »
My point was that I expect more of the quality to come from fancy DSP relative to physical components (lens, sensor, electronics), simply because DSP seems to develop faster (both algorithmically and in multiply-accumulates per dollar). Whatever the labour divide between those two is today, we might expect that DSP can do more in 2 years for less money, while lenses will do nearly the same as today at the same (or higher) prices.

Well, as I said earlier, that might be true from a purely theoretical PoV. But the camera is a system that isn't composed of just the perfectly AA-filtered image and the Bayer pattern... And camera users aren't "only" the crowd who are pleased with something that just looks good - some actually want the result to depict the world in front of the lens as accurately as possible.

Several practical considerations have to be made. Maybe the most important of them are the aliasing problems (due to the sparse sampling, if you'll excuse my nagging...) and the unnecessary blurring we have to introduce via the AA-filter to make the risky assumptions we make in the raw conversion less risky.
Less risky = we can deconvolve with good stability. Noise not included at this point.

And if you look at the total system use case of a higher resolution sensor, you'll see that several things improve automatically due to overall system optimization.

Firstly, the user-induced blur PSF and the lens aberration blur PSF (including diffraction) become a larger part of the Bayer group's width - making the luma/chroma support choice the raw converter has to make a lot easier in most cases, even before considering the AA-filter.
Secondly, after including this increase in stability due to point 1, we can decrease the AA-filter strength (thickness) by a factor bigger than the resolution increase!

This gives end image result detail a double whammy towards the better. And it doesn't stop there...

The thinner AA-filter we can now use has the additional positive effect that image corners are less affected by additional SA and astigmatism, and also suffer fewer internal reflections in the filter. So the corners get a triple whammy of goodness, and large apertures lose less contrast to internal reflections in the filter package.

And at some resolution point you can get rid of the filter completely, giving a cheaper filter package with fewer layers and better optical parameters.

So, in the corners of the image, detail resolution can actually improve by MORE than the resolution increase - just due to systematic changes.

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #84 on: March 08, 2013, 01:28:22 PM »
One way to achieve some of those goals would be to move towards smaller, higher-resolution sensors, accept the imperfections of small/inexpensive lenses, and try to keep the IQ constant by improving processing.

The PSF would tend to be larger (relative to the CFA periodicity), and deconvolution and denoising would be more important (either in-camera or outside).

-h

One thing I'd point out is the loss in editing latitude with in-camera processing. Obviously you lose a lot with JPEG. When I first got my 7D, I used mRAW for about a week or so. At first I liked the small file size and what seemed like better IQ. The reason I switched back to full RAW, though, was the loss in editing latitude. I can push a real RAW file REALLY, REALLY FAR. I can do radical white balance correction, excessive exposure correction (lifting shadows by stops, pulling highlights by stops, etc.), and so on. When I needed to push some of my mRAW files a lot, I realized that you just plain and simply don't have the ability to correct blown or overexposed highlights, pull up shadows, fix a bad white balance, etc. to anywhere close to the same degree as with a native RAW.

Assuming we do ever reach 200mp sensors, I would still rather have the RAW, even if it is huge (and, hopefully, have the computing power to transfer those files quickly and process them without crashing my system). I would just never be happy with the limited editing latitude that a post-demosaiced image offers, even if it looked slightly better in the end. And, in the end, I would still be able to downscale my 200mp image to 50mp, 30mp, 10mp, whatever I needed, to print it at an exquisite level of quality.

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #85 on: March 08, 2013, 03:43:47 PM »
Doing "more" with processing relative to physical components does not mean that it has to be done in-camera or stored to JPEG.

True, I understand that. I guess my point was that, at 200mp, data files - especially their in-memory size - are going to be HUGE. Processing of said files on current computers would be fairly slow.

Especially if bit depth reaches a full 16 bits in the future (as currently rumored about the Canon big mp DSLR). The memory load of a single 200mp RAW image (factoring in ONLY the exposed RGB pixels, no masked border pixels, metadata, or anything else) would be 400MB (16 * 200,000,000 / 8)! The memory load for interpolated pixels (TIFF) would be 1.2GB (48 * 200,000,000 / 8). In contrast, the 18mp images from my 7D have a 32MB RAW memory load or 108MB for a TIFF.
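
The same arithmetic as a tiny helper (container overhead, masked borders and metadata ignored, just as in the figures above):
Code:
def megabytes(pixels, bits_per_pixel):
    return pixels * bits_per_pixel / 8 / 1e6

print(megabytes(200e6, 16))   # 200mp raw, 16 bits, one value per pixel  -> 400 MB
print(megabytes(200e6, 48))   # 200mp demosaiced 16-bit RGB (TIFF-like)  -> 1200 MB
print(megabytes(18e6, 14))    # 18mp 7D raw at 14 bits                   -> ~32 MB
print(megabytes(18e6, 48))    # 18mp 16-bit RGB TIFF                     -> 108 MB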

I've done some 2x and 3x enlargements of my processed TIFF images for print. I think the largest image I ever had was about 650MB in terms of memory load in Photoshop, for a 30x40" print at 300ppi (which was really more along the lines of an experiment in enlargement...that particular image was never printed). The largest image I ever printed was about 450MB. Working with images that large is pretty slow, even on a high-powered computer. I couldn't imagine working with a 1.2GB image on my current computer.

Now, my CPU is a little older...it's an i7 920 2.6GHz overclocked to 3.4GHz. Memory is overclocked a little, to around 1700MHz. I don't have a full SSD setup, I have only 12GB of memory, and my page file and working space for both Photoshop and Lightroom are actually on standard platter-based hard drives. I imagine that if I had more memory, a full load of SSD drives with a data RAID built out of SSDs, a brand-spanking-new 6 or 8 core processor at around 4GHz, and some new memory running at 2133MHz, then processing such images might not be all that bad...just really expensive. :)

TheSuede

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #86 on: March 08, 2013, 05:55:37 PM »
16 bits is totally useless for digital imaging; there are a few large-cell sensors in the scientific field that can use 15 bits fully. They are usually actively cooled and have cells larger than 10x10µm.

This is another part of the digital image pipeline that is sorely misunderstood... Just getting more bits of data does not in any way mean that the image contains more actual information... No Canon camera today can actually use more than 12 bits fully - the last two bits are just A/D conversion "slop" margin and noise dither.

Actually, the most reasonable type of image data is gamma-corrected, but not as steeply as, for example, sRGB. sRGB has an upper part of the slope reaching gamma = 2.35 - this limits tonal resolution at the brighter end of the image. There is nothing inherently "good" about linear data; it's just a convenience when doing some types of operations - it's not a particularly good storage or transfer format.

IF someone were to implement a 10-bit image format with a (very low!) gamma of about 1.2-1.4, those ten bits would cover the entire 1Dx tonal range at base ISO with a lot of resolution to spare. The data format would have more than two to three times as much tonal resolution as the sensor information.
......
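
A rough way to see the gamma-versus-linear point (the 12-stop range and the gamma of 1.3 are assumptions for illustration, not TheSuede's exact figures): count how many of a 10-bit format's 1024 codes land in each photographic stop. Linear encoding burns half of them on the brightest stop and leaves almost nothing for the deep shadows; a low-gamma encoding spreads them far more evenly.
Code:
import numpy as np

codes, gamma, stops = 1024, 1.3, 12

def codes_per_stop(encode):
    upper = 2.0 ** -np.arange(stops)    # top of each stop: 1, 1/2, 1/4, ...
    lower = upper / 2                   # bottom of each stop
    return codes * (encode(upper) - encode(lower))

linear = codes_per_stop(lambda x: x)
gamma_encoded = codes_per_stop(lambda x: x ** (1 / gamma))

for s in range(stops):
    print(f"stop {s:2d}: linear {linear[s]:7.2f} codes   gamma-1.3 {gamma_encoded[s]:6.1f} codes")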

The same goes for the pixel count in itself... There's no use increasing the OUTPUT format size if there's no need... The reason higher sensor resolutions are a very good idea right now is that the input side, and the conversion to linearly populated (three colors per pixel) image data, is what limits us. As long as we're stuck with the Bayer format, raw data will always have a larger nominal resolution than the actual image content that raw data can convey.

Having a 20MP image where every pixel is PERFECT is in most cases worth more than a 40MP raw image where there's quite a lot of uncertainty.

Neither the tonal resolution NOR the actual image detail resolution has to take a hit from de-Bayering and compression in the camera - as long as you don't use formats that limit tonal resolution, or compression that is too lossy...

The biggest problem right now is JPEG. It tends to be "either JPEG or TIFF" when saving intermediate images, and neither format is what I'd call flexible or well thought out from a multi-use PoV. No one uses the obscure 12-bit JPEG that is actually part of the JPEG standard (outside medical and geophysics). TIFF can be compressed with LS-JPEG (as DNG files are), for lossless compression - but few use that option either.

So right now it's either 16-bit uncompressed or 8-bit compressed - and neither format actually suits digital images of intermediate-format quality. One is too bulky and unnecessarily big, and the other is limited in tonal resolution.

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #87 on: March 08, 2013, 07:01:57 PM »
16 bits is totally useless for digital imaging; there are a few large-cell sensors in the scientific field that can use 15 bits fully. They are usually actively cooled and have cells larger than 10x10µm.

This is another part of the digital image pipeline that is sorely misunderstood... Just getting more bits of data does not in any way mean that the image contains more actual information... No Canon camera today can actually use more than 12 bits fully - the last two bits are just A/D conversion "slop" margin and noise dither.

I guess I'd dispute that. The bit depth puts an intrinsic cap on the photographic dynamic range of the digital image. DXO "Screen DR" numbers are basically the "hardware" dynamic range numbers for the cameras they test. The D800 and D600 get something around 13.5 stops, thanks to the fact that they don't have nearly as much "AD conversion slop" as Canon sensors. Canon sensors definitely have a crapload of "AD conversion slop", which increases at lower ISO settings (ISO 100, 200, and usually 400 all have much more read noise than higher ISO settings on Canon cameras), which is why they have been unable to break the 12-stop DR barrier. Assuming Canon can flatten their read noise curve like Nikon and Sony have with Exmor, additional bit depth raises the ceiling on photographic DR in the RAW files.
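
For what it's worth, that cap works out like this as a back-of-the-envelope formula (a generic engineering-DR estimate, not DXO's exact method; the electron counts are placeholders picked for illustration, not measured values for any camera):
Code:
import math

def dr_stops(full_well_e, read_noise_e, adc_bits):
    """Stops of DR = log2(full well / read noise), capped by the ADC bit depth."""
    sensor_dr = math.log2(full_well_e / read_noise_e)
    return min(sensor_dr, adc_bits)

print(dr_stops(70000, 3, 14))    # quiet readout: ~14.5 stops at the sensor, capped to 14 by the ADC
print(dr_stops(70000, 30, 14))   # noisy readout ("slop"): ~11.2 stops, the ADC is not the limit
print(dr_stops(70000, 3, 16))    # quiet readout + 16-bit ADC: ~14.5 stops, now sensor-limited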

I would also dispute that Canon sensors can't get more than 12 bits of information. If you run Topaz DeNoise 5 on a Canon RAW file, the most heinous noise, horizontal and vertical banding, can be nearly eliminated. Before debanding, a Canon RAW usually has less than 11 stops, in some cases less than 10 stops, of DR ("Screen DR"-type DR, for correlating with DXO.) AFTER debanding with Topaz, a lot of information that would otherwise be "unrecoverable" because it was riddled with banding noise is now recoverable! I wouldn't say you have around 13.5 stops like a D800, but you definitely have a stop, maybe a stop and a half, more shadow recoverability than you did before...which might put you as high as 12.5 stops of DR.

If we had a 16-bit ADC, we could, theoretically, have over 15 stops of dynamic range. With Exmor technology, I don't doubt that a camera with a 16-bit ADC could achieve 15.3-15.5 stops of "Screen DR" on a DXO test. If Canon did such a thing, assuming they don't fix their horrid "AD conversion slop"...well, at least we might get 14 stops of DR out of a Canon camera, while the last two bits of information are riddled with noise. With some quality post-process debanding, we might get 15 stops of DR.

While most of what I do is bird and wildlife photography, and dynamic range is usually limited to 9 stops or less anyway...I do some landscape work. I'd probably do more landscapes if I had 30-50mp and 15 stops of DR, though. I could certainly see the benefits of having a high resolution 16-bit camera for landscape photography work, and it is the sole reason I would like to see full 16-bit ADC in the near future (hopefully with the big megapixel Canon camera that is forthcoming!)

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #88 on: March 12, 2013, 03:31:01 PM »
The question is, do those LSBs contain any image information? If they are essentially just random numbers, then there is no reason to record, store and process them: the same (effective) result could be achieved by "extending" e.g. 12-bit raw files to 14 bits by injecting suitably chosen random numbers in e.g. Lightroom.

Well, assuming Canon can get rid of their banding noise, I do believe the least significant of those bits DO contain information. It is not highly accurate information, but it is meaningful information. When you take an underexposed photo with a Canon sensor, then push it in post to compensate for the lack of exposure, you get visible banding. On the 7D, you primarily get vertical banding, usually red lines. BETWEEN those bands, however, is fairly rich detail that goes into darker levels than the banding itself. Eliminate the banding, and Canon cameras already have more DR than current testing would indicate, because the tests factor IN the banding noise.

Even assuming Canon does not eliminate banding at a hardware level...the fact that you can deconvolve the bands away with wavelets in post and recover the rest of the meaningful detail between them indicates to me that if we could move up to a 16-bit ADC, we COULD benefit from the extra DR with adequate debanding.

The question is not whether the extra four bits over 12 are purely random or purely not random. The question is whether they can be useful, even if they do not perfectly replicate the real-world data they are supposed to represent. Banding is a pain in the arse because it's FUGLY AS HELL. Random noise, however, is something that can be dealt with, and if the noise in those deeper shadows is relatively band-free...even if it has inaccurate chroma, it can be cleaned up in post and those details, perfectly accurate or not, can be recovered to some degree. I think 16 bits and two extra stops of DR could be very useful in that context.
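
A crude sketch of why banding is so much more tractable than random noise (my own toy example, not how Topaz DeNoise actually works): simulate a deep-shadow patch with per-column fixed-pattern offsets ("vertical banding") plus random noise, then estimate and subtract each column's offset.
Code:
import numpy as np

rng = np.random.default_rng(2)

rows, cols = 400, 400
detail = np.tile(np.linspace(0, 8, rows)[:, None], (1, cols))   # faint real gradient (the "detail")
column_offsets = rng.normal(0, 4.0, cols)                       # fixed-pattern banding
shadow = detail + column_offsets + rng.normal(0, 1.0, (rows, cols))

estimated = np.median(shadow, axis=0) - np.median(shadow)       # per-column bias estimate
debanded = shadow - estimated

print("column-to-column variation before:", shadow.mean(axis=0).std())    # ~4, the visible bands
print("column-to-column variation after: ", debanded.mean(axis=0).std())  # close to zero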

My impression is that keeping total noise down is really hard. Keeping the saturation point high is really hard. Throwing in a larger number of bits is comparatively cheap. I.e. whenever the sensor and analog front-end people achieve improvements, the "ADC-people" are ready to bump up the number of bits.

If "the number of steps" was really the limitation, one would expect to be able to take a shot of a perfectly smooth wall/camera cap/... and see a peaky histogram (ideally only a single code).

In practice, I assume that the sensor, analog front-end and ADC are becoming more and more integrated (blurred?), and the distinction may be counter-productive. An oversampled ADC might introduce "noise" on its own in order to encode more apparent level information. Perhaps we just have to estimate "black-box" camera performance, and trust the engineers that they did a reasonable cost/benefit analysis of all components?

The bit depth puts an intrinsic cap on the photographic dynamic range of the digital image.
A cap, but not a lower limit.

Camera shake puts a cap on image sharpness, but there is little reason to believe that a camera stand made out of concrete would make my wide-angle images significantly sharper than my standard Benro stand.

Quote
I would also dispute that Canon sensors can't get more than 12 bits of information. If you run Topaz DeNoise 5 on a Canon RAW file, the most heinous noise, horizontal and vertical banding, can be nearly eliminated. Before debanding, a Canon RAW usually has less than 11 stops, in some cases less than 10 stops, of DR ("Screen DR"-type DR, for correlating with DXO.) AFTER debanding with Topaz, a lot of information that would otherwise be "unrecoverable" because it was riddled with banding noise is now recoverable!
Just like DXOMark can show more DR than the number of bits would suggest for images that are downsampled to 8 MP, noise reduction can potentially increase "DR" at lower spatial frequencies. Dithering moves level information into spatial noise when (re-)quantizing. When you low-pass filter at a higher number of bits, you can get codes in between the input codes, at the cost of a loss of detail.
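
Toy numbers for that dithering point (the 100.3 DN level and 1 DN of noise are arbitrary): a fractional level is unrepresentable per pixel after quantizing to integers, but with noise present before quantization, averaging many pixels (i.e. looking at low spatial frequencies) recovers it.
Code:
import numpy as np

rng = np.random.default_rng(3)

true_level = 100.3
quiet = np.round(np.full(100_000, true_level))                  # no dither: every pixel rounds to 100
dithered = np.round(true_level + rng.normal(0, 1.0, 100_000))   # 1 DN of noise before quantizing

print(quiet.mean())      # 100.0 -> the 0.3 is lost for good
print(dithered.mean())   # ~100.3 -> recovered by averaging/downsampling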

The important question is: if the raw file was quantized to 12 bits, would the results be any worse (assuming that your denoise applications were equally optimized for that scenario)?

-h

jrista

Re: Pixel density, resolution, and diffraction in cameras like the 7D II
« Reply #89 on: March 13, 2013, 12:01:34 PM »
I think 16 bits and two extra stops of DR could be very useful in that context.
I think that two extra stops of DR could be very useful, no matter how it is accomplished. I think that 16 bits at the current DR would have little value.

Oh, I generally agree. If low-ISO noise on a Canon sensor is not reduced, those extra bits would indeed be meaningless. No amount of NR would recover much useful detail. In the context of a future Canon sensor that does produce less noise, such as one with active cooling and a better readout system (perhaps a digital readout system), I can definitely see a move up to 16 bits being useful (which is generally the context I am talking about.) A rumor from a while back here on CR mentioned that Canon had prototypes of a 40mp+ camera out in the field that used active cooling of some kind, and was 16 bits.

Quote
Even assuming Canon does not eliminate banding at a hardware level...the fact that you can wavelet deconvolve them in post and recover the rest of the meaningful detail between them indicates to me that if we could move up to 16 bit ADC, we COULD benefit from the extra DR with adequate debanding.
Did you try these operations on a raw file that was artificially limited to 13 bits?

That I have not done. Out of curiosity, why? I would assume that the improvement in DR would still be real, since my Canon cameras don't even get 11.5 stops of DR according to DXO's Screen DR measure. At 13 bits, assuming I could eliminate banding and reduce noise to a lower St.D, DR should improve...potentially as high as almost 13 stops.
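
For what "artificially limited to 13 bits" could mean in practice, a minimal sketch (file handling omitted; raw14 is just a stand-in array for 14-bit sensor values):
Code:
import numpy as np

raw14 = np.random.default_rng(4).integers(0, 2**14, size=(8, 8), dtype=np.uint16)
raw13 = (raw14 >> 1) << 1         # drop the least significant bit, keep the 14-bit scale

print(np.unique(raw14 - raw13))   # only 0 and 1: at most one DN was discarded per pixel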
