More Sensor Technology Talk [CR1]

jrista said:
ScottyP said:
I think they lost me at "Foveon-like".

So it will have all the negatives of a high MP camera, like massive files to store, and a slowed FPS, and a faster-clogging buffer, but none of what you actually want from all those MP's, namely higher resolution and more detail to spare when doing things like shooting at high ISO, or cropping heavily.

Am I missing something wonderful about Foveon? If so, then so is everyone else based on the failure of Sigma's Foveon bodies to fly off the shelves. Why not copy FUJI sensors instead? That more complex, non-bayer pixel, no filter thing sounds much more interesting to me, anyway.

Crud.

You're making a LOT of assumptions. The "negatives" of high MP cameras can be mitigated. With on-die CP-ADC (which Canon does have a patent for), they can dramatically improve readout speed (they already proved they could read out a 120mp APS-H sensor at 9.5fps). With CFast 2 technology, we'll have faster writes to memory, so the buffer won't necessarily be a problem. With Foveon, we get full color information at every single pixel, full spatial information, no longer need AA filters nearly as strong as is usually necessary with Bayer, etc.

Sigma's failure is that they market their product with lies and misleading information, and their bodies/firmware have never been very good (in comparison to Canon and Nikon bodies anyway.) Basing the success of ALL layered sensor designs on Sigma's success is a fallacy.

Fuji's 6x6 pixel interpolation is just another way of blurring high frequency data, only it is LESS effective than a standard AA filter. I covered this in great detail in a long topic a while back, and the impact of the 6x6 pixel interpolation is quite obvious when comparing fine detail (i.e. hairs, telephone wires, etc.) between Fuji's X-Trans sensor and pretty much any Bayer sensor.

I couldn't care less about what technology "sounds" more interesting. I care about what technology DELIVERS better results. Canon is a very conservative company...if they are going to move to a Foveon-like sensor design, then they must have solved some of the more significant problems that Sigma has encountered, and made it a viable design. They wouldn't bet on it if they hadn't. (And the chances they HAVE solved many of those problems are very high. Canon has a couple of patents on layered Foveon-like pixels that use a different structure both for the photodiodes themselves and for readout; throw in their patents for on-die per-column dual-rate ramp ADC, and Canon could have a real powerhouse sensor in development that could give the competition a run for their money...especially if it hits at a literal 40mp (i.e. 120 million photosites in 40 million actual pixels, not a trumped-up 40mp like Sigma's Foveon.))
+1


I like how people can passionately ridicule or dismiss a new technology sight unseen.....
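For what it's worth, the quoted readout claim can be sanity-checked with some quick arithmetic. This is a rough sketch: the 14-bit sample depth below is an assumption for illustration, not a published Canon spec.

```python
# Back-of-envelope data rate implied by the quoted 120mp @ 9.5fps demo.
# The 14-bit-per-sample depth is an assumption, not a Canon figure.

def sensor_data_rate(megapixels, fps, bits_per_sample=14):
    """Raw off-sensor data rate in bytes per second."""
    return megapixels * 1_000_000 * fps * bits_per_sample / 8

rate = sensor_data_rate(120, 9.5)
print(f"{rate / 1e9:.1f} GB/s")  # ~2.0 GB/s of raw data, before any compression
```

A rate on that order is exactly why faster off-die paths and faster cards (CFast 2) matter before a high-MP, high-fps body becomes practical.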
 
Nitroman said:
Canon ... please stop f*rting about and just give me a higher megapixel camera !

My 21Mp 1Ds3 is six years old, tired and itching to be replaced.

We've waited long enough ...

I've got a friend in the same position: he likes the 1-series bodies and now wants to upgrade to higher resolution and better noise control. I believe that Keith Cooper over at Northlight Images is also waiting... Neither finds the 1D X the right solution, as they are unwilling to spend the money on a camera that doesn't improve on the resolution they get from the 1Ds MkIII.

People assume that their needs are the same as everyone else's and that people who want a high resolution body would prefer a smaller camera. For some, this is indeed the case, but I also believe that there is a significant proportion of 1Ds owners that are happy with their cameras' configuration. How many of these will continue to wait if Canon further delays a replacement, and how many will be tempted to migrate to the likes of the Pentax 645Z?
 
http://www.luminous-landscape.com/reviews/cameras/sigma_sd1_review.shtml


What We See

There has been a lot of nonsense promulgated over the so-called 3D qualities of Foveon / X3 images. I now understand (I think) what people have been talking about, but there really is no magic involved. There are also some issues that are relevant to the current version of SPP software.
• The SD1 does not have a blurring (anti-aliasing) filter. When used with a very good lens this allows extremely fine micro-detail to be recorded, creating prints and on-screen images (sometimes) with a feeling of greater depth and dimensionality. This isn't unique to the SD1 or other Foveon / X3 cameras because it isn't a function of this sensor technology; it is simply a result of not having a softened image caused by an AA filter. This is also seen with the Leica M8 / M9, which similarly do not have an AA filter, and which many users claim have a comparable 3Dish quality to their files. Indeed the absence of an AA filter is part of the appeal of medium format cameras and backs, and in the above comparison series is seen as well with the Pentax 645D.

• The X3 technology of the SD1's sensor means that there is no colour aliasing. Fine if you're shooting fabrics, but not that critical for most users. Where it does seem to play a role is in not requiring any Chroma noise reduction, since Chroma noise is an artifact of the Bayer sensor. Even at moderate to low ISOs there is some chroma noise, possibly even below normal visual sensitivity. But (and this is a conjecture on my part) its lack may play a role in the "look" that Foveon / X3 fans enjoy.

• It's worth noting that while a Bayer filter camera interpolates its image data, so too does a Foveon X3 sensor when one wants to make prints larger than native size. For the first time though, with the new SD1 model and its usable native 15MP size, up-ressing may not be needed except when making very large prints.



Direct Colour vs. Colour Filter Array

Other than sensors that use Foveon X3 technology, all sensors (CCD and CMOS) use what is called a Bayer Matrix so as to be able to reproduce colour. Silicon photosites are not able to record colour directly, and so various filter array technologies have been developed to make this possible. A Bayer matrix is by far the most common, and is used in virtually every camera on the market, from the smallest point-and-shoot to the largest medium format backs.

A Foveon sensor stacks R, G and B sensitive photosites vertically instead of horizontally. The advantages are that no colour filter array is needed, and this means that there is no colour aliasing. No colour aliasing means no need for a blurring (AA) filter, and thus higher apparent resolution. And, though we are all used to counting the total number of photosites in a Bayer sensor as contributing to spatial resolution, they don't all do so. It's just what we're used to.


Doing the Numbers


[Comparison diagram: Bayer Filter Array vs. Foveon X3]
Over the years, since Sigma started championing (and now owning) the Foveon X3 technology, the world has had a problem with the way in which each sensor's megapixel count is stated. It is an understandable problem, because in a Bayer sensor there may be, say, 20 million photosites, but not all of them are used for luminance information. There are two greens for every red and blue combination, and only the greens bear primary responsibility for spatial resolution. The reds and blues are primarily for colour information. But in an X3 sensor the photosites are stacked vertically, and so each one contributes to luminance as well as colour information. So, take the new Sigma DP2M. It has a 46 Megapixel sensor according to Sigma. But since two thirds of these lie in the same spatial position as the other third, there is no actual net gain in spatial resolution.

It's complicated, though. A Bayer sensor doesn't record as much spatial resolution as its basic pixel count (say, 20MP) would indicate. It's actually about a third less than this, but since virtually every other sensor in every other camera on the market suffers the same handicap, no one fusses about it. It's a level playing field; well, almost, if it weren't for Foveon and Sigma's X3.

Sigma made things problematic until recently by claiming that their sensor was 46 Megapixels. Yes, it is, but not when compared to a Bayer. With this new generation of cameras, though, Sigma needs to be given credit: for the first time in their promotional literature, instead of just boldly stating 46MP, they say that this sensor is equivalent to a 30 Megapixel Bayer. While this is still a push, it's a lot closer to the truth.

The problem is compounded by the fact that the Sigma X3 cameras do not have anti-aliasing filters. They don't need them, because there is no colour aliasing because there is no Bayer matrix. This gives an X3 sensor a clear resolution advantage over any Bayer camera that does have an AA filter. Of course, there are some Bayer cameras that don't have AA filters either, such as the Leica M9, S2, Nikon D800E and others, and so as you can see it isn't a simple discussion.

In the end this also isn't a discussion that I particularly care much about. It still riles up the debaters on net forums, but those are about the only ones still paying attention. Most serious photographers that I know are much more interested in a sensor / camera's real-world performance than just its megapixelage, because they know that there is a lot more to image quality than just numbers. In any event, once cameras got above about 11 Megapixels, only the relatively few photographers who make extremely large exhibition prints really cared any more. Yes, 40, 60 even 80MP is nice to have, but it's a sensor's other imaging qualities that make the real difference when it comes to IQ.

So, Sigma calls the DP2M a 46 Megapixel sensor but then qualifies it in the small print and says that it's roughly equivalent to a 30 Megapixel Bayer-equipped sensor. The actual spatial resolution is 15 Megapixels (not at all shabby in its own right for an APS-C sized sensor). If I had to pick a number I would judge the DP2M to be roughly equivalent to about a 24–28 MP Bayer camera. But, as I wrote, this is the stuff of web forum fights, not something that serious photographers really spend that much time fussing about.
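The counting argument above can be made concrete with trivial arithmetic. The 4800 x 3200 per-layer size below is an approximation of the SD1/DP2M generation's output, used purely for illustration:

```python
# The megapixel accounting described above, made explicit.
# 4800 x 3200 per layer is an approximate, illustrative figure.

def foveon_counts(width, height, layers=3):
    spatial = width * height          # true pixel locations (spatial resolution)
    photosites = spatial * layers     # what the "46 MP" marketing number counts
    return spatial, photosites

spatial, photosites = foveon_counts(4800, 3200)
print(f"spatial: {spatial / 1e6:.1f} MP")    # ~15.4 MP of actual spatial resolution
print(f"counted: {photosites / 1e6:.1f} MP") # ~46.1 MP as marketed
```

The same grid counted two different ways is the entire dispute: three stacked photosites per location triples the photosite count without adding a single new spatial sample.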
 
Someone asked what a 5D3 owner would want to induce them to upgrade?

Maybe I'm more relaxed than most people here seem to be, because my two desires - more megapixels and better high ISO performance - are almost certain to be delivered, simply following past progression and what pretty much every other manufacturer has done (my bird work almost always requires cropping, and rarely allows shooting below ISO 400). That's just me, I understand why others' needs differ.

A minor concern but it would be nice to see a higher rated shutter life - I take a lot of photos and have always had that niggling worry at the back of my mind, although replacing the shutter isn't too much of a problem. But the difference between 150,000 and 400,000 is massive.

I don't actually care what sensor technology Canon uses, just the end results (sorry).
 
bbasiaga said:
I don't need tons of megapixels, but if I can't take a picture in complete darkness and recover 24 stops of DR in post, this will be a total fail. It's 2014, Canon!

::)

High ISO is often portrayed - even if only jokingly - like that, but a lot of us wildlife folks (outside of the sunny tropics) could make good use of clean shots in the 12800-25600 range, easily.
 
scyrene said:
bbasiaga said:
I don't need tons of megapixels, but if I can't take a picture in complete darkness and recover 24 stops of DR in post, this will be a total fail. It's 2014, Canon!

::)

High ISO is often portrayed - even if only jokingly - like that, but a lot of us wildlife folks (outside of the sunny tropics) could make good use of clean shots in the 12800-25600 range, easily.

At some point I suspect quantum efficiency of sensors will reach the 70-80% level (at least, I hope it happens someday.) Once it does, we can expect a real world improvement of about 2x for high ISO settings. To get any better than that, we would need larger sensors.
 
jrista said:
scyrene said:
bbasiaga said:
I don't need tons of megapixels, but if I can't take a picture in complete darkness and recover 24 stops of DR in post, this will be a total fail. It's 2014, Canon!

::)

High ISO is often portrayed - even if only jokingly - like that, but a lot of us wildlife folks (outside of the sunny tropics) could make good use of clean shots in the 12800-25600 range, easily.

At some point I suspect quantum efficiency of sensors will reach the 70-80% level (at least, I hope it happens someday.) Once it does, we can expect a real world improvement of about 2x for high ISO settings. To get any better than that, we would need larger sensors.

I'd settle for another stop - actually roughly what the 1Dx can do (from what I've seen) but with more megapixels for cropping. I once read an article about someone using medium format for bird photography, but I don't think that would be practical for most people, given the extra size lenses would have to be for the same reach (*unless* the extra MP allowed for so much cropping as to cancel it out).
 
scyrene said:
jrista said:
scyrene said:
bbasiaga said:
I don't need tons of megapixels, but if I can't take a picture in complete darkness and recover 24 stops of DR in post, this will be a total fail. It's 2014, Canon!

::)

High ISO is often portrayed - even if only jokingly - like that, but a lot of us wildlife folks (outside of the sunny tropics) could make good use of clean shots in the 12800-25600 range, easily.

At some point I suspect quantum efficiency of sensors will reach the 70-80% level (at least, I hope it happens someday.) Once it does, we can expect a real world improvement of about 2x for high ISO settings. To get any better than that, we would need larger sensors.

I'd settle for another stop - actually roughly what the 1Dx can do (from what I've seen) but with more megapixels for cropping. I once read an article about someone using medium format for bird photography, but I don't think that would be practical for most people, given the extra size lenses would have to be for the same reach (*unless* the extra MP allowed for so much cropping as to cancel it out).

The 1D X does not even gain one full stop. It's largely a perceptual thing regarding how good the 1D X looks; technically, the 1D X is only a fraction of a stop better, and the 5D III when downsampled gets similar results (not quite as good, due to less total photodiode area).

When it comes to smaller pixels and croppability, you're going to lose high ISO noise performance. I've mentioned this in other topics, but overall, high ISO performance is fundamentally determined by total sensor area and quantum efficiency. It is a higher Q.E. and a larger sensor area that make the 1D X better in the long run, not its pixel size. Once you bring cropping into the picture, especially with smaller pixels, you start to experience worsening high ISO performance. You're not only putting fewer pixels on the subject, you're using a smaller area of the sensor, which means less total light for your subject.

There really isn't any way that a FF sensor with smaller pixels will produce better results than a FF sensor with bigger pixels. It will have more detail, but per-pixel noise will be higher, so cropping means more noise. Cropping a 1D X means less per-pixel noise, but also less detail. It's a tradeoff...low noise, or more detail. For any given sensor area, the only way to improve noise performance is to improve Q.E. The 1D X has 47% Q.E., which means that to actually double high ISO noise performance with the 1D neXt, you need 94% Q.E. The 5D III actually has 49% Q.E., which means you need 98% to double its noise performance. That's not going to happen...not with consumer-grade devices. The highest-grade (Grade 1) astro CCD sensors, which reach 82% Q.E. or more, are exceptionally expensive. They also require significant cooling (usually two- or three-stage Peltier, i.e. thermo-electric, cooling), which requires SIGNIFICANTLY more power than a DSLR normally draws.

The hope of a super high resolution sensor that performs as well as or better than a 1D X when cropped is just a pipe dream. It will resolve more detail, but that detail will be more noisy, not less noisy.
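The two effects argued above can be sketched under a pure photon-shot-noise model. This is a simplification (real sensors add read noise and dark current), and the Q.E. figures are the ones quoted in the post, not measured values:

```python
import math

# Pure shot-noise model of the two claims above. Simplified: ignores
# read noise and dark current. Q.E. figures are from the post.

def stops_gained(qe_old, qe_new):
    """Sensitivity gain, in stops, from a Q.E. improvement alone."""
    return math.log2(qe_new / qe_old)

def relative_snr(crop_factor):
    """Shot-noise SNR of a cropped subject relative to a full-frame one:
    cropping to 1/crop_factor of the linear dimensions collects
    1/crop_factor^2 of the light, and SNR scales with sqrt(photons)."""
    return 1.0 / crop_factor

print(f"{stops_gained(0.47, 0.94):.2f} stops")  # doubling Q.E. buys 1.00 stop
print(f"{relative_snr(2.0):.2f}x SNR")          # a 2x crop halves shot-noise SNR
```

This is why the argument says Q.E. alone cannot deliver another full stop from a 47-49% starting point: it would require near-100% Q.E.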
 
I am both a Canon and a Sigma X3F (Foveon) user, and there are good arguments for both sensors. There is no question that the Sigma CAMERAS are deficient in many areas, some not related to the sensor itself, but for specialized uses, mostly landscapes, the Sigma Foveon cameras have unique qualities that make it worthwhile to put up with the annoyances. Canon cameras are good all-around cameras; Sigma cameras are specialty cameras.

The Sigma Foveon sensors are perceptually sharper, per pixel, than the Canon Bayer sensors: the 15 Mp APS-C DP#M sensor is (slightly) sharper than my FF Canon 6D 20 Mp Bayer sensor, and users of both the Sigma and the Sony/Nikon FF 24 Mp sensors rank them as similar, with the Sony sensor minimally better in sharpness.

The major rendition difference in the Foveon and Bayer sensors in the current iterations is that there is a certain color subtlety in the Foveon sensor RAW files, sometimes called "film-like", that is not present in the Bayer sensor RAW files. Low local contrast, hue-restricted areas are considerably more detailed on the Foveon sensor files than the Bayer sensor files.

To my mind, the combination of color subtlety and acutance is the one and only reason to go for the Foveon sensor. Foveon sensors excel at landscapes and floral portraits, and are "too sharp" for most portrait use - one is likely to need to do more blemish-removal post-processing than for Bayer sensor files.

Canon Bayer sensors: very well developed computational protocols mean fast high-throughput processing. Great for action because you can get very high still frame rates and you can pack more frames into the buffer.
Sigma Foveon sensors: fewer generations of computational protocols, with new ones being tested; the current (non-Quattro) generation is s-l-o-w, the still frame rate is maybe 3 fps, and it takes several seconds to clear a full buffer (7 photos at ~55 MB each for the 15 Mp DP#M/SD1M sensors).

Canon Bayer sensors: nearly "infinite" post-processing software options that work with RAW files. Seamless integration of your RAW developing program with external programs and plug-ins.
Foveon sensors: 2 RAW processors, Sigma Photo Pro and Iridient Developer. If I want to make a panorama, I need to do the RAW color / contrast / exposure adjustments in Sigma Photo Pro, then export as a 16 bit tif into my pano program. Ditto for HDR program. SPP gives good results but lacks some exceedingly simple local editing maneuvers such as cropping (!). On the other hand, SPP monochrome mode makes gorgeous B&W images from your color-adjusted (fake filter) RAW files.

I have two parallel workflows, and this is a PITA. I have two parallel file trees, one for Bayer, one for Foveon files. I use Lightroom as my organizer and RAW developer, and LR does not recognize (Adobe likely NEVER will recognize) the X3F Foveon files. So, if I want to catalog my Sigma images, I have to export a small jpg next to its parent file, and acquire that proxy jpg in LR. Then, I can score, tag, keyword, etc., but in order to work with the X3F RAW file, I have to leave LR and manually go to the physical location of the X3F file, launch SPP, etc.
P-I-T-A!
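The proxy-JPEG bookkeeping described above can at least be partly automated. A minimal sketch: find every X3F raw that is missing a same-named sidecar JPEG, so you know which files still need an export from Sigma Photo Pro before they can be cataloged in Lightroom. The folder path and extension casing are assumptions for illustration.

```python
# Sketch: list X3F raws that lack a sibling .jpg proxy for cataloging.
# The export itself must still be done in Sigma Photo Pro; this only
# finds the raws that are missing a proxy. Paths are illustrative.

from pathlib import Path

def raws_missing_proxy(root):
    """Yield .X3F files under root that have no sibling .jpg proxy."""
    for raw in Path(root).expanduser().rglob("*.X3F"):
        if not raw.with_suffix(".jpg").exists():
            yield raw

for raw in raws_missing_proxy("~/Pictures/Foveon"):
    print("needs proxy:", raw)
```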

I will be lining up for the Canon 7D2 for bird photography (replacing 60D), and will be very interested in the Canon Foveon-like FF sensor.
 
NancyP said:
I am both a Canon and a Sigma X3F (Foveon) user, and there are good arguments for both sensors. There is no question that the Sigma CAMERAS are deficient in many areas, some not related to the sensor itself, but for specialized uses, mostly landscapes, the Sigma Foveon cameras have unique qualities that make it worthwhile to put up with the annoyances. Canon cameras are good all-around cameras; Sigma cameras are specialty cameras.

The Sigma Foveon sensors are perceptually sharper, per pixel, than the Canon Bayer sensors: the 15 Mp APS-C DP#M sensor is (slightly) sharper than my FF Canon 6D 20 Mp Bayer sensor, and users of both the Sigma and the Sony/Nikon FF 24 Mp sensors rank them as similar, with the Sony sensor minimally better in sharpness.

This is a complete fallacy, and easily demonstrated with actual images. I've disproven this concept many times on these forums recently. I'm happy to use your own images, even, but more pixels, even with an AA filter, still lead to greater sharpness. The difference becomes clear when you downsample, say, the 6D 20mp images to the same image dimensions as the Foveon's. Even when you're talking about a 15mp (what Sigma calls a 46mp) Foveon, on a normalized basis it isn't as sharp as a Bayer.

The Foveon's strengths are not in the spatial resolution/sharpness category. They are in the color fidelity and moiré departments.
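The "normalized basis" comparison described above (downsampling the higher-MP Bayer file to the Foveon's pixel dimensions before judging per-pixel sharpness) can be sketched with a stdlib-only box filter; a real comparison would use a proper resampler such as Lanczos.

```python
# Minimal stdlib-only sketch of resolution normalization: average
# non-overlapping factor x factor blocks of a 2-D grayscale image to
# bring it down to a coarser grid. A real workflow would use a proper
# resampler (e.g. Lanczos in an image editor); this is illustrative.

def box_downsample(pixels, factor):
    """Average factor x factor blocks of a 2-D list of luminance values."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [pixels[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 4x4 checkerboard (Nyquist-frequency detail) averaged 2x becomes
# uniform gray: fine detail is traded for lower per-pixel variance.
flat = box_downsample([[0, 255] * 2, [255, 0] * 2] * 2, 2)
print(flat)  # [[127.5, 127.5], [127.5, 127.5]]
```

The checkerboard example also illustrates the noise side of the argument: averaging pixels suppresses per-pixel variation, which is exactly why downsampled high-MP files look cleaner.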

NancyP said:
The major rendition difference in the Foveon and Bayer sensors in the current iterations is that there is a certain color subtlety in the Foveon sensor RAW files, sometimes called "film-like", that is not present in the Bayer sensor RAW files. Low local contrast, hue-restricted areas are considerably more detailed on the Foveon sensor files than the Bayer sensor files.

This is indeed where the Foveon's strengths truly lie...in color fidelity...richness, saturation, rendition, etc. That's expected, given that each pixel has full color data.

NancyP said:
To my mind, the combination of color subtlety and acutance is the one and only reason to go for the Foveon sensor. Foveon sensors excel at landscapes and floral portraits, and are "too sharp" for most portrait use - one is likely to need to do more blemish-removal post-processing than for Bayer sensor files.

Compare a full-resolution D800 (non-E) image with a Foveon image. The D800 will trounce the Foveon (even the 15mp version) in terms of sharpness. The Foveon cannot touch "too sharp"; the D800 is often so sharp that aliasing becomes a problem (even with the non-E version).
 
jrista said:
ScottyP said:
I think they lost me at "Foveon-like".

So it will have all the negatives of a high MP camera, like massive files to store, and a slowed FPS, and a faster-clogging buffer, but none of what you actually want from all those MP's, namely higher resolution and more detail to spare when doing things like shooting at high ISO, or cropping heavily.

Am I missing something wonderful about Foveon? If so, then so is everyone else based on the failure of Sigma's Foveon bodies to fly off the shelves. Why not copy FUJI sensors instead? That more complex, non-bayer pixel, no filter thing sounds much more interesting to me, anyway.

Crud.

You're making a LOT of assumptions. The "negatives" of high MP cameras can be mitigated. With on-die CP-ADC (which Canon does have a patent for), they can dramatically improve readout speed (they already proved they could read out a 120mp APS-H sensor at 9.5fps). With CFast 2 technology, we'll have faster writes to memory, so the buffer won't necessarily be a problem. With Foveon, we get full color information at every single pixel, full spatial information, no longer need AA filters nearly as strong as is usually necessary with Bayer, etc.

Sigma's failure is that they market their product with lies and misleading information, and their bodies/firmware have never been very good (in comparison to Canon and Nikon bodies anyway.) Basing the success of ALL layered sensor designs on Sigma's success is a fallacy.

Fuji's 6x6 pixel interpolation is just another way of blurring high frequency data, only it is LESS effective than a standard AA filter. I covered this in great detail in a long topic a while back, and the impact of the 6x6 pixel interpolation is quite obvious when comparing fine detail (i.e. hairs, telephone wires, etc.) between Fuji's X-Trans sensor and pretty much any Bayer sensor.

I couldn't care less about what technology "sounds" more interesting. I care about what technology DELIVERS better results. Canon is a very conservative company...if they are going to move to a Foveon-like sensor design, then they must have solved some of the more significant problems that Sigma has encountered, and made it a viable design. They wouldn't bet on it if they hadn't. (And the chances they HAVE solved many of those problems are very high. Canon has a couple of patents on layered Foveon-like pixels that use a different structure both for the photodiodes themselves and for readout; throw in their patents for on-die per-column dual-rate ramp ADC, and Canon could have a real powerhouse sensor in development that could give the competition a run for their money...especially if it hits at a literal 40mp (i.e. 120 million photosites in 40 million actual pixels, not a trumped-up 40mp like Sigma's Foveon.))

We can hope. There's a LOT to overcome, but if they have, while also keeping DR high with that design... :D it could be that the Exmor folks are suddenly looking over in envy at Canon sensors for some years to come.
 
LetTheRightLensIn said:
jrista said:
ScottyP said:
I think they lost me at "Foveon-like".

So it will have all the negatives of a high MP camera, like massive files to store, and a slowed FPS, and a faster-clogging buffer, but none of what you actually want from all those MP's, namely higher resolution and more detail to spare when doing things like shooting at high ISO, or cropping heavily.

Am I missing something wonderful about Foveon? If so, then so is everyone else based on the failure of Sigma's Foveon bodies to fly off the shelves. Why not copy FUJI sensors instead? That more complex, non-bayer pixel, no filter thing sounds much more interesting to me, anyway.

Crud.

You're making a LOT of assumptions. The "negatives" of high MP cameras can be mitigated. With on-die CP-ADC (which Canon does have a patent for), they can dramatically improve readout speed (they already proved they could read out a 120mp APS-H sensor at 9.5fps). With CFast 2 technology, we'll have faster writes to memory, so the buffer won't necessarily be a problem. With Foveon, we get full color information at every single pixel, full spatial information, no longer need AA filters nearly as strong as is usually necessary with Bayer, etc.

Sigma's failure is that they market their product with lies and misleading information, and their bodies/firmware have never been very good (in comparison to Canon and Nikon bodies anyway.) Basing the success of ALL layered sensor designs on Sigma's success is a fallacy.

Fuji's 6x6 pixel interpolation is just another way of blurring high frequency data, only it is LESS effective than a standard AA filter. I covered this in great detail in a long topic a while back, and the impact of the 6x6 pixel interpolation is quite obvious when comparing fine detail (i.e. hairs, telephone wires, etc.) between Fuji's X-Trans sensor and pretty much any Bayer sensor.

I couldn't care less about what technology "sounds" more interesting. I care about what technology DELIVERS better results. Canon is a very conservative company...if they are going to move to a Foveon-like sensor design, then they must have solved some of the more significant problems that Sigma has encountered, and made it a viable design. They wouldn't bet on it if they hadn't. (And the chances they HAVE solved many of those problems are very high. Canon has a couple of patents on layered Foveon-like pixels that use a different structure both for the photodiodes themselves and for readout; throw in their patents for on-die per-column dual-rate ramp ADC, and Canon could have a real powerhouse sensor in development that could give the competition a run for their money...especially if it hits at a literal 40mp (i.e. 120 million photosites in 40 million actual pixels, not a trumped-up 40mp like Sigma's Foveon.))

We can hope. There's a LOT to overcome, but if they have, while also keeping DR high with that design... :D it could be that the Exmor folks are suddenly looking over in envy at Canon sensors for some years to come.

Yeah, it's definitely a LOT to overcome. There is no question at this point that Canon is behind the curve on sensor tech. I watch patents pretty closely these days, and Canon is practically non-existent in the new-patent arena. Now, that is not to say that they don't have any. They do... they have dual-scale CP-ADC (basically a CP-ADC with two alternate readout rates, since a slower readout, when possible (i.e. when the exposure time is long enough to support a slower readout rate), usually results in cleaner read noise because they can switch to a lower-frequency counter); they have a few patents on layered Foveon-style pixels; and they have a number of patents on low-noise readout concepts, such as power disconnection (which supposedly can nearly eliminate dark current noise... I'm much more interested in that for astrophotography applications, but it could help a little for very high ISO readout as well).

The big question, given that Canon HAS these patents, is: will they actually use them in a sensor, and when?
 
jrista said:
scyrene said:
jrista said:
scyrene said:
bbasiaga said:
I don't need tons of megapixels, but if I can't take a picture in complete darkness and recover 24 stops of DR in post, this will be a total fail. It's 2014, Canon!

::)

High ISO is often portrayed - even if only jokingly - like that, but a lot of us wildlife folks (outside of the sunny tropics) could make good use of clean shots in the 12800-25600 range, easily.

At some point I suspect quantum efficiency of sensors will reach the 70-80% level (at least, I hope it happens someday.) Once it does, we can expect a real world improvement of about 2x for high ISO settings. To get any better than that, we would need larger sensors.

I'd settle for another stop - actually roughly what the 1Dx can do (from what I've seen) but with more megapixels for cropping. I once read an article about someone using medium format for bird photography, but I don't think that would be practical for most people, given the extra size lenses would have to be for the same reach (*unless* the extra MP allowed for so much cropping as to cancel it out).

The 1D X does not even gain one full stop. It's largely a perceptual thing regarding how good the 1D X looks; technically, the 1D X is only a fraction of a stop better, and the 5D III when downsampled gets similar results (not quite as good, due to less total photodiode area).

When it comes to smaller pixels and croppability, you're going to lose high ISO noise performance. I've mentioned this in other topics, but overall, high ISO performance is fundamentally determined by total sensor area and quantum efficiency. It is a higher Q.E. and a larger sensor area that make the 1D X better in the long run, not its pixel size. Once you bring cropping into the picture, especially with smaller pixels, you start to experience worsening high ISO performance. You're not only putting fewer pixels on the subject, you're using a smaller area of the sensor, which means less total light for your subject.

There really isn't any way that a FF sensor with smaller pixels will produce better results than a FF sensor with bigger pixels. It will have more detail, but per-pixel noise will be higher, so cropping means more noise. Cropping a 1D X means less per-pixel noise, but also less detail. It's a tradeoff...low noise, or more detail. For any given sensor area, the only way to improve noise performance is to improve Q.E. The 1D X has 47% Q.E., which means that to actually double high ISO noise performance with the 1D neXt, you need 94% Q.E. The 5D III actually has 49% Q.E., which means you need 98% to double its noise performance. That's not going to happen...not with consumer-grade devices. The highest-grade (Grade 1) astro CCD sensors, which reach 82% Q.E. or more, are exceptionally expensive. They also require significant cooling (usually two- or three-stage Peltier, i.e. thermo-electric, cooling), which requires SIGNIFICANTLY more power than a DSLR normally draws.

Hoping for a super-high-resolution sensor that performs as well as or better than a 1D X when cropped is just a pipe dream. It will resolve more detail, but that detail will be more noisy, not less noisy.

Well that's depressing. How much would you say future improvements in software noise reduction will improve the final output?
 
Upvote 0
scyrene said:
jrista said:
scyrene said:
jrista said:
scyrene said:
bbasiaga said:
I don't need tons of megapixels, but if I can't take a picture in complete darkness and recover 24 stops of DR in post, this will be a total fail. It's 2014, Canon!

::)

High ISO is often portrayed - even if only jokingly - like that, but a lot of us wildlife folks (outside of the sunny tropics) could make good use of clean shots in the 12800-25600 range, easily.

At some point I suspect quantum efficiency of sensors will reach the 70-80% level (at least, I hope it happens someday.) Once it does, we can expect a real world improvement of about 2x for high ISO settings. To get any better than that, we would need larger sensors.

I'd settle for another stop - actually roughly what the 1D X can do (from what I've seen), but with more megapixels for cropping. I once read an article about someone using medium format for bird photography, but I don't think that would be practical for most people, given how much larger lenses would have to be for the same reach (*unless* the extra MP allowed for so much cropping as to cancel it out).

The 1D X does not even get one full stop. It's largely a perceptual thing regarding how good the 1D X looks, but technically, the 1D X is only a fraction of a stop better, and the 5D III when downsampled gets similar results (not quite as good due to less total photodiode area).

When it comes to smaller pixels and croppability, you're going to lose high ISO noise performance. I've mentioned this in other topics, but overall, high ISO performance is fundamentally a function of total sensor area and quantum efficiency. It is a higher Q.E. and a larger sensor area that make the 1D X better in the long run, not its pixel size. Once you bring cropping into the picture, especially with smaller pixels, you start to experience worsening high ISO performance: you're not only putting fewer pixels on subject, you're using a smaller area of the sensor, which means less total light for your subject.

There really isn't any way that a FF sensor with smaller pixels will produce better results than a FF sensor with bigger pixels. It will have more detail, but per-pixel noise will be higher, so cropping means more noise. Cropping a 1D X means less per-pixel noise, but also less detail. It's a tradeoff: low noise, or more detail. For any given sensor area, the only way to improve noise performance is to improve Q.E. The 1D X has 47% Q.E., which means that to actually double high ISO noise performance with the 1D neXt, you need 94% Q.E. The 5D III actually has 49% Q.E., which means you need 98% to double its noise performance. That's not going to happen, not with consumer-grade devices. The highest-grade (Grade 1) astro CCD sensors, with 82% Q.E. or more, are exceptionally expensive. They also require significant cooling (usually two- or three-stage Peltier, i.e. thermoelectric, cooling), which draws SIGNIFICANTLY more power than a DSLR normally uses.

Hoping for a super-high-resolution sensor that performs as well as or better than a 1D X when cropped is just a pipe dream. It will resolve more detail, but that detail will be more noisy, not less noisy.

Well that's depressing. How much would you say future improvements in software noise reduction will improve the final output?

Software is a difficult thing to discuss. The biggest reason why is: which software? There are countless algorithms for reducing noise: basic averaging/blurring algorithms, wavelet algorithms, deconvolution algorithms, etc. Some denoising tools are more complex, and thus more difficult to use effectively, but when used effectively they can produce significantly better results. Some denoising tools are extremely simple, but don't produce results that are as good.

Fundamentally, though, pretty much every algorithm suffers from the same core problem, to varying degrees: they blur detail. Your most basic denoising algorithm takes high frequency data and blurs it by a certain amount. For each pixel, it takes some component of the surrounding pixels, generates an averaged result (with some given weight, usually controlled by a UI slider somewhere), and replaces the original pixel value with the weighted average. Do that for each and every pixel, and each and every pixel ends up blended with its neighboring pixels. Varying matrix sizes (e.g. 3x3, 6x6) can be used when performing this kind of basic noise reduction, spreading the effect out more or less.
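As a toy illustration only (my own sketch, not the algorithm of any particular raw converter), the weighted-average scheme just described fits in a few lines of numpy; the hypothetical `strength` parameter plays the role of the UI slider:

```python
import numpy as np

def box_denoise(img, strength=0.5, radius=1):
    """Naive averaging noise reduction, as described above.

    Each pixel is replaced by a blend of itself and the mean of its
    (2*radius+1)^2 neighborhood. strength=0 leaves the image untouched;
    strength=1 is a full box blur. Edges are handled by reflection.
    """
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, radius, mode="reflect")
    k = 2 * radius + 1
    # Accumulate the neighborhood sum with simple shifted slices.
    acc = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            acc += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    neighborhood_mean = acc / (k * k)
    return (1 - strength) * img + strength * neighborhood_mean

# A noisy flat patch gets smoother: its standard deviation drops...
rng = np.random.default_rng(0)
noisy = 100 + rng.normal(0, 10, size=(64, 64))
print(noisy.std() > box_denoise(noisy, strength=0.8).std())  # True
```

...and, exactly as the post says, any real edge in the image would be smeared by the same averaging, which is why detail is lost.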

Wavelets and deconvolution tend to be more intelligent about how they reduce noise. They either try to generate a "kernel" based on the information in the image, or try to break the image up into multiple spatial frequency levels and apply a different degree of noise reduction to each wavelet level, in an attempt to blur some frequencies while preserving others, with the ultimate goal of preserving detail. The problem with these algorithms is that, while they can reduce noise without blurring detail as much, they often introduce more artifacts: halos, excessive acutance, blotching, things like that.

Noise reduction is best applied in extreme moderation, in which case it will always have very significant limitations. It can only take you so far, and the less noisy your images start out, the better the results will be. This is one of the reasons why the "low" resolution images from the 1D X clean up so well: 1D X pixels start out with significantly more dynamic range than sensors with smaller pixels, so there is less per-pixel noise to start with, and a minimal amount of NR is perceived as being more effective. (It really isn't; there was less noise to start with, so less noise to remove, so a small amount of NR has a greater relative effect than it does on images that start out noisier. To ridiculously simplify things down to simple numbers: if a 1D X has noise of 7, a 5D III has noise of 12, and you reduce noise by 5, the 1D X is left with noise of 2, whereas the 5D III is left with 7; it's as bad after NR as the 1D X was before NR.)

Noise reduction algorithms are already extremely powerful and extremely intelligent. I recently purchased software called PixInsight, which is primarily an astrophotography processing program, but its tools can be used on regular photos as well. It has a whole suite of noise reduction tools that work in different ways. Depending on the kind of noise you have, and the region of your image that you wish to denoise, PixInsight's noise reduction tools can be more effective than any other tool... but as advanced as they are, they are still not perfect. Wavelets still introduce mottling and blotching, deconvolution can still introduce halos, median sharpening and denoising can still introduce sparkles and panda eyes, etc.

The best way to reduce noise is to increase the rate of conversion of light to charge in a pixel, increase the maximum charge of each pixel, increase the total maximum charge of the sensor, etc. The more light you can convert into charge in a given time, the less noise you will have. I don't expect to see a major jump in Q.E. any time soon. I suspect Canon's next round of sensors will be around 51-53%, maybe 56% at most, up from the current 47-49%. That will certainly help in the noise department, but it is nowhere even remotely close to supporting a true one-stop improvement in noise. It's less than a third of a stop (barely more than a tenth of a stop at the low end, even!). Elimination of color filters in favor of color splitting, a reduction in heat conversion (i.e. with light pipes or BSI), reduction in reflection (i.e. with black silicon), etc. can all increase the rate at which photons convert to charge, and so increase Q.E. These technologies exist, and lots of patents exist, however I don't see any patents for these specific kinds of technology from Canon, so I don't expect them to show up in Canon's next sensor designs. A layered sensor is capable of converting more light to charge per pixel, however that charge is divvied up amongst different color channels, so its effectiveness is attenuated. A foveon-like design from Canon is a step in the right direction, but I don't expect the impact on noise to be all that much (and we'll see a change in which color channels are noisiest: instead of blue being noisiest, red is likely to become noisiest, green will become noisier, and blue would likely see a modest drop in noise levels.)
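Running the Q.E. estimates above through the same shot-noise arithmetic (one stop = a 2x increase in light converted to charge; a rough sketch of my own, not a measurement) shows just how small those gains would be:

```python
import math

def stops_gained(qe_old, qe_new):
    # One stop of exposure = a 2x increase in photons converted to charge.
    return math.log2(qe_new / qe_old)

# Speculative next-generation Q.E. values from the post, vs. today's 47%.
for qe_new in (0.51, 0.53, 0.56):
    print(f"47% -> {qe_new:.0%}: {stops_gained(0.47, qe_new):+.2f} stops")
```

Even the optimistic 56% case comes out to roughly a quarter of a stop, which is why a Q.E. bump alone can't deliver the one-stop jump people hope for.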
 
Upvote 0
scyrene said:
bbasiaga said:
I don't need tons of megapixels, but if I can't take a picture in complete darkness and recover 24 stops of DR in post, this will be a total fail. It's 2014, Canon!

::)

High ISO is often portrayed - even if only jokingly - like that, but a lot of us wildlife folks (outside of the sunny tropics) could make good use of clean shots in the 12800-25600 range, easily.

I'll second that, although I am not into birding. Give me a clean ISO 25k and I am a happy camper ;-) Low light is my passion! But as jrista said: a lower pixel count is the basis for better high ISO. So hopefully they at least keep to the current 22.3 MP; if not, I guess we're in the same boat again despite new sensor tech. Then a 6D looks much more attractive to me in the next cycle, as long as they keep the same MP count... I wouldn't consider it a downgrade, as I am only looking for the best affordable low-light/high-ISO IQ.
 
Upvote 0
jrista said:
scyrene said:
Well that's depressing. How much would you say future improvements in software noise reduction will improve the final output?

Software is a difficult thing to discuss. The biggest reason why is: which software? There are countless algorithms for reducing noise: basic averaging/blurring algorithms, wavelet algorithms, deconvolution algorithms, etc. Some denoising tools are more complex, and thus more difficult to use effectively, but when used effectively they can produce significantly better results. Some denoising tools are extremely simple, but don't produce results that are as good.

Fundamentally, though, pretty much every algorithm suffers from the same core problem, to varying degrees: they blur detail. Your most basic denoising algorithm takes high frequency data and blurs it by a certain amount. For each pixel, it takes some component of the surrounding pixels, generates an averaged result (with some given weight, usually controlled by a UI slider somewhere), and replaces the original pixel value with the weighted average. Do that for each and every pixel, and each and every pixel ends up blended with its neighboring pixels. Varying matrix sizes (e.g. 3x3, 6x6) can be used when performing this kind of basic noise reduction, spreading the effect out more or less.

Wavelets and deconvolution tend to be more intelligent about how they reduce noise. They either try to generate a "kernel" based on the information in the image, or try to break the image up into multiple spatial frequency levels and apply a different degree of noise reduction to each wavelet level, in an attempt to blur some frequencies while preserving others, with the ultimate goal of preserving detail. The problem with these algorithms is that, while they can reduce noise without blurring detail as much, they often introduce more artifacts: halos, excessive acutance, blotching, things like that.

Noise reduction is best applied in extreme moderation, in which case it will always have very significant limitations. It can only take you so far, and the less noisy your images start out, the better the results will be. This is one of the reasons why the "low" resolution images from the 1D X clean up so well: 1D X pixels start out with significantly more dynamic range than sensors with smaller pixels, so there is less per-pixel noise to start with, and a minimal amount of NR is perceived as being more effective. (It really isn't; there was less noise to start with, so less noise to remove, so a small amount of NR has a greater relative effect than it does on images that start out noisier. To ridiculously simplify things down to simple numbers: if a 1D X has noise of 7, a 5D III has noise of 12, and you reduce noise by 5, the 1D X is left with noise of 2, whereas the 5D III is left with 7; it's as bad after NR as the 1D X was before NR.)

Noise reduction algorithms are already extremely powerful and extremely intelligent. I recently purchased software called PixInsight, which is primarily an astrophotography processing program, but its tools can be used on regular photos as well. It has a whole suite of noise reduction tools that work in different ways. Depending on the kind of noise you have, and the region of your image that you wish to denoise, PixInsight's noise reduction tools can be more effective than any other tool... but as advanced as they are, they are still not perfect. Wavelets still introduce mottling and blotching, deconvolution can still introduce halos, median sharpening and denoising can still introduce sparkles and panda eyes, etc.

The best way to reduce noise is to increase the rate of conversion of light to charge in a pixel, increase the maximum charge of each pixel, increase the total maximum charge of the sensor, etc. The more light you can convert into charge in a given time, the less noise you will have. I don't expect to see a major jump in Q.E. any time soon. I suspect Canon's next round of sensors will be around 51-53%, maybe 56% at most, up from the current 47-49%. That will certainly help in the noise department, but it is nowhere even remotely close to supporting a true one-stop improvement in noise. It's less than a third of a stop (barely more than a tenth of a stop at the low end, even!). Elimination of color filters in favor of color splitting, a reduction in heat conversion (i.e. with light pipes or BSI), reduction in reflection (i.e. with black silicon), etc. can all increase the rate at which photons convert to charge, and so increase Q.E. These technologies exist, and lots of patents exist, however I don't see any patents for these specific kinds of technology from Canon, so I don't expect them to show up in Canon's next sensor designs. A layered sensor is capable of converting more light to charge per pixel, however that charge is divvied up amongst different color channels, so its effectiveness is attenuated. A foveon-like design from Canon is a step in the right direction, but I don't expect the impact on noise to be all that much (and we'll see a change in which color channels are noisiest: instead of blue being noisiest, red is likely to become noisiest, green will become noisier, and blue would likely see a modest drop in noise levels.)

I'll look into that software, thanks for the tip! And you've given me increased respect for what noise reduction is doing - it sounds hugely complicated. I know nothing about programming, but I wonder how intelligent it could be made - my eye can tell what is noise and what is detail by parsing the scene, knowing what the photograph is *of*. I wonder if machine intelligence can move in that direction? Even if it was just a matter of cases - telling it 'this area is feathers, so expect lots of fine linear detail' etc. Asking a lot, no doubt :)

I think the conclusion is at this point, have a large megapixel camera for good light (for cropping), and a lower-MP camera with better low light noise for dusk and dawn (I'm intrigued by the A7s in this regard), and accept I won't have the same reach :(

(I should stress I think the current technology is still amazing).
 
Upvote 0
Jrista, I was comparing the DP2M to the 6D: 15 MP of physical pixels (which Sigma has called "46 MP" in the past) versus 20 physical Bayer MP. In low-ISO situations, the DP2M does look a tad sharper than the 6D, despite the 5 MP disadvantage. I attribute this to the color fidelity, because my subjects are generally landscapes with subtle color variation in leaves, grasses, etc. It is not a 100% fair comparison, because my current 50mm lens is a pre-computer-design manual AIS Nikkor 50mm f/1.2 used on an adapter on the 6D, which does look pretty darn sharp at f/4 to f/5.6 and still sharp at f/8. The DP2M's fixed Sigma 30mm f/2.8 lens (45mm equivalent) at the same f-stops looks sharper, but then again, the lens is 30 years younger. The real test would be the new Sigma 50mm f/1.4 Art: new design, no adapter, and the best affordable lens, resolution-wise, for the EF mount.
 
Upvote 0
NancyP said:
Jrista, I was comparing the DP2M to the 6D: 15 MP of physical pixels (which Sigma has called "46 MP" in the past) versus 20 physical Bayer MP. In low-ISO situations, the DP2M does look a tad sharper than the 6D, despite the 5 MP disadvantage. I attribute this to the color fidelity, because my subjects are generally landscapes with subtle color variation in leaves, grasses, etc. It is not a 100% fair comparison, because my current 50mm lens is a pre-computer-design manual AIS Nikkor 50mm f/1.2 used on an adapter on the 6D, which does look pretty darn sharp at f/4 to f/5.6 and still sharp at f/8. The DP2M's fixed Sigma 30mm f/2.8 lens (45mm equivalent) at the same f-stops looks sharper, but then again, the lens is 30 years younger. The real test would be the new Sigma 50mm f/1.4 Art: new design, no adapter, and the best affordable lens, resolution-wise, for the EF mount.

Thanks for the additional facts! That's always helpful when trying to understand things like this.

One of the things photographers don't quite understand, possibly in no small part due to Sigma's marketing of Foveon, is that DSLRs already have full luminance data... they only really suffer a loss in color resolution, and therefore color fidelity. Standard Bayer interpolation uses 2x2 matrices of RGBG sensor pixels, in overlapping fashion, to produce RGB pixels in an image rendered to screen or saved to a file (i.e. TIFF). Effectively, the dimensions of an image from a Bayer sensor are a count of the intersections between overlapping 2x2 pixel matrices on the sensor. This tends to cost you a little luminance spatial resolution and a fair bit of chroma spatial resolution, and is prone to artifacts like stair-stepping.
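For illustration, here is a minimal numpy sketch of the overlapping-2x2 scheme just described, for an RGGB mosaic. This is the naive approach, not what any particular raw converter does, and the function name is my own:

```python
import numpy as np

def superpixel_demosaic(mosaic):
    """Overlapping-2x2 demosaic of an RGGB Bayer mosaic.

    Every 2x2 window of an RGGB mosaic contains exactly one red, two
    green, and one blue sample, so each window yields one RGB output
    pixel. Output is (H-1, W-1, 3) for an (H, W) mosaic, matching the
    "count of intersections between 2x2 matrices" description above.
    """
    m = np.asarray(mosaic, dtype=float)
    # Parity of (row, col) determines the color under an RGGB filter.
    r = np.zeros_like(m)
    g = np.zeros_like(m)
    b = np.zeros_like(m)
    r[0::2, 0::2] = m[0::2, 0::2]
    g[0::2, 1::2] = m[0::2, 1::2]
    g[1::2, 0::2] = m[1::2, 0::2]
    b[1::2, 1::2] = m[1::2, 1::2]

    def win_sum(ch):  # sum over each overlapping 2x2 window
        return ch[:-1, :-1] + ch[:-1, 1:] + ch[1:, :-1] + ch[1:, 1:]

    red = win_sum(r)          # one R sample per window
    green = win_sum(g) / 2.0  # two G samples per window, averaged
    blue = win_sum(b)         # one B sample per window
    return np.stack([red, green, blue], axis=-1)

# A uniform gray scene (every photosite reads 100) demosaics to gray.
flat = np.full((6, 8), 100.0)
print(superpixel_demosaic(flat)[0, 0])  # [100. 100. 100.]
```

The chroma loss the post talks about falls out of this directly: each output pixel's red and blue values are borrowed from single samples shared with neighboring windows, so color resolution is lower than luminance resolution.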

More advanced algorithms, like AHD (Adaptive Homogeneity-Directed demosaicing), aim to maximize the luminance detail in each individual pixel (using the luma information directly, rather than interpolating it), while concurrently interpolating chroma data from neighboring pixels in a manner that maximizes chroma spatial resolution and eliminates stair-stepping and other artifacts. Lightroom, Apple Aperture, RawTherapee, and darktable all use or support AHD, which means that, generally speaking, demosaiced RAW images have nearly the full resolution of modern Bayer sensors.

A sharper lens used with the 6D, when demosaiced with something like Lightroom, will produce superior sharpness compared to the Foveon (even the 15 MP Foveon). It will have lower color fidelity, but because of the higher-resolution luminance detail, that won't matter all that much. Color depth on a Bayer sensor can be extremely high. Canon sensors don't have the best color fidelity, but Sony Exmor sensors have very high color fidelity.
 
Upvote 0
The last real news about the 7D II was that Canon was having problems manufacturing the new sensor for it. Since then, nothing. There is no need to further discuss this new camera's engineering; the discussion should focus on the production problems.
 
Upvote 0