Canon Announces the Development of New High Sensitivity Sensor

Status
Not open for further replies.
mrsfotografie said:
AprilForever said:
I see a grim trend in a direction I do not like. As the old saying goes, the squeaking wheel gets the oil. I remember a Canon experimental camera from a while ago, a weird white spaceship-looking thing, pitched with talk of the future of photography being the imager videoing the subject and then selecting the best frames. This is NOT a direction I want things going; therefore, I make my voice known. Moreover, I am certain that I am not alone in this.

Same here, I still find it strange when I see someone using a DSLR as a video camera. Ergonomically it's hopeless, so I'm hoping for a split into stills and video hardware. As for the original post: happy to see a report that Canon is continuing to advance its sensor technology, as we should expect.

I strongly support your opinion. And yes, I hope the new tech relates to stills sensors as well, and I expect them to announce it. Hopefully within this year...
 
I wonder if this is a supercooled sensor. As pixel area grows, so does the amount of dark current in the pixel, which means higher read noise. The best way that I know of to reduce noise from dark current is by cooling the sensor. There have been rumors in the past that Canon was working on some kind of active cooling technology...maybe this is the first glimpse of the future to come? If the sensor can detect 0.03 lux at high ISO, that has to mean a proportional drop in noise overall...even if read noise is probably higher at low ISO, it should still be much lower than in Canon sensors today.
 
chauncey said:
Can anyone explain why DSLR sensors are not square, which would provide more viewing area?
This has been gone over more times on this forum than I could ever count. Simple answer: a square sensor with a diagonal of 43.3mm (same as 36x24mm) would not work due to the extra height needed for the reflex mirror and the flange distance of the EOS system (the taller mirror would hit the rear element or mount of the lens). And as others have pointed out, not all lenses have round baffles to produce the full image circle (only to cover the portion of the image circle that contains the 36x24mm frame).
 
Jackson_Bill said:
TrumpetPower! said:
Freelancer said:
the sensor’s pixels and readout circuitry employ new technologies that reduce noise, which tends to increase as pixel size increases.

I'm not a native English speaker, but that sounds wrong... no?

More noise with larger photosites?

I believe the point they're making is that, though the noise per pixel generally goes down with larger sensors, the noise per unit of area generally goes up.

So, with your low megapickle large area per pixel sensor, at 100% resolution (pixel peeping) things will look cleaner, but there'll be more total noise in the image as a whole than with a high megapickle small area per pixel sensor.

I'm not sure I follow the "noise per unit of area" thing.

I guess we really have to think of this in the context of very low light levels and not as compared to our DSLR sensors. Certainly, for the same level of illumination each of these 19 micron sensels would pick up more photons than a 4.39 micron sensel (like the 7D's), and have lower shot noise than the smaller sensel. I'm assuming (but don't know for sure) that you'd have the same amount of read noise for the two sensels, and hence a better signal-to-noise ratio for the bigger sensel.
However, if you're pushing to record lower and lower levels of light intensity, then maybe what they mean is "as you try to read lower light levels (and use larger sensels), the shot noise becomes important, and with lower light levels the read noise also has a bigger impact on the total noise."

Like I said, I'd be interested to see what neuro and jrista have to say.

I do not have the background to speak to how read noise scales with sensor size but, for the same illumination, photon shot noise will certainly increase for a larger pixel. Specifically, this noise will increase with the square root of the area. However, the signal will increase in proportion to the area, leading to a noise-to-signal ratio that decreases for an individual larger pixel (like 1/sqrt(size)), as the conventional wisdom of internet message boards expects.

However, the noise(-to-signal) ratio of an image is not that of an individual pixel. For reasons I and others have gone into before, if you want to compare the noise between two sensors with different resolutions, you need to divide the per-pixel noise-to-signal ratio by the square root of the number of pixels to get a figure you can compare between the two sensors. In other words, there are cases where a lot of low-SNR pixels is a lot better than a smaller number of higher-SNR pixels.

Long story short, if you have two sensors with the same overall sensor size and quantum efficiency, and a full well capacity and read noise (*) that scale (i.e. increase) with the photosite size, an image converted to a given resolution from the two sensors will have exactly the same SNR and dynamic range, even if the higher-resolution sensor has a worse SNR if you only look at one pixel. (But, under the right conditions, the high-resolution sensor can obviously give a ... higher-resolution final image. If storage and processing are "cheap", then under these assumed conditions you always want all the megapixels you can get.)


Now, if this new sensor does something like allow large pixels with the same (or lower) read noise per pixel than small pixels, then we have something (remember, in the analysis above you got the same overall picture even if the read noise per pixel increased with pixel area). But I will have to wait for someone more knowledgeable than me about that to chime in.

(*) There might be a square root of the photosite size missing in here; I haven't had my coffee yet this morning, and am too lazy to go looking for it.
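The "long story short" equivalence above can be checked with a minimal Monte Carlo sketch in Python. All parameters here are illustrative assumptions: Poisson shot noise, and (per the footnote) read-noise variance proportional to photosite area.

```python
import numpy as np

rng = np.random.default_rng(0)

PHOTONS_PER_AREA = 1000.0   # mean photon count per unit of sensor area (assumed)
READ_VAR_PER_AREA = 25.0    # read-noise variance per unit area (the footnote's scaling)

def binned_snr(n_side, sensor_area=1.0, trials=200_000):
    """SNR of an n_side x n_side sensor after binning it down to one output 'pixel'."""
    pix_area = sensor_area / n_side**2
    shot = rng.poisson(PHOTONS_PER_AREA * pix_area, (trials, n_side, n_side))
    read = rng.normal(0.0, np.sqrt(READ_VAR_PER_AREA * pix_area),
                      (trials, n_side, n_side))
    binned = (shot + read).sum(axis=(1, 2))   # normalize both sensors to the same size
    return binned.mean() / binned.std()

# One big pixel vs. sixteen small pixels covering the same total area:
print(binned_snr(1), binned_snr(4))   # the two SNRs agree to within Monte Carlo error
```

Per pixel, the small-pixel sensor is far noisier (mean 62.5 photons vs. 1000), but after size normalization both come out around SNR ≈ 31 under these assumed parameters, which is the point of the argument above.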
 
Is it just me or does this seem like a bunch of made up nonsense?

We're already capturing 50% of the light that enters the camera. And noise and clarity under dark conditions are a result of quantum distribution of electrons. Meaning the noise you capture in a noisy photo is the result of noise from the light itself, not from the camera. You cannot capture less noise than exists in the incoming light, and you cannot capture more light than exists.

These videos seem to show a 4 stop improvement. My guess is that they are simulated by a marketing company and that this is designed to be misleading.
 
Jackson_Bill said:
TrumpetPower! said:
Freelancer said:
the sensor’s pixels and readout circuitry employ new technologies that reduce noise, which tends to increase as pixel size increases.

I'm not a native English speaker, but that sounds wrong... no?

More noise with larger photosites?

I believe the point they're making is that, though the noise per pixel generally goes down with larger sensors, the noise per unit of area generally goes up.

So, with your low megapickle large area per pixel sensor, at 100% resolution (pixel peeping) things will look cleaner, but there'll be more total noise in the image as a whole than with a high megapickle small area per pixel sensor.

I'm not sure I follow the "noise per unit of area" thing.

I guess we really have to think of this in the context of very low light levels and not as compared to our DSLR sensors. Certainly, for the same level of illumination each of these 19 micron sensels would pick up more photons than a 4.39 micron sensel (like the 7D's), and have lower shot noise than the smaller sensel. I'm assuming (but don't know for sure) that you'd have the same amount of read noise for the two sensels, and hence a better signal-to-noise ratio for the bigger sensel.
However, if you're pushing to record lower and lower levels of light intensity, then maybe what they mean is "as you try to read lower light levels (and use larger sensels), the shot noise becomes important, and with lower light levels the read noise also has a bigger impact on the total noise."

Like I said, I'd be interested to see what neuro and jrista have to say.

There are inverse factors at play. Read noise is initially caused by dark current flowing through the sensor (with secondary downstream contributors as well). With a larger pixel area we have a larger photodiode, which means more area for current flow. That increases the contribution to read noise. By how much I can't say...depends on the materials used for the sensor, doping, and a number of other factors. I don't have enough information to offer specific numbers.

On the flip side, the larger pixel area means a proportionally greater signal. The 1D X has a 90,000+ electron full well capacity (FWC). Assuming a 7.2x larger pixel area and the same Q.E., full well capacity should be somewhere around 650,000 electrons. So, even at the lowest signal levels, there should be a far greater potential charge, simply because there is so much physical area for photons to strike per pixel. Assuming the sensor has a greater Q.E. than the 1D X sensor, then the potential for true sensitivity is even greater; however, the FWC is fixed by area, so a higher sensitivity simply means the sensor saturates faster.
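That back-of-the-envelope scaling is easy to reproduce. The pixel pitches below are my assumptions (roughly 6.95µm for the 1D X and the 19µm quoted for the new sensor), and real FWC rarely scales perfectly linearly with area:

```python
# Crude check of the "FWC scales with photodiode area" estimate above.
pitch_1dx_um = 6.95     # approximate 1D X pixel pitch (assumption)
pitch_new_um = 19.0     # pitch attributed to the new high-sensitivity sensor
fwc_1dx_e = 90_000      # electrons, the figure quoted above

area_ratio = (pitch_new_um / pitch_1dx_um) ** 2
fwc_new_e = fwc_1dx_e * area_ratio
print(f"area ratio: {area_ratio:.1f}x -> estimated FWC: {fwc_new_e:,.0f} e-")
# ~7.5x the area -> roughly 670,000 e-, the same ballpark as the ~650,000 above
```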

The interesting thing about dark current, the prime contributor to read noise at the time of readout, is that it doubles with every 10°C increase in temperature. Conversely, it halves with every 10°C drop. Assuming a "room temperature" sensor (~23°C), a 10° drop in temperature should halve the dark current. Now, it is unlikely a sensor will operate at room temperature; its density and the amount of current used for readout will raise its temperature by a certain amount. Let's say normal usage increases the sensor temperature by 10-20°. To get any real benefit, we would need to cool by at least 30° to double read noise performance. According to the specifications of scientific-grade sensors, which use Peltier cooling on CCD sensors, by around -80°C dark current is ~200x lower than at normal operating temperatures. From room temperature that is a drop of over 100°C, so the improvement in dark current is non-linear as you keep cooling (otherwise the doubling rule alone would predict a drop of ~1000x or more in dark current).
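The doubling rule of thumb is easy to put into numbers. A sketch, assuming the 10°C doubling interval quoted above; real sensors deviate from it at deep-cooled temperatures, which is exactly the non-linearity noted:

```python
def dark_current_factor(delta_t_c, doubling_interval_c=10.0):
    """Relative dark current after a temperature change, per the doubling rule of thumb."""
    return 2.0 ** (delta_t_c / doubling_interval_c)

print(dark_current_factor(+10))   # 2.0: 10 degC warmer, double the dark current
print(dark_current_factor(-10))   # 0.5: 10 degC cooler, half
# Room temperature (~23 degC) down to -80 degC is a ~103 degC drop:
print(1 / dark_current_factor(-103))   # ~1260x predicted, vs. the ~200x scientific CCDs report
```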

(Aside: For those who wish to test this, you can try it with night sky photography on a very cold night. Anyone who does night sky or aurora photography in the northern (or southern) latitudes probably knows that while your camera's battery performance drops significantly at low (sub-zero) temperatures, your night sky photos have very little, almost no, noise. That is all thanks to the fact that dark current falls off steeply with temperature.)

Dark current today is already mitigated by using CDS, or correlated double sampling, which samples the charge in each pixel when the sensor is reset and subtracts that charge when the sensor is read after an exposure, effectively cancelling the dark current offset. Analog per-pixel CDS circuitry seems to be a contributor to banding noise, however, which is what led Sony to move to an on-die, column-parallel digital CDS approach in Exmor. Regardless, it is possible Canon has developed significantly more efficient CDS circuitry, which, when combined with moderate active cooling to keep the sensor below room temperature, could produce some considerable gains in read noise performance.

That said, if Canon still uses high-frequency, off-die, moderately parallel ADCs in DIGIC chips, I would suspect the sensor still has banding noise problems. I guess the off-die DIGICs could be cooled as well, and/or the frequency of the ADCs lowered (which should be more than possible with a 2.4mp sensor), both of which should lower the banding noise contribution from A/D conversion.

Jackson_Bill said:
However, if you're pushing to record lower and lower levels of light intensity, then maybe what they mean is "as you try to read lower light levels (and use larger sensels), the shot noise becomes important, and with lower light levels the read noise also has a bigger impact on the total noise."

This is true...photon shot noise becomes a problem at higher ISOs (actually, photon shot noise is the primary cause of noise at high ISO...increasing ISO itself does not actually contribute more noise). However, the ratio of signal to read noise is MUCH smaller as well, which is why reducing dark current in the sensor is important. By reducing dark current, you increase efficiency, which supports a higher Q.E., which means that a greater percentage of photons incident on the photodiode itself actually free an electron. By reducing the electron contribution to the photodiode from dark current, you increase "true sensitivity", thus making higher ISO settings more effective, with less noise. Combine that with a larger pixel area, and for any given unit of time, SNR should be much higher than with any current Canon sensor, at all signal levels.
 
Radiating said:
Is it just me or does this seem like a bunch of made up nonsense?

We're already capturing 50% of the light that enters the camera. And noise and clarity under dark conditions are a result of quantum distribution of electrons. Meaning the noise you capture in a noisy photo is the result of noise from the light itself, not from the camera. You cannot capture less noise than exists in the incoming light, and you cannot capture more light than exists.

These videos seem to show a 4 stop improvement. My guess is that they are simulated by a marketing company and that this is designed to be misleading.

Actually, if something mentioned by TheSuede recently is correct, we are capturing only about 16-18% of the light entering a camera. We capture between 40% and 60% of the light incident on the photodiode. That means 40-60% of the photons that pass through the lens, through the IR cut and AA filter, through the CFA, and actually reach the photodiode effectively free an electron. However, only 30-40% of the light that actually reaches the CFA makes it through...as the CFA is explicitly designed to filter out light of certain frequencies. So...50% of 35% is 17.5%...modern cameras are currently working with VERY LITTLE light. We have a long, long way to go before we are recording as much light as we can...and in a Bayer-type sensor, that would still be at most 40% of the light that makes it through the lens. The lens itself, assuming a multicoating, can cost as much as 15% light loss or more (depending on the angle to a bright light source). Nanocoating improves that, reducing the loss to only a few percent. The IR cut and AA filters cost a few percent as well.

The only way we could preserve more of the light that makes it through the lens would be to either move to grayscale sensors (eliminating the CFA) or use some kind of color splitting in place of a CFA. Combined with nanocoatings on lens elements and an efficient filter stack over the sensor, total light loss could drop to 10% or less, meaning the Q.E. of the photodiode itself determines the rest. 50% of 90% means we would preserve ~45% of the light passing through the lens.
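The loss chain described in the two paragraphs above multiplies out as follows. Every factor here is an illustrative number taken from the post, not a measurement:

```python
# Illustrative per-stage transmission, using the figures quoted in the post.
stages = [
    ("multicoated lens",    0.85),  # ~15% loss toward a bright light source
    ("IR-cut + AA filters", 0.97),  # "a few percent"
    ("Bayer CFA",           0.35),  # 30-40% of the light passes the color filters
    ("photodiode Q.E.",     0.50),  # 40-60% of incident photons free an electron
]

total = 1.0
for stage, t in stages:
    total *= t
    print(f"after {stage:<20} {total:6.1%} of the light remains")
# ends around 14%; dropping the lens term gives the ~17% mid-chain figure quoted above
```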

As for "noise in the incoming light"...that is kind of a misnomer. Photon shot noise is caused by the random distribution of photon strikes on the sensor's pixels. With larger pixels, noise caused by that physical fact is reduced, as for any given level of light, each pixel on a large-pixel sensor picks up more light than on a small-pixel sensor. To some degree, assuming the same physical characteristics of the silicon used in both a high-density and a very low-density sensor, the high-density sensor will sense almost the same total amount of light as the low-density sensor...minus small losses due to a greater amount of wiring, which reduces the total surface area that is sensitive to light (and yes, losses will occur despite the use of microlenses.) On a size-normalized basis (i.e. scaling the higher-resolution image down to the same dimensions as the lower-resolution image), the higher-resolution image should perform nearly as well as the lower-resolution image...assuming the physical characteristics of the sensors are otherwise identical (same temperature, same Q.E., same CFA efficiency, etc.)
 
AprilForever said:
Axilrod said:
AprilForever said:
So, this and the thing about the guy using video to capture stills.... Canon, don't forget photographers. Also, don't forget that photography is not videography. Crank out something to tickle the video industry, then get back to stills...

Lol you never fail to complain when something you're not interested in shows up on here. Can't you just ignore it and be happy for the people that shoot video? And yes, there were 2 articles about video, but what about the 10 photo-related posts in a row before that? Oh right, you're not interested in it, so therefore Canon has forgotten about photographers. You're assuming that one takes away from the other, when in reality they are separate divisions. The motion picture industry is just as big as, or bigger than, photography; just because you aren't a part of it doesn't mean they don't deserve any new gear.

I'm curious as to what piece of gear you are looking for that you feel is holding you back so much, because clearly you are looking for something specific and not seeing it. So what is it?

I see a grim trend in a direction I do not like. As the old saying goes, the squeaking wheel gets the oil. I remember a Canon experimental camera from a while ago, a weird white spaceship-looking thing, pitched with talk of the future of photography being the imager videoing the subject and then selecting the best frames. This is NOT a direction I want things going; therefore, I make my voice known. Moreover, I am certain that I am not alone in this.

You are assuming that progress on the video front results in zero benefit on the stills front. Sensors are sensors, and they read out the same way regardless. Even if this technology is initially applied to a low-resolution video camera's sensor, that does not mean it cannot be applied to high-resolution photography sensors in the future. Progress on the sensor technology front is progress on the sensor technology front, and it should benefit Canon products regardless of what category they fall under in the end.
 
hjulenissen said:
1. Suddenly all of the web discussion forum people demanding "lower MP count as this is bound to give better IQ" got more ammunition.

OTOH they said: "In addition, the sensor’s pixels and readout circuitry employ new technologies that reduce noise, which tends to increase as pixel size increases."

So for read noise (and total DR) as opposed to photon capture noise....

2. 16:9 AR using the largest possible image circle in an EF lens. What are they saying?

Just take the largest 16:9 rectangle that can fit inside an EF image circle.

3. Is this Canon using their silicon fab advantage/disadvantage where it can best be used (coarser geometry, specialized application)?

-h

perhaps, hard to say
 
chauncey said:
Can anyone explain why DSLR sensors are not square, which would provide more viewing area?

sensor cost goes by size; if they made them square, then APS-C, to hit the same cost, would have had to be made narrower, and since many photos do end up closer to rectangular, they figure it would be a waste of space - my guess

some older lenses were also masked off to large rectangles too, and at a certain size the mirror becomes a problem
 
Radiating said:
Is it just me or does this seem like a bunch of made up nonsense?

We're already capturing 50% of the light that enters the camera. And noise and clarity under dark conditions are a result of quantum distribution of electrons. Meaning the noise you capture in a noisy photo is the result of noise from the light itself, not from the camera. You cannot capture less noise than exists in the incoming light, and you cannot capture more light than exists.

These videos seem to show a 4 stop improvement. My guess is that they are simulated by a marketing company and that this is designed to be misleading.

you also need to compare DxO's 8MP normalization to the 2MP of this sensor

and you forgot about read noise
 
I don't ever get the angst over video advancements from still photographers. What do people think a DSLR is anyway? It is nothing but a video camera optimized for stills.

It's kind of like those who complain that they don't want to "pay" for video because they never use it. It's been explained over and over again – video capability makes DSLRs cheaper, not more expensive. Unless you are using film, video enhancements inevitably make stills photography better.

ddashti said:
Wow. This might go head-to-head (if not surpass) Nikon's current lineup of legendary low-light performance sensors.

Nikon has a "lineup of legendary low-light performance sensors?" I must have missed those. Seriously, EVERY review I've read and every comparison I've looked at makes it clear that Canon's lineup of sensors far outperforms Nikon's in high ISO performance. Nikon has been emphasizing megapixels, while Canon has focused on high ISO performance. A few years ago it was the other way around, but since the introduction of the 1D X Canon has captured the high ISO field.

Take a close look at comparison shots on any of the reputable test sites and it's clear that at higher ISOs Canon outperforms Nikon and Nikon/Sony sensors.
 
unfocused said:
Seriously, EVERY review I've read and every comparison I've looked at makes it clear that Canon's lineup of sensors far outperforms Nikon's in high ISO performance.

???

The 1DX is probably only barely better at high ISO than the D4 and it also, I think, has a weaker CFA.
Canon isn't worse at high ISO now but far better???
 
LetTheRightLensIn said:
unfocused said:
Seriously, EVERY review I've read and every comparison I've looked at makes it clear that Canon's lineup of sensors far outperforms Nikon's in high ISO performance.

???

The 1DX is probably only barely better at high ISO than the D4 and it also, I think, has a weaker CFA.
Canon isn't worse at high ISO now but far better???

I wouldn't say the Canon is "far" better...things are limited largely by physics at that point. In real-world examples, I've noticed more color noise from the D4 at ISO 25600 and 51200 (probably because those settings are digitally amplified, vs. Canon's primarily analog amplification). Outside of that very slight difference, the two cameras are definitely comparable at those levels...you would be hard pressed to notice any real differences in most situations, I think.
 
LetTheRightLensIn said:
Jackson_Bill said:
I'm not sure I follow the "noise per unit of area" thing.
Like I said, I'd be interested to see what neuro and jrista have to say.

Probably because you've read too much jrista ;).
He doesn't seem to believe in the concepts of noise per area or normalization (or at least hadn't for a long time).

I do believe in noise-per-area normalization. I don't believe that doing so improves photographic dynamic range in any meaningful way. Signal dynamic range, sure, but the process of scaling destroys other image information while normalizing noise...the loss of real-world detail in favor of less apparent noise is a negative tradeoff as far as I am concerned. I don't know how long I'll have to keep clarifying my position on that front...but I don't dispute the benefit purely from a noise standpoint. And when comparing the noise between two cameras, sure, normalize size. If you want to compare photographic dynamic range, though, normalization destroys detail that might be recoverable with the cameras' native dynamic range at the original image size.
 