
Author Topic: Another my Stupid question = Sensor Sizes  (Read 6000 times)

jrista

  • Canon EF 400mm f/2.8L IS II
  • *********
  • Posts: 4814
  • EOL
    • View Profile
    • Nature Photography
Re: Another my Stupid question = Sensor Sizes
« Reply #30 on: August 22, 2014, 01:49:24 PM »
@sagittariansrock: There is a difference between your explanation and the standard one: your explanation does not utilize the full sensor area of larger sensors. It is based on the subject filling the same absolute area of the sensor, regardless of the total sensor area.

That is the reach-limited argument. That is the ONE AND ONLY case where smaller sensors can achieve the same image quality as a larger sensor. However, it SEVERELY handicaps the larger sensors. The fair comparison is when your subject is framed the same, which means that for progressively larger sensors, a greater absolute area of the sensor covers the subject. In that case, everything Orangutang, Lee Jay, and I have stated is true. There is no circumstance where smaller sensors, regardless of their pixel size, can ever outperform a larger sensor.

There are real-world use cases where a limited reach is an actual problem. I already posted a topic on that, demonstrating the differences between a 5D III and a 7D, and the 7D does indeed maintain the IQ edge (I really need to try that on a day with better seeing, or find a good terrestrial subject to compare.) But in the "normative" case, you buy a larger sensor to use the greater area to get better IQ. I mean, that's the entire point. That's where improved IQ comes from.

Before I found the equivalence article, I used to think the same thing...that pixel size mattered. But it simply doesn't. Not at lower and midrange ISO settings anyway. At really high ISO settings, then the game does change a bit. Spatially, information in an incoming wavefront is sparser when you're working in really low light, or at a really small aperture, or any other circumstance where you NEED something like ISO 12800 or higher. Sparser data ultimately renders smaller pixels useless, since you just don't have complete enough information to render a whole picture. Then, pixel size really does start to matter. Or, conversely, downsampling your image becomes more important for reducing noise.

For the ultra high ISO use cases, I would actually love to see Canon create a sensor that had some kind of dynamic binning. At low ISO, use the full resolution, then have a configurable option to switch to a hardware binning mode of 2x2 for, say, ISO 6400 through 25600, and maybe even have an additional 4x4 binning option for ISO 51200 through 400k or whatever. I think that would be awesome, since you can't really get clean high resolution at ultra high ISO anyway.
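Purely as a sketch of that hypothetical feature's arithmetic (the ISO breakpoints and the `bin_factor_for_iso` helper below are invented for illustration, not anything in real camera firmware):

```python
def bin_factor_for_iso(iso):
    """Hypothetical binning-mode lookup: full resolution at low ISO,
    2x2 binning in the ultra-high range, 4x4 beyond that."""
    if iso < 6400:
        return 1      # full resolution
    elif iso <= 25600:
        return 2      # 2x2 hardware binning: 4 photosites per output pixel
    else:
        return 4      # 4x4 hardware binning: 16 photosites per output pixel

# Binning NxN photosites averages N^2 samples, so shot-noise SNR improves
# by sqrt(N^2) = N, at the cost of N^2 fewer output pixels.
for iso in (800, 12800, 102400):
    n = bin_factor_for_iso(iso)
    print(f"ISO {iso}: {n}x{n} binning, ~{n}x SNR gain")
```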

However, fundamentally, in a fair or normative situation where you're utilizing all the sensor area you can (i.e. assuming identical framing) and using the same aperture, larger sensors gather more light per subject area. If you read the equivalence article, when he gets down into the myths, he clearly covers how, for a given FoV, you need to use a narrower aperture on the larger sensor to make image quality equivalent. For 80mm f/4 on FF, you would need 50mm f/2.5 on APS-C (that is 4 divided by 1.6, the scale factor between FF and Canon APS-C; it does not take pixel size into account at all), or 40mm f/2 on 4/3rds:

http://www.josephjamesphotography.com/equivalence/#1
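The equivalence arithmetic is just division by the crop factor; a quick sketch (the `equivalent_settings` helper is mine, for illustration, not from the article):

```python
def equivalent_settings(focal_mm, f_number, crop):
    """Scale a full-frame focal length and f-number to a smaller format so
    that angle of view and aperture diameter (hence DOF and total light)
    match: just divide both by the crop factor."""
    return focal_mm / crop, f_number / crop

# Standard crop factors: 1.6 for Canon APS-C, 2.0 for Four Thirds.
for name, crop in [("FF", 1.0), ("APS-C", 1.6), ("4/3", 2.0)]:
    f, n = equivalent_settings(80, 4.0, crop)
    print(f"{name}: {f:.0f}mm f/{n:.1f}")
# FF: 80mm f/4.0, APS-C: 50mm f/2.5, 4/3: 40mm f/2.0
```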

Quote
1) f/2 = f/2 = f/2

This is perhaps the single most misunderstood concept when comparing formats.  Saying "f/2 = f/2 = f/2" is like saying "50mm = 50mm = 50mm".  Just as the effect of 50mm is not the same on different formats, the effect of f/2 is not the same on different formats.

Everyone knows what the effect of the focal length is -- in combination with the sensor size, it tells us the AOV (diagonal angle-of-view).  Many are also aware that  f-ratio affects both DOF and exposure.  It is important, however, to understand that the exposure (the density of light falling on the sensor -- photons / mm²) is merely a component of the total amount of light falling on the sensor (photons):  Total Light = Exposure x Effective Sensor Area, and it is the total amount of light falling on the sensor, as opposed to the exposure, which is the relevant measure.

Within a format, the same exposure results in the same total light, so the two terms can be used interchangeably, much like mass and weight when measuring in the same acceleration field.  For example, it makes no difference whether I say I weigh 180 pounds or have a mass of 82 kg, as long as all comparisons are done on Earth.  But it makes no sense at all to say that, since I weigh 180 lbs on Earth, I'm more massive than an astronaut who weighs 30 lbs on the moon, since we both have a mass of 82 kg.

The reason that the total amount of light falling on the sensor, as opposed to the density of light falling on the sensor (exposure), is the relevant measure is because the total amount of light falling on the sensor, combined with the sensor efficiency, determines the amount of noise and DR (dynamic range) of the photo.

For a given scene, perspective (subject-camera distance), framing (AOV), and shutter speed, both the DOF and the total amount of light falling on the sensor are determined by the diameter of the aperture.  For example, 80mm on FF,  50mm on 1.6x, and 40mm on 4/3 will have the same AOV (40mm x 2 = 50mm x 1.6 = 80mm).  Likewise, 80mm f/4, 50mm f/2.5, and 40mm f/2 will have the same aperture diameter (80mm / 4 = 50mm / 2.5 = 40mm / 2 = 20mm).  Thus, if we took a pic of the same scene from the same position with those settings, all three systems would produce a photo with the same perspective, framing, DOF, and put the same total amount of light on the sensor, which would result in the same total noise for equally efficient sensors (the role of the ISO in all this is simply to adjust the brightness of the LCD playback and/or OOC jpg).

Thus, settings that have the same AOV and aperture diameter are called "Equivalent" since they result in Equivalent photos.  Hence, saying f/2 on one format is the same as f/2 on another format is just like saying that 50mm on one format is the same as 50mm on another format.
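The quoted numbers are easy to verify: aperture diameter = focal length / f-number, exposure scales as 1/f-number^2, and total light is exposure times sensor area. A rough sketch (sensor areas are derived from the nominal crop factors, everything in arbitrary units):

```python
FF_AREA = 36.0 * 24.0                     # mm^2, nominal full-frame sensor

# (format, focal length in mm, f-number, crop factor)
setups = [("FF", 80, 4.0, 1.0),
          ("1.6x", 50, 2.5, 1.6),
          ("4/3", 40, 2.0, 2.0)]

for name, focal, fnum, crop in setups:
    diameter = focal / fnum               # aperture diameter: 20 mm for all
    exposure = 1.0 / fnum**2              # photons/mm^2, up to a constant
    area = FF_AREA / crop**2              # sensor area for this crop factor
    total = exposure * area               # total light, up to a constant
    print(f"{name}: D = {diameter:.0f} mm, relative total light = {total:.1f}")
```

All three setups print the same 20 mm diameter and the same relative total light, which is the quote's point: same total light, hence same total noise for equally efficient sensors.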


sagittariansrock

  • 1D X
  • *******
  • Posts: 1537
    • View Profile
Re: Another my Stupid question = Sensor Sizes
« Reply #31 on: August 22, 2014, 02:01:32 PM »
@sagittariansrock: There is a difference between your explanation and the standard one: your explanation does not utilize the full sensor area of larger sensors. It is based on the subject filling the same absolute area of the sensor, regardless of the total sensor area. That is the reach-limited argument. That is the ONE AND ONLY case where smaller sensors can achieve the same image quality as a larger sensor.

That is exactly right. Except, here we are discussing the capacity of a pixel to collect light, which is why this scenario should be used: that is, where the incident light is exactly the same in terms of intensity and quality.

The fair comparison is when your subject is framed the same, which means that for progressively larger sensors, a greater absolute area of the sensor covers the subject. In that case, everything Orangutang, Lee Jay, and I have stated is true. There is no circumstance where smaller sensors, regardless of their pixel size, can ever outperform a larger sensor.

I don't know if I will call it a fair comparison, but I can call it a real-world comparison. And as I said before, I am sure everyone agrees with what you, Lee Jay and Orangutang are contending here: larger sensors have better IQ. No way can a smaller sensor collect the same amount of light. Except that is not the point here. The point is: would a smaller pixel collect less light than a larger pixel? Yes. Would a pixel collect the same amount of light whether it's part of a large sensor or a small sensor? Of course!
This is why I said you are all disputing each other while everyone is right at the same time ;)
EOS 5DIII, EOS 6D | Rokinon 14mm f/2.8, TS-E 17mm f/4L, EF 24-70mm f/2.8L II USM, EF 35mm f/1.4L USM, EF 100mm f/2.8 Macro USM, EF 135mm f/2L USM, EF 70-200 f/2.8L IS II USM, 1.4x III, 2x III | 600-EX-RT x3 | EOS M + EF-M 22mm f/2

Lee Jay

  • 1D X
  • *******
  • Posts: 1347
    • View Profile
Re: Another my Stupid question = Sensor Sizes
« Reply #32 on: August 22, 2014, 02:03:37 PM »
I am not talking of the same image. I am talking of the same subject distance, therefore I am talking of a different FoV.

In which case, you're only talking about a focal-length or magnification-limited situation.  That happens, and I work with it regularly, but it's not the normal situation when comparing the use of different formats in the same conditions.

Constant framing is the norm.

Lee Jay

  • 1D X
  • *******
  • Posts: 1347
    • View Profile
Re: Another my Stupid question = Sensor Sizes
« Reply #33 on: August 22, 2014, 02:05:56 PM »
Exactly, that is completely irrelevant here. An APS-C sensor will never have the same, or even close to the same, IQ as a same-generation FF sensor, while in reach-limited circumstances a higher resolution will demonstrate advantages. In this case, we are NOT talking about that. We are NOT talking about the APS-C sensor being better than FF. This is a very focused argument: the size of a pixel defines its light-gathering capacity. This capacity will be the same whether the same pixel resides in an FF sensor, an APS-C sensor or an MF sensor.

In which case, smaller pixels covering the same area almost always win in a comparison of final images.  There are edge cases where larger pixels win (generally, exceptionally photon-starved conditions with effective ISOs into the 6 and 7 digits), but even those are because of specific limitations of certain technologies.

jrista

  • Canon EF 400mm f/2.8L IS II
  • *********
  • Posts: 4814
  • EOL
    • View Profile
    • Nature Photography
Re: Another my Stupid question = Sensor Sizes
« Reply #34 on: August 22, 2014, 02:16:29 PM »
@sagittariansrock: There is a difference between your explanation and the standard one: your explanation does not utilize the full sensor area of larger sensors. It is based on the subject filling the same absolute area of the sensor, regardless of the total sensor area. That is the reach-limited argument. That is the ONE AND ONLY case where smaller sensors can achieve the same image quality as a larger sensor.

That is exactly right. Except, here we are discussing the capacity of a pixel to collect light, which is why this scenario should be used: that is, where the incident light is exactly the same in terms of intensity and quality.

The fair comparison is when your subject is framed the same, which means that for progressively larger sensors, a greater absolute area of the sensor covers the subject. In that case, everything Orangutang, Lee Jay, and I have stated is true. There is no circumstance where smaller sensors, regardless of their pixel size, can ever outperform a larger sensor.

I don't know if I will call it a fair comparison, but I can call it a real-world comparison. And as I said before, I am sure everyone agrees with what you, Lee Jay and Orangutang are contending here: larger sensors have better IQ. No way can a smaller sensor collect the same amount of light. Except that is not the point here. The point is: would a smaller pixel collect less light than a larger pixel? Yes. Would a pixel collect the same amount of light whether it's part of a large sensor or a small sensor? Of course!
This is why I said you are all disputing each other while everyone is right at the same time ;)

I'm sorry, but I beg to differ, given that this is the title of the thread:

Another my Stupid question = Sensor Sizes

And this is the actual question asked:

Dear Teachers and Friends.
Well, Yes, I can take a SoSo-or Good Photos, Because of I take the photos so long time. But for the High Tech of Digital Photography, I almost know nothing about this New Technology.
My Stupid Question are :
1)  Are the Size of the Sensor Matter ?---Or the MP. count are matter ?
Such As  the Tiny Sensor on Nokia Lumia 1020 = 41 MP, compare to Canon 1DX  FF = 18.1 MP, and Canon 5D MK II FF = 22.3 MP
2) If the Sensor are same Size and Same MP-----The  Camera company are matter or not that can claim , My Ca--- are Sharpper than your Ni--- ???
3) What Make the Same size of Sensor ( of this Company ) to be better than another Company Sensor ?
Thanks for your Answers, That will make me up to date of new Technology.
Have a great day, Sir/ Madam.
Surapon

The question is whether the size of the sensor matters or not. So, the point here is NOT about whether smaller pixels will collect more light...the point, very specifically, is whether differences in sensor size matter. Within the scope of the original question asked by Surapon, the proper context for comparisons is a normalized one...one in which the subject is framed identically and, to be truly fair, all the output images are resampled to the same dimensions.

That is the standard context for comparing images. It's a requirement of ISO 12233 test-chart testing, the standard that pretty much every lens and camera tester (with the possible exception of DxO) uses to compare the IQ of different camera systems, that all framing be identical regardless of sensor size.

The reach-limited comparison is not wrong; however, it does place a handicap on larger sensors, one that becomes increasingly severe the larger the discrepancy between the small sensor and the large sensor. In a reach-limited scenario, I'd rather have an APS-C sensor, or maybe even smaller, with a small pixel, than a FF sensor with large pixels. But when I have the option of getting closer, or using a longer lens, I'll take the FF every time. I'd certainly rather have more pixels in the FF than fewer, as I can always downsample if I need less noise...but when I have the option of framing identically, larger sensors trounce smaller sensors.

sagittariansrock

  • 1D X
  • *******
  • Posts: 1537
    • View Profile
Re: Another my Stupid question = Sensor Sizes
« Reply #35 on: August 23, 2014, 01:33:45 AM »
jrista: Of course, sensor size does matter. Pixel size matters too, but that is not the topic of the OP's question. Now I can see how that got transformed over the few pages.
Lee Jay: You are right in considering a real world situation where an equally framed image should be the parameter of comparison, whereas I am supporting Don Haines' theoretical consideration that a larger pixel will gather more light than a smaller pixel, and that the same sized pixel will gather equal amount of light irrespective of the total area of the sensor it is a part of.

I think we all understand the physics, essentially, and also the real world fact that a larger sensor provides a ton of benefits under certain conditions, and a smaller sensor provides benefit in one specific situation. Good for us...

EOS 5DIII, EOS 6D | Rokinon 14mm f/2.8, TS-E 17mm f/4L, EF 24-70mm f/2.8L II USM, EF 35mm f/1.4L USM, EF 100mm f/2.8 Macro USM, EF 135mm f/2L USM, EF 70-200 f/2.8L IS II USM, 1.4x III, 2x III | 600-EX-RT x3 | EOS M + EF-M 22mm f/2

surapon

  • Canon EF 300mm f/2.8L IS II
  • ********
  • Posts: 2472
  • 80% BY HEART, 15% BY LENSES AND ONLY 5% BY CAMERA
    • View Profile
Re: Another my Stupid question = Sensor Sizes
« Reply #36 on: August 23, 2014, 09:17:16 PM »
Wow, Wow, Wow.
Thank you Sir/ Madam for Complete Explain  and answer my question to me---Yes, Thousand Thanks   that I just Learn  some thing New from all of my Teachers and All of My Friends.
Have a great Night, Sir/ Madam.
Surapon


sgs8r

  • Power Shot G7X
  • **
  • Posts: 22
    • View Profile
Re: Another my Stupid question = Sensor Sizes
« Reply #37 on: August 23, 2014, 10:08:46 PM »
You say "with identical technology, size of sensor is all that matters". I disagree. If you simply made a bigger 7D sensor, with the same technology and same pixel density, then it will have the same noise characteristics as the 7D sensor.

This is totally false.  If you make the 7D sensor bigger, you'll have more of the same pixels AND about a stop and a third better noise performance, assuming constant f-stop and constant framing.  That means, for the same image, you're going to have to either get closer or use a longer focal length.

The right-hand column of this image demonstrates this.  It's all the same sensor (and so all the same pixels) just using different sized portions of that sensor, and reframing to keep the final image framing constant.  According to what you said above, the noise performance should all be the same.  It isn't, and it isn't even close.  The left column demonstrates by just how much.  It's exactly how much you would think - the light you've lost with cropping is the amount of noise performance you've lost.


Well, how do you re-frame to keep the image on the larger sensor the same? With the same lens/optics, you need to get closer. But then you're getting more light on the lens, and thus on the sensor. Let's assume the large sensor is twice the diagonal size of the small sensor. You'll have to halve the distance to the subject. So 4 times as much light, but also 4 times as many pixels, and therefore the same light on each pixel. Each pixel on the large sensor thus has the same SNR as those on the small sensor, but you could downsample, combining groups of 4 pixels to get the same image and number of pixels as the smaller sensor, with better noise performance by a factor of two (averaging N pixels drops noise by sqrt(N)). But really this is because you've moved closer and thereby increased the light (signal).
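That get-closer arithmetic can be written out as a quick sketch (arbitrary units, shot noise only, assuming equal pixel density on both sensors):

```python
import math

distance_ratio = 0.5             # halve the subject distance
mag_gain = 1 / distance_ratio    # subject magnification doubles
pixels_on_subject = mag_gain**2  # 4x as many (same-density) pixels cover it
light_per_pixel = 1.0            # unchanged: same exposure, same pixel size

# Downsampling 4:1 averages 4 pixels, cutting noise by sqrt(4) = 2.
snr_gain_after_downsample = math.sqrt(pixels_on_subject)
print(pixels_on_subject, snr_gain_after_downsample)  # 4.0 2.0
```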

On the other hand, without changing position, you could use a different lens to fill the larger sensor with the same view (so keeping the framing the same). This implies an increase in the focal length, which, for the same aperture, implies an increase in the f-stop, i.e. a reduction in the light density on the sensor. We've kept the total captured light the same but spread it over a larger area with more pixels, so the per-pixel SNR would decrease with the larger sensor. Again, you could combine pixels, downsampling, to improve the SNR. But I think only by a factor of two (again assuming the large sensor diagonal is twice that of the smaller sensor). So worse SNR as compared to the small sensor (but higher resolution due to more pixels).

One difficulty with this whole discussion is that one wants to say "Keeping everything else the same, here's what happens when you change the pixel size...". But it's actually impossible to keep everything else the same. Same optics, same lens, same shooting location, same framing, same viewing size, etc. One issue raised with the original post (which was excellent, by the way) was the upsampling applied to the full-frame image. But if you want to view them so the moon is the same size on your screen in both images, you need to either upsample one or downsample the other. Otherwise one image will be bigger than the other, making comparison problematic.

One other thing. Based on my somewhat crude calculation above (maybe this is well-known to the rest of you), it seems like from an SNR standpoint (with sensor size fixed), you are better off using bigger pixels, rather than subdividing each big pixel into smaller pixels, then averaging/downsampling them to recover the same number of pixels (as with the big pixels). I'm assuming that the noise comes from the electronics downstream of the light-gathering component, so that a big pixel has the same absolute amount of noise as a small pixel (but more signal), so 4 times the area means 4 times the SNR, whereas combining pixels will add the 4 light values, but also the 4 noise values. Assuming the noise is random and independent, you'll get some noise cancellation, but only a sqrt(4) = 2 factor reduction, so lower SNR than the big pixel. To put this in practical terms, you get better SNR from the HTC One's 4 MP camera than from downsampling the Nokia 1020's 40 MP image to 4 MP (assuming the same sensor size and optics, which may not be the case, but you get my point).
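Under sgs8r's stated assumption (a fixed absolute read noise per pixel, regardless of pixel size, with that read noise dominating), the claim checks out in a quick Monte Carlo sketch (all numbers arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
signal_small = 100.0   # signal collected by one small pixel (arbitrary units)
read_noise = 5.0       # fixed absolute read noise per pixel, any size
trials = 200_000

# One big pixel covering 4x the area: 4x the signal, one read-noise draw.
big = 4 * signal_small + rng.normal(0, read_noise, trials)

# Four small pixels combined in software: 4x signal, but four independent
# read-noise draws add in quadrature, giving sqrt(4) = 2x the noise.
small_binned = (signal_small + rng.normal(0, read_noise, (4, trials))).sum(axis=0)

ratio = (big.mean() / big.std()) / (small_binned.mean() / small_binned.std())
print(round(ratio))  # 2: the big pixel wins by sqrt(4) under this model
```

Note this only holds when downstream read noise dominates; jrista's follow-up covers the shot-noise-limited case, where the two come out equal.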

Lee Jay

  • 1D X
  • *******
  • Posts: 1347
    • View Profile
Re: Another my Stupid question = Sensor Sizes
« Reply #38 on: August 23, 2014, 10:58:17 PM »
One difficulty with this whole discussion is that one wants to say "Keeping everything else the same, here's what happens when you change the pixel size...". But it's actually impossible to keep everything else the same. Same optics, same lens, same shooting location, same framing, same viewing size, etc.

Same focal length, same shutter speed, same f-stop, same ISO, same lighting, same shooting position, shot in raw, same raw processor, pixel area different by a factor of 16 (small pixels on the left).

« Last Edit: August 23, 2014, 11:54:50 PM by Lee Jay »

jrista

  • Canon EF 400mm f/2.8L IS II
  • *********
  • Posts: 4814
  • EOL
    • View Profile
    • Nature Photography
Re: Another my Stupid question = Sensor Sizes
« Reply #39 on: August 23, 2014, 11:18:22 PM »
You say "with identical technology, size of sensor is all that matters". I disagree. If you simply made a bigger 7D sensor, with the same technology and same pixel density, then it will have the same noise characteristics as the 7D sensor.

This is totally false.  If you make the 7D sensor bigger, you'll have more of the same pixels AND about a stop and a third better noise performance, assuming constant f-stop and constant framing.  That means, for the same image, you're going to have to either get closer or use a longer focal length.

The right-hand column of this image demonstrates this.  It's all the same sensor (and so all the same pixels) just using different sized portions of that sensor, and reframing to keep the final image framing constant.  According to what you said above, the noise performance should all be the same.  It isn't, and it isn't even close.  The left column demonstrates by just how much.  It's exactly how much you would think - the light you've lost with cropping is the amount of noise performance you've lost.


Well, how do you re-frame to keep the image on the larger sensor the same? With the same lens/optics, you need to get closer. But then you're getting more light on the lens, and thus on the sensor. Let's assume the large sensor is twice the diagonal size of the small sensor. You'll have to halve the distance to the subject.  So 4 times as much light but also 4 times as many pixels. So same light on each pixel. Each pixel on the large sensor thus has the same SNR as those on the small sensor, but you could downsample, combining groups of 4 pixels to get the same image and number of pixels as the smaller sensor, but with better noise performance by a factor of two (averaging N pixels drops noise by sqrt(N)). But really this is because you've moved closer and thereby increased the light (signal).

On the other hand, without changing position, you could use a different lens to fill the larger sensor with the same view (so keeping the framing the same). This implies an increase in the focal length, which, for the same aperture, implies an increase in the f-stop, i.e. a reduction in the light density on the sensor. We've kept the total captured light the same but spread it over a larger area with more pixels, so the per-pixel SNR would decrease with the larger sensor. Again, you could combine pixels, downsampling, to improve the SNR. But I think only by a factor of two (again assuming the large sensor diagonal is twice that of the smaller sensor). So worse SNR as compared to the small sensor (but higher resolution due to more pixels).

One difficulty with this whole discussion is that one wants to say "Keeping everything else the same, here's what happens when you change the pixel size...". But it's actually impossible to keep everything else the same. Same optics, same lens, same shooting location, same framing, same viewing size, etc. One issue raised with the original post (which was excellent, by the way) was the upsampling applied to the full-frame image. But if you want to view them so the moon is the same size on your screen in both images, you need to either upsample one or downsample the other. Otherwise one image will be bigger than the other, making comparison problematic.

One other thing. Based on my somewhat crude calculation above (maybe this is well-known to the rest of you), it seems like from an SNR standpoint (with sensor size fixed), you are better off using bigger pixels, rather than subdividing each big pixel into smaller pixels, then averaging/downsampling them to recover the same number of pixels (as with the big pixels). I'm assuming that the noise comes from the electronics downstream of the light-gathering component, so that a big pixel has the same absolute amount of noise as a small pixel (but more signal), so 4 times the area means 4 times the SNR, whereas combining pixels will add the 4 light values, but also the 4 noise values. Assuming the noise is random and independent, you'll get some noise cancellation, but only a sqrt(4) = 2 factor reduction, so lower SNR than the big pixel. To put this in practical terms, you get better SNR from the HTC One's 4 MP camera than from downsampling the Nokia 1020's 40 MP image to 4 MP (assuming the same sensor size and optics, which may not be the case, but you get my point).

Downsampling uses averaging, not adding. If you added, then you would end up with a bunch of blown pixels. Averaging reduces noise, where adding does not. So downsampling has the exact same effect on SNR as using larger pixels or binning smaller pixels in hardware. Additionally, shot noise is Poisson-distributed. If you have a pixel with twice the pixel pitch, you have four times the area and four times the signal, but only SQRT(4) = twice the noise...so twice the SNR. A pixel twice the pitch still only has half the relative noise. It doesn't matter if you use a larger pixel, or bin/average smaller pixels together. It doesn't even matter if you integrate four separate frames with the same noise together. It's always the same noise in the end. A pixel four times the area, averaging four pixels together, integrating four separate frames: all end up with noise reduced by SQRT(4).
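In the shot-noise (Poisson) limit, that three-way equivalence (big pixel, averaged bin of small pixels, stack of frames) is easy to simulate (photon counts are arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
photons_small = 1000   # mean photons per small pixel
trials = 200_000

# One pixel with 4x the area collects 4x the photons.
big = rng.poisson(4 * photons_small, trials)

# Averaging four small pixels (software downsample or hardware bin).
avg4 = rng.poisson(photons_small, (4, trials)).mean(axis=0)

# Averaging the same pixel across four separate integrated frames.
frames4 = rng.poisson(photons_small, (4, trials)).mean(axis=0)

for x in (big, avg4, frames4):
    print(x.mean() / x.std())   # all ~ sqrt(4 * 1000), i.e. ~63
```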

Also, you're not quite right about a smaller aperture reducing SNR to a level below that of the smaller sensor. If you really do have two sensors, one with half the diagonal, then you could use a 100mm f/4 on the larger sensor and a 50mm f/2 on the smaller (f/4 divided by the crop factor of 2). That would get you identical framing. In that case, the total amount of light reaching the sensor is also identical. THAT right there is exactly what equivalence is all about. But...pixel size isn't a factor. Because downsampling averages pixels together (which involves first adding, yes...but then dividing), when you NORMALIZE, pixel size doesn't matter. Two large-sensor cameras with different pixel sizes are still going to gather the same amount of light for any absolute area of the subject. A small-sensor camera, for an identically framed subject (50mm f/2 instead of 100mm f/4), is going to gather the same amount of light for the same absolute area as the larger-sensor camera...however, it's only gathering the same amount of light because of the wider aperture. Slap a 100mm f/2.8 lens on your larger sensor, and it is now gathering twice the amount of light. (Plus, there are other benefits with the larger sensor...narrower depth of field, or a wider field of view, etc.)

Pixel size is irrelevant. SNR, and therefore dynamic range (assuming you have no source of noise other than what is inherent to the image signal itself) and noise, are ultimately relative to total sensor area. That's it.

sagittariansrock

  • 1D X
  • *******
  • Posts: 1537
    • View Profile
Re: Another my Stupid question = Sensor Sizes
« Reply #40 on: August 24, 2014, 12:30:22 AM »
Pixel size is irrelevant. SNR, and therefore dynamic range (assuming you have no source of noise other than what is inherent to the image signal itself) and noise, are ultimately relative to total sensor area. That's it.

If that is so, what is stopping Canon from making a 46 MP FF camera with the same sensor tech as, say, the 7D?
I am not really an expert on this, but I think every pipeline (pixel --> signal processor) must add its own bit of noise. So noise from four 1x1 micron pixels > noise from one 2x2 micron pixel.
It also has a bearing on processor power, but that's another topic.
Maybe an expert can chime in on this?
EOS 5DIII, EOS 6D | Rokinon 14mm f/2.8, TS-E 17mm f/4L, EF 24-70mm f/2.8L II USM, EF 35mm f/1.4L USM, EF 100mm f/2.8 Macro USM, EF 135mm f/2L USM, EF 70-200 f/2.8L IS II USM, 1.4x III, 2x III | 600-EX-RT x3 | EOS M + EF-M 22mm f/2

Lee Jay

  • 1D X
  • *******
  • Posts: 1347
    • View Profile
Re: Another my Stupid question = Sensor Sizes
« Reply #41 on: August 24, 2014, 01:15:16 AM »
Pixel size is irrelevant. SNR, and therefore dynamic range (assuming you have no source of noise other than what is inherent to the image signal itself) and noise, are ultimately relative to total sensor area. That's it.

If that is so, what is stopping Canon from making a 46 MP FF camera with the same sensor tech as, say, the 7D?
I am not really an expert on this, but I think every pipeline (pixel --> signal processor) must add its own bit of noise. So noise from four 1x1 micron pixels > noise from one 2x2 micron pixel.
It also has a bearing on processor power, but that's another topic.
Maybe an expert can chime in on this?

You already got the correct answer.

Aglet

  • 1D X
  • *******
  • Posts: 1088
    • View Profile
Re: Another my Stupid question = Sensor Sizes
« Reply #42 on: August 24, 2014, 01:51:39 AM »
Pixel size is irrelevant. SNR, and therefore dynamic range (assuming you have no source of noise other than what is inherent to the image signal itself) and noise, are ultimately relative to total sensor area. That's it.

Uhmm... except when pixel size is not irrelevant.
I have to disagree with you, somewhat, on one point: dynamic range will become limited when pixels become too small, and hence their full-well capacity decreases by more than just the ratio of their surface area.
I say this because, I suspect, the vertical dimension of the photodiode will have some aspect-ratio limit with regard to the surface area.  When the surface area becomes too small, the other dimension will have to shrink also, and that will limit the full-well capacity per unit of surface area, decreasing maximum DR.  You'll still be able to reduce noise levels quite effectively by binning/averaging, either in hardware or software, but you'll reach a lower maximum when the pixel geometry gets too small.
I suspect something like a 40MP smartphone camera may be an example.

EDIT:  Actually, we're already there in varying degrees.
Since many sensor systems are already counting individual electrons, smaller pixels are just gonna be DR-limited.  14 bits at 1 e- per DN is only 16384 e-.
Small pixels are useful even with full-well counts well below that, like 2^10, but then that's already 10 stops of DR or less.  When you start averaging them, you're not gonna gain quite all of that DR back.  And then when you hit the aspect-ratio limit for the photodiode, the DR curve will really drop off.
Perhaps a resident math-whiz could graph that curve for a demo.... (nudge, hint-hint ;) )
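A crude version of that curve's endpoints, under the idealized assumptions of 1 e- read noise (so single-pixel DR is just log2 of the full-well count) and perfectly independent noise when averaging; the helper names are mine:

```python
import math

def dr_stops(full_well_e, read_noise_e=1.0):
    """Single-pixel dynamic range in stops: log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

def dr_after_averaging(full_well_e, n, read_noise_e=1.0):
    """Averaging n independent pixels cuts the noise floor by sqrt(n),
    buying back log2(sqrt(n)) stops of DR (saturation point is unchanged)."""
    return dr_stops(full_well_e, read_noise_e) + math.log2(math.sqrt(n))

print(dr_stops(16384))               # 14.0 stops: the 14-bit example above
print(dr_stops(1024))                # 10.0 stops: the 2^10 full-well case
print(dr_after_averaging(1024, 4))   # 11.0 stops: a 4-pixel average buys 1 back
```

This ignores the full-well aspect-ratio limit Aglet describes, which is exactly where the real curve would drop below the ideal one.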
« Last Edit: August 24, 2014, 03:07:11 AM by Aglet »


jrista

  • Canon EF 400mm f/2.8L IS II
  • *********
  • Posts: 4814
  • EOL
    • View Profile
    • Nature Photography
Re: Another my Stupid question = Sensor Sizes
« Reply #43 on: August 24, 2014, 04:39:09 AM »
Pixel size is irrelevant. SNR, and therefore dynamic range (assuming you have no other source of noise than what is inherent to the image signal itself) and noise are ultimately relative to total sensor area. That's it.

If that is so, what is stopping Canon from making a 46 MP FF camera with the same sensor tech as, say, the 7D?
I am not really an expert on this, but I think every pipeline (pixel --> signal processor) must add its own bit of noise. So the noise from four 1×1 µm pixels > the noise from one 2×2 µm pixel.
It also has a bearing on processor power, but that's another topic.
Maybe an expert can chime in on this?

I think Canon's 500nm process is stopping them. They could make a 46 MP FF sensor today...but I don't think it would perform as well as an Exmor. As I've said...pixel size, and therefore pixel count, don't really matter. It's primarily the sensor size that matters. When the sensor sizes are the same, then it's the core technology that matters. A D800 is better not because of its pixel size or sensor size...it's better because of the higher Q.E., because of the lower read noise, and because of the clean, random nature of the tiny bit of read noise that does exist.
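The "core technology" point can be folded into the standard per-pixel SNR model. A simplified sketch: dark current is ignored, and the Q.E. and read-noise figures below are purely illustrative, not measured values for any real camera.

```python
import math

def snr(photons, qe, read_noise_e):
    """Per-pixel SNR with shot noise and read noise (dark current ignored).
    Signal is qe * photons; shot noise variance equals the signal itself."""
    signal = qe * photons
    return signal / math.sqrt(signal + read_noise_e ** 2)

# Same pixel size, same exposure, different tech (hypothetical numbers):
print(snr(1000, 0.38, 8.0))  # lower Q.E., noisier readout: SNR ~18
print(snr(1000, 0.56, 3.0))  # higher Q.E., quieter readout: SNR ~23
```

In this made-up example the higher-Q.E., lower-read-noise pixel comes out roughly 30% ahead at the same pixel size, which is the kind of gap being described here, independent of pixel dimensions.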

You are correct that electronics throughout the whole pipeline add noise. How you design that pipeline can have a big impact on how much noise is added and where. Based on Roger Clark's work, Canon's sensors themselves are actually not that bad. They suffer from the large 500nm transistor size and lower Q.E., but from an electronic noise standpoint, the noise introduced by the sensor itself is quite low. It's the downstream components, the high-frequency ones that all have to process huge numbers of pixels, that add most of the read noise. Canon cameras have a secondary downstream amplifier, which is used to amplify the signal post-read for really high ISO settings (i.e., to get ISO 12800, Canon first amplifies to ISO 3200 straight off the pixel with the per-pixel amplifiers, then amplifies another two stops using the downstream amp; the downstream amp processes all the pixels and must operate at a much higher frequency, which produces more heat (so more dark current), and the higher frequency of the oscillations results in high-frequency noise being introduced into the signal). Canon also places its ADC units off the sensor die, in the DIGIC chips. There are either 8 or 16 ADC channels, depending on whether the camera has one or two DIGIC chips. Those ADC channels each have to process tens of thousands to millions of pixels, and again must operate at a higher frequency, which introduces more noise.

Canon's competitors have moved to on-die ADC units. Most use a column-parallel ADC design, one unit per column of pixels. Most are also fabricated with smaller transistors, which reduces power consumption and heat dissipation. Since each CP-ADC unit processes far fewer pixels, they can operate at a lower frequency, which reduces heat and introduces less noise. In Sony Exmor's case, the high-frequency clock is also located on a remote corner of the sensor die, away from the ADC units, to avoid any high-frequency noise being introduced.

In practice, read noise is actually higher for the larger pixels. Look at the sensorgen.info pages for the 7D and 5D III:

http://sensorgen.info/CanonEOS_7D.html
http://sensorgen.info/CanonEOS_5D_MkIII.html

The 7D has 8e- read noise, while the 5D III has 35e- read noise. That's the amount of read noise introduced into each pixel during the readout and ADC pipeline. I'm not exactly sure why that is. Even if you compute the relative areas of the pixels for both cameras and multiply the 7D's RN by that ratio, it still only comes out to about 17e-. So on an absolute-area basis, the 7D has less read noise per area than the 5D III. The 1D X has slightly more read noise than the 5D III. The main difference in the read pipelines is the DIGIC chips: the 7D uses a DIGIC 4, whereas the 5D III and 1D X use DIGIC 5 generation chips. The DIGIC 5s use much higher-frequency ADC units...but I'm just speculating that that's the sole or primary cause of the higher read noise.
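That area-scaling arithmetic can be checked with a quick sketch. The pixel pitches below are approximate published figures I've assumed, and note one hedge: if the read-noise sources are independent, they strictly add in quadrature rather than linearly, which makes the 7D's per-area figure even lower. Either way the conclusion holds.

```python
import math

# Assumed pixel pitches (microns); read noise per sensorgen.info.
pitch_7d, pitch_5d3 = 4.3, 6.25
rn_7d, rn_5d3 = 8.0, 35.0  # electrons

area_ratio = (pitch_5d3 / pitch_7d) ** 2       # ~2.1x larger 5D III pixel
rn_linear = rn_7d * area_ratio                 # linear scaling: ~17 e-
rn_quadrature = rn_7d * math.sqrt(area_ratio)  # independent noise adds in
                                               # quadrature: ~11.6 e-
print(area_ratio, rn_linear, rn_quadrature)
```

Even the pessimistic linear scaling (~17 e- per 5D III-sized area) is well under the 5D III's 35 e-, so per unit area the 7D's read pipeline is quieter.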

jrista

  • Canon EF 400mm f/2.8L IS II
  • *********
  • Posts: 4814
  • EOL
    • View Profile
    • Nature Photography
Re: Another my Stupid question = Sensor Sizes
« Reply #44 on: August 24, 2014, 04:43:55 AM »
Pixel size is irrelevant. SNR, and therefore dynamic range (assuming you have no other source of noise than what is inherent to the image signal itself) and noise are ultimately relative to total sensor area. That's it.

Uhmm... except when pixel size is not irrelevant.
I have to disagree with you, somewhat, on one point: dynamic range will become limited when pixels become too small, and hence their full-well capacity decreases by more than just the ratio of their surface area.
I say this because, I suspect, the vertical dimension of the photodiode will have some aspect-ratio limit with regard to the surface area.  When the surface area becomes too small, the other dimension will have to shrink also, and that will limit the full-well capacity per unit of surface area, decreasing maximum DR.  You'll still be able to reduce noise levels quite effectively by binning/averaging, either in hardware or software, but you'll reach a lower maximum when the pixel geometry gets too small.
I suspect something like a 40 MP smartphone camera may be an example.

EDIT:  Actually, we're already there in varying degrees.
Since many sensor systems are already counting individual electrons, smaller pixels are just gonna be DR-limited.  14 bits at 1 e- per count is only 16384 e-.
Small pixels are useful even with full-well counts well below that, like 2^10, but then that's already a 10-stop or less DR.  When you start averaging them, you're not gonna gain quite all of that DR back.  And then when you hit the aspect-ratio limit for the photodiode, the DR curve will really drop off.
Perhaps a resident math whiz could graph that curve for a demo.... (nudge, hint-hint ;) )

You're absolutely right...at some point, fill factor becomes an issue. I've talked about fill factor in many of my posts in the past. The fill-factor issue is why we have BSI sensor designs, and why pretty much every very small sensor, those using 1.2µm pixels and smaller, is BSI. The BSI design maximizes fill factor, effectively creating an ideal pixel, thereby exhibiting the ideal behavior I've described.

Large sensors don't use BSI; however, I think Canon's APS-C sensors could actually benefit from it. There is most certainly a small loss to fill factor, so you are correct that smaller pixels, on a 500nm process, are not going to be able to gain back 100% of the DR during the downsampling process. I don't know that the pixels are quite small enough for that to result in a difference in noise that can be discerned by anything other than a computer algorithm, though.
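A quick sketch of why that fill-factor loss is so hard to see in practice. The 90% fill factor and the electron counts here are assumed, illustrative values; the model is shot-noise-limited only.

```python
import math

def snr_shot_limited(signal_e):
    """Shot-noise-limited SNR of a collected electron count: sqrt(N)."""
    return math.sqrt(signal_e)

# One ideal large pixel collects 4000 e-. Four small pixels cover the same
# area but each collects only 90% of its 1000 e- share (assumed fill factor).
snr_large = snr_shot_limited(4000)
snr_small_avg = math.sqrt(4) * snr_shot_limited(1000 * 0.9)  # averaging 4 pixels

loss_stops = math.log2(snr_small_avg / snr_large)
print(snr_large, snr_small_avg, loss_stops)  # the shortfall is under 0.1 stop
```

After downsampling, the small-pixel image trails the ideal large pixel by only sqrt(fill factor), under a tenth of a stop in this example, which is why the difference is more a job for a measurement algorithm than for the eye.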
