Full Frame and Bigger Pixels vs. APS-C and Smaller Pixels - The Reach War

weixing said:
Anyway, the sky is cloudy and the bird is under the shade... so I think the details are a bit more difficult to resolve under this flat lighting condition.

For sure, flat lighting can indeed make it difficult to resolve fine detail...there just isn't any shading for it. I'd be interested to see you redo the test with better lighting. It's nice having a cooperative bird as a subject for that...none of the birds around here, except maybe Night Herons, are willing to remain still for long periods of time, and even they are very jittery; the slightest thing sets them aflight.

If you get another chance during better light, I'd love to see you try again. There probably isn't much of a resolving power difference between the two at high ISO...however I would expect the 6D to take the lead in overall IQ.
 
Upvote 0
Act444 said:
58Special said:
I use both the 5D mk III and the 7D. I like having both; it is like having two sets of lenses. That being said, if I am close enough I will always go to the 5D.

Same here. If I am not reach-limited, it will almost always be the 5D/6D. If I am, then the 7D offers more reach (at the cost of more noise).

I would be really interested in seeing the difference between the 7D, 70D, and a 5D III/6D in a reach limited situation. The 7D is old tech, so it is going to be noisier. Theoretically, with sensors that use the same generation of technology, in a reach-limited situation the noise should not be different once the results are normalized. I am willing to bet good money that the 70D performs markedly better than the 7D in such a situation, and when downsampled to the same size as the 5D III, there would not be any significant difference in noise.
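To make that normalization argument concrete, here is a rough sketch with invented numbers; the pixel counts and read-noise figures below are hypothetical, not measurements of the 7D, 70D, or 5D III.

```python
import math

# Hypothetical illustration of the "same total light, normalize to the subject" idea.
# All figures below are invented for the sake of the arithmetic; they are not
# measured values for any real camera.

def relative_noise_on_subject(total_photons, pixels_on_subject, read_noise_per_pixel):
    """Relative noise for the whole subject once all of its pixels are combined."""
    shot_noise = math.sqrt(total_photons)                               # photon shot noise
    read_noise = read_noise_per_pixel * math.sqrt(pixels_on_subject)    # adds in quadrature
    return math.sqrt(shot_noise**2 + read_noise**2) / total_photons

subject_photons = 2_000_000  # same light from the subject regardless of camera (reach limited)

# Older dense sensor: lots of pixels on subject, noisier per-pixel readout (assumed 8 e-).
print(relative_noise_on_subject(subject_photons, pixels_on_subject=20_000, read_noise_per_pixel=8))
# Newer dense sensor: same pixel count, cleaner readout (assumed 3 e-).
print(relative_noise_on_subject(subject_photons, pixels_on_subject=20_000, read_noise_per_pixel=3))
# Larger-pixel sensor: fewer pixels on subject, similar per-pixel readout (assumed 3 e-).
print(relative_noise_on_subject(subject_photons, pixels_on_subject=8_000, read_noise_per_pixel=3))
```

With these made-up numbers the older dense sensor comes out noticeably worse, while the newer dense sensor and the larger-pixel sensor land within a few percent of each other once everything is normalized, which is the shape of the result predicted above.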
 
Upvote 0

Lee Jay

AlanF said:
Lee Jay said:
AlanF said:
It is not true as a general statement that the more pixels on target, the better. There have to be optimum sizes of pixels and optimal numbers on target, as shown by the following arguments. The signal to noise of a pixel increases with its area:

But the signal to noise ratio of a given sensor area does not increase with increasing pixel size.
The dynamic range is also greater for large pixels that can accommodate a large number of electrons.

This is also untrue or the G15 wouldn't have more base ISO DR than the 1Dx despite having pixels with 1/14th as much area.

http://www.sensorgen.info/CanonPowershot_G15.html
http://www.sensorgen.info/CanonEOS-1D_X.html

There are factors other than pixel size that determine DR, which become the limiting factors for larger sensors - if size were the only factor then a Sony sensor would have the same DR as a Canon. However, it is basic physics that DR will eventually decrease with decreasing pixel size because of the number of electrons that can be accommodated in a well.

The noise of individual pixels is important as well as the overall noise of a particular area of sensor. That is, the overall signal to noise might be independent of the number of pixels, but the variation of signal within that area is what you actually see as noise. Suppose you take a photo of a pure blue background. With a very low pixel density, you will see a very flat blue image. With very high pixel density, you would see lots of colour variation when you pixel peep.

Nope, you're still off the rails. Smaller pixels can and often do have the same DR as bigger pixels, in good light. Bigger pixels tend to win in extremely photon starved conditions for secondary reasons but we're not talking about those extremes here.

Look at it this way. If you slice up one pixel into four, what have you done? You've got the same DR because read noise drops with well capacity. You've got smaller wells but they collect from a smaller area so they fill at the same rate. You're still collecting all the same light so you've got the same SnR due to shot noise. All you've really done is increase detail. If you want that detail, you can have it. If you don't, you can be stupid and block average those four pixels back down into one big one. Or, you can apply far more sophisticated noise reduction techniques than simple block averaging and end up with both more detail and less noise.
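A back-of-the-envelope check of that argument, using made-up well-capacity and read-noise numbers (not any real sensor's specs) and assuming, as the post does, that read noise scales down with well capacity:

```python
import math

# Hypothetical figures, chosen only to make the arithmetic easy to follow.
full_well_big = 60000.0   # electrons one large pixel can hold (assumed)
read_noise_big = 8.0      # read noise of that pixel, in electrons (assumed)
signal_big = 40000.0      # electrons collected in some exposure (assumed)

# Slice the pixel into four: each small pixel covers 1/4 of the area, so it
# collects 1/4 of the light and, per the premise above, has 1/4 the well
# capacity and proportionally lower read noise.
full_well_small = full_well_big / 4
read_noise_small = read_noise_big / 4
signal_small = signal_big / 4

# Per-pixel dynamic range (full well / read noise) is unchanged:
print(math.log2(full_well_big / read_noise_big))      # ~12.9 stops
print(math.log2(full_well_small / read_noise_small))  # ~12.9 stops

# Block-average (sum) the four small pixels back into one "big" pixel:
signal_sum = 4 * signal_small                        # same total light collected
shot_noise_sum = math.sqrt(signal_sum)               # shot noise of the combined signal
read_noise_sum = read_noise_small * math.sqrt(4)     # read noise adds in quadrature

snr_big = signal_big / math.sqrt(signal_big + read_noise_big**2)
snr_sum = signal_sum / math.sqrt(shot_noise_sum**2 + read_noise_sum**2)
print(snr_big, snr_sum)  # ~199.8 vs ~200.0: effectively identical
```

Under those assumptions the four small pixels, combined, match the single large pixel on both DR and SNR, and you keep the option of not combining them when you want the detail.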
 
Upvote 0
jrista said:
scyrene said:
Is that software Windows-only? I did a bit of searching, and couldn't find system requirements anywhere. For some reason, most of this sort of software doesn't run on Macs, so I had to use the only stacker I could find that did, called 'Keith's Image Stacker'. It's pretty good but I have no understanding of how different modes produce different results. I guess I should read up on it more.

Yeah, Windows is pretty much the operating system of choice for astrophotography software. There are some newer apps available for iOS devices, but overall, not much of the software we use runs on Macs. I think most astrophotographers either dual-boot their Macs (virtualization tends to be problematic and too slow) so they can run Windows when they need to...or they simply have a Windows-based laptop for their astrophotography stuff.

If you're interested in AP, then I highly recommend you pick up a Windows box of some kind. The vast majority of the software out there, like BackyardEOS, is only available for Windows.

scyrene said:
As for video - this is a question I've had for a while. Even HD video is only 2MP. No matter how much resolution you're gaining through stacking, surely you're losing 90% (of the 5DIII's potential) versus stills? Any thoughts? When I stacked my moon (I've included a crop of a much reduced-size below) I had to shoot lots of stills manually and use those instead. You're clearly doing something better though, as you seem to be pulling out a similar level of detail even though I was at a much higher focal length (5600mm).

BackyardEOS has some unique features. It seems to be able to use the 5x and 10x zoom features of live view, then it records a 720p video from those zoomed-in views. So, you're actually getting more resolution than if you recorded a 720p video at 1x.

For superresolution algorithms to work, you need your frames to be pretty close together. You want some separation, a minimal amount of time to allow the subject to "jitter" between frames, as it's the jitter that allows an algorithm like drizzle to work in the first place.

At 5600mm, you should be able to pull out some extreme detail of a small area of the moon's surface. I'd love to have 5600mm at my disposal! :D If you pick up a windows laptop, install BackyardEOS, and play around with the planetary imaging feature...you'll start to see how it all works, and you'll start getting amazing results.

Thanks! I have an old desktop PC, I might see if I can use that for some of this. The primary problems at that focal length without a tracker (I have a tracking mount, but it can't take the weight of the supertelephoto) are wobble (even the slightest breeze catches the lens) and especially the movement of the moon across the frame.

I don't want to bog you down if it's a huge subject, but what is drizzle, roughly, and why does a gap between frames make it less effective, since the atmospheric distortion is random? Once the frames are aligned, how can the software tell whether they were taken milliseconds or minutes apart?
 
Upvote 0
MichaelHodges said:
LetTheRightLensIn said:
You won't?? Even if you use say a 7D and a 5D2 and the 7D sensor is more efficient at collecting and converting photons per area of surface than the 5D2?? With the 7D you can choose to get either: more detail (unless conditions are super bad) and more noise, OR slightly better detail with fewer de-bayer and other artifacts and slightly better noise (if you view or convert to the same scale as the 5D2).

Unfortunately it doesn't work that way in crepuscular conditions. You can have all the "pixels on target" you want, but if the sensor can't handle the low lighting (7D), you're not going to get the shot. And by "shot", I mean something you can print at 16x20.

Yeah, but we are not talking about the entire frame on target, we are talking reach limited. Sure, if you are close enough to frame the animal as you like on a FF then that does much better, but we are talking reach limited, so the animal fills the same amount of sensor area on either sensor, and in that case it does work exactly that way.


In these cases, noise is the bottleneck.

Depends upon what you mean by these cases. If you simply mean near sunset and just after sunrise in general, then that is not true. A 7D always does at least a trace better than a 5D2, for instance, even then WHEN FULLY REACH LIMITED, and since that is a scenario that exists and is not even all that rare, it is definitely not true in general that the FF always does better than the 7D, even during the crepuscular hours when animals are out and about.
 
Upvote 0
docsmith said:
And I make this post knowing this thread is titled "The Reach War," but I was a little surprised that no one else had yet brought up that the differences between the 5DIII and 7D (or 70D) are about more than reach and noise.

You answered your own question: the thread is titled "The Reach War." So why on earth should it get into discussions of which grip is nicer or which AF is better or whatnot? At that point you may as well start talking about which dish of fish tastes best.
 
Upvote 0
MichaelHodges said:
jrista said:
Regarding birds and DR...to be honest, I have not found that dynamic range is the issue when photographing birds.

So two golden eagles swooping in to take out a bald eagle at your back, silhouetted in the sun doesn't present a DR issue? What about bighorn rams fighting each other in uneven forest light? People wait all year for those moments, heck, they wait years. A second later, it could be gone.

Dynamic range is the single biggest issue with wildlife photography, IMHO. That's why the shadow recovery in the Sony sensors is so appealing.

Personally I encounter DR issues far less for wildlife than for landscapes (of course I also shoot wildlife a lot less than landscapes ;) ). That said, there are times where interior forest dappled lighting on, say, turkeys or pileated woodpeckers makes it tricky. At times even with small birds, they can perch in a way that the head is glowing and the body is in deep shade, and it can be rough going, so yeah, it can definitely happen that more DR would be useful.

Also, many more wildlife shots, in my experience, are taken beyond ISO 400, and that is when the DR differences between brands start to lessen. Once you are shooting at ISO 1600 there's really not much between them; yeah, the Sony still does a trace better, but it's nothing to bother about then, and even at ISO 800, while the difference is definitely there, it's no longer night and day.

That said, sure more DR for wildlife shooting or sports would be welcomed too of course, no doubt.

Oh, also, sometimes you get surprised and have just a random instant for a shot and maybe can't dial in exposure perfectly; with more DR you can rescue underexposed shots much more easily.
 
Upvote 0
scyrene said:
I don't want to bog you down if it's a huge subject, but what is drizzle, roughly, and why does a gap between frames make it less effective, since the atmospheric distortion is random? Once the frames are aligned, how can the software tell whether they were taken milliseconds or minutes apart?

It isn't so much the gap between frames as the total frame count. It's a lot, lot easier to get thousands of frames when using video. When you take them one at a time, there is a fairly significant overhead, an overhead that could last the span of several frames. Using video cuts down that overhead significantly. For superresolution to be effective, you actually don't want all the frames to be perfectly aligned...you want very very slight variations between each frame, as the algorithm uses those differences to enhance resolution and "see" past things like atmospheric turbulence, diffraction, optical aberrations, etc.

You could still do superresolution with individual still frames. You would just need several times as much time to gather enough frames for it to be effective. (Although, if there is enough movement of the subject between frames, as is the case without tracking...that can actually be too much movement. You want small movements between frames, but otherwise have the subject remain generally stationary...if it's drifting across the frame, then you first do have to align, and alignment might result in everything being TOO consistent across frames, thereby reducing the effectiveness of the superres algorithm.) This is particularly true when a superres algorithm is used in conjunction with a stacking algorithm and other algorithms, as is the case with planetary integration software, as those programs will drop a considerable number of frames that do not meet certain quality criteria. Remember, the goal, with the moon...or Mars...or any other planet, is to take only the frames from those moments when seeing clears enough that the detail shows through really well. So, if you can take 100 still frames in 5 minutes, or 1000 video frames in 30 seconds, well, you're going to choose video. ;)
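For anyone curious what the jitter actually buys you, here is a toy shift-and-add sketch on synthetic data. It is not BackyardEOS, Drizzle, or any real planetary stacker (those also do frame grading, alignment, and sharpening); the frame counts, noise levels, and helper names are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
FACTOR = 4  # each captured frame is 4x coarser than the grid we reconstruct on

def make_truth(n=128):
    # A hypothetical high-resolution "scene" with some fine structure.
    y, x = np.mgrid[0:n, 0:n]
    return np.sin(x / 3.0) * np.cos(y / 5.0) + 1.5

def capture_frame(truth, shift):
    # Simulate one low-res frame: the scene jitters by a sub-(output)-pixel amount,
    # then the sensor bins it down and adds a little noise.
    sy, sx = shift
    jittered = np.roll(np.roll(truth, sy, axis=0), sx, axis=1)
    n = truth.shape[0]
    binned = jittered.reshape(n // FACTOR, FACTOR, n // FACTOR, FACTOR).mean(axis=(1, 3))
    return binned + rng.normal(0.0, 0.05, binned.shape)

truth = make_truth()
shifts = [(int(rng.integers(0, FACTOR)), int(rng.integers(0, FACTOR))) for _ in range(200)]
frames = [capture_frame(truth, s) for s in shifts]

# Shift-and-add: put each coarse frame back on the fine grid at its known offset
# and average. Because different frames sampled the scene at different phases,
# the stack reconstructs the fine grid far better than any single blocky frame.
accum = np.zeros_like(truth)
for (sy, sx), frame in zip(shifts, frames):
    upsampled = np.kron(frame, np.ones((FACTOR, FACTOR)))           # naive nearest-neighbour upsample
    accum += np.roll(np.roll(upsampled, -sy, axis=0), -sx, axis=1)  # undo the known jitter
result = accum / len(frames)

# Compare against one frame naively upsampled: the stack is less blocky and less noisy.
single = np.roll(np.roll(np.kron(frames[0], np.ones((FACTOR, FACTOR))),
                         -shifts[0][0], axis=0), -shifts[0][1], axis=1)
print(np.abs(single - truth).mean(), np.abs(result - truth).mean())
```

With only a handful of frames the reconstruction stays noisy and blocky; with hundreds it cleans up, which is the practical reason video (thousands of frames in seconds) beats manually fired stills for this.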
 
Upvote 0
LetTheRightLensIn said:
That said, sure more DR for wildlife shooting or sports would be welcomed too of course, no doubt.

Totally agree here. Having more DR is not a problem, and can make some of the rarer but tougher situations, like the ones you described, easier to deal with.

To that end, I think Magic Lantern is a HUGE bonus for Canon shooters, as (at least so far, with the 6D) they have managed to increase high ISO DR to levels that were previously only attainable at ISO 400 and below on most cameras (the notable exceptions being 1DX and D4).
 
Upvote 0
AlanF said:
jrista said:
serendipidy said:
Jrista,
Great images and informative discussion. I have learned a lot. Very confusing to noobs. I remember someone on CR frequently talking about better resolution being related to "number of pixels on target." So with reach-limited subjects, you need either a higher focal length lens or more (i.e. smaller) pixels per area on the sensor to get better detail resolution. Did I say that correctly?

Yeah, that's correct. BTW, it's me who has always said "pixels on target". ;) I read that a long time ago on BPN forums, from Roger Clark I think, and started experimenting with it. I think it's the best way to describe the problem...because it scales. It doesn't matter how big the pixels are, or how big the sensor is...more pixels on target, the better the IQ. If you are only filling 10% of the frame, try to fill 50%. It doesn't matter if the frame is APS-C, FF, or something else...it's all relative.

It is not true as a general statement that the more pixels on target, the better. There have to be optimum sizes of pixels and optimal numbers on target, as shown by the following arguments. The signal to noise of a pixel increases with its area: the bigger the pixel, the greater the number of photons flowing through it and the greater the current generated, and the statistical variation in both becomes less important. The dynamic range is also greater for large pixels that can accommodate a large number of electrons. A low megapixel sensor should have very good signal to noise and DR, but poor resolution. Now, see what happens as we progress to the other extreme. As we decrease the size of the pixel, the resolution increases but the statistical noise starts to increase as the number of photons hitting each pixel decreases per unit time. The electrical noise also increases until the inherent noise in the circuit becomes greater than that due to the fluctuation in the number of electrons generated by the photons. We all experience this as the noise caused by increasing the ISO setting. The dynamic range also decreases. Eventually, the pixel becomes so small that it loses all of its dynamic range because the well is so shallow it can hold only a few electrons.

So, too large a pixel gives too little resolution; too small a pixel gives too much noise and too little dynamic range. You could have 20 billion uselessly small pixels on target where 20 million would be the optimal number. Because of the above reasoning, astrophotographers and astronomers match pixel size to their telescopes. For photographers, the optimal pixel size for current sensors is around the range found in crop to FF bodies.

Actually, it works out that smaller pixels tend to help dynamic range overall, although they hurt, say, mid-tone SNR. However, the degree to which they hurt SNR is pretty modest at the typical densities we are comparing with current cameras. If you compared a 180MP APS-C to an 8MP APS-C, the 180MP one might start to suffer enough to care with the current tech; BSI and such might help that though. But for say a 36MP FF vs a 12MP FF the difference is so minor that it's really nothing to bother about; it depends on the exact tech, but let us even say 1/8th of a stop for kicks, who really cares.
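A quick arithmetic sketch of why normalized DR can come out ahead for the smaller pixels. The well capacities and read noises below are invented, and the small pixels are assumed to read out with a bit less than half the big pixel's read noise, which is the condition for them to come out ahead:

```python
import math

# Hypothetical sensors of the same physical size; all figures are invented.
full_well_big = 90000.0   # e- for one large pixel covering a given patch of sensor
read_noise_big = 12.0     # e-

full_well_small = full_well_big / 4   # the same patch split into four smaller pixels
read_noise_small = 5.0                # e- per small pixel (assumed: a bit under half of 12)

# DR of the big pixel over that patch:
dr_big = math.log2(full_well_big / read_noise_big)

# Normalize the small-pixel sensor to the same output size by summing each 2x2 block:
read_noise_binned = read_noise_small * math.sqrt(4)   # read noise adds in quadrature
dr_small_binned = math.log2((4 * full_well_small) / read_noise_binned)

print(f"big pixels:           {dr_big:.2f} stops")            # ~12.9
print(f"small pixels, binned: {dr_small_binned:.2f} stops")   # ~13.1
# Mid-tone SNR per patch is set by the total light collected, so it is essentially
# unchanged; the small pixels only lose normalized DR if their per-pixel read noise
# is more than half the big pixel's.
```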
 
Upvote 0
jrista said:
scyrene said:
I don't want to bog you down if it's a huge subject, but what is drizzle, roughly, and why does a gap between frames make it less effective, since the atmospheric distortion is random? Once the frames are aligned, how can the software tell whether they were taken milliseconds or minutes apart?

It isn't so much the gap between frames as the total frame count. It's a lot, lot easier to get thousands of frames when using video. When you take them one at a time, there is a fairly significant overhead, an overhead that could last the span of several frames. Using video cuts down that overhead significantly. For superresolution to be effective, you actually don't want all the frames to be perfectly aligned...you want very very slight variations between each frame, as the algorithm uses those differences to enhance resolution and "see" past things like atmospheric turbulence, diffraction, optical aberrations, etc.

You could still do superresolution with individual still frames. You would just need several times as much time to gather enough frames for it to be effective. (Although, if there is enough movement of the subject between frames, as is the case without tracking...that can actually be too much movement. You want small movements between frames, but otherwise have the subject remain generally stationary...if it's drifting across the frame, then you first do have to align, and alignment might result in everything being TOO consistent across frames, thereby reducing the effectiveness of the superres algorithm.) This is particularly true when a superres algorithm is used in conjunction with a stacking algorithm and other algorithms, as is the case with planetary integration software, as those programs will drop a considerable number of frames that do not meet certain quality criteria. Remember, the goal, with the moon...or Mars...or any other planet, is to take only the frames from those moments when seeing clears enough that the detail shows through really well. So, if you can take 100 still frames in 5 minutes, or 1000 video frames in 30 seconds, well, you're going to choose video. ;)

Thanks again :) I can try the video option for the moon at least, where I can pull down the focal length and still have it fill the frame. Since I downsize the final images from stills anyway, it probably edges in favour of the big stack. But for tinier subjects, like planets, unless I went with your (amazing sounding!) magnified video option, I'd stick with manually-taken frames. I managed to take 100-200 before getting bored :)
 
Upvote 0
jrista said:
LetTheRightLensIn said:
That said, sure more DR for wildlife shooting or sports would be welcomed too of course, no doubt.

Totally agree here. Having more DR is not a problem, and can make some of the rarer but tougher situations, like the ones you described, easier to deal with.

To that end, I think Magic Lantern is a HUGE bonus for Canon shooters, as (at least so far, with the 6D) they have managed to increase high ISO DR to levels that were previously only attainable at ISO 400 and below on most cameras (the notable exceptions being 1DX and D4).

doesn't that lose half the res though? that would be bad for reach limited wildlife in particular I'd think
although perhaps for the parts of the body in shade the detail is not as often critical?
 
Upvote 0
weixing said:
Hi,
Today, I did a comparison shoot of FF vs APS-C on a real bird under real-life conditions... I only managed to try out ISO 1600 and ISO 3200 as it started to rain very heavily after this. I just opened them using Lightroom 4, took a screenshot, pasted it into Paint and saved as JPEG.
Test Condition
Camera: Canon 6D (left) vs Canon 60D (right)
Lens: Tamron 150-600mm @ F8
Subject: Stork-billed Kingfisher at around 18m (this is the only real bird that I can find that will stay in the same place for an extended period of time with minimal movement).
Weather: Cloudy


After looking at the comparison shots, my initial conclusion is that the 60D sensor doesn't seem to have a significant detail advantage (if any) over the 6D under real-life conditions (at least this seems to be true when using the Tamron 150-600mm lens), and the 6D (up to ISO 3200) doesn't seem to have a real noise advantage if the 60D image is scaled down.

Have a nice day.

Despite a relatively low contrast subject and a lens said to be fairly soft at the extreme end (and perhaps other issues with AF or shake, for all we know), I still saw that the 60D was giving more detail. Comparing like that can sometimes give a slight apparent advantage to the lower density camera, since the eye tends to confuse crispness with detail.

PS: The CanonRumors website seems to scale down the screenshot image (actual size is 1920 x 1080) to fit the website frame... to view at actual size, you need to click on the image and use the scroll bar below the post to scroll through the image... or is there a setting to show the image at actual size??

Yeah, a real drag. I ended up just right-clicking and saving to my computer and then viewing with an image viewer at 100%.
 
Upvote 0
jrista said:
In crepuscular light, the low light around sunrise and sunset, you are NOT going to be using ISO 100 or 200. As you say, you're going to be up at ISO 12800. You need the high ISO so you can maintain a high shutter speed, so you can freeze enough motion to get an acceptable image. There are times during the day when you can capture wildlife out and about, but the best times are indeed during the crepuscular hours of the day.

Just for reference, here are the dynamic range values for four key cameras at ISO 12800:

D810: 7.3
D800: 7.3
5D III: 7.8
1D X: 8.8

As far as dynamic range for wildlife and bird photography during "activity hours" goes, there is no question the 1D X wins hands down. It's got a 1.5 stop advantage over the D800/D810, the supposed dynamic range kings.

Good points in general, although the wrong specific data.
The actual values should have been listed as:

ISO12,800 DR
D810: 8.13
5D3: 8.25
1DX: 8.99

1DX with a 0.86 stop advantage over D810.

So there is nothing in it for the 5D3 vs D810, although the 1DX does give you nearly a stop more, which is nice. In other respects it gets tricky: smaller pixels give finer grain, which bothers the eye less and allows you to apply more advanced NR techniques. I'm not sure how banding and glow and such compare between the Nikon and Canon at 12,800. So perhaps the real feel of the difference for some shots would be more, or perhaps less.

(I know you asked to not have this brought up again, but since you went to DxO, there is no choice. You can't talk about smaller pixels not hurting and then, when you go to DxO, choose the wrong setting, the one that does penalize smaller pixels.)

At ISO 6400, we have:

we actually have:
D810 9.08
5D3 9.07
1Dx 9.88

etc.
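For reference, the difference between the two sets of figures in this exchange is the normalization step: per-pixel ("Screen") DR versus DR referred to a common 8MP output ("Print"). A rough sketch of that normalization, as I understand DxO's approach (it will not exactly reproduce their published numbers, since measured ISO and their noise model also enter into it):

```python
import math

def normalized_dr(per_pixel_dr_stops, sensor_megapixels, reference_megapixels=8.0):
    """Approximate per-pixel DR referred to a common output size.

    Averaging N/ref pixels per output pixel reduces random noise by sqrt(N/ref),
    which adds 0.5 * log2(N/ref) stops. This is a sketch of the idea behind the
    "Print" normalization, not DxO's exact pipeline.
    """
    return per_pixel_dr_stops + 0.5 * math.log2(sensor_megapixels / reference_megapixels)

# Illustrative (hypothetical) inputs: a 36MP body gains ~1.1 stops from the
# normalization, a 22MP body gains ~0.7 stops.
print(normalized_dr(7.5, 36.0))
print(normalized_dr(7.5, 22.0))
```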
 
Upvote 0
LetTheRightLensIn said:
jrista said:
In crepuscular light, the low light around sunrise and sunset, you are NOT going to be using ISO 100 or 200. As you say, you're going to be up at ISO 12800. You need the high ISO so you can maintain a high shutter speed, so you can freeze enough motion to get an acceptable image. There are times during the day when you can capture wildlife out and about, but the best times are indeed during the crepuscular hours of the day.

Just for reference, here are the dynamic range values for four key cameras at ISO 12800:

D810: 7.3
D800: 7.3
5D III: 7.8
1D X: 8.8

As far as dynamic range for wildlife and bird photography during "activity hours" goes, there is no question the 1D X wins hands down. It's got a 1.5 stop advantage over the D800/D810, the supposed dynamic range kings.

Good points in general, although the wrong specific data.
The actual values should have been listed as:

ISO12,800 DR
D810: 8.13
5D3: 8.25
1DX: 8.99

1DX with a 0.86 stop advantage over D810.

So there is nothing in it for the 5D3 vs D810, although the 1DX does give you nearly a stop more, which is nice. In other respects it gets tricky: smaller pixels give finer grain, which bothers the eye less and allows you to apply more advanced NR techniques. I'm not sure how banding and glow and such compare between the Nikon and Canon at 12,800. So perhaps the real feel of the difference for some shots would be more, or perhaps less.

(I know you asked to not have this brought up again, but since you went to DxO, there is no choice. You can't talk about smaller pixels not hurting and then, when you go to DxO, choose the wrong setting, the one that does penalize smaller pixels.)

At ISO 6400, we have:

we actually have:
D810 9.08
5D3 9.07
1Dx 9.88

etc.

Interesting. Please tell us what RAW converter you use that allows you to edit RAW files downscaled to 8 MP. All of them I've used only allow RAW editing at full resolution. ::) ::) ::)
 
Upvote 0
LetTheRightLensIn said:
jrista said:
LetTheRightLensIn said:
That said, sure more DR for wildlife shooting or sports would be welcomed too of course, no doubt.

Totally agree here. Having more DR is not a problem, and can make some of the rarer but tougher situations, like the ones you described, easier to deal with.

To that end, I think Magic Lantern is a HUGE bonus for Canon shooters, as (at least so far, with the 6D) they have managed to increase high ISO DR to levels that were previously only attainable at ISO 400 and below on most cameras (the notable exceptions being 1DX and D4).

doesn't that lose half the res though? that would be bad for reach limited wildlife in particular I'd think
although perhaps for the parts of the body in shade the detail is not as often critical?

I did not think so...but I could be wrong. If it does, then you are right, it wouldn't be good in a reach-limited situation.
 
Upvote 0
LetTheRightLensIn said:
jrista said:
In crepuscular light, the low light around sunrise and sunset, you are NOT going to be using ISO 100 or 200. As you say, you're going to be up at ISO 12800. You need the high ISO so you can maintain a high shutter speed, so you can freeze enough motion to get an acceptable image. There are times during the day when you can capture wildlife out and about, but the best times are indeed during the crepuscular hours of the day.

Just for reference, here are the dynamic range values for four key cameras at ISO 12800:

D810: 7.3
D800: 7.3
5D III: 7.8
1D X: 8.8

As far as dynamic range for wildlife and bird photography during "activity hours" goes, there is no question the 1D X wins hands down. It's got a 1.5 stop advantage over the D800/D810, the supposed dynamic range kings.

Good points in general, although the wrong specific data.
The actual values should have been listed as:

ISO12,800 DR
D810: 8.13
5D3: 8.25
1DX: 8.99

1DX with a 0.86 stop advantage over D810.

So there is nothing in it for the 5D3 vs D810, although the 1DX does give you nearly a stop more, which is nice. In other respects it gets tricky: smaller pixels give finer grain, which bothers the eye less and allows you to apply more advanced NR techniques. I'm not sure how banding and glow and such compare between the Nikon and Canon at 12,800. So perhaps the real feel of the difference for some shots would be more, or perhaps less.

(I know you asked to not have this brought up again, but since you went to DxO, there is no choice. You can't talk about smaller pixels not hurting and then, when you go to DxO, choose the wrong setting, the one that does penalize smaller pixels.)

At ISO 6400, we have:

we actually have:
D810 9.08
5D3 9.07
1Dx 9.88

etc.

Here is my reference:

http://sensorgen.info/CanonEOS-1D_X.html
http://sensorgen.info/CanonEOS_5D_MkIII.html
http://sensorgen.info/NikonD800.html
http://sensorgen.info/NikonD810.html
 
Upvote 0

AlanF

jrista said:
AlanF said:
jrista said:
serendipidy said:
Jrista,
Great images and informative discussion. I have learned a lot. Very confusing to noobs. I remember someone on CR frequently talking about better resolution being related to "number of pixels on target." So with reach-limited subjects, you need either a higher focal length lens or more (i.e. smaller) pixels per area on the sensor to get better detail resolution. Did I say that correctly?

Yeah, that's correct. BTW, it's me who has always said "pixels on target". ;) I read that a long time ago on BPN forums, from Roger Clark I think, and started experimenting with it. I think it's the best way to describe the problem...because it scales. It doesn't matter how big the pixels are, or how big the sensor is...more pixels on target, the better the IQ. If you are only filling 10% of the frame, try to fill 50%. It doesn't matter if the frame is APS-C, FF, or something else...it's all relative.

It is not true as a general statement that the more pixels on target, the better. There have to be optimum sizes of pixels and optimal numbers on target, as shown by the following arguments. The signal to noise of a pixel increases with its area: the bigger the pixel, the greater the number of photons flowing through it and the greater the current generated, and the statistical variation in both becomes less important.

True. However, that does not falsify my claims about pixels on target. We don't look at pixels. We look at images. Noise is relative to area. If you take 6.25µm pixels and 4.3µm pixels, you can fit 2.1 of the smaller pixels into every one of the larger pixels. Assuming the same technology (which is not actually the case with the 5D III and 7D...but humor me here), those 2.1 smaller pixels have the same amount of signal, and therefore the same amount of noise, as the single larger pixel. Noise is relative to area. If you increase the area of the sensor which your subject occupies, you reduce noise as a RELATIVE FACTOR.
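The area arithmetic behind that 2.1 figure, using the pixel pitches quoted above:

```python
# Quick check of the "2.1 smaller pixels per larger pixel" figure.
pitch_large = 6.25  # µm (5D III-class pixel, as quoted above)
pitch_small = 4.30  # µm (7D-class pixel, as quoted above)

area_ratio = (pitch_large / pitch_small) ** 2
print(round(area_ratio, 2))  # ~2.11

# The same light falls on that patch of sensor either way, so the summed signal
# of ~2.1 small pixels (and hence its shot noise) matches the single large pixel's.
```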


AlanF said:
The dynamic range is also greater for large pixels that can accommodate a large number of electrons. A low megapixel sensor should have very good signal to noise and DR, but poor resolution. Now, see what happens as we progress to the other extreme. As we decrease the size of the pixel, the resolution increases but the statistical noise starts to increase as the number of photons hitting each pixel decreases per unit time.

Per-pixel noise is an absolute factor. You are absolutely right that larger pixels have less noise and higher dynamic range. However ultimately, to maximize IQ, you don't want to achieve some arbitrary balance between pixel size and pixel count. You simply want to maximize the number of pixels on subject, regardless of their size. Because it really isn't about the pixels...it's about the area of the sensor your subject occupies.

In a reach-limited situation, the absolute area of the sensor occupied by your subject is fixed...it doesn't matter how large the sensor is. You will be gathering the same amount of light in total for your subject regardless of what sensor you're using, or how big its pixels are. Therefore, the only other critical factor to IQ is detail...smaller pixels are better, in that case, all else being equal.

AlanF said:
The electrical noise also increases until the inherent noise in the circuit becomes greater than that due to the fluctuation in the number of electrons generated by the photons. We all experience this as the noise caused by increasing the ISO setting. The dynamic range also decreases. Eventually, the pixel becomes so small that it loses all of its dynamic range because the well is so shallow it can hold only a few electrons.

Actually, electronic noise within the pixels themselves, ignoring all other sources of read noise (which tend to be downstream from the pixels), is due to dark current. Dark current noise is relative to pixel area and temperature...and dark current noise DROPS as pixel size drops. The amount of dark current that can flow through a photodiode is relative to its area, just like the charge capacity of a photodiode is relative to its area. So, technically speaking, electronic noise does not increase as pixel size decreases. Again, dark current noise is relative to the unit area...pixel size, ultimately, does not matter.

When it comes to read noise overall, that actually has far less to do with pixel size, and far more to do with the downstream pixel processing logic, how it's implemented, the frequency at which those circuits operate, etc. Most read noise comes from the ADC unit, especially when they are high frequency. I've seen read noise in CCD cameras that use Kodak KAF sensors change from one iteration to the next. A camera using a KAF-8000 series sensor had as much as 40e- read noise a number of years ago. The same cameras today have ~7e- read noise. They use identical sensors...the only real difference is in the readout electronics. That's because read noise isn't a trait inherent to the sensor...it's related to all the logic that reads the sensor out and converts the analog signal to a digital signal. Canon could greatly reduce their read noise, without changing their sensor technology at all...because the majority of their noise comes from circuitry off-die in the DIGIC chips.
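To put some (entirely hypothetical) numbers on where the noise comes from, here is a toy noise budget; none of these values are measurements of any Canon, Kodak, or other sensor:

```python
import math

def toy_noise_budget(signal_e, read_noise_e, dark_current_e_per_s, exposure_s):
    """Shot, dark-current, and read noise combined in quadrature (all inputs hypothetical)."""
    shot = math.sqrt(signal_e)                            # photon shot noise
    dark = math.sqrt(dark_current_e_per_s * exposure_s)   # dark current is Poisson too
    total = math.sqrt(shot**2 + read_noise_e**2 + dark**2)
    return shot, dark, total

# A well-exposed pixel: shot noise dominates, the readout chain barely matters.
print(toy_noise_budget(signal_e=20000, read_noise_e=25, dark_current_e_per_s=0.1, exposure_s=1 / 500))

# A deep-shadow pixel at the same settings: the same read noise is now the
# largest term, which is why cleaning up the readout chain (ADC and downstream
# electronics) pays off far more there than changing pixel size does.
print(toy_noise_budget(signal_e=50, read_noise_e=25, dark_current_e_per_s=0.1, exposure_s=1 / 500))
```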

AlanF said:
So, too large a pixel gives too little resolution; too small a pixel gives too much noise and too little dynamic range. You could have 20 billion uselessly small pixels on target where 20 million would be the optimal number. Because of the above reasoning, astrophotographers and astronomers match pixel size to their telescopes. For photographers, the optimal pixel size for current sensors is around the range found in crop to FF bodies.

You're ignoring the fact that you can always downsample an image made with a higher resolution sensor to the same smaller dimensions as an image made with bigger pixels. The 7D and 5D III are the cameras I used because they are the cameras I have. I often use the term "all else being equal" in my posts, because it's a critical factor. The 7D and 5D III are NOT "all else being equal". They are a generation apart. The 7D pixels are technologically inferior to the 5D III pixels.

So, ASSUMING ALL ELSE BEING EQUAL, there is absolutely no reason to pick larger pixels over smaller pixels, assuming you're going to be framing your subject the same with identical sensor sizes. Say you're photographing a baboon's face and you frame it so that face fills the frame with a nice amount of negative space: given a 10MP and a 40MP camera, you should ALWAYS pick the sensor with smaller pixels. You can always downsample the 40MP image by a factor of two, and you'll have the same amount of noise as the 10MP camera. Noise is relative to unit area. It doesn't matter if that unit area is one pixel in a 10MP camera, or four pixels in a 40MP camera...it's still the same unit area. Average those four smaller pixels together, and you reduce noise by a factor of two. Which is exactly the same thing as binning four pixels during readout, which is also exactly the same thing as simply using a bigger pixel.

The caveat, here, is that with a 40MP sensor, you have the option of resolving more detail. You plain and simply don't have that option with the 10MP sensor. More pixels just delineate detail...and noise...more finely. Finer noise has a lower perceptual impact on our visual observation. If the baboon face is framed the same, then you're gathering the same amount of light from that baboon's face regardless of pixel size. Photon shot noise (the most significant source of noise in our photos) is intrinsic to the photonic wavefront entering the lens and reaching the sensor. Smaller pixels simply delineate that noise more finely.
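A small numpy sanity check of the downsampling claim in the quoted explanation above, on purely synthetic Poisson data (no real camera files involved; the photon counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

mean_photons_per_big_pixel = 4000   # hypothetical exposure level for the framed subject
h, w = 500, 500                     # the subject covers this many "big" pixels

# Big-pixel camera: each pixel on the subject collects ~4000 photons.
big = rng.poisson(mean_photons_per_big_pixel, size=(h, w)).astype(float)

# Small-pixel camera: twice the linear resolution, each pixel collects 1/4 the light.
small = rng.poisson(mean_photons_per_big_pixel / 4, size=(2 * h, 2 * w)).astype(float)

# Bin the small-pixel image 2x2 by summing, so each output pixel covers the same
# sensor area (and hence the same total light) as one big pixel.
binned = small.reshape(h, 2, w, 2).sum(axis=(1, 3))

print(big.std() / big.mean())        # ~0.016, i.e. 1/sqrt(4000)
print(binned.std() / binned.mean())  # ~0.016 as well: same noise per unit area
```

Shot noise only, of course; the earlier posts cover how read noise and technology generation shift things at the margins.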

Jon
I am using the same source of information that you quoted for number of pixels on target - Clark.

http://www.clarkvision.com/articles/does.pixel.size.matter/

Quote: "The images in Figures 10 and 11 illustrate that combining pixels does not equal a single image. The concept of a camera with many small pixels that are averaged to simulate a camera with larger pixels with the same sensor size simply does not work for very low light/high ISO conditions. This is due to the contribution of read and electronics noise to the image. Again this points to sensors with larger pixels to deliver better image quality in high ISO and low light situations."
 
Upvote 0

AlanF

jrista said:
AlanF said:
+1 My biggest mistakes are when my camera is set for point exposure for birds against a normal background and one flies by against the sky and I don't have time to dial in +2 ev to compensate or vice versa. Two more stops of DR would solve those problems.

This is a case where you want more DR to eliminate the need for the photographer to make the necessary exposure change. If you encounter this situation a lot, I highly recommend reading Art Morris' blog, and maybe buy his book "The Art of Bird Photography". He has an amazing technique for setting exposure quickly and accurately, such that making the necessary change quickly to handle this situation properly would not be a major issue.

Personally, I wouldn't consider this a situation where more DR is necessary. It might be a situation where more DR solves a problem presented by a lack of certain skills...but it is not actually a situation where more DR is really necessary.

Autofocus is not necessary, automatic metering is not necessary, IS is not necessary. The fact is that having those features makes it a lot easier, and having an extra couple of stops of DR would also make it easier. It is not a question of a lack of skill but of having a camera that eliminates one more variable.
 
Upvote 0