Full Frame and Bigger Pixels vs. APS-C and Smaller Pixels - The Reach War

jrista said:
Personally, I believe the idea of a lens "outresolving" a sensor, or a sensor "outresolving" a lens, is a misleading concept. Output resolution is the result of a convolution of multiple factors that affect the real image being resolved. Sensor and lens work together to produce the resolution of the image you see in a RAW file on a computer screen...one isn't outresolving the other. I've gone over that topic many times, so I won't go into detail again here, but ultimately, the resolution of the image created by both the lens and sensor working together in concert is closely approximated by the formula:

It is not a misleading concept, just one that has to be used carefully. There are many processes in physics and chemistry where the end result depends on all the components, usually summed as reciprocals; e.g. resistors in parallel in an electric circuit, or the overall resolution of an optical system. Where those components all make similar contributions, none of them dominates and all are taken into the reckoning. E.g., a 1 ohm resistor in parallel with a 1 ohm resistor has an overall resistance of 0.5 ohms. However, if one component dominates the system then the others are unimportant - the overall resistance of a 1 ohm resistor in parallel with a 100 ohm resistor is little different from a 1 ohm in parallel with a million ohms: all very close to 1 ohm. If a lens at a particular aperture projects a point source to an image much smaller than a pixel, then increasing the number of pixels could be useful to increase resolution, as the lens is outresolving the sensor. If the lens projects a point source to a size much larger than a pixel, the sensor is outresolving the lens and it is a waste of time increasing the number of pixels. When the point size is similar to the pixel size, the situation is indeed more complicated, as neither dominates.
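The dominance effect described above is easy to check numerically. A minimal Python sketch, using the reciprocal-sum rule cited for parallel resistors (the resolution formulas combine contributions in an analogous way):

```python
def combined(*components):
    """Reciprocal-sum combination, as for parallel resistors:
    1/total = sum of 1/c over all components."""
    return 1.0 / sum(1.0 / c for c in components)

# Two equal contributions: neither dominates, both matter.
# Two 1-ohm resistors in parallel give 0.5 ohms overall.
print(combined(1.0, 1.0))

# One component dominates: 1 ohm with 100 ohms is nearly 1 ohm...
print(round(combined(1.0, 100.0), 3))

# ...and 1 ohm with a million ohms is almost exactly 1 ohm.
print(round(combined(1.0, 1e6), 6))
```

When one term is far smaller than the others, it dominates the combined result, which is the "lens outresolving sensor" (or vice versa) regime described above.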
 
Upvote 0
MichaelHodges said:
Dynamic range is the single biggest issue with wildlife photography, IMHO.

Ok, but...

MichaelHodges said:
I can get usable images at ISO 12,800.

So <10 stops is sufficient? ;)

I don't think anyone is saying DR is unimportant, just that some of us don't routinely encounter high DR scenes in bird/wildlife photography. You frequently state you commonly shoot in crepuscular light; I'm not sure if you mean that specifically or actually mean 'late in the day' but use 'crepuscular' because it's a nice word. Crepuscular refers to twilight, which occurs before the sun rises or after it drops below the horizon. Since there usually aren't many sources of artificial light in the wild, after the sun goes down there's not a whole lot of scene DR.
 
Upvote 0
AlanF said:
It is not true as a general statement that the more pixels on target, the better. There have to be optimum sizes of pixels and optimal numbers on target, as shown by the following arguments. The signal to noise of a pixel increases with its area:

But the signal to noise ratio of a given sensor area does not increase with increasing pixel size.
The dynamic range is also greater for large pixels that can accommodate a large number of electrons.

This is also untrue, or the G15 wouldn't have more base ISO DR than the 1D X despite having pixels with 1/14th the area.

http://www.sensorgen.info/CanonPowershot_G15.html
http://www.sensorgen.info/CanonEOS-1D_X.html
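The "SNR of a given sensor area" point can be illustrated with a toy shot-noise simulation. This is a hypothetical sketch, not a model of any real sensor: it assumes pure photon shot noise (no read noise), approximated as Gaussian, and compares one large pixel against 16 small pixels binned over the same patch:

```python
import random

random.seed(42)

TRIALS = 20000
RATE = 1600.0  # mean photons falling on a fixed patch of sensor per exposure

def photons(mean_count):
    """Photon arrival count: normal approximation to Poisson shot noise
    (reasonable for counts this large)."""
    return max(0.0, random.gauss(mean_count, mean_count ** 0.5))

def snr(samples):
    """Signal-to-noise ratio: mean divided by standard deviation."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean / var ** 0.5

# Case A: the patch is one big pixel.
big = [photons(RATE) for _ in range(TRIALS)]

# Case B: the same patch split into 16 small pixels, then binned (summed).
small_binned = [sum(photons(RATE / 16) for _ in range(16)) for _ in range(TRIALS)]

# Shot-noise theory predicts SNR = sqrt(1600) = 40 for the patch either way.
print(round(snr(big), 1), round(snr(small_binned), 1))
```

Read noise, which is paid per pixel rather than per unit area, is what breaks this equivalence in real sensors, which is part of why the pixel-size debate is more complicated in practice.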
 
Upvote 0
neuroanatomist said:
I don't think anyone is saying DR is unimportant, just that some of us don't routinely encounter high DR scenes in bird/wildlife photography. You frequently state you commonly shoot in crepuscular light, I'm not sure if you mean that specifically or actually mean 'late in the day' but use 'crepuscular' because it's a nice word. Crepuscular is twilight, which occurs after the sun drops below the horizon. Since there usually aren't many sources of artificial light in the wild, after the sun goes down there's not a whole lot of scene DR.

In this context the term describes animal behavior: deer, bear, moose and so forth. These animals (especially what some would consider 'trophies') are more active during crepuscular periods. For those who shoot big mammals, crepuscular periods are "go time". Anything can and does happen. This is when the APS-C gets switched off and the FF comes out. You'll get better shutter speeds and far less noise overall.

I think there are valid points to using APS-C for small birds in daylight or prime lighting conditions. But for me, the true test of a camera is how it does in unfavorable conditions. The FF will perform admirably in these lowlight periods and capture moments in relatively clean detail.
 
Upvote 0
Wow! I just want to say that my inner engineer geek and photographer geek are really having a blast reading this thread! I know enough to follow the concepts but I also know when my understanding isn't solid yet. When I have more time I'm going to read it through at least once or twice more! This thread is why I stick with the CR forum! The in-depth information I'm seeing and back & forth discussion are riveting.

Thanks so much for putting so much effort and time into this (and many other threads). I'm sure some are reading this and saying, "Who Cares?" but not me. I'm seriously geeking out here. :p

My comments on the thread so far are...

- While I understand what neuro and jrista are saying about DR, I agree with MichaelHodges in that DR carries more significance. At least for me it does for some of the same reasons he states. I bristled a bit when I read jrista's DR opinions. Of course, I'm not shooting wildlife as often as I'm shooting boy life (scouts running around, etc) but the same factors apply, except I probably won't be killed!

- I've read things in the past about lens resolution vs. sensor resolution, and while some of the discussions seemed to have a lot of evidence and facts to back up the theory, my gut has never believed it. Good glass is good glass, and if it was good enough to produce a beautiful image on film in the '90s, it should still be good enough to produce a beautiful image on a sensor in 2010. The light, the glass and the sensor don't know when the lens was made; they just do their thing. If the glass is clean, aligned correctly and focused properly, the image should be sharp. If a newer technology lens uses better glass, coatings and IS so there is less light corruption, then the picture should improve from those upgrades, but not because the resolution of the sensor "matches" the resolution of the lens. This idea has never sat well with me.

- jrista, the techniques you are using on the moon to get sharper images, etc. are impressive. I'm really enjoying reading about some of the tricks and gear you are using to do it. I've always struggled with explaining to newbies why it is hard to photograph the moon with the gear they have when they can see the moon clearly with their eyes. I've explained a lot of the obvious stuff but you take it to a whole other level! Wow!!
 
Upvote 0
MichaelHodges said:
I think there are valid points to using APS-C for small birds in daylight or prime lighting conditions. But for me, the true test of a camera is how it does in unfavorable conditions. The FF will perform admirably in these lowlight periods and capture moments in relatively clean detail.

I gots to say, this has been my experience as well which is why my FF bodies get 90% use while the 60D stays home now. But I would LOVE it if the 7D2 sensor is so good that this becomes a moot point. LOVE it.
 
Upvote 0
jrista said:
Skulker said:
jrista said:
............
I've long held the opinion that crop sensor cameras, like the 7D, do have value in certain circumstances. The most significant use case where a camera like the 7D really shows its edge over full frame cameras is in reach-limited situations.

............

I'd like to prove my case

................

Both images were initially scaled to approximately 1/4 their original size (770x770 pixels, to be exact).

The 5D III image was then layered onto the 7D image, and upsampled in Photoshop by a scale factor of exactly 161.32359522807342533660887502944%.

While I agree with you that a 7D (or any so called crop sensor) can have advantages over a so called full frame sensor. I think you need to review your work if your objective is to reach a valid conclusion.

1) You start off with a strong opinion. (It's better to have an open mind.)
2) Then you try to "prove my point". (It might be better to test your opinion.)
3) Then you do something that is going to be very detrimental to one of the images.

You may claim that you would have to upsample the 5D3 image to get the same size as the 7D. But you have already down sampled it - so you have lost detail in the 5D3 file.

To demonstrate I made a simple file in Photoshop. 770 pixies ;D wide, copied it, scaled it to 481 wide, then upscaled it to 770 wide. Hardly by chance my file had two types of detail. A sharp line and a not so sharp line. The result can be seen below.

I think I have just proved that photoshop is better than photoshop. :mad:

Don't get me wrong I'm a fan of the 7D and think its a great camera. I also think there is a place for "crop sensors". I'm waiting for the 7D2, I don't think it will be for me, but I definitely see a crop sensor shaped hole in my kit.

And finally, what's with the 30-odd decimal places!


EDIT: Just in case anyone wonders ;D the down sampling and up sampling were done with default PS settings
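The round-trip test above (770 → 481 → 770) can be reproduced in miniature without Photoshop. A hedged sketch: it uses plain linear interpolation on a 1-D step edge rather than Photoshop's default bicubic resampling, but the loss mechanism is the same:

```python
def resample(signal, new_len):
    """Linear-interpolation resample of a 1-D signal (a crude stand-in
    for Photoshop's default resampling)."""
    old_len = len(signal)
    out = []
    for i in range(new_len):
        pos = i * (old_len - 1) / (new_len - 1)
        lo = int(pos)
        hi = min(lo + 1, old_len - 1)
        frac = pos - lo
        out.append(signal[lo] * (1 - frac) + signal[hi] * frac)
    return out

# A "sharp line": black-to-white step edge, 770 samples wide.
sharp = [0.0] * 385 + [1.0] * 385

# Round trip: down to 481 samples, back up to 770.
round_trip = resample(resample(sharp, 481), 770)

def blur_width(sig, eps=1e-9):
    """Count samples that are neither fully black nor fully white."""
    return sum(1 for v in sig if eps < v < 1 - eps)

print(blur_width(sharp))       # 0: the edge is perfectly sharp
print(blur_width(round_trip))  # > 0: the edge comes back smeared
```

The edge that was perfectly sharp before the round trip comes back smeared over several samples, which is why downsampling a file and then upsampling it again handicaps it in a comparison.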

You are correct. However, the image below was actually done a bit differently. In this case, both samples were downscaled to fit in the 770x770 pixel image...the 5D III image was not first downsampled then upsampled again.

zsbGCQX.gif


You're right, certainly not as stark a difference as my first example. Maybe that one is invalid. This example, however, does show that the 7D is still picking up more subtle details and nuances of color. The differences are not stark, but they do exist. Also note, both of these images were denoised. They were both denoised to the point where they exhibited about the same noise levels...where noise was pretty much not visible. Obviously, there was quite a bit less noise reduction applied to the 5D III image. That actually costs the 7D a little bit of its detail as well...but it is on a level playing field with the 5D III as far as noise goes, so I still think it's a fair example.


That certainly shows much less of a difference. Have you corrected your original post? You shouldn't leave it with such an error.


Unfortunately as you aren't putting up the raw files, as so many have asked, we can't replicate your work and see if we get the same results. Unless I have missed the link to them.


On my monitor there is quite a color cast to the 5D3 images, but none on the 7D image.


Finally, let me say that although I have plenty of questions about your thoughts on the "crop factor" and how you have gone about proving your point, I have no issues with the quality of some of your photography, and I think the images you produce of the night sky are some of the best seen on this site. ;D


But you still haven't said, so again: "What's with the 30 decimal places?" ;D ;D
 
Upvote 0
Lee Jay said:
AlanF said:
It is not true as a general statement that the more pixels on target, the better. There have to be optimum sizes of pixels and optimal numbers on target, as shown by the following arguments. The signal to noise of a pixel increases with its area:

But the signal to noise ratio of a given sensor area does not increase with increasing pixel size.
The dynamic range is also greater for large pixels that can accommodate a large number of electrons.

This is also untrue, or the G15 wouldn't have more base ISO DR than the 1D X despite having pixels with 1/14th the area.

http://www.sensorgen.info/CanonPowershot_G15.html
http://www.sensorgen.info/CanonEOS-1D_X.html

There are factors other than pixel size that determine DR, which become the limiting factors for larger sensors - if size were the only factor then a Sony sensor would have the same DR as a Canon. However, it is basic physics that DR will eventually decrease with decreasing pixel size because of the number of electrons that can be accommodated in a well.

The noise of individual pixels is important as well as the overall noise of a particular area of sensor. That is, the overall signal to noise might be independent of the number of pixels, but the variation of signal within that area is what you actually see as noise. Suppose you take a photo of a pure blue background. With a very low pixel density, you will see a very flat blue image. With very high pixel density, you would see lots of colour variation when you pixel peep.
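The "flat blue background" thought experiment above is easy to simulate. A toy sketch, assuming pure shot noise with a Gaussian approximation and arbitrary photon counts (not calibrated to any real camera):

```python
import random

random.seed(1)

PATCH_PHOTONS = 6400.0  # mean photons hitting one fixed patch of sensor

def pixel_values(pixels_per_patch, n_samples=5000):
    """Per-pixel signals when the patch is divided into that many pixels.
    Each sample is one pixel's value (normal approximation to shot noise)."""
    per_pixel = PATCH_PHOTONS / pixels_per_patch
    return [max(0.0, random.gauss(per_pixel, per_pixel ** 0.5))
            for _ in range(n_samples)]

def relative_noise(vals):
    """Standard deviation of pixel values relative to their mean."""
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return var ** 0.5 / mean

# One big pixel per patch: relative noise near 1/sqrt(6400) = 1.25%.
coarse = relative_noise(pixel_values(1))

# 64 small pixels per patch: each sees ~100 photons, noise near 10%.
fine = relative_noise(pixel_values(64))

print(f"large pixels: {coarse:.3f}, small pixels: {fine:.3f}")
```

Per-pixel relative noise rises roughly eightfold when the same patch is split into 64 pixels (1/sqrt(100) vs 1/sqrt(6400)), even though the total signal collected by the patch is unchanged; that per-pixel variation is what you see when pixel peeping.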
 
Upvote 0
MichaelHodges said:
jrista said:
Regarding birds and DR...to be honest, I have not found that dynamic range is the issue when photographing birds.

So two golden eagles swooping in to take out a bald eagle at your back, silhouetted in the sun doesn't present a DR issue? What about bighorn rams fighting each other in uneven forest light? People wait all year for those moments, heck, they wait years. A second later, it could be gone.

Dynamic range is the single biggest issue with wildlife photography, IMHO. That's why the shadow recovery in the Sony sensors is so appealing.


Anyway, when it comes to bird and wildlife photography, dynamic range is just not an issue.


I completely disagree. It's the issue. Are you going to sneak around a grizzly bear in the bushes to get the right angle? (that's a great way to get yourself killed). Or how about tramping in the willows on a mountain lake to get just the right light on a bull moose? (another good way to get killed). What about when a squirrel decides to watch sunrise over Glacier Point in Yosemite? Are you going to command the sun to rise from the west so you can get the good light? If you are shooting tame birds or zoo animals, maybe it's not much of an issue. But for actual wildlife? Top of the list.

The only one with the control in wildlife photography is the animal. They do what they want, when they want, and under the lighting conditions they see fit. It's your job to take the punches and get to the 12th round.

I don't find this at all. Maybe it's the different lighting conditions here, maybe it's the animals I'm after (tends to be small perched birds). The most extreme DR cases are 1) a grey/white sky with a bird on a twig, which is essentially a silhouette, and 2) a black and white bird in bright sunlight. In the former case, although it's rare these shots are ever very appealing (imho), the only way around it is to accept a blown out sky. At least you're not really losing anything (although the edges of the subject can look odd if you play around too much). In the latter case, I follow the rule of never blowing the whites on the bird, so if that leads to the blacks being underexposed, so be it.

Dappled shade can be a challenge, but it's rare in my experience for the subject itself to be in both light and shade. Maybe because songbirds are smaller, I don't know. But I don't find the DR of my bird shots exceeds the sensor under most circumstances.
 
Upvote 0
scyrene said:
jrista said:
So, I thought I'd throw in a bit of a "reference image". One way to image more detail, even when seeing is bad, is to take a lot of frames at high shutter speeds, and integrate the best 10-20%. It's called Lucky Imaging (lucky, in that in some of the frames you image, you'll be "lucky" enough to have very good to perfect seeing, where the turbulence clears and everything resolves at high resolution). The exposure duration can range anywhere from a few hundred milliseconds down to microseconds. In my case, I kept my exposure settings, so my exposure duration was 10ms.

I took a couple of videos of the shadowed limb of the moon at 1000 frames at 5x zoom, and integrated the best 10% (100 frames) with AutoStakkert! 2. I used the 3x Drizzle option, which is actually a superresolution algorithm, then downsampled to 50% (1.5x original resolution), so the final image is actually showing detail beyond the diffraction and aberration limits of the optics. This is the result:


(Click for full size)


I want to give this a try with the 600mm, 2x TC, and 1.4x Kenko (1680mm) on the 7D. I bet I could resolve some pretty amazing detail by recording a few thousand frames and integrating the best 10%.

I wish I had the processing skill and knowledge you do! It's impressive how much detail you can pull out at these focal lengths. I went the other way, increasing FL until I got results I was content with, although eventually the extra glass (stacked teleconverters) degrades image quality and it seems to even out. Still, I feel more data can be pulled out of my setup, if only I knew how.

Good stuff, as ever.

It's actually not difficult. AutoStakkert! 2 pretty much does all the work. You record a video, and since the moon in each frame is moving a little bit...due to atmospheric turbulence and due to its transit across the sky (unless you have a tracking mount...then it will move a little bit due to periodic error of the mount), the position of the moon and its features change just a little frame to frame. That allows advanced algorithms to be used to figure out what the "right" value is for each pixel, and even enhance resolution by using superresolution algorithms like drizzle. All you really have to do is record the video, and perform three steps in AutoStakkert! (load image, analyze, stack).
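The select-and-stack step can be sketched in a few lines. This is a simplified illustration, not AutoStakkert!'s actual algorithm: real tools align each frame first, use much better quality metrics, and can apply drizzle, all of which are omitted here:

```python
def sharpness(frame):
    """Gradient-energy sharpness score: sums squared differences
    between horizontally adjacent pixels. Sharper frames score higher."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in frame for i in range(len(row) - 1))

def lucky_stack(frames, keep_fraction=0.10):
    """Keep the sharpest fraction of frames and average them."""
    ranked = sorted(frames, key=sharpness, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    best = ranked[:keep]
    h, w = len(best[0]), len(best[0][0])
    return [[sum(f[y][x] for f in best) / keep for x in range(w)]
            for y in range(h)]

# Toy demo: 2x2 "frames", one sharp (high contrast) among nine blurry ones.
blurry = [[0.4, 0.6], [0.5, 0.5]]
sharp = [[0.0, 1.0], [1.0, 0.0]]
frames = [blurry] * 9 + [sharp]

# Keeping the best 10% of 10 frames keeps only the single sharpest one.
stacked = lucky_stack(frames, keep_fraction=0.10)
print(stacked)
```

In practice many "lucky" frames survive the cut, and averaging them suppresses noise while the selection step preserves the moments of good seeing.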
 
Upvote 0
weixing said:
Hi,
Today, I did some comparison shots of FF vs APS-C on a real bird under real-life conditions... I only managed to try ISO 1600 and ISO 3200, as it started to rain very heavily after that. I just opened them in Lightroom 4, took a screenshot, pasted it into Paint and saved as JPEG.
Test Condition
Camera: Canon 6D (left) vs Canon 60D (right)
Lens: Tamron 150-600mm @ F8
Subject: Stork-billed Kingfisher at around 18m (this is the only real bird that I can find that will stay at the same place for extended period of time with minimum movement).
Weather: Cloudy

After looking at the comparison shots, my initial conclusion is that the 60D sensor doesn't seem to have a significant detail advantage (if any) under real-life conditions (at least this seems to be true when using the Tamron 150-600mm lens) over the 6D, and the 6D (up to ISO 3200) doesn't seem to have a real noise advantage if the 60D image is scaled down.

Have a nice day.

PS: The CanonRumors website seems to scale down the screenshot image (actual size is 1920 x 1080) to fit the website frame... to view at actual size, you need to click on the image and use the scroll bar below the post to scroll through it... or is there a setting to show the image at actual size??

Very interesting results. Congrats on finding a bird that would sit still the entire time you took the shots. :D That's definitely the kind of bird you need.

To really truly compare, you would need to overlay the two images on top of each other, and upscale the 6D images so the bird was the same size, then overlay them directly on top of each other (Photoshop's difference layer blending mode makes the positioning very easy). Then you can swap back and forth, and really see the difference. It's pretty much impossible to objectively determine any real differences when looking at the images side-by-side...it becomes almost a pure subjective judgement.

The only other thought I have is the lens used. The 150-600 is a good lens for its price class, but I can tell it does not resolve the same kind of detail as the EF 600 f/4 II. I am able to resolve fine feather and fur detail even at very high ISO, something I don't see in your images. It doesn't necessarily invalidate the test, however it does throw in a major factor that affects results. The moon is a bit of a different kind of subject than a bird, given that it is primarily seeing limited rather than diffraction limited, so using 1200mm f/8 does not limit resolution the way it would with a terrestrial subject. If I were to do a bird test...I would probably use the 600 at f/4.5, which seems to be the absolute sweet spot of my lens.
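The difference-blend alignment trick mentioned above is simple to express in code. A minimal sketch with toy "images" as 2-D lists of brightness values (Photoshop's Difference mode is per-channel absolute difference; alignment is found by nudging one layer until the blend goes darkest):

```python
def difference_blend(a, b):
    """Photoshop-style 'Difference' blend: per-pixel |a - b|.
    A perfectly aligned identical pair goes fully black (all zeros)."""
    return [[abs(pa - pb) for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

def mean_difference(a, b):
    """Average brightness of the difference blend: 0 means aligned."""
    diff = difference_blend(a, b)
    flat = [v for row in diff for v in row]
    return sum(flat) / len(flat)

# Toy example: shifting one image by a pixel raises the mean difference,
# so you nudge the overlay until the difference layer is darkest.
base = [[0, 0, 9, 9], [0, 0, 9, 9]]
shifted = [[0, 9, 9, 0], [0, 9, 9, 0]]

print(mean_difference(base, base))     # 0.0, perfect alignment
print(mean_difference(base, shifted))  # larger, misaligned
```

Once the two layers are positioned for minimum difference, switching the blend mode back to normal lets you flip between the images and compare detail directly.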
 
Upvote 0
jrista said:
scyrene said:
jrista said:
So, I thought I'd throw in a bit of a "reference image". One way to image more detail, even when seeing is bad, is to take a lot of frames at high shutter speeds, and integrate the best 10-20%. It's called Lucky Imaging (lucky, in that in some of the frames you image, you'll be "lucky" enough to have very good to perfect seeing, where the turbulence clears and everything resolves at high resolution). The exposure duration can range anywhere from a few hundred milliseconds down to microseconds. In my case, I kept my exposure settings, so my exposure duration was 10ms.

I took a couple of videos of the shadowed limb of the moon at 1000 frames at 5x zoom, and integrated the best 10% (100 frames) with AutoStakkert! 2. I used the 3x Drizzle option, which is actually a superresolution algorithm, then downsampled to 50% (1.5x original resolution), so the final image is actually showing detail beyond the diffraction and aberration limits of the optics. This is the result:


(Click for full size)


I want to give this a try with the 600mm, 2x TC, and 1.4x Kenko (1680mm) on the 7D. I bet I could resolve some pretty amazing detail by recording a few thousand frames and integrating the best 10%.

I wish I had the processing skill and knowledge you do! It's impressive how much detail you can pull out at these focal lengths. I went the other way, increasing FL until I got results I was content with, although eventually the extra glass (stacked teleconverters) degrades image quality and it seems to even out. Still, I feel more data can be pulled out of my setup, if only I knew how.

Good stuff, as ever.

It's actually not difficult. AutoStakkert! 2 pretty much does all the work. You record a video, and since the moon in each frame is moving a little bit...due to atmospheric turbulence and due to its transit across the sky (unless you have a tracking mount...then it will move a little bit due to periodic error of the mount), the position of the moon and its features change just a little frame to frame. That allows advanced algorithms to be used to figure out what the "right" value is for each pixel, and even enhance resolution by using superresolution algorithms like drizzle. All you really have to do is record the video, and perform three steps in AutoStakkert! (load image, analyze, stack).

Is that software Windows-only? I did a bit of searching, and couldn't find system requirements anywhere. For some reason, most of this sort of software doesn't run on Macs, so I had to use the only stacker I could find that did, called 'Keith's Image Stacker'. It's pretty good but I have no understanding of how different modes produce different results. I guess I should read up on it more.

As for video - this is a question I've had for a while. Even HD video is only 2MP. No matter how much resolution you're gaining through stacking, surely you're losing 90% (of the 5DIII's potential) versus stills? Any thoughts? When I stacked my moon (I've included a crop of a much reduced-size below) I had to shoot lots of stills manually and use those instead. You're clearly doing something better though, as you seem to be pulling out a similar level of detail even though I was at a much higher focal length (5600mm).
 
Upvote 0
MichaelHodges said:
LetTheRightLensIn said:
You won't?? Even if you use say a 7D and a 5D2 and the 7D sensor is more efficient at collecting and converting photons per area of surface than the 5D2?? With the 7D you can chose to get either: more detail (unless conditions are super bad) and more noise OR slightly better detail with less de-bayer and other artifacts and slightly better noise (if you view or convert to same scale as the 5D2).

Unfortunately it doesn't work that way in crepuscular conditions. You can have all the "pixels on target" you want, but if the sensor can't handle the low lighting (7D), you're not going to get the shot. And by "shot", I mean something you can print at 16x20.

When shooting in RAW during the November white-tail rut in Montana, my 7D becomes almost useless. The sun comes up at 8:30, and light gets shaky around 4:30 thanks to consistent, thick cloud cover. It's often snowing or raining. Once I start hitting 1600 ISO on the 7D, it's time to put it away. Out comes the full frame, where I can get usable images at ISO 12,800. Not to mention that my other cameras do a much better job of focusing on low contrast targets (brown deer with a brown background) than the 7D.

In these cases, noise is the bottleneck.

MichaelHodges said:
jrista said:
Regarding birds and DR...to be honest, I have not found that dynamic range is the issue when photographing birds.

So two golden eagles swooping in to take out a bald eagle at your back, silhouetted in the sun doesn't present a DR issue? What about bighorn rams fighting each other in uneven forest light? People wait all year for those moments, heck, they wait years. A second later, it could be gone.

Dynamic range is the single biggest issue with wildlife photography, IMHO. That's why the shadow recovery in the Sony sensors is so appealing.


Anyway, when it comes to bird and wildlife photography, dynamic range is just not an issue.


I completely disagree. It's the issue. Are you going to sneak around a grizzly bear in the bushes to get the right angle? (that's a great way to get yourself killed). Or how about tramping in the willows on a mountain lake to get just the right light on a bull moose? (another good way to get killed). What about when a squirrel decides to watch sunrise over Glacier Point in Yosemite? Are you going to command the sun to rise from the west so you can get the good light? If you are shooting tame birds or zoo animals, maybe it's not much of an issue. But for actual wildlife? Top of the list.

The only one with the control in wildlife photography is the animal. They do what they want, when they want, and under the lighting conditions they see fit. It's your job to take the punches and get to the 12th round.

I decided to combine your two posts, because I think the context of the two matter here. First, you talk about crepuscular conditions. You have brought that up a couple times in the past as well, and it is a valid point. However I think it is a point at odds with the points you make in the second post.

In crepuscular light, the low light around sunrise and sunset, you are NOT going to be using ISO 100 or 200. As you say, you're going to be up at ISO 12800. You need the high ISO so you can maintain a high shutter speed, so you can freeze enough motion to get an acceptable image. There are times during the day when you can capture wildlife out and about, but the best times are indeed during the crepuscular hours of the day.

Just for reference, here are the dynamic range values for four key cameras at ISO 12800:

D810: 7.3
D800: 7.3
5D III: 7.8
1D X: 8.8

As far as dynamic range for wildlife and bird photography during "activity hours" goes, there is no question the 1D X wins hands down. It's got a 1.5 stop advantage over the D800/D810, the supposed dynamic range kings. At ISO 6400, we have:

D810: 8.3
D800: 8.3
5D III: 8.5
1D X: 9.7

At ISO 3200 we have:

D810: 9.2
D800: 9.2
5D III: 9.5
1D X: 10.5.

At ISO 1600 we have:

D810: 9.8
D800: 9.8
5D III: 10.1
1D X: 10.8

And finally, at ISO 800 we have:

D810: 10.8
D800: 11.2
5D III: 10.5
1D X: 11.1

It is not until we reach ISO 800 that the D800-series cameras start to close the gap and overtake the high ISO DR advantage of the Canon cameras. It is certainly possible to shoot at ISO 800, and ISO 400, in the hour leading up to sunset...I have shot at those settings myself. However, shadows are long during that hour, and it is more common that I am shooting at a higher ISO. It is only during the hour or two around midday that I ever find myself shooting at ISO 100 or 200...and then, it is rarely with fast-moving subjects like a Golden Eagle flying directly at me to fight with another bird behind me. I'd again be at a higher ISO to guarantee I have the shutter speed I need to capture that action with just the right amount of motion blur in the wing tips, while otherwise freezing the motion of the bird itself.

Regarding pixels on subject...I'm confused why you would say that doesn't matter. If you increase the size of your subject relative to the size of your pixels, the frequency of the noise becomes higher and higher relative to the subject. It really doesn't matter whether you are at a higher ISO or not...the frequency of noise is still based on the pixel pitch. Suppose I take two shots of a deer at ISO 12800: in one the deer fills 33% of the frame, and in the other it fills 66%. Which image is going to have better IQ? The one where the deer fills 33%? No, of course not. The image where the deer is larger in the frame...the image where there are "more pixels on the deer"...is going to have the better IQ. The deer is much larger, so all that ISO 12800 noise is going to be less intrusive, as in terms of relative frequency, the noise is much smaller.
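The "pixels on the deer" argument can be checked with a toy simulation: identical per-pixel noise in both shots, the subject covering 4x as many pixels in one of them, and both viewed at the same output size. The numbers here are arbitrary, purely for illustration:

```python
import random

random.seed(7)

SIGMA = 0.2  # per-pixel noise, identical in both shots (same sensor, same ISO)

def shoot(n_pixels):
    """A flat mid-grey subject covering n_pixels, each with additive noise."""
    return [0.5 + random.gauss(0, SIGMA) for _ in range(n_pixels)]

def downsample4(pixels):
    """Average each group of 4 pixels, i.e. view the shot at 1/4 the size."""
    return [sum(pixels[i:i + 4]) / 4 for i in range(0, len(pixels), 4)]

def noise(pixels):
    """Standard deviation of the pixel values."""
    mean = sum(pixels) / len(pixels)
    return (sum((p - mean) ** 2 for p in pixels) / len(pixels)) ** 0.5

# Shot 1: subject small in the frame, only 4000 pixels on it.
few = shoot(4000)

# Shot 2: subject twice as large linearly (16000 pixels on it), then
# downsampled so both shots are viewed at the same output size.
many = downsample4(shoot(16000))

print(round(noise(few), 2), round(noise(many), 2))  # roughly 0.2 vs 0.1
```

At matched display size, the shot with more pixels on the subject averages four noisy samples into each output pixel, halving the visible noise, which is the relative-frequency effect described above.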

Whether the 7D is useful or useless depends on exactly that: pixels on subject. I've been able to make very good use of my 7D under very difficult lighting conditions by taking the time to get the subject framed right, and framed large enough. Even at very high ISO:

night-heron-at-night-1-of-1.jpg


This night heron was shot at ISO 3200, but underexposed by about a stop (I'd had my 7D configured to limit which ISO settings would be automatically selected at the time, and I was shooting Auto ISO). The shutter speed was 1/6th second. This was with the 600mm f/4 L II, well after sunset...blue hour was on, and there were dark clouds blocking a lot of the remaining post-sunset light. The only reason I was able to make anything of the shot and still have this much detail was because I managed to get enough pixels on the bird that it could withstand the editing.

So, first off, I do not buy the argument that the enhanced DR of the Sony sensors is useful when it comes to wildlife or bird photography. On the contrary, given the measurements, it seems the Canon cameras do indeed have the DR edge, especially the 1D X, and especially at the higher ISO settings that are critical during the hours wildlife is most active.

To address a couple points more specifically. The silhouette scene...a bird silhouetted is a bird silhouetted. Either it is a dark subject against a very bright background, or it's not. If we are talking about something where a bird is silhouetted against bright sunlit water, even a D800 is going to struggle with that...assuming you even had the option of shooting at ISO 100. The likelihood is that you're using a much higher ISO (pretty much GUARANTEED in the "two golden eagles flying in with the sun at their backs" case), in which case there is no advantage to using a D800 or having more DR. In that situation, you do something like this:





You make the images REAL silhouettes...pure shadow superimposed over a brighter background.

When it comes to other animals, like bull moose or elk, bears, or the squirrel that wants to watch the sunset...well, starting with the squirrel, I would certainly do what I could to get on the western side of it. I mean, we are talking about good lighting here, lighting that does your subject justice. Aside from silhouetting your subject, shooting from the back side directly into sunlight is not the most flattering light for a wildlife subject. It might make for one interesting photo one time, but in general, I look for the scenes and angles where my subjects are better lit. The sun does not necessarily need to be directly at your back. Actually, having the sun DIRECTLY at your back is not good either, as it flattens your subject. There needs to be a certain angle, and sometimes you can accept a little bit of loss on the DR front (i.e. end up with slightly more DR in the scene than you can really handle) in order to get a shot that might otherwise not be possible.

Regarding moose, elk, deer, etc. I absolutely do what I can to get a better angle on them. But you pick your battles, for sure. Deer, elk, etc. are quite docile during the earlier parts of the year. It is only really during the rut and their mating season that they take on a hostile stance. That is where having a big long lens, and some TC's and, maybe even, a cropped sensor, become really handy. They give you a much more comfortable working distance when photographing rutting wildlife.

rocky-mountain-national-park-wildlife-1-of-6.jpg


Here is an example where I did what I could to get the right angle of light, during crepuscular hours. The light was still coming from more of an angle than I wanted (the elk's body is decently lit, but the beard is more shadowed than I'd like), but the shot still came out quite nicely. In this particular situation, for the given exposure, a D800 would not have offered me much in the way of advantages over the 7D...I was reach limited and at very high ISO. The D800 might have had slightly more DR, but less resolution, similar per-pixel noise...and a considerably slower frame rate.

So, at least in my experience, given that I rarely have the opportunity to shoot wildlife of any kind at ISO 100, and we're not talking about zoo animals here, but real WILD life...I do not believe DR is a critical issue with wildlife photography. You simply don't have 12 or 14 stops at ISO settings from 400 and up. At the really high ISO settings we use during crepuscular hours, ISO 3200, 6400, even 12800, we might not even have EIGHT stops of DR! I also believe it is usually possible to situate yourself, the photographer, at the right angle to your subject such that it is properly lit, thereby minimizing or eliminating DR issues in the first place. Even in the cases where DR might become an issue, such as backlit subjects...if you look at the photography of well-respected wildlife pros, like Andy Rouse, he doesn't try to lift the shadows of a shaded backlit subject like an African wildcat by many stops. He leaves them shaded, he leaves them contrasty. Sometimes it just isn't about dynamic range...sometimes, dynamic range really, truly, doesn't matter...
 
Upvote 0
AlanF said:
+1 My biggest mistakes are when my camera is set for point exposure for birds against a normal background and one flies by against the sky, and I don't have time to dial in +2 EV to compensate, or vice versa. Two more stops of DR would solve those problems.

This is a case where you want more DR to eliminate the need for the photographer to make the necessary exposure change. If you encounter this situation a lot, I highly recommend reading Art Morris' blog, and maybe buy his book "The Art of Bird Photography". He has an amazing technique for setting exposure quickly and accurately, such that making the necessary change quickly to handle this situation properly would not be a major issue.

Personally, I wouldn't consider this a situation where more DR is necessary. It might be a situation where more DR solves a problem presented by a lack of certain skills...but it is not actually a situation where more DR is really necessary.
 
Upvote 0
Hi,
jrista said:
weixing said:
Hi,
Today, I did some comparison shots of FF vs APS-C on a real bird under real-life conditions... I only managed to try out ISO 1600 and ISO 3200, as it started to rain very heavily after that. I just opened them in Lightroom 4, took a screenshot, pasted it into Paint, and saved as JPEG.
Test Condition
Camera: Canon 6D (left) vs Canon 60D (right)
Lens: Tamron 150-600mm @ F8
Subject: Stork-billed Kingfisher at around 18m (this is the only real bird that I can find that will stay at the same place for extended period of time with minimum movement).
Weather: Cloudy

After looking at the comparison shots, my initial conclusion is that the 60D sensor doesn't seem to have a significant detail advantage (if any) over the 6D under real-life conditions (at least this seems to be true when using the Tamron 150-600mm lens), and the 6D (up to ISO 3200) doesn't seem to have a real noise advantage if the 60D image is scaled down.

Have a nice day.

PS: The CanonRumors website seems to scale down the screenshot image (actual size is 1920 x 1080) to fit the website frame... to view at actual size, you need to click on the image and use the scroll bar below the post to scroll through the image... or is there a setting to show the image at actual size??

Very interesting results. Congrats on finding a bird that would sit still the entire time you took the shots. :D That's definitely the kind of bird you need.

To really truly compare, you would need to overlay the two images on top of each other, and upscale the 6D images so the bird was the same size, then overlay them directly on top of each other (Photoshop's difference layer blending mode makes the positioning very easy). Then you can swap back and forth, and really see the difference. It's pretty much impossible to objectively determine any real differences when looking at the images side-by-side...it becomes almost a pure subjective judgement.

The only other thought I have is about the lens used. The 150-600 is a good lens for its price class, but I can tell it does not resolve the same kind of detail as the EF 600 f/4 II. I am able to resolve fine feather and fur detail even at very high ISO, something I don't see in your images. That doesn't necessarily invalidate the test, but it does throw in a major factor that affects results. The moon is a bit of a different kind of subject than a bird, given that it is primarily seeing-limited rather than diffraction-limited, so using 1200mm f/8 does not limit resolution the way it would with a terrestrial subject. If I were to do a bird test...I would probably use the 600 at f/4.5, which seems to be the absolute sweet spot of my lens.
I had no doubt that the EF 600mm f/4 II with its 150mm front element will resolve more detail compared to my 95mm front element, but I'm $$ limited... ha ha ha ;D

Anyway, the sky was cloudy and the bird was in the shade... so I think the details were a bit more difficult to resolve under this flat lighting condition.

Have a nice day.
 
Upvote 0
AlanF said:
jrista said:
serendipidy said:
Jrista,
Great images and informative discussion. I have learned a lot. Very confusing to noobs. I remember someone on CR frequently talking about better resolution being related to " number of pixels on target." So with reach limited subjects, you need either higher focal length lens or more (ie smaller) pixels per area on the sensor, to get better detail resolution. Did I say that correctly?

Yeah, that's correct. BTW, it's me who has always said "pixels on target". ;) I read that a long time ago on BPN forums, from Roger Clark I think, and started experimenting with it. I think it's the best way to describe the problem...because it scales. It doesn't matter how big the pixels are, or how big the sensor is...more pixels on target, the better the IQ. If you are only filling 10% of the frame, try to fill 50%. It doesn't matter if the frame is APS-C, FF, or something else...it's all relative.

It is not true as a general statement that the more pixels on target, the better. There have to be optimum sizes of pixels and optimal numbers on target, as shown by the following arguments. The signal to noise of a pixel increases with its area: the bigger the pixel, the greater the number of photons flowing through it and the greater the current generated, and the statistical variation in both becomes less important.

True. However, that does not falsify my claims about pixels on target. We don't look at pixels. We look at images. Noise is relative to area. If you take 6.25µm pixels and 4.3µm pixels, you can fit about 2.1 of the smaller pixels into the area of every one of the larger pixels. Assuming the same technology (which is not actually the case with the 5D III and 7D...but humor me here), those 2.1 smaller pixels collectively gather the same amount of signal, and therefore have the same amount of noise, as the single larger pixel. Noise is relative to area. If you increase the area of the sensor which your subject occupies, you reduce noise as a RELATIVE factor.
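The area arithmetic here is easy to check numerically. This is just a sketch using the pixel pitches from the post and the standard Poisson shot-noise model (signal = photon count, noise = √signal); the illumination level is an arbitrary assumption:

```python
import math

# Pixel pitches from the post (5D III vs 7D), in microns.
large, small = 6.25, 4.3

# How many small pixels fit into the area of one large pixel?
ratio = (large / small) ** 2
print(f"{ratio:.2f} small pixels per large pixel")  # ~2.11

# Shot-noise model: signal scales with pixel area, noise = sqrt(signal).
photons_per_um2 = 1000  # arbitrary illumination level (assumption)
sig_large = photons_per_um2 * large ** 2
sig_group = photons_per_um2 * small ** 2 * ratio  # same total area covered

snr_large = sig_large / math.sqrt(sig_large)
snr_group = sig_group / math.sqrt(sig_group)
print(f"SNR, one large pixel:       {snr_large:.1f}")
print(f"SNR, same-area pixel group: {snr_group:.1f}")
```

The signal, and hence the shot-noise SNR, comes out identical for equal areas, which is the sense in which noise is relative to area rather than to pixel size.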


AlanF said:
The dynamic range is also greater for large pixels that can accommodate a large number of electrons. A low megapixel sensor should have very good signal to noise and DR, but poor resolution. Now, see what happens as we progress to the other extreme. As we decrease the size of the pixel, the resolution increases, but the statistical noise starts to increase as the number of photons hitting each pixel per unit time decreases.

Per-pixel noise is an absolute factor. You are absolutely right that larger pixels have less noise and higher dynamic range. However ultimately, to maximize IQ, you don't want to achieve some arbitrary balance between pixel size and pixel count. You simply want to maximize the number of pixels on subject, regardless of their size. Because it really isn't about the pixels...it's about the area of the sensor your subject occupies.

In a reach-limited situation, the absolute area of the sensor occupied by your subject is fixed...it doesn't matter how large the sensor is. You will be gathering the same amount of light in total from your subject regardless of what sensor you're using, or how big its pixels are. Therefore, the only other critical factor to IQ is detail...smaller pixels are better, in that case, all else being equal.

AlanF said:
The electrical noise also increases until the inherent noise in the circuit becomes greater than that due to the fluctuation in the number of electrons generated by the photons. We all experience this as the noise caused by increasing the ISO setting. The dynamic range also decreases. Eventually, the pixel becomes so small that it loses all of its dynamic range because the well is so shallow it can hold only a few electrons.

Actually, electronic noise within the pixels themselves, ignoring all other sources of read noise (which tend to be downstream from the pixels), is due to dark current. Dark current noise is relative to pixel area and temperature...and dark current noise DROPS as pixel size drops. The amount of dark current that can flow through a photodiode is relative to its area, just like the charge capacity of a photodiode is relative to its area. So, technically speaking, electronic noise does not increase as pixel size decreases. Again, dark current noise is relative to the unit area...pixel size, ultimately, does not matter.

When it comes to read noise overall, that actually has far less to do with pixel size, and far more to do with the downstream pixel processing logic, how it's implemented, the frequency at which those circuits operate, etc. Most read noise comes from the ADC unit, especially when they are high frequency. I've seen read noise in CCD cameras that use Kodak KAF sensors change from one iteration to the next. A camera using a KAF-8000 series had as much as 40e- read noise a number of years ago. The same cameras today have ~7e- read noise. They are identical sensors...the only real difference is read noise. That's because read noise isn't a trait inherent to the sensor...it's related to all the logic that reads the sensor out and converts the analog signal to a digital signal. Canon could greatly reduce their read noise, without changing their sensor technology at all...because the majority of their noise comes from circuitry off-die in the DIGIC chips.

AlanF said:
So, too large a pixel gives too little resolution; too small a pixel gives too much noise and too little dynamic range. You could have 20 billion too-small, useless pixels on target where 20 million would be the optimal number. Because of the above reasoning, astrophotographers and astronomers match pixel size to their telescopes. For photographers, the optimal size for current sensor pixels is around the range found in crop to FF bodies.

You're ignoring the fact that you can always downsample an image made with a higher resolution sensor to the same smaller dimensions as an image made with bigger pixels. The 7D and 5D III are the cameras I used because they are the cameras I have. I often use the term "all else being equal" in my posts, because it's a critical qualifier. The 7D and 5D III are NOT "all else being equal". They are a generation apart. The 7D pixels are technologically inferior to the 5D III pixels.

So, ASSUMING ALL ELSE IS EQUAL, there is absolutely no reason to pick larger pixels over smaller pixels if you're going to be framing your subject the same with identical sensor sizes. Say you're photographing a baboon's face, and you frame it so the face fills the frame with a nice amount of negative space. If you have a choice between a 10mp and a 40mp camera, you should ALWAYS pick the sensor with smaller pixels. You can always downsample the 40mp image by a factor of two, and you'll have the same amount of noise as the 10mp camera. Noise is relative to unit area. It doesn't matter if that unit area is one pixel in a 10mp camera, or four pixels in a 40mp camera...it's still the same unit area. Average those four smaller pixels together, and you reduce noise by a factor of two. Which is exactly the same thing as binning four pixels during readout, which is also exactly the same thing as simply using a bigger pixel.

The caveat, here, is that with a 40mp sensor, you have the option of resolving more detail. You plain and simply don't have that option with the 10mp sensor. More pixels just delineate detail...and noise...more finely. Finer noise has a lower perceptual impact on our visual observation. If the baboon face is framed the same, then you're gathering the same amount of light from that baboon's face regardless of pixel size. Photon shot noise (the most significant source of noise in our photos) is intrinsic to the photonic wavefront entering the lens and reaching the sensor. Smaller pixels simply delineate that noise more finely.
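A toy simulation makes the averaging argument concrete. This is a sketch under the assumption of pure photon shot noise (Gaussian approximation to Poisson), with 2x2 binning standing in for downsampling a 40mp image to 10mp:

```python
import random
import statistics

random.seed(42)  # reproducible run

# A uniformly lit patch: each small pixel sees a mean of 100 photons;
# shot noise is approximated as Gaussian with sigma = sqrt(mean).
mean = 100.0
pixels = [random.gauss(mean, mean ** 0.5) for _ in range(400_000)]

# Average groups of four pixels -- equivalent to 2x2 binning,
# i.e. downsampling a 40mp image to 10mp.
binned = [sum(pixels[i:i + 4]) / 4 for i in range(0, len(pixels), 4)]

noise_small = statistics.stdev(pixels)
noise_binned = statistics.stdev(binned)
print(f"per-pixel noise:   {noise_small:.2f}")
print(f"after 2x2 binning: {noise_binned:.2f} "
      f"(~{noise_small / noise_binned:.1f}x lower)")
```

The binned output lands very close to half the per-pixel noise: averaging four independent pixels cuts random noise by √4 = 2, exactly the factor claimed above.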
 
Upvote 0
AlanF said:
jrista said:
Personally, I believe the idea of a lens "outresolving" a sensor, or a sensor "outresolving" a lens, is a misleading concept. Output resolution is the result of a convolution of multiple factors that affect the real image being resolved. Sensor and lens work together to produce the resolution of the image you see in a RAW file on a computer screen...one isn't outresolving the other. I've gone over that topic many times, so I won't go into detail again here, but ultimately, the resolution of the image created by both the lens and sensor working together in concert is closely approximated by the formula:

It is not a misleading concept, just one that has to be used carefully. There are many processes in physics and chemistry where the end result is related to all the components, usually summed as reciprocals: e.g. resistors in parallel in an electric circuit, or the overall resolution of an optical system. Where those components all make similar contributions, none of them dominates and all are taken into the reckoning. E.g., a 1 ohm resistor in parallel with a 1 ohm resistor has an overall resistance of 0.5 ohms. However, if one component dominates the system, then the others are unimportant - the overall resistance of a 1 ohm resistor in parallel with a 100 ohm resistor is little different from a 1 ohm in parallel with a million ohms: both are very close to 1 ohm. If a lens at a particular aperture projects a point source to an image much smaller than a pixel, then increasing the number of pixels could be useful to increase resolution, as the lens is outresolving the sensor. If the lens projects a point source to a size much larger than a pixel, the sensor is outresolving the lens and it is a waste of time increasing the number of pixels. When the point size is similar, the situation is indeed more complicated, as neither dominates.

You're basically talking about asymptotic relationships in systems that resolve. You are indeed correct: just like resistors in a circuit, output resolution is bound by the weakest component. If the sensor is the limiting factor, which would be the case if the lens were resolving a spot smaller than a pixel, then yes...you would want to increase sensor resolution to see more significant gains.
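The resistor analogy can be made concrete with the usual quadrature blur model. Hedging: the formula 1/R_sys² = 1/R_lens² + 1/R_sensor² is the common approximation for combining component resolutions, and the numbers here are purely illustrative, not measurements from any particular lens or sensor:

```python
def system_resolution(r_lens: float, r_sensor: float) -> float:
    """Combine component resolutions (e.g. in lp/mm) in quadrature:
    1/R_sys^2 = 1/R_lens^2 + 1/R_sensor^2."""
    return (r_lens ** -2 + r_sensor ** -2) ** -0.5

# Comparable components: both matter, neither dominates.
print(system_resolution(100, 100))    # ~70.7 lp/mm, well below either alone

# One component dominates: the system tracks the weaker one,
# just like 1 ohm in parallel with 100 ohm is still ~1 ohm.
print(system_resolution(100, 1000))   # ~99.5 lp/mm
print(system_resolution(100, 10000))  # ~100.0 lp/mm
```

Note how the output degrades smoothly rather than hitting a hard wall: even when one component is ten times better, the system still gains slightly from improving it further, which is the "diminishing returns, not uselessness" point.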

I think my opinion diverges from yours when the lens is resolving a larger spot than the pixel. That doesn't suddenly mean increasing sensor resolution is useless. I wouldn't say there is a hard wall there. Once the spot of light resolved by a lens starts growing larger than a pixel, that is the point at which you first start experiencing diminishing returns. There is still value in increasing the resolution of the sensor, however. You begin to oversample...and in the grand scheme of things, oversampling is actually good. If we had sensors that were consistently capable of oversampling the diffraction spot of a diffraction-limited lens by about 3x, then we would be able to do away with low pass filters entirely and NOT suffer the consequences of moire and other aliasing. Oversampling could do away with a whole lot of issues, eliminate the complaints of people who incessantly pixel peep, etc. The spatial scale of photon shot noise would drop to well below the smallest resolvable element of detail.

To me, there is absolutely no reason not to use the highest resolution sensor you can get your hands on. As I stated in my previous reply...noise is relative to unit area. Average up whatever number of smaller pixels equals the area of a larger pixel, and you will have the same noise (all else being equal). I would much rather oversample my lens by a factor of two to three than always be undersampling it. I'd MUCH rather have the frequency of photon shot noise be significantly higher than the frequency of the smallest resolvable detail, as then it would simply be a matter of course to downsample by 2-3x for every single photo. Then, the smallest resolvable detail is roughly pixel-sized, and noise is 1.4-1.7x lower.
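As a rough sketch of what 3x oversampling implies for pixel pitch: the Airy-disk diameter d = 2.44·λ·N is standard diffraction math, but the 550nm wavelength, the chosen f-numbers, and the "three pixels across the spot" target are illustrative assumptions, not figures from the thread:

```python
WAVELENGTH_UM = 0.55  # green light, ~550 nm (assumption)

def airy_diameter_um(f_number: float) -> float:
    """Airy disk diameter for a diffraction-limited lens: 2.44 * lambda * N."""
    return 2.44 * WAVELENGTH_UM * f_number

def max_pitch_um(f_number: float, oversampling: float = 3.0) -> float:
    """Largest pixel pitch that puts `oversampling` pixels across the spot."""
    return airy_diameter_um(f_number) / oversampling

for f in (4.0, 5.6, 8.0):
    print(f"f/{f:g}: Airy disk {airy_diameter_um(f):.2f} um -> "
          f"pitch <= {max_pitch_um(f):.2f} um for 3x oversampling")
```

At f/8 the spot is already ~10.7µm across, so even 4.3µm pixels fall short of 3x oversampling there; at wider apertures the required pitches shrink quickly, which is why sensors capable of consistent 3x oversampling would need far higher pixel counts than today's.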
 
Upvote 0
scyrene said:
Is that software Windows-only? I did a bit of searching, and couldn't find system requirements anywhere. For some reason, most of this sort of software doesn't run on Macs, so I had to use the only stacker I could find that did, called 'Keith's Image Stacker'. It's pretty good but I have no understanding of how different modes produce different results. I guess I should read up on it more.

Yeah, Windows is pretty much the operating system of choice for astrophotography software. There are some new apps available for iOS devices, but overall, not much of the software we use runs on Macs. I think most astrophotographers either dual-boot their Macs (virtualization tends to be problematic and too slow) so they can run Windows when they need to...or they simply have a Windows-based laptop for their astrophotography stuff.

If you're interested in AP, then I highly recommend you pick up a Windows box of some kind. The vast majority of the software out there, like BackyardEOS, is only available for Windows.

scyrene said:
As for video - this is a question I've had for a while. Even HD video is only 2MP. No matter how much resolution you're gaining through stacking, surely you're losing 90% (of the 5DIII's potential) versus stills? Any thoughts? When I stacked my moon (I've included a crop of a much reduced-size below) I had to shoot lots of stills manually and use those instead. You're clearly doing something better though, as you seem to be pulling out a similar level of detail even though I was at a much higher focal length (5600mm).

BackyardEOS has some unique features. It is able to use the 5x and 10x zoom features of live view, then record a 720p video from those zoomed-in views. So you're actually getting more resolution than if you recorded a 720p video at 1x.

For superresolution algorithms to work, you need your frames to be pretty close together. You want some separation, a minimal amount of time to allow the subject to "jitter" between frames, as it's the jitter that allows an algorithm like drizzle to work in the first place.

At 5600mm, you should be able to pull out some extreme detail of a small area of the moon's surface. I'd love to have 5600mm at my disposal! :D If you pick up a windows laptop, install BackyardEOS, and play around with the planetary imaging feature...you'll start to see how it all works, and you'll start getting amazing results.
 
Upvote 0