Review: Sensor Performance of the 7D Mark II

jrista said:
Sorry, didn't check the plot. I guess exposure was a bad word. The signal strength does not increase. Noise is reduced by averaging, but the strength of the signal remains the same. With digital systems, once you're into detail that's buried in the read and dark current noise, you have a serious quantization noise problem. You could stack 3600 frames...but that faint detail is not going to be as detailed as it would be if you used the longest relevant exposure for your level of read noise.

Going off your plot, I'd say stacking 15-minute exposures would be the ideal. You maximize your signal strength in each sub, and can integrate fewer subs to average out the read noise and reveal the faintest details. You're going to get most of the detail above the noise floor, and exposed well enough that you're not going to face quantization error issues.

There still seems to be a lot of confusion here. Accumulated total exposure time is proportional to the amount of light collected, so yes, it is increasing with additional exposures. If photon-noise limited (the situation most of the time in photography), signal-to-noise ratio goes up as the square root of the total exposure. In processing many sub-exposures, if you average, the signal level stays the same and the noise floor is pushed down as the square root of the number of exposures averaged. If you add, the signal increases linearly and the noise increases as the square root. Simple math, but in either case signal-to-noise ratio increases with total exposure time.
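To make the averaging-vs-adding arithmetic above concrete, here is a quick NumPy sketch with made-up numbers (100 e- signal, 3 e- read noise, 64 subs; these are not values from any camera or plot in this thread):

```python
import numpy as np

rng = np.random.default_rng(42)

signal = 100.0       # mean electrons per sub (illustrative)
read_noise = 3.0     # e- RMS, a typical modern-DSLR figure
n_subs = 64
n_pixels = 10_000    # simulated pixels

# Each sub: Poisson photon noise plus Gaussian read noise.
subs = (rng.poisson(signal, size=(n_subs, n_pixels)).astype(float)
        + rng.normal(0.0, read_noise, size=(n_subs, n_pixels)))

avg = subs.mean(axis=0)  # averaging: signal level stays ~100
tot = subs.sum(axis=0)   # adding: signal level grows to ~6400

snr_single = signal / np.sqrt(signal + read_noise**2)
snr_avg = avg.mean() / avg.std()
snr_sum = tot.mean() / tot.std()
# snr_avg and snr_sum are the same number; both are ~sqrt(64) = 8x snr_single.
```

Averaging and adding give identical SNR; only the output scale differs, exactly as stated above.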

Regarding dark frames, if you are new to this kind of photography and have a relatively new camera, YOU DO NOT NEED DARK FRAMES. Modern digital cameras, circa 2008 and later, have on-sensor dark current suppression, so there is no dark current to remove with a dark frame. Canon cameras have another added feature; see my description here:
http://www.clarkvision.com/articles/night.photography.image.processing/
Before starting an imaging session, do a sensor clean. That also updates the hot/dead/stuck pixel list. Modern raw converters will then not include those pixels in the raw conversion. NO NEED FOR DARK FRAMES. Those who say you need dark frames are either using old technology or clinging to outdated, unneeded methods that just make things harder.

Flat fields are not needed if you are using profiled camera lenses. Certainly start with a clean sensor, but with the step above you should have no dust spots, and with fast lenses dust spots don't really show. During raw conversion (e.g., Photoshop ACR) simply select the lens profile and voilà! The flat field is applied. Quick and simple. Again, details in the above link.

With cameras like the 7D Mark II or 6D, there is no banding in extreme stretches from images at ISO 1600. Astrophotography is simple and effective.

Regarding subs, people keep pointing to a graph with 8.4-electron read noise. That is not very relevant to modern DSLRs with read noise under 3 electrons. A proper plot would show a steeper rise, indicating that shorter subs work well.

For the person starting out, I suggest wide angle astrophotography. Follow my series in the link above. Use your 60 mm f/2.8 with either a barn door home made tracker, or iOptron (see my gear page).

A 15-minute exposure would saturate many stars and the brighter parts of nebulae. The idea of low ISOs and 15-minute exposures simply leads to quantization noise, loss of the low-end faint signals, and saturated highlights. And with a fast lens, even the sky would be overexposed.

With a 60 mm f/2.8 lens on a 7D2 and 1-minute subs at ISO 1600, you should be able to do much better than this from a dark sky:
http://www.clarkvision.com/galleries/gallery.astrophoto-1/web/orion.35mm.rnclark.c10.09.2013.C45I4598-613_61sec.avg14.g2-bin4x4s.html

Roger
 
Lee Jay said:
jrista said:
Sorry, didn't check the plot. I guess exposure was a bad word. The signal strength does not increase. Noise is reduced by averaging, but the strength of the signal remains the same.

No, the signal strength (the number of photons collected) does increase. That is, in fact, the main purpose of doing it.


Eh, I see what you are saying. With an analog signal, I think that could be possible (I don't process my images as analog signals). From another angle, averaging frames together doesn't increase the digital signal level...if you had a level in your brightest areas of say 13,000, and in the dimmest areas of 2, and your read noise level is around 7-15 with banding and color blotches, averaging reduces the noise (but enhances the banding), but the absolute levels of the digital signal do not change. Once you average enough to reduce the read noise to an effectively meaningless level, the faintest parts of your signal are still going to have a digital level of 2, and any differences in level of the faint detail that should be revealed in neighboring pixels are going to be swamped by quantization noise...in other words, there is likely to be data in areas that end up looking like a largely continuous and flat mass separated by sharp steps (posterization), rather than DETAIL.


Assuming perfectly ideal circumstances, yes...you're taking light collected across many subs and combining it...you still collected the light. But it doesn't increase the amount of detail. So, theoretically correct...in practice I think there are things that work against this ideal. For one, quantization error.

In addition, with shorter and shorter subs, you have an increasing ratio of dead time between frames where you are not collecting light. If you get one photon approximately every 6 seconds from the dim outer region of a nebula, and you're taking 5-second subs with a two-second delay between them (for readout time), you're going to miss a significant fraction of those photons. You'll capture some, but fewer than if you had longer exposures. Thus, in practice, stacking 1200x1s subs is not as good as stacking 20x60s subs, which is not as good as stacking 2x600s subs. The longer subs are going to be gathering more light in total, which isn't as meaningful for stronger parts of the signal, but could be hugely meaningful for the faintest parts of the signal. The lag time between subs is minimal when you're taking fewer, longer subs.
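The dead-time argument above is really just a duty-cycle calculation. A small sketch using the hypothetical numbers from the post (5-second subs with a roughly 2-second readout gap):

```python
def imaging_efficiency(sub_s, overhead_s):
    """Fraction of wall-clock time spent actually collecting light."""
    return sub_s / (sub_s + overhead_s)

# Hypothetical numbers from the post: 5 s subs, ~2 s readout between frames.
short_eff = imaging_efficiency(5, 2)    # ~0.71: roughly 29% of photons lost
long_eff = imaging_efficiency(600, 2)   # ~0.997: dead time is negligible
```

With these assumed numbers the loss is about 29% of the photons, which is substantial even if not a majority.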


Lee Jay said:
With digital systems, once you're into detail that's buried in the read and dark current noise, you have a serious quantization noise problem.

Actually, you have a serious read and dark current noise problem.


Sorry, I was unclear. If you average together 3600 short exposures, you'll average out a ton of the read noise and dark current, leaving behind a very faint signal that suffers from quantization error. It'll be posterized and flat, without much if any structure. That is an issue with quantization error, since you averaged out read noise.


Lee Jay said:
Going off your plot,

Not mine, by the way.

I'd say stacking 15-minute exposures would be the ideal.

No, you have to re-do that plot for your level of light pollution and for the noise of your camera and f-stop of your optics.


Whoever's it is. :P

But no, you misunderstood again. For the plot you provided, assuming that read noise and that skyfog level, a 12-15 minute exposure would get you ideal subs. You would waste the least amount of light (ignoring for now the fact that skyfog is useless light), maximize signal strength, minimize the amount of faint detail subject to quantization error, and minimize the number of subs you need to average to reduce read noise to the point where you can stretch that fainter detail and have clean results.


Assuming that read noise and that skyfog level, I do not believe short exposures, say a matter of seconds, are going to give you the same results as 15-minute exposures, for all the reasons I stated at the top. You might produce a more complete signal in the end, but since it is digital, you're just making a lower level of detail smoother and less noisy...you're not increasing detail. I see this problem with images from guys who use really short exposures with EMCCD cameras...they get REALLY smooth, clean images...but they lack the detail of longer-exposure images. Even when a thousand or more subs are integrated...the signal gets really clean, but that's it. I guess one way to look at it is that longer exposures are like an in-camera detail-enhancing stretch.
 
jrista said:
But no, you misunderstood again. For the plot you provided, assuming that read noise and that skyfog level, a 12-15 minute exposure would get you ideal subs. You would waste the least amount of light (ignoring for now the fact that skyfog is useless light), maximize signal strength, minimize the amount of faint detail subject to quantization error, and minimize the number of subs you need to average to reduce read noise to the point where you can stretch that fainter detail and have clean results.


Assuming that read noise and that skyfog level, I do not believe short exposures, say a matter of seconds, are going to give you the same results as 15-minute exposures, for all the reasons I stated at the top. You might produce a more complete signal in the end, but since it is digital, you're just making a lower level of detail smoother and less noisy...you're not increasing detail. I see this problem with images from guys who use really short exposures with EMCCD cameras...they get REALLY smooth, clean images...but they lack the detail of longer-exposure images. Even when a thousand or more subs are integrated...the signal gets really clean, but that's it. I guess one way to look at it is that longer exposures are like an in-camera detail-enhancing stretch.

Huh...I would have chosen something in the knee of the curve, say, 2-4 minutes. That way, you get 90% of the SNR and have the lowest probability of having to throw out a long sub or three due to a gust of wind, an airplane or a satellite.
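The "knee" described here falls out of the standard stacking noise model: for a fixed total time T split into subs of length t, the stack variance is (object + sky)·T plus (T/t)·RN². A sketch with placeholder rates (a faint object, modest sky fog, 3 e- read noise; not the actual plot's values):

```python
import math

def stack_snr(sub_s, total_s, obj_rate, sky_rate, read_noise):
    """SNR of a stack of (total_s / sub_s) subs, each sub_s seconds long.
    obj_rate and sky_rate in e-/s per pixel; read_noise in e- RMS per frame."""
    n_subs = total_s / sub_s
    signal = obj_rate * total_s
    variance = (obj_rate + sky_rate) * total_s + n_subs * read_noise**2
    return signal / math.sqrt(variance)

# Placeholder values: faint object, modest sky fog, 3 e- read noise.
ideal = stack_snr(3600, 3600, 0.02, 0.5, 3.0)  # one 60-minute exposure
knee = stack_snr(180, 3600, 0.02, 0.5, 3.0)    # twenty 3-minute subs
frac = knee / ideal   # ~0.96: the short subs already reach >90% of the SNR
```

With these assumed rates, 3-minute subs give about 96% of the SNR of a single hour-long exposure, which is the shape of the trade-off being argued over.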
 
Roger N Clark said:
jrista said:
Sorry, didn't check the plot. I guess exposure was a bad word. The signal strength does not increase. Noise is reduced by averaging, but the strength of the signal remains the same. With digital systems, once you're into detail that's buried in the read and dark current noise, you have a serious quantization noise problem. You could stack 3600 frames...but that faint detail is not going to be as detailed as it would be if you used the longest relevant exposure for your level of read noise.

Going off your plot, I'd say stacking 15-minute exposures would be the ideal. You maximize your signal strength in each sub, and can integrate fewer subs to average out the read noise and reveal the faintest details. You're going to get most of the detail above the noise floor, and exposed well enough that you're not going to face quantization error issues.

There still seems to be a lot of confusion here. Accumulated total exposure time is proportional to the amount of light collected, so yes, it is increasing with additional exposures. If photon-noise limited (the situation most of the time in photography), signal-to-noise ratio goes up as the square root of the total exposure. In processing many sub-exposures, if you average, the signal level stays the same and the noise floor is pushed down as the square root of the number of exposures averaged. If you add, the signal increases linearly and the noise increases as the square root. Simple math, but in either case signal-to-noise ratio increases with total exposure time.


Please see my latest answer. I think that will clear up some confusion about what I'm talking about. I agree: if you add, signal strength increases. I was talking about averaging, which keeps the signal strength the same but reduces noise. In the latter case, I believe that imposes limitations on how much you can improve detail with continued averaging. You eventually end up with a wicked-clean signal, but that does not mean it is as detailed (i.e., it doesn't exhibit the finer, fainter structures) as if you averaged fewer, longer exposures which DO have a stronger signal.


Roger N Clark said:
Regarding dark frames, if you are new to this kind of photography and have a relatively new camera, YOU DO NOT NEED DARK FRAMES. Modern digital cameras, circa 2008 and later, have on-sensor dark current suppression, so there is no dark current to remove with a dark frame. Canon cameras have another added feature; see my description here:
http://www.clarkvision.com/articles/night.photography.image.processing/
Before starting an imaging session, do a sensor clean. That also updates the hot/dead/stuck pixel list. Modern raw converters will then not include those pixels in the raw conversion. NO NEED FOR DARK FRAMES. Those who say you need dark frames are either using old technology or clinging to outdated, unneeded methods that just make things harder.

I disagree with this, just based on my practical experience with both the 7D and 5D III. Dark current is still high enough in DSLRs to be an issue. I can share some subs of mine where I had wide swings in temperature. At the higher temperatures, the impact of the dark signal, and the accompanying dark signal noise, was very obvious. This was during summer, without a cold box, so sensor temperatures (which were higher than ambient) ranged from nearly 30°C to over 40°C. At lower temperatures, such as we have now, you can see how the dark signal became a largely meaningless factor, but at 25, 35, or 40°C you can clearly see the increase in noise.

There are still issues with dark current. Hot pixels still exist in modern cameras. I cannot speak for the 7D II; however, the 5D III without a doubt has hot pixels. I've seen plenty of dark frames from the 6D showing hot pixels above about 10°C, and more of them show up with an increase in temperature (for a given exposure length).

Beyond just hot pixels, there are other reasons to use dark frames. Again, based on my experience with the 5D III, I get a little bit of amplifier glow along the right-hand edge of my subs. It's pretty much impossible to remove with PixInsight's background extraction tools, and Russell Croman's GradientXterminator does not seem to eliminate it either. The only way to really get rid of it is with dark frames.


All that said, these days I generally don't bother with dark frames. They are a royal PITA to deal with...what with them having to be temperature matched and all that. They can mitigate odd issues with the light frame signals, such as that amplifier glow, but there are other ways to deal with the most common dark current issue, the hot pixels. These days, I dither. Using BackyardEOS, Sequence Generator Pro, or MaxIm DL, you can hook into guiding software like PHD or MetaGuide and use that to move the scope in DEC and RA a little bit between frames. That displaces the hot pixels in each frame, allowing sigma-clipping-based algorithms (like the Winsorized sigma clipping used in PixInsight's stacking script) to identify and reject the hot (and cold) pixels.
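The dither-plus-sigma-clip rejection described above can be sketched in a few lines of NumPy. This is a plain kappa-sigma clip, not PixInsight's Winsorized variant, and the frame values below are invented purely for illustration:

```python
import numpy as np

def kappa_sigma_stack(frames, kappa=3.0, iters=3):
    """Mean-stack frames, iteratively masking pixels more than kappa
    sigma from the per-pixel mean (hot pixels, satellite trails, ...)."""
    data = np.ma.masked_invalid(np.asarray(frames, dtype=float))
    for _ in range(iters):
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        data = np.ma.masked_where(np.abs(data - mean) > kappa * std, data)
    return data.mean(axis=0).filled(np.nan)

# Dithering moves the scope between frames, so a hot pixel lands on a
# different sky position each time; at any fixed (x, y) it is an outlier
# in only one frame and gets rejected by the clip.
rng = np.random.default_rng(7)
frames = rng.normal(100.0, 2.0, size=(20, 8, 8))   # 20 fake subs
frames[4, 3, 3] = 5000.0                           # one hot pixel
clean = kappa_sigma_stack(frames)                  # clean[3, 3] is ~100 again
```

A plain mean at that pixel would come out around 345; the clipped stack recovers the background level.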


So, while I agree that darks are not particularly necessary these days thanks to dithering, I do not believe that most current digital cameras are so free of dark current that it is entirely a non-issue. Maybe it's just a tolerance thing...I'm a bit of a perfectionist, and I don't want any hot pixels in my images. If you have a high tolerance for hot pixels, then maybe the dark current levels of current DSLRs are fine.
 
Lee Jay said:
jrista said:
But no, you misunderstood again. For the plot you provided, assuming that read noise and that skyfog level, a 12-15 minute exposure would get you ideal subs. You would waste the least amount of light (ignoring for now the fact that skyfog is useless light), maximize signal strength, minimize the amount of faint detail subject to quantization error, and minimize the number of subs you need to average to reduce read noise to the point where you can stretch that fainter detail and have clean results.


Assuming that read noise and that skyfog level, I do not believe short exposures, say a matter of seconds, are going to give you the same results as 15-minute exposures, for all the reasons I stated at the top. You might produce a more complete signal in the end, but since it is digital, you're just making a lower level of detail smoother and less noisy...you're not increasing detail. I see this problem with images from guys who use really short exposures with EMCCD cameras...they get REALLY smooth, clean images...but they lack the detail of longer-exposure images. Even when a thousand or more subs are integrated...the signal gets really clean, but that's it. I guess one way to look at it is that longer exposures are like an in-camera detail-enhancing stretch.

Huh...I would have chosen something in the knee of the curve, say, 2-4 minutes. That way, you get 90% of the SNR and have the lowest probability of having to throw out a long sub or three due to a gust of wind, an airplane or a satellite.


With sigma clipping algorithms, airplanes, satellites, cosmic ray hits, etc. are pretty easy to reject. You usually need more than 10 frames, but that's usually the case anyway. I live near an airport, and aside from the ludicrously low planes that cover a huge portion of the frame, I use the kappa-sigma clipping algorithm along with aggressive dithering of the light frames to reject the streaks. With really long exposures, say the 30-45 minutes (and sometimes even longer) I'm seeing guys use these days for OIII narrowband imaging on CCDs, multiple streaks from aircraft, meteors, satellites, etc. can end up in a single frame, and it can become more problematic. Most of them still don't seem to have that many problems when using a sigma clipping algorithm to integrate.



A gust of wind might be a problem. It depends on the mount...if you're using something with absolute encoders and sky modeling, then wind is usually not a problem unless it is very strong and continuous. At smaller image scales, such as mine at 2.1"/px, I can still absorb a good deal of wind without it showing up in my stars, and I am using the lowly Atlas as a mount. (That may partly be due to the fact that I usually have poor seeing, which bloats my stars anyway.) At a larger image scale, sure, wind is likely to be a problem.


There is also the fact that it's the faint detail that you expose for (well, assuming you want to get as much faint detail as possible...maybe that isn't everyone's goal). We're assuming that the whole object has a higher photon flux. That may not be the case. The photon flux of the bulk of a nebula might be up at 14 e-/min, but the fainter outer detail is likely to be much lower...say ~2 e-/min? I think about the faint detail. I can expose for the Trapezium in Orion in 15 seconds. I had to expose at least 5 minutes to get the faint outer detail just barely above the read noise (and at that long an exposure there was dark current noise as well...those faint outer regions are very noisy). To fully get those faint details above the noise floor, I settled on 8-minute exposures.
 
jrista said:
I disagree with this, just based on my practical experience with both the 7D and 5D III. ...

There are still issues with dark current. Hot pixels still exist in modern cameras. I cannot speak for the 7D II,

Jon,
First, this is supposed to be a thread about the 7D Mark II and what a game changer it is. You are talking about problems with older cameras. On page 7 you posted an image of the Orion nebula made with a 5DIII, 600 f/4, an hour of exposure, and many flats, darks, bias. You got a little of the faint nebula dust around M42. Nice image. BUT..

I am attaching an image made with the 7D2, 300 f/2.8, no flats, no darks, no bias, and only 27.5 minutes of exposure (plus a few short exposures down to 2 seconds for the core of M42). That is 27 61-second exposures at ISO 1600. Simple tracking with an Astrotrac, and no guiding. Simple processing: raw conversion in Photoshop with a lens profile; Photoshop reads the hot/dead/stuck pixel list and does not include those pixels in the raw conversion. Then simply align the images and sigma-clip average, then stretch with curves in Photoshop. There are NO issues with hot/stuck/dead pixels in the final image. There is no problem with amp glow or non-uniformity of the background. The pink glow in the lower left is the emission from IC 434, where the Horsehead Nebula is.

Your aperture is 150 mm in diameter versus mine at 107 mm; square the ratio, so you collected about twice the light per second from the subject, and with more than twice the exposure time, about 4 times the light. Yet I show much fainter nebulae using much simpler methods. THAT is why the 7D2 is a game changer.
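The light-collection comparison above is straightforward arithmetic, aperture-area ratio times exposure-time ratio:

```python
# 600 mm f/4 -> 150 mm aperture; 300 mm f/2.8 -> ~107 mm aperture.
area_ratio = (150 / 107) ** 2          # ~1.97x light per second
time_ratio = 60 / 27.5                 # ~2.18x total exposure time
total_ratio = area_ratio * time_ratio  # ~4.3x total light, i.e. "4 times"
```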

Again, please see my description of simplified methods for night photography here:
http://www.clarkvision.com/articles/night.photography.image.processing/
Flats, darks, and bias frames are no longer needed.

Roger
 

Attachments

  • orion.nebula.m42_61,10,4,2sec_c11.21.2014.0J6A1631-1657-SigAv.g-b6x6s.jpg (317.4 KB)
Roger N Clark said:
jrista said:
I disagree with this, just based on my practical experience with both the 7D and 5D III. ...

There are still issues with dark current. Hot pixels still exist in modern cameras. I cannot speak for the 7D II,

Jon,
First, this is supposed to be a thread about the 7D Mark II and what a game changer it is. You are talking about problems with older cameras. On page 7 you posted an image of the Orion nebula made with a 5DIII, 600 f/4, an hour of exposure, and many flats, darks, bias. You got a little of the faint nebula dust around M42. Nice image. BUT..

I am attaching an image made with the 7D2, 300 f/2.8, no flats, no darks, no bias, and only 27.5 minutes of exposure (plus a few short exposures down to 2 seconds for the core of M42). That is 27 61-second exposures at ISO 1600. Simple tracking with an Astrotrac, and no guiding. Simple processing: raw conversion in Photoshop with a lens profile; Photoshop reads the hot/dead/stuck pixel list and does not include those pixels in the raw conversion. Then simply align the images and sigma-clip average, then stretch with curves in Photoshop. There are NO issues with hot/stuck/dead pixels in the final image. There is no problem with amp glow or non-uniformity of the background. The pink glow in the lower left is the emission from IC 434, where the Horsehead Nebula is.

Your aperture is 150 mm in diameter versus mine at 107 mm; square the ratio, so you collected about twice the light per second from the subject, and with more than twice the exposure time, about 4 times the light. Yet I show much fainter nebulae using much simpler methods. THAT is why the 7D2 is a game changer.

Again, please see my description of simplified methods for night photography here:
http://www.clarkvision.com/articles/night.photography.image.processing/
Flats, darks, and bias frames are no longer needed.

Roger
I find both of your pictures inspiring... but I must say that I am very impressed with the results Roger gets from a far simpler method... If I ever get a clear night here I have to go try this out!
 
Roger N Clark said:
jrista said:
I disagree with this, just based on my practical experience with both the 7D and 5D III. ...

There are still issues with dark current. Hot pixels still exist in modern cameras. I cannot speak for the 7D II,

Jon,
First, this is supposed to be a thread about the 7D Mark II and what a game changer it is. You are talking about problems with older cameras. On page 7 you posted an image of the Orion nebula made with a 5DIII, 600 f/4, an hour of exposure, and many flats, darks, bias. You got a little of the faint nebula dust around M42. Nice image. BUT..

I am attaching an image made with the 7D2, 300 f/2.8, no flats, no darks, no bias, and only 27.5 minutes of exposure (plus a few short exposures down to 2 seconds for the core of M42). That is 27 61-second exposures at ISO 1600. Simple tracking with an Astrotrac, and no guiding. Simple processing: raw conversion in Photoshop with a lens profile; Photoshop reads the hot/dead/stuck pixel list and does not include those pixels in the raw conversion. Then simply align the images and sigma-clip average, then stretch with curves in Photoshop. There are NO issues with hot/stuck/dead pixels in the final image. There is no problem with amp glow or non-uniformity of the background. The pink glow in the lower left is the emission from IC 434, where the Horsehead Nebula is.

Your aperture is 150 mm in diameter versus mine at 107 mm; square the ratio, so you collected about twice the light per second from the subject, and with more than twice the exposure time, about 4 times the light. Yet I show much fainter nebulae using much simpler methods. THAT is why the 7D2 is a game changer.

Again, please see my description of simplified methods for night photography here:
http://www.clarkvision.com/articles/night.photography.image.processing/
Flats, darks, and bias frames are no longer needed.

Roger

Marvellous shot, Roger!
Good to see that good knowledge and proper usage makes this body, and its sensor, shine at least as bright as the nebulæ the two of you have shown us here.
To be honest, I think that this last piece of evidence is the best sign that 7D Mark II can deliver excellent images.
 
DominoDude said:
Roger N Clark said:
jrista said:
I disagree with this, just based on my practical experience with both the 7D and 5D III. ...

There are still issues with dark current. Hot pixels still exist in modern cameras. I cannot speak for the 7D II,

Jon,
First, this is supposed to be a thread about the 7D Mark II and what a game changer it is. You are talking about problems with older cameras. On page 7 you posted an image of the Orion nebula made with a 5DIII, 600 f/4, an hour of exposure, and many flats, darks, bias. You got a little of the faint nebula dust around M42. Nice image. BUT..

I am attaching an image made with the 7D2, 300 f/2.8, no flats, no darks, no bias, and only 27.5 minutes of exposure (plus a few short exposures down to 2 seconds for the core of M42). That is 27 61-second exposures at ISO 1600. Simple tracking with an Astrotrac, and no guiding. Simple processing: raw conversion in Photoshop with a lens profile; Photoshop reads the hot/dead/stuck pixel list and does not include those pixels in the raw conversion. Then simply align the images and sigma-clip average, then stretch with curves in Photoshop. There are NO issues with hot/stuck/dead pixels in the final image. There is no problem with amp glow or non-uniformity of the background. The pink glow in the lower left is the emission from IC 434, where the Horsehead Nebula is.

Your aperture is 150 mm in diameter versus mine at 107 mm; square the ratio, so you collected about twice the light per second from the subject, and with more than twice the exposure time, about 4 times the light. Yet I show much fainter nebulae using much simpler methods. THAT is why the 7D2 is a game changer.

Again, please see my description of simplified methods for night photography here:
http://www.clarkvision.com/articles/night.photography.image.processing/
Flats, darks, and bias frames are no longer needed.

Roger

Marvellous shot, Roger!
Good to see that good knowledge and proper usage makes this body, and its sensor, shine at least as bright as the nebulæ the two of you have shown us here.
To be honest, I think that this last piece of evidence is the best sign that 7D Mark II can deliver excellent images.

Now if we can get an equally impressive improvement in cloud filter technology we would not be here discussing all of this. :)

Great shot Roger and nice article. Gives some of us some hope.
 
Roger N Clark said:
jrista said:
I disagree with this, just based on my practical experience with both the 7D and 5D III. ...

There are still issues with dark current. Hot pixels still exist in modern cameras. I cannot speak for the 7D II,

Jon,
First, this is supposed to be a thread about the 7D Mark II and what a game changer it is. You are talking about problems with older cameras. On page 7 you posted an image of the Orion nebula made with a 5DIII, 600 f/4, an hour of exposure, and many flats, darks, bias. You got a little of the faint nebula dust around M42. Nice image. BUT..

I am attaching an image made with the 7D2, 300 f/2.8, no flats, no darks, no bias, and only 27.5 minutes of exposure (plus a few short exposures down to 2 seconds for the core of M42). That is 27 61-second exposures at ISO 1600. Simple tracking with an Astrotrac, and no guiding. Simple processing: raw conversion in Photoshop with a lens profile; Photoshop reads the hot/dead/stuck pixel list and does not include those pixels in the raw conversion. Then simply align the images and sigma-clip average, then stretch with curves in Photoshop. There are NO issues with hot/stuck/dead pixels in the final image. There is no problem with amp glow or non-uniformity of the background. The pink glow in the lower left is the emission from IC 434, where the Horsehead Nebula is.

Your aperture is 150 mm in diameter versus mine at 107 mm; square the ratio, so you collected about twice the light per second from the subject, and with more than twice the exposure time, about 4 times the light. Yet I show much fainter nebulae using much simpler methods. THAT is why the 7D2 is a game changer.

Again, please see my description of simplified methods for night photography here:
http://www.clarkvision.com/articles/night.photography.image.processing/
Flats, darks, and bias frames are no longer needed.

Roger


Very nice image! I like this one MUCH better than the Horsehead one! :) You should have led with this.

Now, I need to correct a couple of things. First, I image from a pretty heavily light-polluted red zone on the Bortle scale. I was not imaging with a bare lens. I had an Astronomik CLS-XL filter inserted into my 5D III, which reduces the light by about one and a third stops or so (I did mention this before, but only briefly). I was not working with four times as much light; I wasn't even working with twice as much light. I'm currently working on replacing the Astronomik CLS-XL with an IDAS LPS-D1 or -P2, which has more passbands and fewer/smaller filtered bands. It should increase the amount of light I get in a given exposure.
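For reference, the arithmetic behind "not even twice as much light": a filter costing about 1⅓ stops passes only 1/2^(4/3) of the light, which cancels most of a nominal 4x advantage. (The 4.0 below is the aperture-and-time ratio claimed earlier in the thread, taken at face value.)

```python
filter_loss = 2 ** (4 / 3)   # ~2.52x: light lost to a ~1-1/3 stop filter
nominal_advantage = 4.0      # aperture-area x exposure-time ratio from above
net_advantage = nominal_advantage / filter_loss   # ~1.6x: under "twice"
```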

Second, I did not use darks. I dither, but I did not use darks. The 200-frame master bias is months old; I generated it once back around July and have been using it ever since. The flats I use because I do have dust motes. If you're really clean and don't live in an arid place like Colorado, maybe you can get away with not having to deal with flats. Sure, it's possible to skip the calibration frames entirely...but I feel I need the flats at the very least, and when integrating with DSS, it wants a bias to calibrate both the flats and the lights. You do what you gotta do. :P However, I did NOT use darks...just to be clear on that.

Another thing to point out, you have a larger field of view and a stop faster aperture, which means more light concentrated onto each pixel. On top of that, the pixels are smaller with a lower FWC, so they saturate faster. Doesn't that ultimately mean you achieve a brighter exposure for a given unit time (regardless of what the ISO is, since that just affects the gain used)?

I mean, if I increased my field of view by...what, ~25% (FoV * (600/480)...? Or would it be FoV * SQR(600/480)?), removed the CLS filter, increased the aperture by a stop, and was shooting under darker skies without as much light pollution, sure, I'd have been able to get more exposure in less time as well. Because of my skyfog levels, I'm either relegated to using shorter exposures and getting a lesser signal (I'm limited to about 180 seconds tops unfiltered), or forced to use some kind of filtration to block out the light pollution while passing useful nebula bands. I chose the latter...my prior posts should explain why.

I do not have a 400mm lens that I could use to normalize the field of view, but I can give short unfiltered subs a try, and simply stack many more of them. I can give a higher ISO a try as well. Just to see how things pan out in the end. LP is my limiting factor, and I don't think that I can pull out that much outer nebulosity in 30 minutes...but I'm curious enough to try now. I might need around 25% more time to offset the field of view difference, and on top of that there is the relative aperture difference. (Correct me if I am wrong, but the size of the physical aperture affects resolving power...more light per unit point of sky is gathered with a larger aperture (so you can resolve smaller stars), however overall light gathering capacity is still related to the relative aperture, since light is still falling off over the distance of the focal length...thus, a 300mm f/2.8 lens is still gathering more light per unit time per pixel than a 600mm f/4 lens, no?)

I don't know the specifics, and the 7D II may certainly be improved by having fewer hot pixels. I am not sure if that is necessarily a "game changer", although it certainly is an improvement. I would really like to know what your sky brightness levels are...as I think that plays a huge role in how much faint detail you can pull out in an image of a given total integration time. It sounds like you shot unfiltered with a faster lens as well, which to me makes it sound like you were sucking down a lot of light.

Sorry for being so contrary...but, once you factor in the circumstances of my imaging, I have to point out that I was NOT gathering four times as much light, not even a stop more light. Due to the wider FoV and faster relative aperture, I do believe you were getting more light per pixel, which has a significant impact on overall exposure. I am still skeptical that the 7D II is so much of a game changer as it's being made out to be. I don't just say that based on the discussion here. I have some 7D II data I'm working with...I may just post a screenshot, although I'm not sure I can share the RAW until I ask the owner. It honestly doesn't look that much better to me than most other Canon RAW data. It still has plenty of horizontal banding (although it's fat bands, not per-row banding), and plenty of color blotchiness and still has that very strong red cast. I see a very marginal improvement, not something game changing. I am always happy to admit when I am wrong...but, so far, I don't think that anything I've seen about the 7D II so far demonstrates that I am (at least regarding sensor performance...overall, I think the camera as a whole is a great performer, and once again fills the unique spot that the original 7D did.)
 
For anyone who is interested, this is a useful way of measuring your skyfog brightness:


http://www.pbase.com/samirkharusi/image/37608572


It's simple and effective, and produces a comparable number. Last time I measured, I was getting around 18.66 Mag/sq arc-sec, which is ok, but far from dark sky quality. My exposure time to mid frame is around 75-78 seconds or so.





For those who are interested, some example exposures from a 5D III and 7D II:


5D III
QwgnBbU.jpg



7D II
8aireFK.jpg



Here's my personal opinion, based on what I observe. I see an improvement...but it also seems like horizontal banding increased while vertical banding was eliminated. It still has the same general character of color noise and red tint. Overall background noise levels do not seem to have dropped considerably, although the removal of vertical banding, and the thicker, softer nature of the horizontal banding, certainly helps with the standard deviation.


I do not necessarily see a huge improvement in how much I could lift the shadows...with debanding on both images, the ability to differentiate steps in the wedge from each other simply stops at around swatch 34 in both images.


Just for contrast, the NX1:


wESOH5b.jpg



The ability to discern steps stops at swatch 38? That's about a stop to a stop and a third more DR than the 7D II. Personally, and I mean just speaking for myself...not trying to tell everyone else what is or isn't for them, I could do far more with the data from the NX1 than I could with the data from the 7D II. No banding really to speak of. Clean, random noise without any visible color cast. Very low noise. I get more SNR right out of the gate in each and every sub.





One final thing for Roger. I'm a Canon fan. All my equipment is Canon so far. I REALLY want Canon to take that quantum leap into the modern age with a sensor that competes with the likes of the one in the NX1, or Sony's Exmor. I really do. I especially do from an astrophotography standpoint. I could use a Samsung NX1 for astro, but I wouldn't have the ability to use all the amazing software tools, like BackyardEOS, with it. I would LOVE to have the NX1 level IQ in the 7D II. If it did, I'd have been first in line to preorder!


I see extremely marginal improvements in Canon's sensor technology generation after generation. It does get better, but I guess I really want Canon to not just take these little micro steps, with a .1 stop improvement in DR here, or a .2 stop improvement there. The NX1 has at least a full stop improvement in DR. The Exmor has a full two stop improvement. (Personally, I don't like DXO's Print DR numbers, so I don't use them.)


I don't see anything like that in the 7D II. Relative to the 7D, or the 60D, or the 5D III or even 1D X, I'm sure it's a nice improvement. However, when I look at images, when I stretch the raws, I don't see anything that I could, personally, honestly call a "game changer". I really, REALLY wish it was. But I don't see it.
 
jrista said:
Very nice image! I like this one MUCH better than the horse head one! :) You should have lead with this.

Now, I need to correct a couple of things. First, I image from a pretty heavily light polluted red zone on the Bortle scale. I was not imaging bare lens. I had an Astronomik CLS-XL filter inserted into my 5D III, which reduces the light by about one and a third stops or so (I did mention this before, but only briefly). I was not working with four times as much light. I wasn't even working with twice as much light. I'm currently working on replacing the Astronomik CLS-XL filter with an IDAS LPS-D1 or -P2, which has more passbands and fewer/smaller filtered bands. It should increase the amount of light I get in a given exposure.

Second, I did not use darks. I dither, but I did not use darks. The 200-frame master bias is months old, I generated it once back around July, and have been using it ever since. The flats I use because I do have dust motes. If you're really clean and don't live in an arid place like Colorado, maybe you can get away with not having to deal with flats. Sure, it's possible to skip the calibration frames entirely...but I feel I need the flats at the very least, and when integrating with DSS, it wants a bias to calibrate both the flats and lights. You do what you gotta do. :P However, I did NOT use darks...just to be clear on that.

Hi Jon,
That's funny as I live in Colorado too, and made the Horsehead and M42 images from the Denver Astronomical Society site near Byers. The skies there are OK, but not super, and it was a night of bright airglow and high cirrus, so not great conditions. I often work in dusty environments too, and far dustier than Colorado (e.g. the dusty Serengeti).



jrista said:
Another thing to point out, you have a larger field of view and a stop faster aperture, which means more light concentrated onto each pixel. On top of that, the pixels are smaller with a lower FWC, so they saturate faster. Doesn't that ultimately mean you achieve a brighter exposure for a given unit time (regardless of what the ISO is, since that just affects the gain used)?

I mean, if I increased my field of view by...what, ~25% (FoV * (600/480)...? Or would it be FoV * SQR(600/480)?), removed the CLS filter, increased the aperture by a stop, and was shooting under darker skies without as much light pollution, sure, I'd have been able to get more exposure in less time as well. Because of my skyfog levels, I'm either relegated to using shorter exposures and getting a lesser signal (I'm limited to about 180 seconds tops unfiltered), or forced to use some kind of filtration to block out the light pollution while passing useful nebula bands. I chose the latter...my prior posts should explain why.

Let's try this example. It is raining uniformly over your back yard (if you don't have one, pretend you do). You cover the back yard in pans to collect water. Does it matter how big the pans are, assuming none overflow? No, it doesn't. The amount of water collected depends on the rate of rainfall and the time you leave the pans out.

One of the most confusing subjects among digital photographers these days seems to be understanding exposure. There is light density in the focal plane, and light from the subject. They are different. Let's do a simple example.

Say a 100 mm focal length f/4 lens images a square object that results in 10 x 10 pixels in the camera with 5 micron pixels. Let's say the light in the scene results in a photon rate of 1 photon per square micron per second at the focal plane. The subject on the sensor is:
(10 pixels * 5 microns/pixel) squared = 50 * 50 = 2500 square microns, so we get 2500 photons per second.

Now let's move to a 200 mm f/4 lens. We still get 1 photon per square micron per second delivered to the sensor. But the subject is now 20 x 20 pixels due to the longer focal length.
Now the subject is (20 pixels * 5 microns /pixel) squared = 100*100 = 10,000 square microns. We then receive 10,000 photons per second
from the subject.

It is the increased amount of light gathered by the larger lens that gives the improved image quality with the longer focal length. We got 4 times the light: the 200 f/4 lens has double the diameter of the 100 f/4 lens, so 4 times the collecting area.
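The scaling in this example can be sketched numerically. The pixel size and photon rate below are the hypothetical numbers from the example above, not measured values:

```python
# Hypothetical numbers from the worked example: 5 micron pixels and a
# uniform focal-plane rate of 1 photon per square micron per second.
def photons_from_subject(side_px, pixel_um=5.0, rate_per_um2=1.0):
    """Photons/sec collected from a square subject side_px pixels wide."""
    side_um = side_px * pixel_um
    return rate_per_um2 * side_um ** 2

p100 = photons_from_subject(10)   # 100 mm f/4: subject spans 10x10 px
p200 = photons_from_subject(20)   # 200 mm f/4: same subject, 20x20 px
print(p100, p200, p200 / p100)    # 2500.0 10000.0 4.0
```

Doubling the focal length at fixed f-number doubles the aperture diameter, so the factor-of-4 in subject area on the sensor matches the factor-of-4 in collecting area.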

Read more about this in my 4-part series on exposure: http://www.clarkvision.com/articles/iso/
The key factor is called Etendue.

Field of view is irrelevant. What matters, especially in astrophotography and other low light photography where every photon counts to make a good image, is aperture to collect the light, and exposure time to collect light long enough.

So in your case, the 600 f/4 lens and 5DIII camera with 6.25 micron pixels give an Etendue per pixel (area of the lens times the solid angle of the pixel, called the A*Omega product) of:
Lens area: A = 150 mm * 150 mm * pi/4 = 17671 sq mm
Pixel solid angle: Omega = (206265 * 0.00625 mm pixel / 600 mm focal length)^2 = (2.148 arc-sec)^2 = 4.6 sq arc-sec
Etendue = A*Omega = 17671 * 4.6 ~ 81300 sq mm sq arc-sec

My 300 on the 7DII with 4.09 micron pixels:
A = 107 mm * 107 mm * pi/4 = 8992 sq mm; Omega = (206265 * 0.00409 mm / 300 mm)^2 = 2.81^2 = 7.9 sq arc-sec
Etendue = A*Omega = 8992 * 7.9 ~ 71000

So your system was receiving 81300 / 71000 = 1.14 times more light per pixel per second. But your image of the subject is larger (finer plate scale), so an object in your image covers more pixels, and more pixels equal more light FROM the SUBJECT. You have more pixels by the factor 2.81/2.148, or 1.3 times in each dimension, so 1.3^2 = 1.7 times more pixels on the subject. That combined with the 1.14 gives 1.94x. So your system delivered 1.94 times the light per second as my system. You exposed for 60 minutes to my 27.5, so another 60/27.5 = 2.18 times more light.


Total light collected for your image from the subject = 1.94*2.18 = 4.2 times more light than for my image. That means 4.2 times more light from the Trapezium, 4.2 times more light from a small nebula, etc.
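The bookkeeping above can be checked with a short script. All of the inputs (150 mm and 107 mm apertures, 6.25 and 4.09 micron pixels, 600 mm and 300 mm focal lengths, 60 vs 27.5 minute exposures) are taken straight from the posts; this is just a sketch of the calculation, not anything authoritative:

```python
import math

ARCSEC_PER_RAD = 206265.0

def etendue_per_pixel(aperture_mm, pixel_mm, focal_mm):
    """Lens area (sq mm) times pixel solid angle (sq arc-sec), plus the
    plate scale in arc-sec per pixel."""
    area = math.pi / 4.0 * aperture_mm ** 2
    scale = ARCSEC_PER_RAD * pixel_mm / focal_mm   # arc-sec per pixel
    return area * scale ** 2, scale

e_5d3, s_5d3 = etendue_per_pixel(150.0, 0.00625, 600.0)  # 600 f/4 + 5D III
e_7d2, s_7d2 = etendue_per_pixel(107.0, 0.00409, 300.0)  # 300 f/2.8 + 7D II

per_pixel = e_5d3 / e_7d2                  # ~1.14x more light per pixel/sec
on_subject = (s_7d2 / s_5d3) ** 2          # ~1.7x more pixels on the subject
total = per_pixel * on_subject * (60.0 / 27.5)   # ~4.3x total subject light
```

The product comes out near 4.3 here because the intermediate values are carried at full precision instead of being rounded to 1.14 and 1.7 first.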


jrista said:
I don't know the specifics, and the 7D II may certainly be improved by having less hot pixels. I am not sure if that is necessarily a "game changer", although it certainly is an improvement. I would really like to know what your sky brightness levels are...as I think that plays a huge role in how much faint detail you can pull out in an image of a given total integration time. It sounds like you shot unfiltered with a faster lens as well, which to me makes it sound like you were sucking down a lot of light.

See above; you got more light. If you had a brighter sky, that would limit how faint you could get. But see my review on the 5DIII: it still suffers from banding. I also have dark frames and I agree that it has amp glow, so not the best implementation of on-sensor dark current suppression, especially considering when it was introduced. The 7D2 is much better.

My sky was magnitude 21.1 per square arc-second. Contact me off list. The next new Moon, maybe we could go to the same location. New Moon will be the weekend of Dec 20. I would like to get out to a dark site. Not sure where you live in Colorado.


jrista said:
Sorry for being so contrary...but, once you factor in the circumstances of my imaging, I have to point out that I was NOT gathering four times as much light, not even a stop more light. Due to the wider FoV and faster relative aperture, I do believe you were getting more light per pixel, which has a significant impact on overall exposure. I am still skeptical that the 7D II is so much of a game changer as it's being made out to be. I don't just say that based on the discussion here. I have some 7D II data I'm working with...I may just post a screenshot, although I'm not sure I can share the RAW until I ask the owner. It honestly doesn't look that much better to me than most other Canon RAW data. It still has plenty of horizontal banding (although it's fat bands, not per-row banding), and plenty of color blotchiness and still has that very strong red cast. I see a very marginal improvement, not something game changing. I am always happy to admit when I am wrong...but, so far, I don't think that anything I've seen about the 7D II so far demonstrates that I am (at least regarding sensor performance...overall, I think the camera as a whole is a great performer, and once again fills the unique spot that the original 7D did.)

See above on the amount of light. 4.2x. Color blotchiness is a function of the raw converter, not the sensor. There is negligible banding at ISO 1600 in the 7d2.

The sensor data in my reviews is independent of any lens or raw converter. Many things one sees online are also a function of the lens and raw converter used.

Compare my 5DIII data: http://www.clarkvision.com/reviews/evaluation-canon-5diii/
to the 7D2 data: http://www.clarkvision.com/reviews/evaluation-canon-7dii/

Look at the pattern noise in tables 2a and 2b. The 5DIII shows significant banding even at ISO 1600. Not so with the 7D2. That makes a major difference in pulling out faint detail. The 5DIII also has higher dark current, and that too limits your ability to get faint.

Roger
 
Roger N Clark said:
Jon,
Your bar tests are not equal. The NX1 chart is severely overexposed on the bright end so no wonder it shows better on the low end.

Roger


None of the images are over-exposed. It's just a screen stretch in PixInsight, which tends to radically push up the highlights. All of the images expose step 1 to roughly the same brightness level. A screen stretch in PI tends to pull up the deepest tones to a common level...since there were deeper tones in the NX1 image than in the Canon images, that pushed the highlights up even more. Note, though, that this is just a "screen" stretch. When I measure areas of the image using the statistics tool, that's all done in linear space. The data is debayered, but beyond that, it's otherwise untouched, so the statistics are as good as I can get myself. (I may be able to load the images into PixInsight without debayering.)


I am trying to get new step wedge images created. I don't have the cameras, someone else has them. I'm trying to get images from each camera that just barely clip step 1 in the wedge, which would baseline all the cameras exactly, allowing me to get more accurate information. There is some slight variation in the images I shared, but it isn't as significant as the screen stretch would imply.
 
Roger N Clark said:
Jon,
Your bar tests are not equal. The NX1 chart is severely overexposed on the bright end so no wonder it shows better on the low end.

Roger

It's difficult to do a test right, which is why there are so few online testers that are highly respected. Even so, I try to read as many as possible, since they sometimes come at things from a different angle. I've seen so many conflicting and poorly done reviews online that it gets frustrating. (Clarkvision is one of the good ones.)

One thing that I think I have seen is many posts of beautiful bird photos where the feather detail is smeared. I haven't yet formed an opinion because it's a very difficult subject to photograph, and I have only seen FF or 1.3 crop images that are better. Lens settings, shutter speed, vibration, air temperatures, distance to subject, etc. all play a part, so real world images often merely reflect the photographer's ability to deal with all those things.
 
Roger N Clark said:
Hi Jon,
That's funny as I live in Colorado too, and made the Horsehead and M42 images from the Denver Astronomical Society site near Byers. The skies there are OK, but not super, and it was a night of bright airglow and high cirrus, so not great conditions. I often work in dusty environments too, and far dustier than Colorado (e.g. the dusty Serengeti).


Ah! Well, cool! I had no idea. Colorado seems to be a hotspot for astrophotographers...I've met at least half a dozen on CloudyNights...and more seem to show up all the time. We should have a star party. :P



Roger N Clark said:
My sky was magnitude 21.1 per square arc-second. Contact me off list. The next new Moon, maybe we could go to the same location. New Moon will be the weekend of Dec 20. I would like to get out to a dark site. Not sure where you live in Colorado.


I'll PM you.


Trying to keep the rest of this as compact as possible, so I'm going to snip some stuff out.

Roger N Clark said:
Let's try this example. It is raining uniformly over your back yard (if you don't have one, pretend you do). You cover the back yard in pans to collect water. Does it matter how big the pans are, assuming none overflow? No, it doesn't. The amount of water collected depends on the rate of rainfall and the time you leave the pans out.

.../snip/...

So your system was receiving 81300 / 71000 = 1.14 times more light per pixel per second. But your image of the subject is larger (finer plate scale), so an object in your image covers more pixels, and more pixels equal more light FROM the SUBJECT. You have more pixels by the factor 2.81/2.148, or 1.3 times in each dimension, so 1.3^2 = 1.7 times more pixels on the subject. That combined with the 1.14 gives 1.94x. So your system delivered 1.94 times the light per second as my system. You exposed for 60 minutes to my 27.5, so another 60/27.5 = 2.18 times more light.

Total light collected for your image from the subject = 1.94*2.18 = 4.2 times more light than for my image. That means 4.2 times more light from the Trapezium, 4.2 times more light from a small nebula, etc.

Roger N Clark said:
See above; you got more light.


A couple of things. I still think things between the two cameras normalize out a bit once the factors below are considered, hence the reason I'm still skeptical that the 7D II is a huge improvement over prior Canon cameras, however I may be swayed here...so bear with me.

1. I was using the Astronomik CLS filter. Without the filter, sure, I'd have gathered more total light. The filter blocks about 1 1/3 stops of light (a factor of roughly 2.5x), so, at the very least, instead of a 4.2x increase in *total* light gathered, it's 4.2/2.5, or around 1.7x more *total* light. I agree, there is a difference in TOTAL light gathered.

In the case of regular old terrestrial photography, this factor of total light gathered is very significant, as it can mean less noise. Thing is, it can mean less noise because you can frame the subject the same in both FF and APS-C cameras. In doing that, you gather more light in total for the same subject...more pixels on subject...more detail, less noise. I fully agree with that point.

However...

2. Is pixel size really meaningless when it comes to astrophotography? Yes, I gathered more light in total with the larger frame, however as far as signal to read noise ratio, that is a per pixel thing. I have more sensor area spread out over more pixels, and I put more light onto more pixels in total, but the amount of light per pixel was lower (again, don't forget that I had a filter in place that was blocking 1 1/3rd stops of light).

Going off of a 1.14x difference in per-pixel light gathered without the filter, the ~2.5x filter loss leaves 1.14/2.5 ≈ 0.45x. Assuming this is right, the signal of each pixel was thus closer to the noise floor than with your 7D II image...and the SNR of each pixel was lower. Consequence of me using a filter.

Using the data from your reviews, read noise of the 5D III at ISO 400 is 9.8e- (surprising...I was going off of sensorgen data, which put it around 5.something electrons...which is barely more than half what your tests indicate...I think your test is more accurate, given my experience with noise at ISO 400), while read noise from the 7D II at ISO 1600 is 2.4e-. Since noise averages down as the square root of the sub count, I would (while using the filter) need to average about (9.8/2.4)^2 ≈ 17 subs to push read noise down to the level of one single 7D II frame, and even more than that to reduce noise to the level of your integration. I think the integration I shared before was 12x300s subs, so RN would have been averaged down to about 2.82e-. Assuming 27 subs with the 7D II, your integration's noise floor would have been 0.46e-.
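A quick check of the averaging arithmetic, using the read-noise figures quoted above (9.8 e- at ISO 400 for the 5D III, 2.4 e- at ISO 1600 for the 7D II, from Roger's reviews):

```python
import math

def stacked_read_noise(rn_e, n_subs):
    """Read-noise floor (e-) after averaging n_subs frames."""
    return rn_e / math.sqrt(n_subs)

print(round(stacked_read_noise(9.8, 12), 2))   # 2.83  (12 x 300s, 5D III)
print(round(stacked_read_noise(2.4, 27), 2))   # 0.46  (27 subs, 7D II)
# Subs needed to average 9.8 e- down to a single 2.4 e- frame:
print(round((9.8 / 2.4) ** 2))                 # 17
```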

Yes, I gathered more light in total per sensor. I gathered less light per pixel, and had to deal with more read noise. The SNR in my subs was lower, signal was lower, noise floor was higher (ok, I concede, higher ISO would probably have been better!)

I also had more skyfog (which, BTW, was extracted with PixInsight's DynamicBackgroundExtraction...when skyfog is removed, the object signal is left behind, and since that signal is weaker, often significantly weaker, than the skyfog+object signal, it's even noisier). I don't know what the skyfog was that night, so I couldn't tell you what e-/min skyfog there was vs. e-/min object signal. Regardless, between the filter and the skyfog, I do not believe my OBJECT signal strength (which is largely what I was left with after DBE in PixInsight) was nearly as high as yours. If I had imaged under 21.1 mag/sq arc-sec skies without a filter, I have no doubt my results would have been significantly better, probably more similar to yours (with the added negative of the 5D III banding...really hate that junk! :P )

Anyway. My point is... I was imaging with a filter, under ~18 mag/sq arc-sec skies at best, so it is not surprising that the 7D II image is cleaner...but we're basically comparing apples and oranges at the moment, so I don't think the differences between my image and yours are proof that the 7D II is a "game changer." I do, however, now believe the 7D II is a solid improvement for astrophotography over prior Canon cameras, with maybe the exception of the 6D (it produces some pretty good data as well.)
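The surface-brightness scale is logarithmic, so the gap between the two skies discussed in this thread (~18.66 vs 21.1 mag per square arc-second) is bigger than it looks. A minimal sketch of the conversion, one magnitude being a factor of 10^0.4 in flux:

```python
def sky_flux_ratio(mag_bright, mag_dark):
    """How many times brighter the brighter sky is than the darker one;
    each magnitude is a factor of 10**0.4 ~ 2.512 in flux."""
    return 10.0 ** (0.4 * (mag_dark - mag_bright))

print(round(sky_flux_ratio(18.66, 21.1), 1))   # ~9.5x brighter skyfog
```

That is roughly a 9-10x difference in sky background, on top of the filter loss.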



I could be off base here...been at this too long now today.



I really would like to hit a dark site with you, get some better data, and do a better and more "apples to apples" comparison. The 7D II may really be a huge improvement. It would be interesting to test a couple other cameras as well, one with an Exmor, and if possible, the Samsung NX1. (I would really love to see you test a couple of those cameras using your method...the only broad source of low-level test data like that we have is DXO, and so much of what DXO does is buried within a black box...I have a hard time trusting their data. Your reviews are open and detailed...would be awesome to see the D810, A7s, and NX1 tested by you.) I would be willing to bet the NX1 and A7s both trounce any Canon camera out there as far as astrophotography goes...but, that's mostly based off of my own testing. I don't have the quality of data you do when I examine raw data.

Anyway, I am happy to admit that used under the right conditions, say 21.1 mag/sq arc-sec skies and no filter, it may perform significantly better than I give it credit for...dark noise/hot pixels may not be as much of an issue if I could expose shorter. I may be blaming the 5D III for noise levels that are actually imposed on me by light pollution...

...and that leads me to this thought. I am interested in the ISO settings you've used. You certainly had much lower read noise. Maybe it's just a matter of my skyfog levels, as after I extract them with DBE, the remaining object signal is usually quite a bit noisier. I may simply be exposing longer to combat that issue, and thinking about it, the light added to the stars by light pollution does not get extracted, which may actually be the real cause of my star clipping issue. I've been exposing longer at lower ISO to get more dynamic range to avoid clipping my stars (which does seem to help), at the cost of faint detail SNR.


Roger N Clark said:
If you had a brighter sky, that would limit how faint you could get. But see my review on the 5DIII: it still suffers from banding. I also have dark frames and I agree that it has amp glow, so not the best implementation of on-sensor dark current suppression, especially considering when it was introduced. The 7D2 is much better.

Aye, I have very bright skies a lot of the time (and sometimes they get very dark...orange zone if not darker). The 300 second subs I gathered were actually made on a night with poor visibility and high scattered light as well. I don't know if you remember, but back when we had that deep cold front moving through Colorado just a couple weeks ago, there were some clear nights, but the sky was REALLY milky and bright...transparency was terrible. That was the first time I imaged Orion...and it was probably a really bad idea to use data from those days. I forgot about that earlier today, but I may simply discard that entire set of data and start fresh on a clearer night... That may have significantly limited my ability to expose good signal strength on those faint outer details... :-\


Roger N Clark said:
See above on the amount of light. 4.2x. Color blotchiness is a function of the raw converter, not the sensor. There is negligible banding at ISO 1600 in the 7d2.

The sensor data in my reviews is independent of any lens or raw converter. Many things one sees online are also a function of the lens and raw converter used.


Compare my 5DIII data: http://www.clarkvision.com/reviews/evaluation-canon-5diii/ to the 7D2 data: http://www.clarkvision.com/reviews/evaluation-canon-7dii/ Look at the pattern noise in tables 2a and 2b. The 5DIII shows significant banding even at ISO 1600. Not so with the 7D2. That makes a major difference in pulling out faint detail. The 5DIII also has higher dark current, and that too limits your ability to get faint.


Ok, fair enough. I have noticed that CaptureOne doesn't seem to have as much problem with the blotchy color as Lightroom. I still use Lightroom, as CaptureOne has far less support for various RAW file types, and my 60-day trial is going to run out any day now. But, it did seem to produce cleaner noise. I am not sure what PixInsight does, I think it's configurable, so I can go poke around. I may also be able to load the image without debayering.
 
Jon,
One point about your long response. You seem focused on read noise. A larger factor is noise from dark current. For example, if dark current is 0.1 electron/second and you do a one hour exposure, the accumulated dark current will be 3600 seconds * 0.1 e/sec = 360 electrons. While the dark current level is suppressed by on-sensor electronics, the noise is not. So the noise is the square root of 360 ≈ 19 electrons. You may have averaged the read noise, but one does not average the dark current noise--it just keeps accumulating. The fact that the 7D2 dark current is so much lower means that its noise contribution is that much lower.
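The accumulation described above can be sketched as follows; the 0.1 e-/s rate is the illustrative figure from the example, not a measured value:

```python
import math

def dark_current_noise(rate_e_per_s, exposure_s):
    """Noise (e-) from accumulated dark current. On-sensor suppression
    subtracts the mean level, but the shot noise sqrt(rate * time)
    remains, and it grows with exposure time rather than averaging out."""
    return math.sqrt(rate_e_per_s * exposure_s)

print(round(dark_current_noise(0.1, 3600)))   # 19 e- over a one hour exposure
```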

Roger
 
Roger N Clark said:
Jon,
One point about your long response. You seem focused on read noise. A larger factor is noise from dark current. For example, if dark current is 0.1 electron/second and you do a one hour exposure, the accumulated dark current will be 3600 seconds * 0.1 e/sec = 360 electrons. While the dark current level is suppressed by on-sensor electronics, the noise is not. So the noise is the square root of 360 ≈ 19 electrons. You may have averaged the read noise, but one does not average the dark current noise--it just keeps accumulating. The fact that the 7D2 dark current is so much lower means that its noise contribution is that much lower.

Roger


Sure, I understand dark current is a factor. I also understand that it can be a significant factor with uncooled DSLRs. I don't think it is a key factor in my Orion images, or at least not the dominant noise factor.


I do consider read noise more of a problem, however I guess that is probably because I am shopping CCD cameras, and I have recently been discussing this topic with other astrophotographers. With FLI and QSI CCD cameras, cooling deltaT below ambient is between -45C to -55C, with dark current noise levels around 0.02e-/s/px @-15C for KAF type sensors, and 0.003e-/s/px @-10C for Sony ICX type sensors. Most of the discussions I've had over the last couple of weeks have made the assumption that the sensor was being cooled, significantly, and that dark current noise was a trivial component of noise overall...which left read noise as the primary limiting noise factor.


During the summer, I don't doubt I had some wicked high dark current noise (I actually know for a fact that I had really high dark current noise on a few nights that topped 80F, which led to sensor temps that were pushing 40C, and a couple subs that actually had sensor temps over 40C...the dark current noise was insane. I actually have those subs somewhere...I should share, just for kicks if nothing else. :) I kind of freaked when I thought my sensor was going to fry itself, pulled the camera off the lens, stuck it in the freezer for a while, and that seemed to fix the temperature spiking problem.)


However, I forgot that my 300s series of subs were taken during that cold front. The 480s, 90s, 30s and 15s exposures were all taken later, after the deep freeze left Colorado, with sensor temps around 12C or so, maybe some around 8C. The 300s subs were at 3C, a little above freezing. They have terrible skyfog, but dark current noise was barely visible (one or two hot pixels a frame, and the background sky noise levels were extremely low.)


At 300s exposures, assuming 0.1e-/s/px, the accumulated dark current would be 30e-, subtracted out by CDS, leaving behind sqrt(30) ≈ 5.5e- DCN. That is still less than the 9.8e- RN. The two add in quadrature, which puts the noise floor at sqrt(9.8^2 + 5.5^2) ≈ 11.2e-...so yeah, not great. I was working off the assumption that read noise on the 5D III at ISO 400 was lower. Given that its almost 10e-, I am thinking I'll move to ISO 800, and see how that is. I still worry about clipping my stars with the lower saturation point, though...I like colorful stars.
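Since independent noise sources combine in quadrature rather than by simple addition, a quick check of the 300 s case (9.8 e- read noise from Roger's review, 0.1 e-/s/px assumed dark current):

```python
import math

rn = 9.8                                # read noise at ISO 400, e-
dcn = math.sqrt(0.1 * 300)              # dark current noise, ~5.5 e-
floor = math.sqrt(rn ** 2 + dcn ** 2)   # quadrature sum, ~11.2 e-
print(round(dcn, 1), round(floor, 1))
```

The quadrature sum lands around 11.2 e-, noticeably less than the 15.2 e- a straight linear sum would suggest.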
 
jrista said:
I do consider read noise more of a problem, however I guess that is probably because I am shopping CCD cameras, and I have recently been discussing this topic with other astrophotographers. With FLI and QSI CCD cameras, cooling deltaT below ambient is between -45C to -55C, with dark current noise levels around 0.02e-/s/px @-15C for KAF type sensors, and 0.003e-/s/px @-10C for Sony ICX type sensors. Most of the discussions I've had over the last couple of weeks have made the assumption that the sensor was being cooled, significantly, and that dark current noise was a trivial component of noise overall...which left read noise as the primary limiting noise factor.

Hi Jon,
Look at the dark current, table 3 in my 7DII review. You are citing special cooling with 0.02 e/s at -15 C. The 7DII dark current is 0.02 at +15 C! Then you cite 0.003 e/s for a sony sensor at -10C. The 7DII is 0.003 e/s at -2 C.

Also, earlier I posted an image of M42. I actually posted the wrong image. That was not my final. Here is my final (also a little larger):
http://www.clarkvision.com/galleries/gallery.astrophoto-1/web/orion.nebula.m42_61,10,4,2sec_c11.21.2014.0J6A1631-1657-SigAv.h-b5x5s.html

Then I combined my Horsehead and M42 into a panorama. Note these images were independently stretched and one had over 2 times the exposure:
http://www.clarkvision.com/galleries/gallery.astrophoto-1/web/horsehead+m42_300mm_c11.21.2014.0J6A1631-1750-SigAv.h-pan1-b5x5s.html

Roger
 