Deep Sky Astrophotography

scyrene said:
alexthegreek said:
Here's my Andromeda galaxy done with my stock 500D (which is about to quit on me; sometimes it doesn't turn on, and that will be the end of me) and an old Jupiter 21M 200mm I got for 50 euro. ISO 800, f/5.6, about 60 min integration time and about 40-50 (or so I think) darks. I'm not sure about the number of darks because DeepSkyStacker does not seem to be loading the file lists I create with Darkmaster correctly. The 1st group is empty (I've read that's normal), the second is OK, but the next group is missing a lot of darks! Any idea what's going on?

OOFT. I've been targeting Andromeda recently, but this is so much better than anything I can do! And with modest equipment, well done!

Thanks scyrene! Don't forget this is low res, so it makes it look a bit better!
 
Upvote 0
alexthegreek said:
scyrene said:
alexthegreek said:
Here's my Andromeda galaxy done with my stock 500D (which is about to quit on me; sometimes it doesn't turn on, and that will be the end of me) and an old Jupiter 21M 200mm I got for 50 euro. ISO 800, f/5.6, about 60 min integration time and about 40-50 (or so I think) darks. I'm not sure about the number of darks because DeepSkyStacker does not seem to be loading the file lists I create with Darkmaster correctly. The 1st group is empty (I've read that's normal), the second is OK, but the next group is missing a lot of darks! Any idea what's going on?

OOFT. I've been targeting Andromeda recently, but this is so much better than anything I can do! And with modest equipment, well done!

Thanks scyrene! Don't forget this is low res, so it makes it look a bit better!

Oh I know. But it's still everything I aspire to!
 
Upvote 0
alexthegreek said:
Here's my Andromeda galaxy done with my stock 500D (which is about to quit on me; sometimes it doesn't turn on, and that will be the end of me) and an old Jupiter 21M 200mm I got for 50 euro. 1 min at ISO 800, f/5.6, about 60 min integration time and about 40-50 (or so I think) darks. I'm not sure about the number of darks because DeepSkyStacker does not seem to be loading the file lists I create with Darkmaster correctly. The 1st group is empty (I've read that's normal), the second is OK, but the next group is missing a lot of darks! Any idea what's going on?
You people are inspirational!
 
Upvote 0
Don Haines said:
alexthegreek said:
Here's my Andromeda galaxy done with my stock 500D (which is about to quit on me; sometimes it doesn't turn on, and that will be the end of me) and an old Jupiter 21M 200mm I got for 50 euro. 1 min at ISO 800, f/5.6, about 60 min integration time and about 40-50 (or so I think) darks. I'm not sure about the number of darks because DeepSkyStacker does not seem to be loading the file lists I create with Darkmaster correctly. The 1st group is empty (I've read that's normal), the second is OK, but the next group is missing a lot of darks! Any idea what's going on?
You people are inspirational!

Oh man....thanks Don!!
 
Upvote 0
alexthegreek said:
Here's my Andromeda galaxy done with my stock 500D (which is about to quit on me; sometimes it doesn't turn on, and that will be the end of me) and an old Jupiter 21M 200mm I got for 50 euro. 1 min at ISO 800, f/5.6, about 60 min integration time and about 40-50 (or so I think) darks. I'm not sure about the number of darks because DeepSkyStacker does not seem to be loading the file lists I create with Darkmaster correctly. The 1st group is empty (I've read that's normal), the second is OK, but the next group is missing a lot of darks! Any idea what's going on?

Great start! Looks pretty good.

Regarding the DSS/Darkmaster issue: I've never used the two together, so I can't say what might be wrong. Any reason you are not just throwing all the files into DSS and letting it do its thing?
 
Upvote 0
BeenThere said:
I was looking at the PixInsight website and they don't tell you much about how to use the program (beginner). Can anyone recommend some reading/tutorials for getting started with this program?

PixInsight is phenomenal, truly. Almost all of the images I've shared here on this thread were processed with it.

You can find quite a lot of tutorials for it online. I've also got some of my own articles covering individual PI tools, and will be adding more if and when I can find the time. Find those here:

https://jonrista.com/the-astrophotographers-guide/pixinsights/
 
Upvote 0
I haven't been posting my own images here much lately, as I have not had much clear sky this year. Not much at all. I've also moved on from the 5D III...but, I am still active. I have moved to a monochrome camera with LRGB and narrow band filters. I've had some clear nights finally, and managed to acquire some good data with the new camera. It's not a Canon, so I probably won't be sharing all my images from it here, but here is my latest:

Wizard Nebula

SJx1Qtj.jpg


All narrow band filters, Ha, SII and OIII. Total integration time is less than 4 hours, which is pretty rare for narrow band images. This new camera is extremely low noise, especially at higher gain. I used gain 200 here, which only has 1.3e- read noise. Dark current is a minuscule 0.006e-/s @ -20C.

I combined 2 hours of Ha and used that as a "luminance" channel...a monochrome detail channel. The rest was used for color. I combined 24 minutes of Ha, 40.5 minutes of OIII and 52.5 minutes of SII to create individual narrow band channels. Those were combined into RGB via a custom 'PixelMath' blend in PixInsight:

Code:
  RED: (SII*.8 + Ha*.2)*.45 + (Ha*.8 + OIII*.2)*.55
GREEN: (Ha*.3 + OIII*.7)*.45 + (Ha*.2 + OIII*.7)*.55
 BLUE: OIII

This effectively blends two different "standard" blends together to make a custom blend. The first standard blend is SHO, or Sulfur/Hydrogen/Oxygen mapped to Red/Green/Blue, also called the Hubble Palette, and looks like this:

DNfWFCT.jpg


The second standard blend is HOO, or Hydrogen/Oxygen/Oxygen mapped to Red/Green/Blue, also called the "natural" palette as it weights hydrogen and oxygen more accurately to their respective colors, and looks like this:

38bWBWt.jpg


A true "natural" color palette should also actually blend Ha into blue somewhat as well, and blend even less into green. This is because Hydrogen-alpha (Ha) is only one of many emission lines that hydrogen gas emits at when excited, and Hydrogen-beta (Hb) is another that emits very close to the OIII line. It tends to be fainter, so usually the blend is Red 100% Ha, Green 75% OIII, Blue 75% OIII + 25% Ha. The problem there is you have trouble getting proper star color (they all show up magenta), hence the reason for the more standard HOO blend, which produces better stars.

As you might have noticed, I tend to spice things up a little bit with the "standard" blends. ;) Blending the two together to make a third blend was something I stumbled across mostly by accident, as I dragged the HOO blend over the SHO blend in PI, which renders the dragged window partially transparent. That gave me the idea for the third blend.
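
For anyone who wants to play with the same mapping outside of PI, here is a rough numpy sketch of it (just my own illustration; the ha/sii/oiii arrays are stand-ins for your stacked, normalized masters, not anything PixInsight hands you directly):

Code:
import numpy as np

def custom_nb_blend(ha, sii, oiii):
    # Blend normalized (0-1) Ha, SII and OIII channel arrays into RGB using
    # the same 45/55 mix of the SHO- and HOO-style palettes as the PixelMath above.
    red   = (sii * 0.8 + ha * 0.2) * 0.45 + (ha * 0.8 + oiii * 0.2) * 0.55
    green = (ha * 0.3 + oiii * 0.7) * 0.45 + (ha * 0.2 + oiii * 0.7) * 0.55
    blue  = oiii
    # Stack into an H x W x 3 RGB image and clip to the display range.
    return np.clip(np.dstack([red, green, blue]), 0.0, 1.0)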
 
Upvote 0
jrista said:
I haven't been posting my own images here much lately, as I have not had much clear sky this year. Not much at all. I've also moved on from the 5D III...but, I am still active. I have moved to a monochrome camera with LRGB and narrow band filters. I've had some clear nights finally, and managed to acquire some good data with the new camera. It's not a Canon, so I probably won't be sharing all my images from it here, but here is my latest:
Jon, I think that I can speak for everyone here..... Please post your non-Canon images with the explanations of how you did it. Technique and process are more important than which camera was used and you are both an inspiration and a mentor to the rest of us.
 
Upvote 0
Don Haines said:
jrista said:
I haven't been posting my own images here much lately, as I have not had much clear sky this year. Not much at all. I've also moved on from the 5D III...but, I am still active. I have moved to a monochrome camera with LRGB and narrow band filters. I've had some clear nights finally, and managed to acquire some good data with the new camera. It's not a Canon, so I probably won't be sharing all my images from it here, but here is my latest:
Jon, I think that I can speak for everyone here..... Please post your non-Canon images with the explanations of how you did it. Technique and process are more important than which camera was used and you are both an inspiration and a mentor to the rest of us.

+1

Yes please, indeed!
 
Upvote 0
jrista said:
I haven't been posting my own images here much lately, as I have not had much clear sky this year. Not much at all. I've also moved on from the 5D III...but, I am still active. I have moved to a monochrome camera with LRGB and narrow band filters. I've had some clear nights finally, and managed to acquire some good data with the new camera. It's not a Canon, so I probably won't be sharing all my images from it here, but here is my latest:

Wizard Nebula

SJx1Qtj.jpg


All narrow band filters, Ha, SII and OIII. Total integration time is less than 4 hours, which is pretty rare for narrow band images. This new camera is extremely low noise, especially at higher gain. I used gain 200 here, which only has 1.3e- read noise. Dark current is a minuscule 0.006e-/s @ -20C.

I combined 2 hours of Ha and used that as a "luminance" channel...a monochrome detail channel. The rest was used for color. I combined 24 minutes of Ha, 40.5 minutes of OIII and 52.5 minutes of SII to create individual narrow band channels. Those were combined into RGB via a custom 'PixelMath' blend in PixInsight:

Code:
  RED: (SII*.8 + Ha*.2)*.45 + (Ha*.8 + OIII*.2)*.55
GREEN: (Ha*.3 + OIII*.7)*.45 + (Ha*.2 + OIII*.7)*.55
 BLUE: OIII

This effectively blends two different "standard" blends together to make a custom blend. The first standard blend is SHO, or Sulfur/Hydrogen/Oxygen mapped to Red/Green/Blue, also called the Hubble Palette, and looks like this:

DNfWFCT.jpg


The second standard blend is HOO, or Hydrogen/Oxygen/Oxygen mapped to Red/Green/Blue, also called the "natural" palette as it weights hydrogen and oxygen more accurately to their respective colors, and looks like this:

38bWBWt.jpg


A true "natural" color palette should also actually blend Ha into blue somewhat as well, and blend even less into green. This is because Hydrogen-alpha (Ha) is only one of many emission lines that hydrogen gas emits at when excited, and Hydrogen-beta (Hb) is another that emits very close to the OIII line. It tends to be fainter, so usually the blend is Red 100% Ha, Green 75% OIII, Blue 75% OIII + 25% Ha. The problem there is you have trouble getting proper star color (they all show up magenta), hence the reason for the more standard HOO blend, which produces better stars.

As you might have noticed, I tend to spice things up a little bit with the "standard" blends. ;) Blending the two together to make a third blend was something I stumbled across mostly by accident, as I dragged the HOO blend over the SHO blend in PI, which renders the dragged window partially transparent. That gave me the idea for the third blend.
Hi,
Nice... :)

My area has very serious light pollution, so I didn't do any astrophotography for a very long time. Recently, after reading a review of the 7D2 for astrophotography, I did a test shot using my Canon 7D2 and EF 100-400mm L II, and the result looks promising.

Since I already have a GOTO equatorial mount, I might as well start some shooting and see what I can get using my WO FLT-98 telescope (I just bought a Field Flattener) and the EF 100-400mm L II.

Have a nice day.
 
Upvote 0
Well thanks, guys. :) I have more to share, I just gotta process it.

Weixing...here is a little secret. If you live in a light polluted area, unless you are doing narrow band imaging, it doesn't matter what kind of camera you have. Noise is an interesting thing, in that the noise from all the various potential sources adds together in quadrature. That means that if you have one noise term that is much higher than the others, then the others effectively do not matter.

Now, in the city...there is really only one noise term that matters most of the time: Light pollution! Sometimes, for certain cameras, dark current might still matter, and if it is an older camera like the 5D II, dark current might actually be worse than light pollution. (Imagine that!) However, with most modern cameras, dark current is low enough that it matters no more than read noise when imaging in a light polluted area.

Ntotal = SQRT(Sobject + Slightpollution + Sdarkcurrent + Nread^2)
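
If you want to plug numbers into that yourself, here is a tiny Python sketch of the quadrature sum (just my own illustration, nothing camera-specific):

Code:
import math

def total_noise(obj_rate, lp_rate, dc_rate, read_noise, exposure_s):
    # Per-sub noise in electrons: shot noise from the object, light pollution
    # and dark current signals, plus read noise, all added in quadrature.
    return math.sqrt(obj_rate * exposure_s
                     + lp_rate * exposure_s
                     + dc_rate * exposure_s
                     + read_noise ** 2)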

If you are at a dark site, where light pollution is very low, say 1e-/s, and object signal is 2e-/s, then a 60 second exposure will give you:

Ntotal = SQRT(2e-/s*60s + 1e-/s*60s + 0.2e-/s*60s + 5e-^2) = 14.7e-

However if you are in the city, where light pollution could easily be 15e-/s or more:

Ntotal = SQRT(2e-/s*60s + 15e-/s*60s + 0.2e-/s*60s + 5e-^2) = 32.5e-

What happens if we reduce read noise?

Ntotal = SQRT(2e-/s*60s + 1e-/s*60s + 0.2e-/s*60s + 2e-^2) = 14e-
Ntotal = SQRT(2e-/s*60s + 15e-/s*60s + 0.2e-/s*60s + 2e-^2) = 32.2e-

Notice how dropping from 5e- to 2e- read noise didn't really help all that much in either case here. However it helped more at the dark site than in the city with light pollution. What happens if you reduce dark current (0.2e-/s is low, but it can get much, much lower...for example, the ASI1600 has only 0.006e-/s @ -20C):

Ntotal = SQRT(2e-/s*60s + 1e-/s*60s + 0.006e-/s*60s + 2e-^2) = 13.6e-
Ntotal = SQRT(2e-/s*60s + 15e-/s*60s + 0.006e-/s*60s + 2e-^2) = 32e-

Well, that helped reduce noise at the dark site...but it did not really help much in the city. What if we reduced both read noise and dark current to zero (impossible in reality, but it gives us an idea of what the ideal amount of noise would be):

Ntotal = SQRT(2e-/s*60s + 1e-/s*60s + 0e-/s*60s + 0e-^2) = 13.4e-
Ntotal = SQRT(2e-/s*60s + 15e-/s*60s + 0e-/s*60s + 0e-^2) = 31.94e-

The dark site example here is about as good as it can possibly get, which means that even with 5e- read noise and 0.2e-/s dark current initially, we were getting fairly close. However, in the city, the difference between having dark current and read noise, and not...is basically meaningless.
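
Plugging all of the scenarios above into the little total_noise() sketch (again, just my own illustration) reproduces those numbers:

Code:
total_noise(2, 1,  0.2,   5, 60)   # dark site, 5e- read noise:   ~14.7e-
total_noise(2, 15, 0.2,   5, 60)   # city, 5e- read noise:        ~32.5e-
total_noise(2, 1,  0.2,   2, 60)   # dark site, 2e- read noise:   ~14.0e-
total_noise(2, 15, 0.2,   2, 60)   # city, 2e- read noise:        ~32.2e-
total_noise(2, 1,  0.006, 2, 60)   # dark site, ASI1600-like DC:  ~13.6e-
total_noise(2, 15, 0.006, 2, 60)   # city, ASI1600-like DC:       ~32.0e-
total_noise(2, 1,  0,     0, 60)   # dark site, ideal camera:     ~13.4e-
total_noise(2, 15, 0,     0, 60)   # city, ideal camera:          ~31.9e-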

Light pollution is the great normalizer. If you are in the city, technology doesn't matter. You can image with pretty much any camera, and get mostly the same results regardless of camera. A 7D II will perform much the same as a 6D, which will perform much the same as a D5300, etc.

The great enabler for city imagers...is narrow band filters used with a monochrome camera. With narrow band, you block out almost all LP...reducing it to a tenth of an electron per second or less. With a camera like the ASI1600 and 6nm or narrower Ha and OIII filters, you can do in a couple of hours what could take you WEEKS to do with a DSLR in the city.

So...if anyone is very interested in astrophotography, but is stuck in the city, remember two things:

A) It doesn't matter what camera you have...better cameras won't help. So, if you cannot get a mono camera and filters, then just use whatever you have. It will all be the same in the end.

B) If you can afford it, get a cooled monochrome camera and some narrow band filters! It will allow you to get really nice results in the city, and thanks to companies like ZWO, it doesn't have to be that expensive. I use the ZWO ASI1600MM-Cool myself. However ZWO has many other cooled mono cameras, some for only a few hundred bucks. Their electronic filter wheel is only two hundred bucks. And a hydrogen alpha filter will cost you a couple hundred bucks. You could get a fully functional monochrome imaging package for less than a grand...and you wouldn't have to struggle with LP any more.
 
Upvote 0
jrista said:
Well thanks, guys. :) I have more to share, I just gotta process it.

Weixing...here is a little secret. If you live in a light polluted area, unless you are doing narrow band imaging, it doesn't matter what kind of camera you have. Noise is an interesting thing, in that the noise from all the various potential sources adds together in quadrature. That means that if you have one noise term that is much higher than the others, then the others effectively do not matter.

I may be overreaching here, but can I play devil's advocate and say that light pollution is not noise? It's signal. Unwanted signal, but signal - a real, fairly constant element in the scene, not random, nor caused by the equipment. Is that fair?

Otherwise obviously I'm sure you're right. And what you say makes me feel better. I've never been to a dark site, so my inferior equipment is okay (PS I am getting that astro cam you suggested, maybe next month; I need to work on my alignment and tracking first).
 
Upvote 0
scyrene said:
jrista said:
Well thanks, guys. :) I have more to share, I just gotta process it.

Weixing...here is a little secret. If you live in a light polluted area, unless you are doing narrow band imaging, it doesn't matter what kind of camera you have. Noise is an interesting thing, in that the noise from all the various potential sources adds together in quadrature. That means that if you have one noise term that is much higher than the others, then the others effectively do not matter.

I may be overreaching here, but can I play devil's advocate and say that light pollution is not noise? It's signal. Unwanted signal, but signal - a real, fairly constant element in the scene, not random, nor caused by the equipment. Is that fair?

Otherwise obviously I'm sure you're right. And what you say makes me feel better. I've never been to a dark site, so my inferior equipment is okay (PS I am getting that astro cam you suggested, maybe next month; I need to work on my alignment and tracking first).

Any unwanted signal that is going to be removed from the image, and that also introduces noise, IS a noise. Here is a fuller formula:

SNR = S/N

Where S = signal, and N = noise.

SNR = (Sobject * Ccount)/SQRT(Ccount * (Sobject + Slightpollution + Sdarkcurrent + Nread^2))

Note the single term in the numerator: Sobject. That is the only signal we actually care about. That is the only signal we are going to keep.

Note all the terms in the denominator: Sobject, Slightpollution, Sdarkcurrent, and Nread. Those are all noise terms. Why is Slightpollution only in the denominator, and not in the numerator? We could do that...however, that is not representative of what our image will look like once we OFFSET the light pollution. Why do we offset it? Because if we do not offset it, it increases the "signal shift" or "signal separation", which brightens the background.
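
As a quick Python sketch of that formula (my own illustration; Ccount is just the number of subs, and the S terms are per-sub electron counts):

Code:
import math

def stack_snr(s_object, s_lp, s_dark, read_noise, sub_count):
    # Signal grows linearly with the number of subs; the noise terms only
    # grow with the square root, which is why stacking improves SNR.
    signal = s_object * sub_count
    noise = math.sqrt(sub_count * (s_object + s_lp + s_dark + read_noise ** 2))
    return signal / noise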

Consider this:

nz8iRnv.jpg


This is the Pleiades, two single subs, no processing. Imaged from my heavily light polluted red Bortle zone back yard, as well as from my quite dark green Bortle zone dark site. The increased "signal" from light pollution in these two unprocessed images is quite obvious in the left panel there. It should be noted...these two images have identical exposure. The object signal is almost the same in both, around 50-60e-. The light polluted image is much brighter purely because of the unwanted light pollution photons that were recorded.

Now consider this:

CnZCEz3.jpg


This is the same two images (cropped to just the Pleiades themselves). The only difference here is that I offset the light pollution. Notice how much noisier the left panel, the image from my light polluted back yard, is compared to the dark site image?

Light pollution alone is indeed a signal, and as a signal, it has an SNR. Its own SNR is:

SNRlp = Slp/SQRT(Slp)

However, if we remove the signal part, we are just left with the noise:

Nlp = SQRT(Slp)

In the second set of images above, after offsetting the lp, we are left with all of that extra noise...and none of the extra signal.

So...light pollution IS a noise. You just have to understand the context within which it behaves only as a noise and not a signal. ;)
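
A quick way to convince yourself of that (just a toy simulation with arbitrary numbers): generate a frame that is nothing but light pollution, subtract its mean to "offset" it, and all you are left with is the shot noise, roughly SQRT(Slp):

Code:
import numpy as np

rng = np.random.default_rng(0)
s_lp = 900.0                                 # light pollution signal, e- per pixel
frame = rng.poisson(s_lp, size=(512, 512))   # photon shot noise is Poisson
offset_frame = frame - frame.mean()          # "offset" (subtract) the LP pedestal

print(offset_frame.mean())   # ~0: the LP signal is gone
print(offset_frame.std())    # ~30 = SQRT(900): the LP noise is still there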

Oh, I would also offer that light pollution can be VERY INCONSISTENT within the frame. LP is the primary source of gradients in astro images. Gradients can wreak havoc on the underlying object signal, and make it very difficult to get an effective stretch, or for that matter, to effectively offset LP. If one corner of the image is, say, 2000 16-bit ADU darker than the other corner, then when you go to offset the LP you end up with a very obvious gradient, from nearly black to a much brighter opposite corner or edge...and, worse, the gradient can be colored...maybe it's orangish-red, maybe it's a magenta-green gradient, maybe it's bluish (i.e. with the moon in the sky), etc.

LP is the insidious, mischievous bastard child of Loki of the astrophotography world. It injects itself into your data and wreaks havoc on everything, and can often make it impossible to pull out a usable signal unless you invest MASSIVE amounts of time into getting massive amounts of data to compound your signal so much that it finally overpowers all the LP gradients, excess noise and other issues.

Another benefit of narrow band filters? They are nearly immune to LP in general, and are thus also nearly immune to gradients. ;) I don't even calibrate my NB data with flats...just a 25-dark master frame lately. The field structure is nearly perfectly flat when the data comes out of the camera, and PixInsight's DBE tool makes pretty short work of what minor gradients or vignetting may exist.
 
Upvote 0
jrista said:
scyrene said:
jrista said:
Well thanks, guys. :) I have more to share, I just gotta process it.

Weixing...here is a little secret. If you live in a light polluted area, unless you are doing narrow band imaging, it doesn't matter what kind of camera you have. Noise is an interesting thing, in that the noise from all the various potential sources adds together in quadrature. That means that if you have one noise term that is much higher than the others, then the others effectively do not matter.

I may be overreaching here, but can I play devil's advocate and say that light pollution is not noise? It's signal. Unwanted signal, but signal - a real, fairly constant element in the scene, not random, nor caused by the equipment. Is that fair?

Otherwise obviously I'm sure you're right. And what you say makes me feel better. I've never been to a dark site, so my inferior equipment is okay (PS I am getting that astro cam you suggested, maybe next month; I need to work on my alignment and tracking first).

Any unwanted signal that is going to be removed from the image, and that also introduces noise, IS a noise. Here is a fuller formula:

SNR = S/N

Where S = signal, and N = noise.

SNR = (Sobject * Ccount)/SQRT(Ccount * (Sobject + Slightpollution + Sdarkcurrent + Nread^2))

Note the single term in the numerator: Sobject. That is the only signal we actually care about. That is the only signal we are going to keep.

Note all the terms in the denominator: Sobject, Slightpollution, Sdarkcurrent, and Nread. Those are all noise terms. Why is Slightpollution only in the denominator, and not in the numerator? We could do that...however, that is not representative of what our image will look like once we OFFSET the light pollution. Why do we offset it? Because if we do not offset it, it increases the "signal shift" or "signal separation", which brightens the background.

Consider this:

nz8iRnv.jpg


This is the Pleiades, two single subs, no processing. Imaged from my heavily light polluted red Bortle zone back yard, as well as from my quite dark green Bortle zone dark site. The increased "signal" from light pollution in these two unprocessed images is quite obvious in the left panel there. It should be noted...these two images have identical exposure. The object signal is almost the same in both, around 50-60e-. The light polluted image is much brighter purely because of the unwanted light pollution photons that were recorded.

Now consider this:

CnZCEz3.jpg


This is the same two images (cropped to just the Pleiades themselves). The only difference here is that I offset the light pollution. Notice how much noisier the left panel, the image from my light polluted back yard, is compared to the dark site image?

Light pollution alone is indeed a signal, and as a signal, it has an SNR. Its own SNR is:

SNRlp = Slp/SQRT(Slp)

However, if we remove the signal part, we are just left with the noise:

Nlp = SQRT(Slp)

In the second set of images above, after offsetting the lp, we are left with all of that extra noise...and none of the extra signal.

So...light pollution IS a noise. You just have to understand the context within which it behaves only as a noise and not a signal. ;)

Oh, I would also offer that light pollution can be VERY INCONSISTENT within the frame. LP is the primary source of gradients in astro images. Gradients can wreak havoc on the underlying object signal, and make it very difficult to get an effective stretch, or for that matter, to effectively offset LP. If one corner of the image is, say, 2000 16-bit ADU darker than the other corner, then when you go to offset the LP you end up with a very obvious gradient, from nearly black to a much brighter opposite corner or edge...and, worse, the gradient can be colored...maybe it's orangish-red, maybe it's a magenta-green gradient, maybe it's bluish (i.e. with the moon in the sky), etc.

LP is the insidious, mischievous bastard child of Loki of the astrophotography world. It injects itself into your data and wreaks havoc on everything, and can often make it impossible to pull out a usable signal unless you invest MASSIVE amounts of time into getting massive amounts of data to compound your signal so much that it finally overpowers all the LP gradients, excess noise and other issues.

Another benefit of narrow band filters? They are nearly immune to LP in general, and are thus also nearly immune to gradients. ;) I don't even calibrate my NB data with flats...just a 25-dark master frame lately. The field structure is nearly perfectly flat when the data comes out of the camera, and PixInsight's DBE tool makes pretty short work of what minor gradients or vignetting may exist.

Hehe, you rewarded my cheekiness with a very full answer that I did not deserve. Thanks :)

Light pollution is a nightmare, and I agree, one of the worst aspects is how it varies across the sky (so any moderately wide shots or wider are very hard to correct for). Also in these parts, they've started replacing the narrowband sodium street lights with white LEDs :-\

Currently working on M31, which shows well here, despite LP. But it'll take a lot more clear weather to get anywhere near some of the shots on this thread... Still, it's good to have a challenge.

(PS I agree with narrowband filtering, it seems the way to go when you don't have access to dark sites).
 
Upvote 0
jrista said:
alexthegreek said:
Here's my Andromeda galaxy done with my stock 500D (which is about to quit on me; sometimes it doesn't turn on, and that will be the end of me) and an old Jupiter 21M 200mm I got for 50 euro. 1 min at ISO 800, f/5.6, about 60 min integration time and about 40-50 (or so I think) darks. I'm not sure about the number of darks because DeepSkyStacker does not seem to be loading the file lists I create with Darkmaster correctly. The 1st group is empty (I've read that's normal), the second is OK, but the next group is missing a lot of darks! Any idea what's going on?

Great start! Looks pretty good.

Regarding the DSS/Darkmaster issue: I've never used the two together, so I can't say what might be wrong. Any reason you are not just throwing all the files into DSS and letting it do its thing?

Thanks jrista! I use Darkmaster because I have a darks library, so it combines the older darks with the darks I shoot on the same night. Plus, it groups them based on temperature.
 
Upvote 0