East Wind Photography said:
Bundu said:
jrista said:
The 7D II has pronounced horizontal banding. I was hoping that at the very least the 7D II would just have random read noise...the presence of the horizontal banding is extremely dismaying to me.
I really want to try/start astrophotography. I have a 7D Mark II. When does this banding occur, and how do I prevent/minimise it?
Thank you for all the info.
You can only minimize it and other effects by taking shorter exposures, taking dark frames, and stacking with something like StarStaX or another application that can stack multiple sub-exposures to improve signal to noise.
i.e., 60 ten-second exposures will produce better results than a single 600-second exposure.
You have to be careful with advice like this, as it is not as simple and straightforward as it sounds. Yes, stacking more subs can increase SNR, but SNR is not the only thing that matters. There are caveats here.
First, there is SNR and there is signal. A low signal can have a high SNR if you reduce noise enough...but it's still a low signal. If you are not gathering data on dimmer nebulosity (or, say, an outer galaxy halo), you're simply not gathering it. You cannot make it appear out of nowhere, and even if you gather one photon per minute from the faintest details, you are going to need many hundreds of subs to average out noise enough to actually see those details. The only way to improve the strength of those dimmer details is to expose longer (or get a better camera with lower read noise, and cool it to obscenely cold temperatures).
Second, fixed pattern noise does not average out. Fixed bands (I have several on my 5D III) and hot and cold pixels are REINFORCED by stacking, not removed. The same goes for dust spots: if you do not take proper flats that contain identical dust spots, the spots will be reinforced by stacking, becoming little "black holes" in your images. (So, I disagree with Roger here...DSLRs are quite good these days, but that has nothing to do with why we need flats, and if we use flats, we at the very least need to use biases.) There are ways of mitigating the reinforcement of these artifacts. The use of dark frames is one. Dithering during imaging is another (although with DSLRs you have to dither pretty aggressively due to the AA filters on most cameras...and that aggression can lead to other problems). Bias frames can also be used to remove the fixed pattern inherent in the sensor from manufacturing, however that pattern usually only reveals itself with very deep stretching.
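To illustrate why calibration frames matter, here is a toy, four-pixel sketch of how they are typically applied (the pixel values are entirely made up for illustration, and this normalises the flat to its peak rather than doing anything a real stacking tool does): the dark removes the hot pixel along with the readout offset, and the bias-corrected flat lifts the dust-darkened pixel back to its true level.

```python
# Hypothetical one-row "frames"; values chosen only to illustrate.
def calibrate(light, dark, flat, bias):
    """Calibrate one frame; all inputs are equal-length pixel lists."""
    # Subtract the bias (fixed readout pattern) from the flat,
    # then normalise the flat to its peak value.
    flat_corr = [f - b for f, b in zip(flat, bias)]
    peak = max(flat_corr)
    out = []
    for l, d, fc in zip(light, dark, flat_corr):
        # The dark frame already contains the bias offset, so
        # subtracting it removes thermal signal AND readout offset.
        out.append((l - d) / (fc / peak))
    return out

light = [100, 100, 250, 55]   # pixel 2: hot pixel; pixel 3: under dust
dark  = [10, 10, 160, 10]     # hot pixel shows up in the dark, too
flat  = [58, 58, 58, 33]      # dust spot passes only half the light
bias  = [8, 8, 8, 8]          # fixed readout offset
print(calibrate(light, dark, flat, bias))  # all four recover to 90.0
```

Without the dark, the hot pixel would stack into a bright point; without the flat, the dust spot would stack into a dark "hole" exactly as described above.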
In the end, you have to decide what you want. Do you want a light exposure that just pulls out the brighter details, or a deep exposure that pulls out the very faint ones? If you want deeper exposures, then you need to expose longer. You could stack a hundred 60-second exposures, and that will reduce noise, but it will not increase the exposure and it will not increase the signal strength. It just makes the signal you did capture more complete. My first Orion image, below, is made from 120-second exposures:
This was my second astro photo. There are a decent number of frames here, 30 to be exact. Noise was reduced a fair amount, but a lot of the fainter details are buried in the read noise and dark current. I know this because I recently reprocessed it and tried to pull out more detail:
I could take more exposures....60, 90. The problem with integrating sub-frames is that the more you integrate, the smaller the impact each additional frame has. Integrate 30x and you reduce noise by a factor of ~5.5; integrate 100x and you reduce noise by a factor of 10 (more than triple the frames for less than double the noise reduction); integrate 400x and you reduce noise by a factor of 20 (four times the frames again, over 13x the number I originally started with, just to double the noise reduction once more). The simple fact here is that I may eventually reveal some more of the faint details...but those details are going to have a very weak signal. They are going to suffer from quantization noise and posterization.
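The diminishing-returns arithmetic above comes straight from the square-root law: stacking N equal subs reduces random noise by a factor of sqrt(N), so every doubling of the noise reduction costs four times the frames. A two-line check:

```python
import math

# Noise reduction from stacking N equal sub-exposures is sqrt(N):
# 30 -> ~5.5x, 100 -> 10x, 400 -> 20x, matching the figures above.
for n in (30, 100, 400):
    print(f"{n:3d} subs -> noise reduced ~{math.sqrt(n):.1f}x")
```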
There are guys who do something called lucky imaging, where they take countless very short exposures and then stack 500 or 1000 frames. They use cameras with EXCESSIVELY low read noise (i.e. <1e-) at extremely cold temperatures (55 to 70 degrees Celsius below ambient), orders of magnitude lower noise than the best DSLRs on the market today. To achieve something similar with a DSLR, you would still need longer "short" exposures (say 15, 20, or 30 seconds instead of 5 or 10), and you would need many thousands more of them to reduce noise to levels low enough that you could actually see the fainter details.
You can integrate more and more and more data, but you get diminishing returns. No one integrates 400 two-minute exposures. Some guys with $20,000 EMCCD cameras integrate 1000 two-to-five-second exposures at -40C dT or colder. Most people simply expose longer to improve the SIGNAL, which concurrently improves the SNR of both the bright and faint parts of the object. You then still take a bunch of subs and integrate those to reduce noise and improve the SNR even more. You cannot simply reduce exposure time and hope to get the same results as longer exposures in astrophotography. You can improve the results for bright details, but you are likely not to get the dim details at all unless you integrate hundreds of frames.
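The whole argument can be condensed into a toy SNR model (all the rates here are made-up illustrative numbers, not measurements from any real camera): shot noise depends only on total exposure time, but read noise is paid once per sub, so chopping a long exposure into many short ones lowers the final SNR even though the total time is the same.

```python
import math

# Assumed, illustrative values (not from any specific camera):
signal_rate = 2.0   # e-/s from the target
sky_rate = 1.0      # e-/s from sky background
read_noise = 8.0    # e- read noise per frame, DSLR-like

def snr(total_time, n_subs):
    """SNR of n_subs stacked frames totalling total_time seconds."""
    signal = signal_rate * total_time
    # Shot noise from target + sky scales with total time;
    # read noise is added once per sub-exposure.
    noise = math.sqrt(signal + sky_rate * total_time
                      + n_subs * read_noise**2)
    return signal / noise

print(f"1 x 600 s : SNR = {snr(600, 1):.1f}")   # single long exposure
print(f"60 x 10 s : SNR = {snr(600, 60):.1f}")  # many short subs
```

With these numbers the single 600-second exposure comes out well ahead, and the gap shrinks only as the per-frame read noise approaches zero, which is exactly why sub-1e- cooled cameras can get away with lucky imaging while DSLRs cannot.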