I think it's unfair to say it's "simply not true." It depends. I've shot landscapes that are perfectly still, with only a slight amount of motion in the sky. Now, usually I photograph at ISO 100. However, if you're going to be integrating frames, you could get away with using a higher ISO and taking more frames at a faster shutter speed. If you bumped up to, say, ISO 400, took a bunch of frames, then integrated them together (you can do that with Photoshop, but it's still better to use a tool like DSS, as it can work on the RAW images themselves rather than on demosaiced results), you would very likely GAIN DR in the end.
Well of course. Image averaging increases SNR by the square root of the number of images averaged. You can derive this mathematically simply by knowing that noise adds in quadrature - which is also why any additive or subtractive operation increases noise, as you mention later on.
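A quick NumPy sketch of that sqrt(N) behavior (the signal level, noise level, and frame count here are arbitrary illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 100.0   # true pixel value (arbitrary units)
sigma = 10.0     # per-frame noise standard deviation
n_frames = 16

# Simulate 16 noisy frames of the same flat patch.
frames = signal + rng.normal(0.0, sigma, size=(n_frames, 100_000))

single_noise = frames[0].std()           # ~sigma
stacked_noise = frames.mean(axis=0).std()  # ~sigma / sqrt(16)

print(single_noise, stacked_noise)
```

Averaging 16 frames should cut the noise standard deviation by a factor of 4, i.e. a two-stop SNR gain.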
But every camera benefits in this manner. And a camera that starts with a higher SNR (Nikon/Sony vs. Canon, assuming all else equal) will benefit just as much. So this is pretty irrelevant in the context of this discussion...
Actually, an easier way to understand all this averaging business is to think that - in terms of photon/shot noise - averaging 8 exposures that are each 1/8 as long as one long exposure is very similar to just taking one long (8x as long) exposure to begin with. Or using a sensor w/ 8x the surface area. In reality, averaging n exposures is generally worse than taking one exposure n times as long (given you don't clip) b/c of the extra aggregate read noise from n times as many read events.
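A minimal model of that trade-off (photon count and read noise are made-up but plausible values): the stack collects the same total shot noise as the long exposure, but pays for n read events instead of one.

```python
import numpy as np

photons = 8000.0  # total expected photons over the full exposure
read = 5.0        # read noise (electrons) per read event
n = 8             # number of short sub-exposures

# One long exposure: shot noise plus a single read event.
snr_long = photons / np.sqrt(photons + read**2)

# n averaged subs: same total shot noise, n read events (added in quadrature).
snr_stack = photons / np.sqrt(photons + n * read**2)

print(snr_long, snr_stack)  # the stack comes out slightly worse
```

With a low-read-noise sensor the gap is tiny; with high read noise it grows, which is why stacking rewards clean readouts.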
Now, such a thing is completely unnecessary with a camera that uses a Sony Exmor sensor. However, people who use D800s still do true HDR, and some of them will take a good dozen frames for an HDR merge. Firing off eight ISO 400 frames is trivial in comparison.
That's correct. And, yes, even a D800/810 or A7R benefits from HDR or graduated neutral density filters, even with scenes that technically fall within their Raw DR capabilities, b/c HDR/GNDs allow you to expose shot noise-limited shadows more - thereby increasing their SNR. So even if using a GND flattens your image such that you have to darken your shadows in post, noise performance will still be better than underexposing those shadows - even for a sensor with no read noise (i.e. a theoretical shot noise-limited sensor). Now, whether it's necessary or not for any given scene/application is another matter entirely.
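A small simulation of that "expose more, darken in post" point, using a pure shot-noise (Poisson) model with arbitrary photon counts: giving the shadows 3 stops more exposure and then dividing back down preserves the SNR gain.

```python
import numpy as np

rng = np.random.default_rng(1)
n_px = 200_000

shadow = 50  # mean photons in a shadow pixel at "normal" exposure
boost = 8    # 3 stops more exposure (e.g. via a GND holding back the sky)

under = rng.poisson(shadow, n_px).astype(float)
# Expose 8x more, then darken in post back to the same brightness.
exposed = rng.poisson(shadow * boost, n_px).astype(float) / boost

snr_under = under.mean() / under.std()      # ~sqrt(50)  ≈ 7
snr_exposed = exposed.mean() / exposed.std()  # ~sqrt(400) ≈ 20

print(snr_under, snr_exposed)
```

Even with zero read noise, the boosted-then-darkened shadows end up roughly sqrt(8) ≈ 2.8x cleaner.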
So, "simply not true"? Really?
Er, yes, I still stand by that, even though we appear to be on the same page.
You don't subtract it out. Of course not. ;P You AVERAGE it out! Averaging attenuates the standard deviation of the noise. You literally CANNOT subtract noise, because subtraction actually increases the standard deviation, making the noise worse (this becomes intensely obvious when you start doing astrophotography... I accidentally subtracted a master flat frame once, and the noise was terrible, because both the flat and the light frame had random noise. You normally divide out flat frames to avoid that problem).
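You can see both effects in a two-line NumPy experiment (noise level is arbitrary): averaging two independent noisy frames attenuates the noise by sqrt(2), while subtracting one from the other grows it by sqrt(2).

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 5.0  # per-frame noise standard deviation

# Two independent noise realizations, as in a light frame and a flat frame.
a = rng.normal(0.0, sigma, 1_000_000)
b = rng.normal(0.0, sigma, 1_000_000)

avg_noise = ((a + b) / 2).std()  # sigma / sqrt(2) ≈ 3.54
diff_noise = (a - b).std()       # sigma * sqrt(2) ≈ 7.07

print(avg_noise, diff_noise)
```

Both follow directly from noise adding in quadrature: the variances add whether you add or subtract the frames, but averaging then divides the result by the frame count.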
Yes, b/c noise adds in quadrature. But my initial point still stands - it seems misleading to point out that image averaging can get you near Sony/Nikon levels of DR. B/c image averaging would also help the Sony/Nikon sensors. Each would keep pulling ahead, and we'll end up right where we began - with a base ISO DR advantage going to the Sony/Nikon architectures with low downstream read noise.
Shooting two ISOs is just a firmware hack to extract more dynamic range from Canon's whole readout pipeline. The best way to extract more dynamic range is averaging. Your basic noise reduction algorithm works by averaging. When you apply noise reduction to any single frame in Lightroom, or with Topaz Denoise, Nik Dfine, NoiseNinja, NeatImage, or any of those tools, you ARE increasing dynamic range. That's what noise reduction does. It increases dynamic range.
Actually, the higher ISO used when shooting two ISOs is just a way to get shadows well above the downstream read noise floor of Canon's architecture. It works, with the downsides of the resolution cost/artifacts, when image averaging is not an option. Also, with all this talk of image averaging, I feel compelled to point out that it's sometimes practically quite difficult to be thinking about image averaging when you're shooting rapidly changing light, or using ND filters and long exposures to create motion, etc. It's essentially technology getting in the way of artistry, especially when you consider that there are better options out there for this particular purpose (base ISO DR).
Yes, you can technically say that NR can increase DR, but it comes at the cost of detail retention. Hence, IMHO, the best tests of DR are done on unfiltered data (or however unfiltered one can get it).
I know LTRLI disagrees, but what we're really talking about here is not actually dynamic range. We're talking about editing latitude. I wish these two things, DR and editing latitude, weren't so intrinsically linked, but they are. Editing latitude, as in the ability to lift shadows, is only one benefit of having more dynamic range. Fundamentally, DR is about less noise. Not just read noise, which only exists in the shadows, but ALL noise, which exists at every level of the entire image... shadows, midtones, highlights, whites, blacks, everything. Denoise algorithms reduce noise, which means, by definition, they are concurrently increasing dynamic range.
Actually, since DR is defined as the range between clipping and some lower SNR threshold, DR is really not about ALL noise. Midtone/highlight noise is typically shot noise dominated (ignoring PRNU), but at this point the SNR is typically well above the lower SNR threshold people generally find acceptable. Save for very small sensors and/or very high ISOs.
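That definition is easy to make concrete. A sketch (full-well capacity, read noise, and the SNR floor are all illustrative numbers, not any particular camera): DR in stops is the ratio between clipping and the lowest signal whose total SNR - shot and read noise in quadrature - still meets the chosen floor.

```python
import math

def dr_stops(full_well, read_noise, snr_floor=1.0):
    """DR in stops between clipping and the signal whose SNR,
    S / sqrt(S + r^2), equals snr_floor."""
    t = snr_floor
    # Solve S / sqrt(S + r^2) = t for the minimum acceptable signal S.
    s_min = (t**2 + math.sqrt(t**4 + 4 * t**2 * read_noise**2)) / 2
    return math.log2(full_well / s_min)

print(dr_stops(60000, 3))   # low read noise: ~14 stops
print(dr_stops(60000, 30))  # high read noise: roughly 3 stops fewer
```

Note that read noise only moves the bottom of the range; once the signal is well above s_min, shot noise dominates and no longer affects the DR figure, which is the point above.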
While the ML Dual ISO technique is certainly one way of reducing Canon's nasty banded read noise, it's not the only way.
Just to be clear, I wasn't even talking about banding. I was talking about the downstream read noise that manifests itself as just random noise. The detrimental effects of this 'downstream' noise can be mitigated by amplifying the signal to the point at which the downstream read noise is irrelevant. This is precisely why Canon DSLRs can catch up in DR at higher ISOs... at these high levels of amplification, it's mainly sensor-level (upstream of the ISO amplifier) read noise that matters. And here, Canon is doing just as well as others.
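A toy model of that mechanism (upstream/downstream noise figures are assumptions for illustration): referred back to the sensor input, the downstream component shrinks with analog gain, so at high ISO only the upstream read noise is left.

```python
import math

def input_referred_read_noise(upstream_e, downstream_e, gain):
    """Total read noise referred to the sensor input: the downstream
    (post-amplifier) component is divided by the analog gain."""
    return math.sqrt(upstream_e**2 + (downstream_e / gain) ** 2)

up, down = 3.0, 30.0  # electrons; hypothetical Canon-like split
for iso_gain in (1, 4, 16):  # relative gains, e.g. ISO 100 / 400 / 1600
    print(iso_gain, input_referred_read_noise(up, down, iso_gain))
```

At unity gain the 30 e- downstream noise swamps everything; by 16x gain the total is nearly down to the 3 e- upstream floor, which is why the DR gap closes at high ISO.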
There are a lot of ways of recovering dynamic range. Canon doesn't clip their signal; they offset instead, so all the image signal data is there. It can be recovered (which is actually NOT the case with Nikon and Sony cameras).
Again, this is no longer true. The D810 has an offset of 600.
Also, I haven't yet seen evidence that the increased DR at base ISO on the D810 is due to 'cooking' the Raw file. In fact, DxO's full SNR curves suggest some sort of nonlinearity introduced into capture at the lowest ISO. The SNR at clipping - where the noise is dominated by shot noise - remains the same for ISO 64 and ISO 100. That indicates to me that the increased DR at ISO 64 comes from some nonlinearity near clipping rather than a straightforward increase in saturation capacity.
You should search some of the astrophotography sites. There is a lot of information about how all the various DSLR manufacturers mess with their signals.
Yes, I know that manufacturers can 'cook' Raw files to a certain extent, but my point was that there's no evidence so far that this 'cooking' is what is leading to the extra DR at base ISO for the D810. I was pointing out that the SNR curve is significantly more non-linear at base ISO compared to ISO 100 for the D810:
Almost like an emulation (albeit of very tiny magnitude) of negative film's decreasing response with increasing exposure. I think that's quite interesting, and am trying to get to the bottom of it. It's literally like a roll-off at higher input luminosities. Would love to hear some thoughts from folks here.