Sorry, I was a bit sloppy in the distinction between DR and S/N. They are closely related but not exactly the same. I trust it's still evident from my post what I mean.
WarStreet said: Ironically, this was the ONLY thing I managed to understand. I like your contribution. Now I only need to do some reading to decipher the above. Any suggestions?
epsiloneri said: In many cases the 7D is better, e.g. frames per second, autofocus speed/tracking, no additional optics (no teleconverter), etc., but when it comes to dynamic range (the ability to image both bright and dark features simultaneously) I argue there might be an advantage for the 5D2+TC1.4 combo.
neuroanatomist said:If DxOMark is correct, there is no meaningful difference between DR of the 5DII and the 7D. Would you expect using a TC to improve the DR of the 5DII?
epsiloneri said: Let me give a simple example to show you why. Imagine you are using your 5D2 to image a chess board from a large distance, so that each chess square takes up exactly one pixel. Now let's say you use a 2x TC. Each square of the chess board now covers four pixels. With four pixels you can collect four times as many photo-electrons for each square; since shot noise grows only as the square root of the signal, the S/N (and essentially the DR) improves by a factor of two per chess square, or per solid angle.
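The square-root scaling in this argument can be sketched in a few lines of Python. The electron count is a made-up illustrative value, and only shot (Poisson) noise is modelled here; read noise is the subject of the objection later in the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed mean signal per pixel (electrons); a hypothetical number.
mean_e = 10_000
n_trials = 100_000

# One pixel per chess square vs. four pixels (after a 2x TC) summed together.
one_pixel = rng.poisson(mean_e, n_trials)
four_pixels = rng.poisson(mean_e, (n_trials, 4)).sum(axis=1)

snr1 = one_pixel.mean() / one_pixel.std()
snr4 = four_pixels.mean() / four_pixels.std()

print(f"S/N, 1 pixel : {snr1:.1f}")   # ~ sqrt(10000) = 100
print(f"S/N, 4 summed: {snr4:.1f}")   # ~ sqrt(40000) = 200, i.e. 2x better
```

Summing four independent Poisson pixels quadruples the signal while the noise only doubles, which is the factor-of-two S/N improvement per chess square claimed above.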
epsiloneri said:I cannot explain why DxOMark does not find a significant difference in DR between 5D2 and 7D. I didn't find a description of how they measure DR in detail, so I don't know what their numbers mean. It also stands in stark contrast to what is found on Clarkvision, which is much closer to my expectation. Perhaps DxOMark doesn't measure DR per pixel, but per sensor area or something. That would make sense. That a large 5D2 pixel would have the same DR as a small 7D pixel definitely does not make sense, so they must mean something different.
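One way the per-pixel vs. per-area distinction could play out numerically (this is an assumption about how a site might normalize, not something stated in the thread): if DR is quoted after downsampling to a fixed output size, noise averages down by the square root of the oversampling factor, adding half a stop of DR per doubling of pixel count. A sketch with hypothetical inputs:

```python
import math

# Assumed reference output size for normalization (hypothetical here).
REF_PIXELS = 8e6

def normalized_dr(per_pixel_dr_stops: float, n_pixels: float) -> float:
    """Downsampling to REF_PIXELS averages noise down by sqrt(n/REF_PIXELS),
    gaining 0.5 * log2(n / REF_PIXELS) stops over the per-pixel figure."""
    return per_pixel_dr_stops + 0.5 * math.log2(n_pixels / REF_PIXELS)

# Made-up per-pixel DR values, just to show how normalization can narrow
# the gap between a large-pixel and a small-pixel sensor:
print(normalized_dr(11.0, 21.1e6))  # 5D2-like pixel count
print(normalized_dr(10.5, 18.0e6))  # 7D-like pixel count
```

Under this kind of normalization, a sensor with more but noisier pixels can score closer to one with fewer, cleaner pixels, which would be consistent with the small difference reported.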
neuroanatomist said:I wonder if your simple example isn't mixing a metaphor (or in this case, mixing theory with practical reality). I understand the theory of 'adding pixels' with photon signal being additive while noise combines in quadrature, so the four pixels have twice the S/N over that spatial resolution. But doesn't that assume invariant illumination across the original pixel or the four 'added' pixels? In your example, the photons from that one chess square are spread over the area of four pixels (which is why the effective aperture with a 2x TC is 2 stops narrower). We're not creating a real superpixel, merely spreading out the light with diminished signal at each pixel, while each pixel still has the same read noise. So although the theory predicts double the S/N, that assumes 4 times as many photons input and twice the noise - and in your example, there aren't four times as many photons to go around. Or am I missing something?
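The read-noise objection above can also be put in numbers. Here the photon budget from one chess square is held fixed and either lands on one pixel or is spread over four, each pixel contributing its own read noise; all values are hypothetical.

```python
import math

S = 10_000   # assumed total photo-electrons from one chess square
r = 8.0      # assumed read noise per pixel (electrons), hypothetical value

# Without TC: all S electrons land on one pixel (one dose of read noise).
snr_one = S / math.sqrt(S + r**2)

# With a 2x TC and the SAME photon budget: S is spread over four pixels,
# and summing them adds four read-noise contributions in quadrature.
snr_four = S / math.sqrt(S + 4 * r**2)

print(f"S/N, one pixel   : {snr_one:.2f}")
print(f"S/N, four pixels : {snr_four:.2f}")  # slightly lower: extra read noise
```

With a fixed photon budget the four-pixel case is never better; it only approaches the one-pixel case when the signal dwarfs the read noise. The factor-of-two gain requires actually collecting four times the photons, which is the crux of the disagreement.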
neuroanatomist said:That's fine, but Figure 4 is based on Table 2, the sensor specifications - i.e., it's modeled/calculated data, not real, empirical data.
aldvan said: If I understand the debate correctly (I'm not a native English speaker, as you might guess), I believe no one has yet introduced the basic concept of the amount of information. In the chessboard example, getting sixteen pixels per square instead of four provides four times the information (I'm proportionally increasing the number of pixels in the example to be clearer). If there are four grains of rice at the four corners of a single chessboard square, with sixteen pixels they will be visible; with four you will see only a white spot. S/N and DR are important for image quality, but in my opinion the amount of information recorded comes before any other parameter. Forgive my poor English, and forgive me if I'm off topic...
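The rice-grain point is a sampling argument, and a toy version is easy to write down. Here a chessboard square is a small array with four dark "grains" at its corners; binning it down to fewer pixels averages the grains away. The array values are arbitrary illustrative numbers.

```python
import numpy as np

# A white chess square sampled by 16 pixels, with four dark rice grains:
square = np.ones((4, 4))
square[0, 0] = square[0, 3] = square[3, 0] = square[3, 3] = 0.0

# At 16 pixels the grains are individually resolved (some pixels read 0.0).
print(square.min())

# At 4 pixels each 2x2 block collapses to its mean: every grain is diluted
# into a uniform light-grey value, and the grains are no longer detectable.
binned = square.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(binned)   # all four values equal 0.75
```

Each coarse pixel averages one grain with three white samples, so all spatial information about the grains is lost, regardless of how good the S/N of those four pixels is.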
epsiloneri said: Yes, I agree that the relatively softer image from the 7D could very well be due to atmospheric distortion, since the shots were not simultaneous and the moon was at low altitude. This was in the summer, however, and from my location the moon did not rise much higher that night (I live at 59° north latitude). If I find the time and a clear night, I will repeat the experiment this winter with the moon much higher up (and a 5D3).
ronderick said:
prjkt said:
ronderick said: PS: I always believed that there's a reason why Canon made these two models use the same battery. With the exception of the 1D/1Ds, I don't think there's a shared battery for two separate lines of product.
The 60D also uses the same battery, and the 20D-50D used the same battery that the 300D and the 5D (Mark I) used as well...
Sorry, my mistake. I should have checked the batteries for those cameras.
Didn't realize that the older XXD series, the 5D, and other models used the BP-511 series battery.