
Author Topic: Who said Canon cameras suck?!?  (Read 42100 times)

The Hobbyist

  • Guest
Re: Who said Canon cameras suck?!?
« Reply #120 on: September 27, 2012, 11:05:39 PM »
Quote
You can take two shots but:
1. If the subject is moving (even just branches swaying, mist drifting about, or a person, etc.), it doesn't always work so well, or even at all. Sometimes you can get away with fixing modest motion by combining and masking various parts and so on, but that means hours of post processing and a long struggle and waste of time, and it doesn't always work out all that well anyway.

You make a good point.  I'll say, when I am out shooting in that scenario, I always shoot in RAW.  From there, it's as simple as moving the exposure slider to export 3 different exposures from 1 shot.  I've done simple HDRs this way, and they look just fine.  Yes, I'd rather take 3, 5, or sometimes 7 bracketed shots.  But other times, when I know there's movement, I'll just create the HDR from 1 RAW file.  I did this with a shot of a hummingbird, and it worked well.

Still, I guess the DR can be an issue for some folks, and their specific line of work.  I'm just a hobbyist, not a pro.  For me, I really never need to crop things out that much for it to be an issue.  If it is, like I said, I'll try a simple HDR and that'll usually do the trick.


jrista

  • Canon EF 400mm f/2.8L IS II
  • *********
  • Posts: 4814
  • EOL
    • View Profile
    • Nature Photography
Re: Who said Canon cameras suck?!?
« Reply #121 on: September 27, 2012, 11:32:27 PM »
When talking about DR on a sensor, I agree, it's not the same thing as levels (since we're talking about an analog signal). But if we're talking about downscaling a D800 image and gaining dynamic range, everything is about levels of luminance.
In my original example I simply defined noise, in the context of a digitized image, as being 3 levels. If we downsample an image and average the noise down by a factor of two, then the number of levels that constitute noise is between 1 and 2, and may vary a bit by pixel. I am not sure your example of "previously, we could only resolve 800, 806, 812...now we can resolve 800, 803, 806, 809" is accurate in the context of downscaling an image. We're not talking about resolving anything here; we're talking about a three-component pixel with a 0-16383 level range for each component, and there is nothing preventing us from using every single one of those levels. The thing that diminishes our post-digitization DR is noise, and averaging it down by downscaling, as you described, reduces our noise from 3 levels to 1.5 levels. It doesn't change our ability to have digitized (post-ADC) pixels at any and every level between 800 and 812; it simply adds the ability to use the levels between roughly 1.5 and 3.
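A quick numeric sketch of that averaging, using the thread's toy numbers (a flat patch at level 800, Gaussian read noise of 3 levels, plain 2x2 binning; the values are illustrative, not from a real camera file):

Code:
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat patch: true level 800, read noise ~3 levels, 4000x4000 pixels.
patch = 800 + rng.normal(0, 3, size=(4000, 4000))

# Downscale 2x by averaging each 2x2 block of pixels.
binned = patch.reshape(2000, 2, 2000, 2).mean(axis=(1, 3))

print(round(patch.std(), 2))   # ~3.0 levels of noise before downscaling
print(round(binned.std(), 2))  # ~1.5 levels after: averaging 4 pixels divides sigma by sqrt(4) = 2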

Yes, it's a stop's worth of improvement, but it's not a hugely significant improvement.

It is what it is -- a 1 stop improvement in dynamic range. As we agreed, dynamic range is log2(saturation point) - log2(noise level).
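Plugging the toy numbers from above into that definition (14-bit saturation of 16383 and a read noise of 3 levels; both figures are just the ones used in this thread):

Code:
from math import log2

def dynamic_range_stops(saturation, noise):
    # DR in stops, per the definition above: log2(saturation) - log2(noise)
    return log2(saturation) - log2(noise)

print(round(dynamic_range_stops(16383, 3.0), 2))  # ~12.41 stops at the original noise level
print(round(dynamic_range_stops(16383, 1.5), 2))  # ~13.41 stops after halving the noise: +1 stop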

I'd also point out that because you can either set exposure or use a different gray point, an extra stop in the shadows is interchangeable with an extra stop in the highlights -- you can always expose a stop lower. So worrying about whether you gain a stop in the shadows or the highlights is a bit off base. Dynamic range is dynamic range; the number of levels is a different beast...

Now regarding the number of levels -- your ADC could have every level between 800 and 812, but that doesn't mean that you have that many distinct levels. At some point, if the noise is large enough, the number of "levels" you have doesn't matter. For example, suppose you start with 16384 levels. Suppose you add two low-order bits and randomly assign them. I think we agree that after adding those bits we don't really have 65536 (16384 × 2 × 2) "levels", even if we "used" that many. Back to the example we were discussing: if we have a 14-bit ADC and our noise level is 3, the lowest-order bit is close to random.
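A toy illustration of why randomly assigned low-order bits don't buy you real levels (two made-up signal values one 14-bit step apart, padded out to 16 bits with random bits):

Code:
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical true signal levels, one 14-bit step apart.
signal_a = np.full(100_000, 800)
signal_b = np.full(100_000, 801)

# "Extend" both to 16 bits by appending two randomly assigned low-order bits.
extended_a = signal_a * 4 + rng.integers(0, 4, signal_a.size)
extended_b = signal_b * 4 + rng.integers(0, 4, signal_b.size)

# Four codes now get used per original level, but the extra bits are pure noise:
# their statistics are identical for both signals, so they carry no information.
print(np.unique(extended_a))                                                  # [3200 3201 3202 3203]
print(round((extended_a % 4).mean(), 2), round((extended_b % 4).mean(), 2))   # both ~1.5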

The number of distinct levels we have is the number of raw levels divided by the number of noise levels (assuming read noise doesn't change across the dynamic range) -- it's essentially 2^(dynamic range) times some constant.

By the way, when we pool multiple pixels, we no longer have just 16384 possible levels -- we have 16384 multiplied by the number of pixels (we get multiples of 0.25 when we average four pixels; or, if you don't like fractions, you can just add the pixel values. Either way, you end up with roughly 65536 distinct levels). Of course, because of the above, this doesn't mean that we can distinguish between all of them.
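The averaging arithmetic, for what it's worth (four made-up 14-bit pixel values per output pixel):

Code:
import numpy as np

rng = np.random.default_rng(2)

# Four hypothetical 14-bit pixel values (0..16383) per output pixel.
quads = rng.integers(0, 16384, size=(100_000, 4))

averaged = quads.mean(axis=1)   # values land on multiples of 0.25
summed = quads.sum(axis=1)      # equivalently, integers in 0..65532

print(np.unique(averaged % 1))               # fractional parts: 0.0, 0.25, 0.5, 0.75 -> ~4x as many codes
print(summed.min() >= 0, summed.max() <= 4 * 16383)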

Quote
If read noise is a whole stop's worth (as in an image from an Exmor sensor), and it eats away at the "bottom", you aren't losing much. If read noise is a few stops' worth (3 to 4, as in an image from a Canon sensor), you could be losing 7, maybe 15 distinct levels.
This is highly simplified; the conversion from sensor to digital isn't as ideal and linear as this,

I think it's perhaps a bit too simplified. Recall my previous point -- dynamic range at the bottom is interchangeable with dynamic range at the top because you can always underexpose or overexpose.

Now of course, if you insist on putting a hard limit on the number of bits available for the signal, you are more likely to suffer quantization loss when the dynamic range is large relative to the bit depth. For example, if you use 14 bits to represent 12 stops of dynamic range, you get less quantization loss than if you use 14 bits to represent 15 stops of dynamic range. In practice it seems to me that quantization loss (at least in RAW) is not the problem. Also, as I pointed out, if your read noise has a standard deviation of 3, you don't "really" have 14 bits' worth of distinct levels (the lowest order bit is almost as good as randomly assigned), so you really would get twice as many levels if you could reduce noise by a factor of 2.
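A rough way to see when quantization actually costs anything, using the standard LSB/sqrt(12) model for rounding error (the 12-stop and 15-stop figures are just the ones from the example above):

Code:
from math import sqrt

adc_bits = 14
full_scale = 2 ** adc_bits                 # ADC codes spanning the full signal range

for sensor_dr_stops in (12, 15):
    read_noise_lsb = full_scale / 2 ** sensor_dr_stops   # analog noise floor, in LSBs
    quant_noise_lsb = 1 / sqrt(12)                       # rms error of ideal rounding, ~0.29 LSB
    total = sqrt(read_noise_lsb ** 2 + quant_noise_lsb ** 2)
    print(sensor_dr_stops, "stops:", round(read_noise_lsb, 2), "LSB of noise,",
          f"quantization inflates it by {100 * (total / read_noise_lsb - 1):.1f}%")
# 12 stops in 14 bits: noise ~4 LSB, quantization adds ~0.3% -> negligible
# 15 stops in 14 bits: noise ~0.5 LSB, quantization adds ~15% -> a real loss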

Now if you pool multiple pixels you do get more levels. The number of levels you get grows linearly with the number of pixels you pool, but as I pointed out, the noise goes in inverse proportion to sqrt(N), so your true number of levels increases by a factor of sqrt(N). But again, this is different from dynamic range.
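And the pooling claim in numbers (summing N pixels, with the same toy saturation and read noise as before):

Code:
from math import log2, sqrt

saturation, read_noise = 16383, 3.0

def distinct_levels(n_pooled):
    # Sum n pixels: the range grows by n, the (independent, Gaussian) read noise grows
    # by sqrt(n), so the count of noise-limited distinct levels grows by sqrt(n).
    return (n_pooled * saturation) / (sqrt(n_pooled) * read_noise)

for n in (1, 4, 16):
    print(n, round(distinct_levels(n)), round(log2(distinct_levels(n)), 2))
# 1 -> ~5461 levels (~12.4 stops), 4 -> ~10922 (~13.4), 16 -> ~21844 (~14.4)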

I don't necessarily disagree; however, I think you're starting to conflate contexts. I was trying to discuss DR in terms of a digitized image in the context of scaling, which I think narrows the scope and does simplify things a bit. You're now talking about DR in a much larger context, that of the sensor. That moves us out of the realm of the digital and into the realm of the analog, and I fully agree: dynamic range in the analog realm of a sensor is an entirely different beast, and a much more complex discussion. But we were originally talking about the dynamic range gained by the act of downscaling a high resolution image. In that context, I don't believe we can "interchange" the dynamic range gained by normalizing read noise (which always exists in the lower levels of a digital image) with highlights. You would always be gaining on the shadow end when you normalize and average read noise; however, I think we've both demonstrated that the gain is small, even though it can be called a "full stop's worth".

Outside of that, I agree with most of what you've said in the context of an analog signal on a sensor. Dynamic range is an entirely different beast when you move into the analog realm. I'm still adamant that the D800 sensor is only capable of what it's capable of, which by all indications, including DXO's, is about 13.2 stops. However you distribute digital levels when reading the sensor out, that fact won't change, and I believe my answer at #95 is entirely accurate and realistic in that respect.

nonac

  • EOS M2
  • ****
  • Posts: 213
    • View Profile
    • Marty Beck Photography
Re: Who said Canon cameras suck?!?
« Reply #122 on: September 28, 2012, 12:17:22 AM »
Do you people actually take pictures or do you just sit around and analyze the camera and data all day long?  Reading all of this gave me a headache and I had to GO OUTSIDE and take pictures!
5d Mark III, 7d Mark II, 24mm f/1.4L II, 24-105 f/4L IS, 70-200 f/2.8L IS II, 100 f/2.8L IS macro, 135 f/2L, 3x 600EX-RT, ST-E3-RT, EF 1.4x III

bigmag13

  • PowerShot G1 X II
  • ***
  • Posts: 69
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #123 on: September 28, 2012, 12:18:47 AM »
Do you people actually take pictures or do you just sit around and analyze the camera and data all day long?  Reading all of this gave me a headache and I had to GO OUTSIDE and take pictures!
+1 !!!!! lol
5D3,5D2,6D,70-200 2.8Lii, 135L, 24-70Lii, Sigma 15 fisheye, Sigma 35 1.4(A), Zeiss 2/50, 600rt x3 and a st-e3 RT

sach100

  • Rebel SL1
  • ***
  • Posts: 81
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #124 on: September 28, 2012, 12:35:49 AM »
Thanks Jrista! Although I admit I haven't understood everything you've said (and there's a lot of it!).

I think we've digressed a long way from what your very first post was trying to say. I've never played with a Nikon, but I completely agree with you on the Canon highlight recovery part (from my far less technical experience in recovering highlights).

If one is interested in the technology aspects of photography it doesn't make him/her any less a photographer than he/she already is (however good/bad that person is!) :)
5D MkIII

jrista

  • Canon EF 400mm f/2.8L IS II
  • *********
  • Posts: 4814
  • EOL
    • View Profile
    • Nature Photography
Re: Who said Canon cameras suck?!?
« Reply #125 on: September 28, 2012, 12:44:41 AM »
Do you people actually take pictures or do you just sit around and analyze the camera and data all day long?  Reading all of this gave me a headache and I had to GO OUTSIDE and take pictures!

Has anyone ever heard of that timespan during our 24-hour day called "Nighttime"? Anyone here have a boring day-job with lots of empty space as you wait for...oh, say....some long-running extensive load test of that new workflow you designed and programmed to run to completion? Yeah. "Whitespace" is such a wonder.

Sure I take pictures. I take lots of pictures. I have over 300 pictures, out of the several thousand I've taken over the last few weeks, waiting to be trickled up to http://500px.com/JonRista over the next several weeks.

I love it when someone comes along who thinks you can spend every one of the 525,960 minutes in a year with camera in hand, photographing something. ;) I like a good debate in my photographic downtime.

jrista

  • Canon EF 400mm f/2.8L IS II
  • *********
  • Posts: 4814
  • EOL
    • View Profile
    • Nature Photography
Re: Who said Canon cameras suck?!?
« Reply #126 on: September 28, 2012, 12:46:52 AM »
Thanks Jrista! Although I admit I haven't understood everything you've said (and there's a lot of it!).

I think we've digressed a long way from what your very first post was trying to say. I've never played with a Nikon, but I completely agree with you on the Canon highlight recovery part (from my far less technical experience in recovering highlights).

If one is interested in the technology aspects of photography it doesn't make him/her any less a photographer than he/she already is (however good/bad that person is!) :)

Thanks. I guess I kind of dug my own grave by mentioning the D800. I really just wanted to demonstrate the highlight recovery power of current Canon cameras, but apparently the vultures were just waiting in the corner for someone to jump off the D800 cliff...and I seem to have happily obliged, giving them a fresh corpse to pick over. Entirely my own fault, though....


straub

  • Guest
Re: Who said Canon cameras suck?!?
« Reply #127 on: September 28, 2012, 02:36:01 AM »
One more thing regarding normalization. Suppose your read noise has a standard deviation of S. With enough samples of the data X, the averaged noise will converge to zero. This is the base assumption behind phrases such as "doubling resolution increases DR by Y".

However, in the case of the D800 and other newer Nikons, there is no bias (black-level offset) value in the NEF file data. For the very bottom end of the ADC output, all samples for which X plus the noise falls below zero are truncated to zero. The end result is that normalizing any samples X smaller than S will no longer make the noise converge to zero, but rather towards (S-X)/2.

This puts a hard limit on the "normalized DR" claims -- e.g. no matter how large the set is, the sample value 0 doesn't converge to 0 (which it would need to in order for the processed output to reach the maximum DR of 14 stops in this case).
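A quick simulation of that truncation effect, with made-up numbers (true signal at black, Gaussian read noise of 3 levels); the exact value the clipped average converges to depends on the noise distribution, but the point is that it no longer converges to black:

Code:
import numpy as np

rng = np.random.default_rng(3)

S = 3.0              # read noise sigma, in ADC levels (assumed)
X = 0.0              # a true signal sitting right at black
n = 1_000_000

raw = X + rng.normal(0, S, n)

with_bias = raw                  # what you could average if the file kept a black-level offset
clipped = np.maximum(raw, 0)     # what you get if negative values are truncated to zero

print(round(with_bias.mean(), 3))  # ~0: averaging recovers the true black level
print(round(clipped.mean(), 3))    # ~1.2 (about 0.4 * S): the clipped average is biased above black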

On the other hand, omitting the bias roughly doubles the "measured" DR in the black frame "tests"--half of the read noise is squashed to zero.

LetTheRightLensIn

  • Canon EF 400mm f/2.8L IS II
  • *********
  • Posts: 4062
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #128 on: September 28, 2012, 02:51:45 AM »
Do you people actually take pictures or do you just sit around and analyze the camera and data all day long?  Reading all of this gave me a headache and I had to GO OUTSIDE and take pictures!

Hah, I took like 7500 shots over the last two weeks. That said, this thread does seem to have been ultimately pointless, to a certain extent, since it ended where it began.
« Last Edit: September 28, 2012, 02:56:08 AM by LetTheRightLensIn »

elflord

  • 5D Mark III
  • ******
  • Posts: 705
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #129 on: September 28, 2012, 07:00:38 AM »
I don't necessarily disagree; however, I think you're starting to conflate contexts. I was trying to discuss DR in terms of a digitized image in the context of scaling, which I think narrows the scope and does simplify things a bit. You're now talking about DR in a much larger context, that of the sensor. That moves us out of the realm of the digital and into the realm of the analog, and I fully agree: dynamic range in the analog realm of a sensor is an entirely different beast, and a much more complex discussion. But we were originally talking about the dynamic range gained by the act of downscaling a high resolution image. In that context, I don't believe we can "interchange" the dynamic range gained by normalizing read noise (which always exists in the lower levels of a digital image) with highlights. You would always be gaining on the shadow end when you normalize and average read noise; however, I think we've both demonstrated that the gain is small, even though it can be called a "full stop's worth".

A stop of dynamic range in the shadows is interchangeable with a stop of dynamic range in the highlights, because you can always meter differently (e.g. underexpose by a stop).

You went off on a bit of a tangent suggesting that the extra dynamic range does not substantially increase the number of "levels" of luminance available, and I showed that it actually does (more precisely, that if I reduce noise by a factor of two, I get twice as many luminance levels). I also showed that by downsampling you can trade spatial resolution for both dynamic range and number of luminance levels.

So the analysis where you try to demonstrate that the difference is "small" by using that table is incorrect. It's incorrect because when you add noise, you don't just lose the "bottom stop(s)" on the table (for example, levels 1-15) and keep all the others; you're really losing information content in the low-order bit(s). You're not eating away at the "bottom of the table", you're eating away at the low-order bits.

Quote
I'm still adamant that the D800 sensor is only capable of what its capable of...which by all indications, including DXO's, is about 13.2 stops.

Yes, that's 13.2 stops per pixel. I'm not sure why this matters so much -- the Canon also drops (by about 0.8 EV) when you go from print to screen, because the two cameras don't differ that much in megapixel count. Depending on whether you use DxO's "screen" or "print" number, the Nikon leads by 2.2 EV or 2.5 EV. I'm not sure why you think those 0.3 EV matter a whole lot -- either way, the Nikon sensor trounces the Canon, so why devote so much effort to trying to prove that the Nikon is "only" 2.2 EV better?

Back to your #95, the D800 user could underexpose by 1.2 stops. If he downsamples to 8mpx, he will be able to recover those shadows, and get 14.4 stops of dynamic range. I agree that he can't get 14.4 stops per pixel at full resolution.
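For reference, the downsampling arithmetic behind those numbers, as I understand DxO's 8 MP "print" normalization (the 13.2-stop and 36 MP figures come from this thread; the 11-stop, 22 MP body is purely hypothetical):

Code:
from math import log2

def print_dr(screen_dr_stops, megapixels, reference_mp=8.0):
    # Downsampling to the 8 MP reference reduces noise by sqrt(N / 8 MP),
    # i.e. it adds 0.5 * log2(N / 8 MP) stops of dynamic range.
    return screen_dr_stops + 0.5 * log2(megapixels / reference_mp)

print(round(print_dr(13.2, 36.3), 1))  # D800: 13.2 per-pixel stops -> ~14.3 stops at 8 MP
print(round(print_dr(11.0, 22.1), 1))  # hypothetical 11-stop, 22 MP body -> ~11.7 (a ~0.7 stop gain)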

As long as the destination for the image is some fixed size (print or on screen) and not a 100% crop, the "print" benchmark is the one you should care about. So I don't agree, for example, with the notion that medium format, full frame, and crop cameras are equal in terms of dynamic range even if they are on a per-pixel basis.
« Last Edit: September 28, 2012, 07:15:49 AM by elflord »

elflord

  • 5D Mark III
  • ******
  • Posts: 705
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #130 on: September 28, 2012, 07:20:38 AM »
Thanks. I guess I kind of dug my own grave by mentioning the D800. I really just wanted to demonstrate the highlight recovery power of current Canon cameras, but apparently the vultures were just waiting in the corner for someone to jump off the D800 cliff...and I seem happily obliged, and given them a fresh corpse to pick over. Entirely my own fault though....

I'm not a D800 "fan" or even a Nikon user. I'm a satisfied 5DII owner. However, I do get a bit tired of the DxO bashing from camera "fans". It's usually horribly ill-informed.

zim

  • 1D Mark IV
  • ******
  • Posts: 794
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #131 on: September 28, 2012, 08:34:07 AM »
jrista - Thanks for your very long and detailed reply; the time you took is most appreciated.

From way back in my film days I've been used to bracketing (around what I believe to be the correct exposure) simply for insurance, so I've been very lax in my use of the histogram. Well, that's my excuse  :P  Anyway, going to put my learnin' head on and go for it!!  ;D

Best Regards

jrista

  • Canon EF 400mm f/2.8L IS II
  • *********
  • Posts: 4814
  • EOL
    • View Profile
    • Nature Photography
Re: Who said Canon cameras suck?!?
« Reply #132 on: September 28, 2012, 12:36:55 PM »
I don't necessarily disagree; however, I think you're starting to conflate contexts. I was trying to discuss DR in terms of a digitized image in the context of scaling, which I think narrows the scope and does simplify things a bit. You're now talking about DR in a much larger context, that of the sensor. That moves us out of the realm of the digital and into the realm of the analog, and I fully agree: dynamic range in the analog realm of a sensor is an entirely different beast, and a much more complex discussion. But we were originally talking about the dynamic range gained by the act of downscaling a high resolution image. In that context, I don't believe we can "interchange" the dynamic range gained by normalizing read noise (which always exists in the lower levels of a digital image) with highlights. You would always be gaining on the shadow end when you normalize and average read noise; however, I think we've both demonstrated that the gain is small, even though it can be called a "full stop's worth".

A stop of dynamic range in the shadows is interchangeable with a stop of dynamic range in the highlights, because you can always meter differently (e.g. underexpose by a stop).

You went off on a bit of a tangent suggesting that the extra dynamic range does not substantially increase the number of "levels" of luminance available, and I showed that it actually does (more precisely, that if I reduce noise by a factor of two, I get twice as many luminance levels). I also showed that by downsampling you can trade spatial resolution for both dynamic range and number of luminance levels.

So the analysis where you try to demonstrate that the difference is "small" by using that table is incorrect. It's incorrect because when you add noise, you don't just lose the "bottom stop(s)" on the table (for example, levels 1-15) and keep all the others; you're really losing information content in the low-order bit(s). You're not eating away at the "bottom of the table", you're eating away at the low-order bits.

Perhaps we are on different pages. Once an image is digitized, it's digitized. It has a fixed bit depth. In the case of modern DSLRs, the 14-bit output of a RAW is fixed, and the physical dimensions of that image are also fixed (so downscaling really isn't an option to start with, not if you wish to continue working with the image as a RAW image). If you do export that 14-bit RAW to, say, TIFF, then you now have a 16-bit image. The number of bits is fixed; it doesn't change. If you scale that TIFF image down, yes, you can mitigate noise. You'll really be mitigating two types: photon shot noise and read noise. When it comes to photon shot noise, the D800 doesn't have any real advantage over any other camera, and the benefit of scaling would be the same for any image. When it comes to read noise, that noise only exists in the black and shadow levels. If you scale an image down, you're only affecting the bottom small percentage of the total tonal range of your TIFF image. You could certainly move the gray point around, but you're not redistributing bits; you're only redistributing the existing tonal levels in the image, so the gain of a few levels in the shadows isn't going to translate into thousands of highlight levels by moving the gray point around after downscaling.
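A back-of-the-envelope sketch of why read noise only matters in the shadows (signal-independent read noise of 3 levels assumed, photon shot noise taken as sqrt(signal); the numbers are illustrative):

Code:
import numpy as np

read_noise = 3.0                                   # levels, signal-independent (assumed)
signals = np.array([10, 100, 1000, 10000], float)  # deep shadow -> highlight

for s in signals:
    shot = np.sqrt(s)                    # photon shot noise grows as sqrt(signal)
    total = np.hypot(shot, read_noise)   # independent noise sources add in quadrature
    share = 100 * read_noise**2 / total**2
    print(f"signal {s:>7.0f}: shot {shot:6.1f}, total {total:6.1f}, read-noise share {share:5.1f}%")

# Read noise is roughly half the noise power near black and well under 1% in the
# highlights, which is why averaging it away only helps the bottom of the tonal range.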

Again, though, I've been trying to discuss this topic in the context of a digital image on a computer. You keep conflating the issue by bringing in the behavior of the hardware in a camera. I'm not talking about metering and adjusting the exposure value before the exposure is made. I'm talking about working an image in post, after it's been digitized by the ADC and imported off the camera/memory card, since the original debate was whether you can really gain over a stop of DR by the simple act of scaling an image down (an act that occurs well beyond the camera, so discussing how you can use the DR of the hardware to gain shadow or highlight range is out of context). I believe you can gain a couple stops of DR by downscaling; however, since the gain is in the "lower order bits", or in the darkest tonal levels of an image, it is minimal. We're not talking about a huge difference overall, we are talking about a very small difference overall. That difference may well improve the dynamic range of your shadow detail a bit, but it's not like you're gaining more than double the total tonal range you had before (which I had mistakenly thought was the opposing argument).

Quote
I'm still adamant that the D800 sensor is only capable of what its capable of...which by all indications, including DXO's, is about 13.2 stops.

Yes, that's 13.2 stops per pixel. I'm not sure why this matters so much -- the Canon also drops (by about 0.8 EV) when you go from print to screen, because the two cameras don't differ that much in megapixel count. Depending on whether you use DxO's "screen" or "print" number, the Nikon leads by 2.2 EV or 2.5 EV. I'm not sure why you think those 0.3 EV matter a whole lot -- either way, the Nikon sensor trounces the Canon, so why devote so much effort to trying to prove that the Nikon is "only" 2.2 EV better?

Back to your #95, the D800 user could underexpose by 1.2 stops. If he downsamples to 8mpx, he will be able to recover those shadows, and get 14.4 stops of dynamic range. I agree that he can't get 14.4 stops per pixel at full resolution.

As long as the destination for the image is some fixed size (print or on screen) and not a 100% crop, the "print" benchmark is the one you should care about. So I don't agree, for example, with the notion that medium format, full frame, and crop cameras are equal in terms of dynamic range even if they are on a per-pixel basis.

That last statement is your mistake, though. It's also the same mistake DXO makes: why the assumption that a "print" is always smaller than native resolution? The D800 has a native print size of around 17x22" (roughly speaking). If I print at native size, I am not downscaling; that is effectively a 100% crop. There isn't any averaging of pixels going on when I print at native size, and once ink is laid down on paper, at best (assuming I use something like Epson UltraChrome or Canon Lucia ink on a high-luster paper) I might get a dMax of 2 to 2.3, which is around 6 to 7 stops. The only time DXO's "Print DR" actually results in greater dynamic range is when that 8x12" printable image is viewed on a computer, and even then you would require a 14-bit display to actually observe all the detail at every level offered by a 14-stop image. Generally speaking, if I buy a camera like the D800, I'm not going to print at just 17x22". I'm going to print huge: 24x36, 30x40, 40x60. Those prints will probably be on canvas (maybe 6 stops), or possibly on a fine art paper (which has a limited dynamic range of around 5 stops).
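The dMax-to-stops conversion being used there (density is log10 of the contrast ratio, so stops = dMax × log2(10)):

Code:
from math import log2

def dmax_to_stops(dmax):
    # Density is log10(contrast ratio), so stops = dmax * log2(10) ~= dmax * 3.32
    return dmax * log2(10)

for dmax in (2.0, 2.3):
    print(dmax, round(dmax_to_stops(dmax), 1))
# dMax 2.0 -> ~6.6 stops, dMax 2.3 -> ~7.6 stops: roughly the "6 to 7 stops" figure above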

I argue about this because the entire notion of "Print DR" is assumptive and misleading, and attempts to nail down a specific result in a world (the world of print) that has thousands of potential final output options, viewing distances, inks, color gamuts, lighting scenarios, etc. It's a terrible, very misleading concept. It doesn't belong in the world of objective camera testing, at least not the way DXO does it, where it is a primary measure of sensor IQ.


LetTheRightLensIn

  • Canon EF 400mm f/2.8L IS II
  • *********
  • Posts: 4062
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #133 on: September 28, 2012, 12:46:29 PM »
Quote from: elflord on September 28, 2012, 07:00:38 AM

+1

LetTheRightLensIn

  • Canon EF 400mm f/2.8L IS II
  • *********
  • Posts: 4062
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #134 on: September 28, 2012, 12:50:14 PM »
Quote from: jrista on September 28, 2012, 12:36:55 PM

He is not conflating anything. You have a lot of knowledge, but it seems you are not getting the true conceptual meaning of one or two pretty important things.
