November 23, 2014, 05:53:14 PM

Author Topic: Who said Canon cameras suck?!?  (Read 40867 times)

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4627
  • EOL
    • View Profile
    • Nature Photography
Re: Who said Canon sensors suck?!?
« Reply #105 on: September 27, 2012, 06:22:21 PM »
Please read my answer at #95. Maybe then you'll finally get my point. All you're doing when scaling images in software is manipulating existing levels, which really doesn't improve DR. It might mitigate noise, making detail in the shadows appear more accurate...but that has nothing to do with the camera. That has everything to do with software, and software is effectively an infinitely subjective thing. Let's eliminate the subjectivity here, and focus on what the physical device we put in our hands and use to take a photograph can do.

You are missing the point about how to carry out a fairer relative comparison. But if you want to believe a 1DX tech based 36MP FF sensor would do much worse for high ISO noise than a 10D tech based 4MP FF sensor be my guest....

I fully understand the point of normalization for comparison. I also think it's a fundamentally flawed concept. You need to get some new material, because repeatedly trotting the "Well you have to compare on an equivalent playing field" argument out over and over just becomes abrasive after a while. I KNOW your argument. Listen to mine: What you do in post with scaling DOES NOT TELL YOU what the hardware can do IN TERMS OF DR. It only tells you what SOFTWARE can do in terms of SIMULATING a DR gain. Mucking with an image in post doesn't change the capabilities of the hardware though.

I know that if you scale a D800 image down to the size of Camera X, or scale the image of Camera X up to the size of the D800, you get a rough picture of how those two cameras' images compare...ACCORDING TO THE SOFTWARE THAT DID THE SCALING. I learned about the software by doing a normalized comparison. I didn't learn anything new about the D800 and Camera X, though. What happens on a desktop computer, a laptop, a tablet, hell even a phone...doesn't tell you anything about a camera. It tells you about the desktop/laptop/tablet/phone...and the software that particular device is running.

I'm a printer. I don't downscale. I either print at native resolution, or I significantly upscale. The fact that I can mitigate noise and do some fancy dithering to push my 13.2 stop 17x22" image into a 14.4 stop 8x12" image DOES NOTHING FOR ME. As a printer, I can't use the initial 13.2 stops, let alone 14.4 stops, of dynamic range anyway. I either have to compress the information in such an image into a much smaller 5 or 6 stops, and in the process, even if I maintain very tight control over it and manually tweak levels, white and black point, gamut, etc...LOSE A CONSIDERABLE AMOUNT OF DETAIL (particularly in the shadows...where a D800 has the potential to lose a hell of a lot more than any other camera), or I simply print, and let the printer decide what to discard so it can stuff all that extra DR into a print with less than HALF as much...at best. I could maybe get 7 stops of DR in print, but I would have to use a ridiculously bright and unbelievably glossy paper to do so, which really only looks good for a very few types of photos in a very few kinds of settings. But I can't use much more than 7 stops of dynamic range in an image in the first place...outside of a bit of initial shadow or highlight recovery for the first round of dynamic range compression to fit 10+ stops of DR on a computer screen that ALSO can't utilize as much as any camera on the market produces.

Normalized comparisons tell me about SOFTWARE. They don't tell me anything about DSLR HARDWARE. I can't photograph a scene with 14.4 stops in a single shot with the D800. But all this "Well you have to normalize to compare!" crap tells me I can! That's a serious problem! People believe that kind of S___, and it doesn't tell them jack about the camera they are buying. Hell, DXO could improve their DXO Optics software's scaling algorithms and probably gain another...hmm... 0.2, 0.4 stops of DR? That would suddenly mean the D800 is capable of 14.6...maybe even 14.8 stops of DR, right? Because you have to friggin normalize to compare cameras, RIGHT?! NO!!!!!! You don't!

If I want to know about the D800, and what the D800 can do for me, and whether the D800 will perform well for my photography...I haven't asked about any other camera...I've only asked about the D800. I couldn't care less about how it compares to 500 different cameras. I care about what the D800 itself can ACTUALLY DO. Enough with this "You HAVE TO if you want to compare!" bull...it's unhelpful. Not everything is a competition. Not everything is about comparing camera A, B, C, D, and the whole rest of the freakin alphabet. Comparisons are really starting to muddy the waters, to throw out arbitrary "facts" that don't mean anything outside of a very specific and very narrow conceptual space (i.e. DXO labs and all of their specific testing hardware and software algorithms). Keep it simple, ppl! HARDWARE. That's what a DSLR is. Hardware. Let's look at the hardware, not software. It's still possible to compare hardware traits directly. You don't need to keep the exact dimensions of a digitized image (which is 100% POST HARDWARE) the same to have an objective comparison of cameras. Hardware statistics tell you everything you need to know about a camera, about two cameras, about how those two cameras compare from a real-world, in-the-field performance perspective (assuming you actually want to know how two cameras compare...however if you just want to know how one camera fares on its own, hardware statistics will tell you that too.)

canon rumors FORUM


elflord

  • 5D Mark III
  • ******
  • Posts: 705
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #106 on: September 27, 2012, 06:41:48 PM »
What you can do in software doesn't matter; dynamic range is about what you can capture in-camera. It doesn't matter if you can use clever software algorithms to massage the 13.2 stops of DR in an original image to fabricate artificial data to extract 14.0, 14.4, or 16 stops of "digital DR" (which is not the same thing as hardware sensor DR). I'll try to demonstrate again, maybe someone will get it this time.

"I am composing a landscape scene on-scene, in-camera. I meter the brightest and darkest parts of my scene, and it's 14.4 stops exactly! HA! I GOT 'DIS! I compose my scene with the D800's live view, and fiddle with my exposure trying to get the histogram to fit entirely between the extreme left edge and the extreme right edge. Yet, for the life of me, I CAN'T. Either my histogram rides up the right edge a bit (the highlights), or it rides up the left edge a bit (the shadows). This is really annoying. DXO said this stupid camera could capture 14.4 stops of DR!! Why can't I capture this entire scene in a single shot?!?!?!!!1!!11 I didn't bring any ND filters because this is the uberawesomedonkeyshitcameraoftheyearpureawesomeness!!!!!"

The twit trying to capture a landscape with 14.4 stops of DR in a single shot CANNOT, because the sensor is only capable of 13.2 stops of DR! The twit of a landscape photographer is trying to capture 1.2 stops more (about 2.3x as much light) than his camera can record, and it simply isn't capable of doing so in a single shot. He could take two shots, offset +/- 2 EV, and combine them in post as an HDR merge, but there is no other way his camera is going to capture 14.4 stops of DR.
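The bracketing arithmetic can be sketched in a couple of lines of Python (an illustration, not from the original post; real HDR merges need some overlap between frames, which this rough model ignores):

```python
def bracket_dr(sensor_dr_stops, ev_offsets):
    """Total scene DR coverable by merging exposures at the given EV
    offsets: the sensor's single-shot DR plus the bracket span.
    (Rough model -- real merges need overlap between frames.)"""
    return sensor_dr_stops + (max(ev_offsets) - min(ev_offsets))

# Two shots at +/-2 EV with a 13.2-stop sensor:
print(round(bracket_dr(13.2, [-2, 2]), 1))  # 17.2 stops, enough for a 14.4-stop scene
```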

THAT ^^^^^ UP THERE ^^^^^ IS MY POINT about the D800. It is not a 14.4 stop camera. It is a 13.2 stop camera. You can move levels around in post to your heart's content, dither and expand the LEVELS YOU HAVE. But if you don't capture certain shadow or highlight detail TO START WITH....you CAN'T CREATE IT LATER. All you're doing is averaging and dithering the 13.2 stops you actually captured to SIMULATE more DR. Ironically, that doesn't really do anyone any good, since computer screens are, at most, capable of about 10 stops of DR (assuming you have a super-awesome 10-bit RGB LED display), and usually only capable of about 8 stops of DR (if you have a nice high end 8-bit display), and for those of you unlucky enough to have an average $100 LCD screen, you're probably stuck with only 6 stops of DR. Print is even more limited. An average fine art or canvas print might have 5 or 6 stops. A print on a high dMax gloss paper might have as much as 7 stops of DR.

There is little benefit to "digital DR" that is higher than the sensor's native DR. You're not gaining any information you didn't start out with; you're simply redistributing the information you have in a different way by, say, downscaling with a clever algorithm to maximize shadow DR. But if you didn't record shadow detail higher than pure black to start with, no amount of software wizardry will make that black detail anything other than black. And even if you do redistribute detail within the shadows, midtones, or highlights...if your image has 14 stops of DR you can't actually SEE IT. Not on a screen. Not in print. You have to compress it, merge those many stops into fewer stops, and thus LOSE detail, to view it on a computer screen or in print.



In my original example that started this thread...my camera DID record the information I recovered. I am not, have not, and will not claim that my 7D is capable of anything more than 11.12 stops of DR, because that's what the sensor gets (at least according to DXO.) My original post was simply noting that one can make the BEST USE of that hardware DR by exposing to the right. Canon cameras offer a lot of highlight exposure latitude, and based on my accidental overexposure of a dragonfly, I've learned you can not only ETTR a little...you can ETTR a LOT with a modern Canon camera (i.e. 7D, 5D III, 1D IV, 1D X). You can really pack in the highlights and recover a tremendous amount of information in post.

However the same facts of reality regarding hardware DR that exist for the D800 also exist for the 7D. DXO Mark lists their "Print DR" for the 7D at 11.73 stops. Same as with the D800 above, if I try to photograph a landscape with 11.73 stops of DR, I'm going to either block the shadows a small amount, or blow some of the highlights a small amount. No way around that. I am going to have to compromise on about 2/3rds of a stop one way or another.

I do follow the above. Here are some things I don't quite follow:

You state this:
Quote
The friggin sensor has an average read noise level of around 3 electrons, and a maximum saturation point (at ISO 100) of 44972 electrons. Those FACTS about the D800 sensor DO NOT CHANGE, no matter what you do with software.

I take it that the above refers to the read noise of a single pixel?

Your definition of dynamic range if I understand correctly, is log2( saturation point ) - log2( read noise ).

Now if I'm allowed to average two adjacent pixels into a "superpixel", the saturation point won't change, but the read noise will go down.

That's not a "software" trick, it's the fact that the "hardware" doesn't consist of a single pixel.
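That argument can be put into a few lines of Python (an illustration, not from the original posts; it uses the electron figures quoted in the thread, and the sqrt(N) drop assumes the read noise is uncorrelated between pixels):

```python
import math

def dynamic_range_stops(saturation_e, read_noise_e):
    """Engineering DR in stops: log2(full well) - log2(read noise)."""
    return math.log2(saturation_e) - math.log2(read_noise_e)

# Figures quoted in the thread for the D800 sensor (per pixel)
sat, noise = 44972, 3.0
per_pixel = dynamic_range_stops(sat, noise)

# Average N pixels into one "superpixel": the per-pixel saturation
# point is unchanged, but uncorrelated read noise drops by sqrt(N).
N = 4
binned = dynamic_range_stops(sat, noise / math.sqrt(N))

print(round(per_pixel, 2))           # ~13.87 stops per pixel
print(round(binned - per_pixel, 2))  # +1.0 stop from 4:1 binning
```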

bdunbar79

  • Canon EF 300mm f/2.8L IS II
  • *******
  • Posts: 2601
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #107 on: September 27, 2012, 06:42:13 PM »
Canon cameras don't suck nearly as bad as this thread does.
2 x 1DX
Big Ten, GLIAC, NCAC

bdunbar79

  • Canon EF 300mm f/2.8L IS II
  • *******
  • Posts: 2601
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #108 on: September 27, 2012, 06:55:00 PM »
No one says that Canon sucks, but they have outdated sensor tech compared to others.
It seems to be difficult for some to separate facts from feelings.


Except the 1DX right?  I hope you don't say that sensor tech is outdated, please don't.
2 x 1DX
Big Ten, GLIAC, NCAC

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4627
  • EOL
    • View Profile
    • Nature Photography
Re: Who said Canon cameras suck?!?
« Reply #109 on: September 27, 2012, 07:15:43 PM »
I do follow the above. Here are some things I don't quite follow:

You state this:
Quote
The friggin sensor has an average read noise level of around 3 electrons, and a maximum saturation point (at ISO 100) of 44972 electrons. Those FACTS about the D800 sensor DO NOT CHANGE, no matter what you do with software.

I take it that the above refers to the read noise of a single pixel?

Your definition of dynamic range if I understand correctly, is log2( saturation point ) - log2( read noise ).

Now if I'm allowed to average two adjacent pixels into a "superpixel", the saturation point won't change, but the read noise will go down.

That's not a "software" trick, it's the fact that the "hardware" doesn't consist of a single pixel.

Read noise MAY go down. It's not guaranteed to go down. Simple logical exercise: let's assume a hypothetical sensor has an average read noise of 3 levels. Take four adjacent deep shadow pixels, A - D, each with a level anywhere from 2 to 4: A2 B3 C2 D4. If we average those pixels together we get (2 + 3 + 2 + 4)/4 = 11/4 = 2.75. Our read noise is 3, and our averaged pixels are 2.75. Well, since we can't actually have fractional levels in an image stored as integers, that's still 3. We didn't really change anything. We could try to assume that the signal for all four pixels is 2, which would result in an average of 2...but since our average read noise is 3, we could never really be sure whether that averaged result of 2 is actual image data or just noise, or 50/50 of each. But let's call it a win anyway...you gained 1 level of DR by averaging four pixels with a level of 2 together. You could also have this: A3 B4 C4 D3. That averages to 3.5. You might even have A4 B4 C4 D4, which obviously averages to a level of 4. We can't really gain much in terms of DR around our average read noise of 3. Sometimes the average of a few pixels might be less, sometimes it might be more. But it's a freaking average...if we're averaging information around read noise...we're going to end up with something pretty much the SAME as our read noise.

Additionally, a stop is a doubling. You have effectively 9,410 levels with a 13.2 stop sensor (2^13.2). A 14.4 stop sensor would have about 21,619 levels. That's a massive difference. Logically, just with some basic deduction, does anyone SERIOUSLY believe they are gaining roughly 12,200 additional distinct levels, more than their already amazing 13.2 stop image had in the first place, simply by averaging some slightly noisy pixels when downscaling??? Simple logical deduction...does anyone really, truly, honestly believe they are gaining that much via the simple act of downscaling? I mean, the D800 is amazing...but not THAT amazing.
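The stop-to-level arithmetic is easy to check (illustrative Python; a stop is a doubling, so s stops correspond to roughly 2^s distinct linear levels):

```python
# Stops of DR -> approximate count of distinct linear levels.
# Each additional stop doubles the count.
for stops in (13.2, 14.4):
    print(stops, round(2 ** stops))
# 13.2 -> 9410 levels, 14.4 -> 21619 levels: a difference of ~12,209
```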

cliffwang

  • Canon 7D MK II
  • *****
  • Posts: 492
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #110 on: September 27, 2012, 07:18:06 PM »
No one says that Canon sucks, but they have outdated sensor tech compared to others.
It seems to be difficult for some to separate facts from feelings.


Honestly I don't care if the sensor tech is outdated or not.  I am happy with my 5D3.  However, I also agree Nikon has a better sensor.  I want better DR on my 5D3, but the truth is that's impossible.  What I can do is just enjoy my 5D3 and give Canon more time and a chance.
Canon 5D3 | Samyang 14mm F/2.8 | Sigma 50mm F/1.4 | Tamron 24-70mm F/2.8 VC | Canon 70-200mm F/2.8 IS MK2 | Canon 100mm f/2.8 Macro L | Canon Closed-up 500D | 430EX | Kenko 2x Teleplus Pro 300 | Manfrotto Tripod

elflord

  • 5D Mark III
  • ******
  • Posts: 705
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #111 on: September 27, 2012, 07:43:57 PM »
Read noise MAY go down. Its not guaranteed to go down.

It's pretty straightforward to state assumptions under which it does go down (basically, unless the noise is perfectly correlated, it goes down; if it's uncorrelated and Gaussian, the averaged noise is inversely proportional to sqrt(N)).

Quote
Simple logical exercise: let's assume a hypothetical sensor has an average read noise of 3 levels. Take four adjacent deep shadow pixels, A - D, each with a level anywhere from 2 to 4: A2 B3 C2 D4. If we average those pixels together we get (2 + 3 + 2 + 4)/4 = 11/4 = 2.75. Our read noise is 3, and our averaged pixels are 2.75. Well, since we can't actually have fractional levels in an image stored as integers, that's still 3.

Now you're conflating quantization loss with noise. They are two different things.

Anyway, in the above example, we have a read noise of 3 for each pixel. When we average those pixels, our read noise is 1.5 (that's 3/sqrt(4)) for the merged pixel. So we went from 3 +/- 3 to 2.75 +/- 1.5. In other words, we went from being at the noise baseline to being outside it.
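The sqrt(N) drop is easy to verify with a quick simulation (an illustration, not from the original post: a flat deep-shadow signal with Gaussian read noise of 3 levels, averaged in non-overlapping 4-pixel blocks):

```python
import random
import statistics

random.seed(1)

# Flat signal of 100 with Gaussian read noise of 3 levels per pixel
true_signal, read_noise, n = 100.0, 3.0, 400_000
pixels = [random.gauss(true_signal, read_noise) for _ in range(n)]

# Average non-overlapping blocks of four pixels into "superpixels"
supers = [sum(pixels[i:i + 4]) / 4 for i in range(0, n, 4)]

print(round(statistics.stdev(pixels), 1))  # ~3.0 (per-pixel noise)
print(round(statistics.stdev(supers), 1))  # ~1.5 (3 / sqrt(4))
```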


Quote
We didn't really change anything.

Yeah, but we did. We dropped our noise baseline from 3 to 1.5. That's an extra stop in the shadows.

Quote
Additionally, a stop is a doubling. You have effectively 9410 levels with a 13.2 stop sensor (2^13.2).

Actually, there is no guarantee that you have that many levels -- it's only true if the scale is linear (let's assume this anyway) AND the second level on your scale is double the first. That's the difference between the top and bottom, but it doesn't say anything about the number in between. There could conceivably be either fewer or more than 2^13.2. This is a bit of a digression, but I'm only pointing it out because people get confused and think that the ADC is a hard bound on the dynamic range. It needn't be.

Quote
A 14.4 stop sensor would have about 21,619 levels. That's a massive difference. Logically, just with some basic deduction, does anyone SERIOUSLY believe they are gaining roughly 12,200 additional distinct levels, more than their already amazing 13.2 stop image had in the first place, simply by averaging some slightly noisy pixels when downscaling???

Absolutely. To make this a little simpler, suppose our read noise is 6 to begin with. Then the "levels" 800 and 803 are indistinguishable.
Then we average 4 pixels, which reduces read noise by a factor of 2, to 3. From 800 to 860, if read noise is 6, we can't really resolve 60 "levels"; we can resolve about 10 (800, 806, 812, ...). When we halve the read noise, we can resolve twice as many distinct levels (e.g. 20: 800, 803, 806, ...). But this is somewhat beside the point anyway, because dynamic range is just that -- it's the difference between upper and lower level, not the ability to resolve intermediate levels.
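The step-counting in that example can be sketched as follows (a toy model, not from the original post, that treats codes closer together than the noise floor as indistinguishable):

```python
def resolvable_steps(lo, hi, noise):
    """Distinguishable levels between lo and hi when adjacent codes
    closer than the noise floor can't reliably be told apart."""
    return (hi - lo) // noise

print(resolvable_steps(800, 860, 6))  # 10 steps at a read noise of 6
print(resolvable_steps(800, 860, 3))  # 20 steps once noise is halved
```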

Quote
Simple logical deduction...does anyone really, truly, honestly believe they are gaining that much via the simple act of downscaling?

No, it's not logical deduction -- it's an appeal to (bad, in this case) intuition.

Quote
I mean, the D800 is amazing...but not THAT amazing.

I'm not concerned here with the D800 or cheerleading for one camera or another.
« Last Edit: September 27, 2012, 08:02:04 PM by elflord »


jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4627
  • EOL
    • View Profile
    • Nature Photography
Re: Who said Canon cameras suck?!?
« Reply #112 on: September 27, 2012, 08:07:22 PM »
Yeah, but we did. We dropped our noise baseline from 3 to 1.5. That's an extra stop in the shadows.

Well, I think this one statement is most significant. Yes, we gained an extra stop in the shadows; however, that stop, in terms of luminance levels gained, is insignificant in the larger picture...it's 1.5 levels' worth, not 6,000+ levels' worth. If the D800 camp here is arguing that they are gaining an extra 2.3 stops on the opposite end from the one I've classically been looking at the problem from, that is an entirely different story, and far more realistic. If I flip my mental model around and look at it from "upside down", then the gain in the D800 is, what...a few dozen levels' worth from the read noise improvement over their previous sensors (and maybe a few dozen more relative to Canon sensors)...grand total? Throw in the longer foot that is characteristic of Nikon's default tone curves, and you might have a few hundred levels total, which now realistically boils down to the kind of shadow wiggle-room we've all been seeing (which is nowhere near 12,000+ levels, and need not be, to produce the kind of shadow lifting we have.)

Coming from Canon, I've always seen levels gained by improvement in DR on the highlight end, and that's how I've generally looked at the problem. It takes kind of a half-paradigm shift to think about the entire problem from the shadow end, but it certainly makes more logical sense.

elflord

  • 5D Mark III
  • ******
  • Posts: 705
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #113 on: September 27, 2012, 08:41:45 PM »
Yeah, but we did. We dropped our noise baseline from 3 to 1.5. That's an extra stop in the shadows.

Well, I think this one statement is most significant. Yes, we gained an extra stop in the shadows,

Yes, and that's what dynamic range is, right? As you defined it, it's log2(saturation level) - log2(noise level), so we did gain a stop of dynamic range. Because you can adjust the exposure, an extra stop in the shadows is interchangeable with an extra stop at the other end.

Quote
however that stop, in terms of luminance levels gained, is insignificant in the larger picture...its 1.5 levels worth, not 6,000+ levels worth.

Number of luminance levels is a different thing to dynamic range. However, as I explained, we not only gain the stop of dynamic range, we gain double the luminance levels.

e.g. whereas previously, we could only resolve 800,806,812, ..., with the reduced noise we can resolve 800,803,806,809 ... ,

so it really is 6000 levels worth.

This thinking that before you had the numbers 1-1024 and after you have 0.5 and 1-1024 which is "one more level" is simplistic and wrong. You actually also get 1.5, 2.5, ... etc. You get these because you can resolve more due to reduced noise. Or, if you like, you push them a stop and you get 1-2048.

Quote
If the D800 camp here is arguing that they are gaining an extra 2.3 stops on the opposite end that I've classically been looking at the problem from, that is an entirely different story, and far more realistic.

(1) dynamic range is different from the number of resolvable luminance levels, and (2) reducing noise does increase both, so the distinction is not as important as you make it out to be.

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4627
  • EOL
    • View Profile
    • Nature Photography
Re: Who said Canon cameras suck?!?
« Reply #114 on: September 27, 2012, 08:55:00 PM »
Yeah, but we did. We dropped our noise baseline from 3 to 1.5. That's an extra stop in the shadows.

Well, I think this one statement is most significant. Yes, we gained an extra stop in the shadows,

Yes, and that's what dynamic range is, right? As you defined it, it's log2(saturation level) - log2(noise level), so we did gain a stop of dynamic range. Because you can adjust the exposure, an extra stop in the shadows is interchangeable with an extra stop at the other end.

Quote
however that stop, in terms of luminance levels gained, is insignificant in the larger picture...its 1.5 levels worth, not 6,000+ levels worth.

Number of luminance levels is a different thing to dynamic range. However, as I explained, we not only gain the stop of dynamic range, we gain double the luminance levels.

e.g. whereas previously, we could only resolve 800,806,812, ..., with the reduced noise we can resolve 800,803,806,809 ... ,

so it really is 6000 levels worth.

This thinking that before you had the numbers 1-1024 and after you have 0.5 and 1-1024 which is "one more level" is simplistic and wrong. You actually also get 1.5, 2.5, ... etc. You get these because you can resolve more due to reduced noise. Or, if you like, you push them a stop and you get 1-2048.

Quote
If the D800 camp here is arguing that they are gaining an extra 2.3 stops on the opposite end that I've classically been looking at the problem from, that is an entirely different story, and far more realistic.

(1) dynamic range is different from the number of resolvable luminance levels, and (2) reducing noise does increase both, so the distinction is not as important as you make it out to be.

When talking about DR on a sensor, I agree, it's not the same thing as levels (since we're talking about an analog signal). But if we're talking about downscaling a D800 image and gaining dynamic range, everything is about levels of luminance. In my original example I simply defined noise in the context of a digitized image as being 3 levels. If we downsample an image and average noise two-fold, then the number of levels that constitute noise is between 1 and 2, and may vary a bit by pixel. I am not sure how your example of "previously, we could only resolve 800, 806, 812...now we can resolve 800, 803, 806, 809" is accurate in the context of downscaling an image. We're not talking about resolving anything here; we're talking about a three-component pixel with a 0-16383 level range each, and there is nothing preventing us from using every single one of those levels. The thing that diminishes our post-digitization DR is noise, and averaging it by downscaling...as you described...reduces our noise from 3 levels to 1.5 levels. It doesn't change our ability to have digitized (post-ADC) pixels at any and every level between 800 and 812; it simply adds the ability to have levels between 1 or 2 and 3.

Yes, it's a stop's worth of improvement, but it's not a hugely significant improvement. You state that dynamic range is not the same as levels. No, it's not; however, a CHANGE in dynamic range could be MEASURED in levels (if you're working with a digital image), or it could be measured in electrons, or signal power, etc. In the past I've looked at change in DR from the bottom up, starting in the shadows and gaining as we move towards the highlights. Flipping that problem around in my head, it's easier to differentiate the differences between 11, 13, and 14 stops. If we take 2^14, we get the potential maximum level that a fully white pixel in a sensor could be converted into by the ADC: 16384. In a perfect sensor, our entire dynamic range, measured in levels, would be from 0 to 16383. However, working down from the "top", our stops of dynamic range as measured in levels can be divided up into the following:

Stop    Levels in Stop
1       8192
2       4096
3       2048
4       1024
5       512
6       256
7       128
8       64
9       32
10      16
11      8
12      4
13      2
14      1

If read noise is a whole stop's worth (e.g. a Sony Exmor sensor), and it eats away from the "bottom", you aren't losing much. If read noise is a few stops' worth (3 to 4, e.g. a Canon sensor), you could be losing 7, maybe 15 distinct levels. This is highly simplified; the conversion from sensor to digital isn't as ideal and linear as this, and actual "level allocation" (for lack of a better term/concept) probably results in fewer levels in the upper stops and more levels in the lower stops. But I think this illustrates my point. Canon sensors lose stops 12-14 (and maybe even part of stop 11) to read noise, while Sony Exmor only loses stop 14 to read noise (from the standpoint of digitized pixels, rather than electrons.)
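The table and the 7-vs-15 figure can be generated directly (illustrative Python, assuming the same ideal linear 14-bit scale the post assumes):

```python
BITS = 14  # ideal linear 14-bit scale: codes 0..16383

# Counting down from the top, each stop spans half the codes of the
# one above it: stop 1 covers 8192 levels, stop 14 covers 1.
levels_per_stop = {stop: 2 ** (BITS - stop) for stop in range(1, BITS + 1)}

def levels_lost_to_noise(bottom_stops):
    """Distinct codes swallowed if read noise eats the bottom k stops."""
    return sum(2 ** (BITS - s)
               for s in range(BITS - bottom_stops + 1, BITS + 1))

print(levels_per_stop[1], levels_per_stop[14])        # 8192 1
print(levels_lost_to_noise(3), levels_lost_to_noise(4))  # 7 15
```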
« Last Edit: September 27, 2012, 09:14:21 PM by jrista »

tnargs

  • Rebel T5i
  • ****
  • Posts: 138
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #115 on: September 27, 2012, 09:26:04 PM »
...I am happy with my 5D3.  However, I also agree Nikon has better sensor. ...

Nikon don't make better sensors, though. It's an open secret that Nikon uses Sony sensors.  :-X This means if Sony catches a cold, Nikon develop pneumonia. If Sony has a production problem, Nikon has a supply problem and a recall problem. If Sony's FF sensor production stalls or has issues, how much will it hurt Sony? Ah, but how much will it hurt Nikon?

Also, Nikon are therefore losing their in-house sensor capability. If Canon have a particular area they want to research and develop their sensors, they just power ahead and do it. If Nikon have a similar wish, they write a begging letter to Sony. If Sony's imaging priorities start to diverge from what Nikon would like, too bad, Nikon products will suffer.

And this flows on to the issue of integration. By controlling all aspects of product development, Canon can develop fully integrated products. One can expect them to use this to make better *cameras*. A sensor is not a camera.  :o 

If you are wondering whether this is terribly important, ask Apple. Nikon are becoming the PC of cameras next to Canon's Apple. The PC might have a CPU with a few tenths of a GHz more and an extra MB of cache, but in terms of integrated performance for the end user...... It is impossible for them to keep up for long while they have to take what Sony delivers and build a camera and processor around it, with falling in-house sensor tech capability. IMHO.

LetTheRightLensIn

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4010
    • View Profile
Re: Who said Canon sensors suck?!?
« Reply #116 on: September 27, 2012, 10:24:24 PM »
I fully understand the point of normalization for comparison. I also think its a fundamentally flawed concept.


You think most of engineering and science is flawed then.

Quote
You need to get some new material, because repeatedly trotting the "Well you have to compare on an equivalent playing field" argument out over and over just becomes abrasive after a while.

And what about your non-stop repeated claims that this basic concept is flawed?

Quote
I KNOW your argument. Listen to mine: What you do in post with scaling DOES NOT TELL YOU what the hardware can do IN TERMS OF DR. It only tells you what SOFTWARE can do in terms of SIMULATING a DR gain. Mucking with an image in post doesn't change the capabilities of the hardware though. ....
I'm a printer. ....

No need to 'play' with software. Just print from the two cameras and stand far enough back from the higher MP print until it looks the same size as the smaller print or the details captured become equal.


Quote
Normalized comparisons tell me about SOFTWARE. They don't tell me anything about DSLR HARDWARE.

Not really true at all.

Quote
I can't photograph a scene with 14.4 stops in a single shot with the D800.

Not maintaining 36MP of detail you can't.

It just lets you make a fair RELATIVE comparison between the two cameras. It's not so much about the actual numbers, unless you do scale to the exact size used.


LetTheRightLensIn

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4010
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #117 on: September 27, 2012, 10:25:17 PM »
whatever did we do back when we had to properly expose.

If you knew how to properly expose, you would realize that talk about wanting more DR is not primarily about 'proper exposure'.


LetTheRightLensIn

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4010
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #118 on: September 27, 2012, 10:26:06 PM »
Canon cameras don't suck nearly as bad as this thread does.

I agree with that!  ;D

elflord

  • 5D Mark III
  • ******
  • Posts: 705
    • View Profile
Re: Who said Canon cameras suck?!?
« Reply #119 on: September 27, 2012, 10:57:41 PM »
When talking about DR on a sensor, I agree, its not the same thing as levels (since were talking about an analog signal). But if were talking about downscaling a D800 image and gaining dynamic range, everything is about levels of luminance.
 In my original example I simply defined noise in the context of a digitized image as being 3 levels. If we downsample an image and average noise by two fold, then the number of levels that constitute noise is between 1 and 2, and may vary a bit by pixel. I am not sure where your example of "previously, we could only resolve 800, 806, 812...now we can resolve 800, 803, 806, 809" is accurate in the context of downscaling an image. Were not talking about resolving anything here, were talking about a three-component pixel with a 0-16384 level range each, and there is nothing preventing us from using every single one of those levels. The thing that diminishes our post-digitization DR is noise, and averaging it by downscaling...as you described, reduces our noise from 3 levels to 1.5 levels. It doesn't change our ability to have digitized (post-ADC) pixels at any and every level between 800 and 812, it simply adds the ability to have levels between 1 or 2 and 3.

Yes, it's a stop's worth of improvement, but it's not a hugely significant improvement.

It is what it is -- a 1 stop improvement in dynamic range. As we agreed, dynamic range is log2(saturation point) - log2(noise level).
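That definition can be computed directly; a minimal sketch in Python, using illustrative values (a 14-bit saturation point of 16384 levels, and the 3-level vs. 1.5-level read noise figures from the discussion above — not measurements from any particular sensor):

```python
import math

def dynamic_range_stops(saturation, noise):
    """Dynamic range in stops: log2(saturation point) - log2(noise level)."""
    return math.log2(saturation) - math.log2(noise)

# Halving the noise (3 levels -> 1.5 levels) adds exactly one stop of DR,
# regardless of where the saturation point sits.
before = dynamic_range_stops(16384, 3.0)   # about 12.4 stops
after = dynamic_range_stops(16384, 1.5)    # about 13.4 stops
print(after - before)  # 1.0
```

The one-stop gain falls straight out of the logarithm: halving the noise subtracts 1 from log2(noise).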

I'd also point out that because you can either adjust exposure or use a different gray point, an extra stop in the shadows is interchangeable with an extra stop in the highlights -- you can always expose a stop lower. So worrying about whether you gain a stop in the shadows or in the highlights is a bit off base. Dynamic range is dynamic range; the number of levels is a different beast ...

Now regarding the number of levels -- your ADC could have every level between 800 and 812, but that doesn't mean that you have that many distinct levels. At some point, if the noise is large enough, the number of "levels" you have doesn't matter. For example, suppose you start with 16384 levels, then add two low-order bits and assign them randomly. I think we agree that after adding those bits we don't really have 65536 (16384 × 2 × 2) "levels", even if we "used" that many. Back to the example we were discussing: if we have a 14-bit ADC and our noise level is 3, the lowest-order bit is close to random.
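That random-bits thought experiment is easy to demonstrate; a quick sketch in Python (the function name and the sample value 800 are just illustrative):

```python
import random

random.seed(1)

def pad_with_random_bits(value, extra_bits=2):
    # Shift a 14-bit code left and fill the new low-order bits randomly.
    # The representation now has four times as many codes, but the extra
    # bits carry no information about the signal.
    return (value << extra_bits) | random.getrandbits(extra_bits)

# Two readings of the same true signal can differ in their low bits,
# so those bits cannot be used to distinguish real levels: discarding
# them always recovers the original code.
print(pad_with_random_bits(800) >> 2)  # 800
print(pad_with_random_bits(800) >> 2)  # 800, even though the padded codes differ
```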

The number of distinct levels we have is the total number of ADC levels divided by the noise level (assuming read noise doesn't change across the dynamic range) -- it's essentially 2^(dynamic range) times some constant.

By the way, when we pool multiple pixels, we no longer have just 16384 possible levels -- we have roughly 16384 multiplied by the number of pixels (averaging four pixels gives multiples of 0.25; or, if you don't like fractions, you can just add the pixel values -- either way, you end up with about 65536 distinct levels). Of course, because of the above, this doesn't mean that we can distinguish between all of them.
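The counting in that parenthetical can be made exact; a small sketch, assuming n independent pixels each holding an integer code from 0 to adc_levels - 1:

```python
def pooled_levels(n_pixels, adc_levels=16384):
    # The sum of n codes, each in 0..adc_levels-1, ranges from 0 up to
    # n*(adc_levels-1), giving n*(adc_levels-1) + 1 distinct values --
    # very close to "16384 multiplied by the number of pixels".
    return n_pixels * (adc_levels - 1) + 1

print(pooled_levels(1))  # 16384
print(pooled_levels(4))  # 65533, roughly the 65536 quoted above
```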

Quote
If read noise is a whole stop's worth (as in an image from an Exmor sensor), and it eats away from the "bottom", you aren't losing much. If read noise is a few stops' worth (3 to 4, as in an image from a Canon sensor), you could be losing 7, maybe 15 distinct levels.
This is highly simplified; the conversion from sensor to digital isn't as ideal and linear as this.

I think it's perhaps a bit too simplified. Recall my previous point -- dynamic range at the bottom is interchangeable with dynamic range at the top because you can always underexpose or overexpose.

Now of course, if you insist on putting a hard limit on the number of bits available for the signal, you are more likely to suffer quantization loss with a higher dynamic range. For example, if you have 14 bits to represent 12 stops of dynamic range, you get less quantization loss than if you use 14 bits to represent 15 stops of dynamic range. In practice it seems to me that quantization loss (at least in RAW) is not the problem. Also, as I pointed out, if your read noise has a standard deviation of 3, you don't "really" have 14 bits' worth of distinct levels (the lowest-order bit is almost as good as randomly assigned), so you really would get twice as many levels if you could reduce the noise by a factor of 2.

Now if you pool multiple pixels, you do get more levels. The number of levels you get grows linearly with the number of pixels you pool, but as I pointed out, the noise goes in inverse proportion to sqrt(N), so your true number of levels increases by a factor of sqrt(N). But again, this is different from dynamic range.
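That sqrt(N) behaviour is easy to check by simulation; a rough sketch assuming Gaussian read noise with a standard deviation of 3 levels (all values illustrative):

```python
import random
import statistics

random.seed(0)

def pooled_noise_std(n_pool, sigma=3.0, trials=20000):
    """Estimate the std dev of the average of n_pool independent noise samples."""
    samples = [
        statistics.fmean(random.gauss(0.0, sigma) for _ in range(n_pool))
        for _ in range(trials)
    ]
    return statistics.stdev(samples)

# Averaging 4 pixels should cut the noise roughly in half: 3 / sqrt(4) = 1.5.
print(round(pooled_noise_std(1), 1))  # close to 3.0
print(round(pooled_noise_std(4), 1))  # close to 1.5
```

This matches the discussion above: a 2x reduction in noise from 2x2 pooling is exactly the one-stop DR gain (and sqrt(N)-fold gain in distinguishable levels) being argued over.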




« Last Edit: September 27, 2012, 11:10:04 PM by elflord »
