5D3 same max dynamic range as the 5D2???

Status
Not open for further replies.
neuroanatomist said:
Here's a question: when 'measuring' the dark noise level in the 'side masked area', what is the probability that the numbers provided for the dark signal are meaningful? He reports average values of 1024 for the 5DII and 2048 for the 5DIII - exact 2^n values. Personally, I find that suspicious, and it seems more likely that those values are not actual signal, but rather result from the camera firmware setting those pixels, which are outside the image area, to an arbitrary value as it writes out the .CR2 file. In that case, both the absolute value of the dark signal and the noise of that signal are not valid for DR determination.
Good point. Do you reckon the darkest meaningful parts of an image (blown-out black) would have a significantly lower value?
Also, if(?) you can take the blown-out highlights to be the same value every time (i.e. blown-out white for a specific sensor will always be the same value for any picture with that sensor in RAW files), is it not strange that he actually got the same DR number as DxO predicted (for the Mk2)?
 
Upvote 0
Most of this technical jargon is way above my head ::)

However, I do wonder how anyone can fairly test 2 cameras: one they own or have used, the other they've never laid hands on, nor even know anyone who has used for any length of time ???

It'd be interesting to hear all the various opinions of the people who will be using the 5DIII AFTER they've had it and have been using it for at least a month or so.
 
Upvote 0
Tijn said:
For a RAW file linked to a sensor, isn't "blown-out white" always the same value in the RAW? It's blown out at some value, right? Does that value change from picture to picture?

Do we know it's 'blown out white'? You might have noticed that reds tend to blow out more easily than blues, and a blown out red/green will show up as blown white, but may still be below peak for blue, due to the spectral sensitivities of the Bayer mask. That's one reason DxO measures noise in separate color channels then averages them. But since this guy is testing 'the same way as DxO' maybe he's doing that, but just didn't waste another 5 minutes to include that in the description of his method.
 
Upvote 0
neuroanatomist said:
Do we know it's 'blown out white'? You might have noticed that reds tend to blow out more easily than blues, and a blown out red/green will show up as blown white, but may still be below peak for blue, due to the spectral sensitivities of the Bayer mask. That's one reason DxO measures noise in separate color channels then averages them. But since this guy is testing 'the same way as DxO' maybe he's doing that, but just didn't waste another 5 minutes to include that in the description of his method.
Good, so there's also the possibility that his blown out white area wasn't blown out on all channels. I think that's a good reason to take his results with a big grain of salt. And hope they are indeed wrong. :)

Still I think you should give him a bit of credit for trying, he's not a 'bad guy' at all. With the concerns you've just expressed about the black measurement and the blown-out whites measurements, this whole discussion (flame war..?) could have been quite a bit shorter and there wouldn't be so much ad personam pollution. No need for yelling at one another, they're valid concerns. It's something completely different from someone yelling "your machine doesn't have dials, therefore your method must be faulty". Still interesting he got the same values with the 5Dmk2, however. Perhaps he had better raw files for those.
 
Upvote 0
Tijn said:
Good point. Do you reckon the darkest meaningful parts of an image (blown-out black) would have a significantly lower value?

Also, if(?) you can take the blown-out highlights to be the same value every time (i.e. blown-out white for a specific sensor will always be the same value for any picture with that sensor in RAW files), is it not strange that he actually got the same DR number as DxO predicted (for the Mk2)?

The dark signal may or may not be lower, but more importantly, the noise may be different, and that's the denominator in the equation (in fact, I'm not sure why he's subtracting the dark signal from the max first - the standard calculation is full well capacity over read noise, which translates to log2(average peak / dark std dev)).

Semantics, but I wouldn't say DxO 'predicted' DR for the 5DII - they measured it. What this guy is doing is trying to predict it, and I'll take a proper measurement over a prediction any day. I'm not saying he's definitely wrong, just that there's not enough data to go on. If he downloads the IR RAW files for 10 more cameras, does the same test, and comes out spot on with DxO's measurements for them, that would be convincing.
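For reference, the standard engineering calculation described above is easy to sketch. This is a minimal illustration with made-up numbers: the saturation and read-noise values below are assumptions chosen for the example, not measurements from either camera.

```python
import math

# Standard engineering DR: DR (stops) = log2(peak signal / read noise).
# Illustrative values only - a 14-bit file tops out near 16383, and the
# read noise here is an arbitrary placeholder, not a real 5DII/5DIII figure.
peak_signal = 15500.0   # average raw value at saturation
read_noise = 6.5        # standard deviation of the dark signal, in raw units

dr_stops = math.log2(peak_signal / read_noise)
print(f"Engineering DR ~= {dr_stops:.2f} stops")
```

Note that no dark-signal subtraction appears in this version; it is purely the peak-over-noise ratio in stops.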
 
Upvote 0
Came across this on the troll-infested forums of DPR (eewww!):

http://forums.dpreview.com/forums/read.asp?forum=1032&message=40826910

Pathetic regurgitation of the band-of-brothers mantra
Posted by David Franklin
Date/Time 5:59:42 PM, Tuesday, March 06, 2012 (GMT)


Look, I am not a defender or "fan" of Canon in general, and certainly not of the 5DIII, which I have never even held, much less shot and tested. But I am a working pro who shoots mostly at base or low ISO, sometimes of very technically difficult subjects with difficult lighting and lots of PP. I have never, ever, in tens of thousands of exposures using a 1DsIII (supposedly very close in quality to the 5DII), had a single issue with "banding." I'm not saying that banding is impossible to show after torturing a file to an extent that is beyond the realm of reasonably expected real-life use, perhaps from shooting a very difficult image very, very badly, with no planning or execution of alternate exposures to fall back on. But, in the history of photography, this has been called failure on the part of the photographer, rather than an issue with which to pillory a camera (or sensor, or film) company. Showing banding is apparently a very nice tech hobby for some, but not a terribly important issue in cases of even decent, much less good, photography.

And, by the way, just for the heck of it, I decided to do a small experiment. Because I don't have access to any raw file or truly perfected raw converter for a Canon 5DIII, I decided to look at that paragon of sensor goodness, the Nikon D800. I picked, pretty much at random, the only D800 file I had on my hard drive, the low ISO "Library" sample, an otherwise very amazing image for its great detail with very limited deep shadows, making it not the best image example to look for banding faults. When I merely applied the 100% shadow lift in PSCS5, and looked at the deepest shadow areas (very very small areas in an image that is otherwise very evenly lit for mid and low tones), and examined it at 200-300%, voila, what looks to me like cross-hatched shadow banding appears.

So let's wait for some real verifiable testing with real verifiable Canon raw files and consider the extreme nature of the image manipulation before we pronounce some unverifiable and dubiously achieved judgment on the ultimate quality of 5DIII files.

Regards,
David
 
Upvote 0
neuroanatomist said:
It's basing a conclusion on a questionable analysis of a single image.

On that...

Something that's creasing me up about yer man's assertions is that I've been able to torture-test umpteen D7000/K-5 files into banding of a sort practically indistinguishable from what the pixel-peepers on DPR have dragged out of the 5D Mk III file - so what does that prove about the Sony sensor?

Nothing, Real World.

I've also been able to drag 7D files up by four or five stops without any problems with the banding that some "experts" insist is guaranteed from that camera in such circumstances, and I'll bet a large lump of cash that I'll be able to do likewise with 5D Mk III files.

It's actually a really easy issue to deal with if you simply use the right converter - for example, have you seen how much clean detail Lr 4 can bring out with the Shadows slider? I was playing last night with some 7D files (shrubs in bright light with black shadows, as it happens), and could render the shadow areas almost "daylight" with no banding penalty.

DPP and - of all things - Photodirector, are also excellent in how they deal with shadows without banding.

And who on Earth needs (emphasis intentional) that many stops of shadow recovery anyway?
 
Upvote 0
It's only 4:30 in the morning here so don't jump all over me, but in terms of a digital image, just thinking logically, there's a very finite white and a very finite black. In terms of Photoshop, for instance, black is 0, white is 255. You cannot extend that number range any more than that. Given that a stop, in essence, is 100% more light than the prior stop, I suppose camera manufacturers can in essence desensitize files... make the white detail even more subtle, making shadows even more subtle, giving the illusion of more DR, more stops, but A) there's no getting around digital files' limits, and B) there will be even less information in each stop range, am I right?
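The "less information in each stop" intuition can be made concrete: in a linear raw file, each stop down from saturation halves the number of code values available. A quick sketch, pure arithmetic with no camera data involved:

```python
# How many linear code values each stop occupies in an N-bit file.
# The brightest stop spans half of all values, the next stop half of
# the remainder, and so on down into the shadows.
def levels_per_stop(bits: int, stops: int) -> list[int]:
    total = 2 ** bits
    levels = []
    for _ in range(stops):
        half = total // 2
        levels.append(total - half)  # values in this (brightest remaining) stop
        total = half
    return levels

print(levels_per_stop(8, 4))   # 8-bit range: [128, 64, 32, 16]
print(levels_per_stop(14, 4))  # 14-bit raw: [8192, 4096, 2048, 1024]
```

So the poster's point B holds for linear encoding: shadow stops get far fewer levels than highlight stops, which is exactly why deep-shadow lifts expose noise and posterization first.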
 
Upvote 0
LetTheRightLensIn said:
3kramd5 said:
So... what are you doing? Blowing the highlights and measuring, blowing the shadows and measuring, and computing the magnitude between them? If you repeat the same calcs for every 5D3 sample do you get the same thing? If you repeat for every 5D2 image do you get the same 11.2? Or are you just measuring the DR of a single image?

I'm not doing anything. I'm using IR's files. Thankfully they blew the highlights on some specular highlights, so that is where I got the raw saturation levels from. The dark current noise I measured from the masked area of the file that was cut off from light. It seems to be repeatable to around +/- 0.1 stops across three quick tries on files; doing the same thing, my 5D2 values happen, by chance, to match DxO exactly to the tenth. Different copies might vary +/- 0.15 or so, perhaps, unless you got a real weird copy.

Setting aside the log function to represent it in stops, I'm just trying to understand the calculation.

Correct me if I'm wrong: masked area = some part of the sensor that is physically blocked from light but still records brightness values. By definition, this would be the darkest part of any exposure.

So when you say you're measuring noise, are you reading random values from that black area (which in theory should be 0) and determining the minimum level at which noise no longer occurs?

Then a similar procedure is done in a blown out area?

Then the magnitude between dark and light is DR?
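If I've understood the method being debated, it can be sketched in a few lines. This is a guess at the procedure, run on a synthetic array rather than a real .CR2 (decoding Canon raws would need an external decoder, which is deliberately left out), and every number in it is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 14-bit 'raw' frame: a dark patch with Gaussian read noise around
# a small offset, and a blown patch clipped at the 14-bit ceiling. These are
# illustrative assumptions, not values taken from any real camera file.
SAT = 2**14 - 1                                    # 16383, 14-bit saturation
dark_patch = rng.normal(loc=30, scale=6, size=10_000)
blown_patch = np.full(10_000, SAT, dtype=float)    # specular highlight, fully clipped

noise = dark_patch.std()     # read-noise estimate: std dev of the dark area
peak = blown_patch.mean()    # raw saturation level from the blown area

dr_stops = np.log2(peak / noise)
print(f"Estimated DR ~= {dr_stops:.1f} stops")
```

Whether this translates to a valid measurement depends entirely on whether the 'dark patch' contains real read noise, which is precisely the point in dispute when the patch is firmware-masked rather than optically black.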
 
Upvote 0
3kramd5 said:
Correct me if I'm wrong: masked area = some part of the sensor that is physically blocked from light but still records brightness values. By definition, this would be the darkest part of any exposure.

Allow me to correct you. ;) Although the OP certainly implies that is the case, the 'masked area' the OP is referring to is not physically blocked from light - lenses project an image circle, and for an EF lens, a FF sensor is the largest 3:2 rectangle that can be inscribed within that circle, such as the red box below:

220px-Crop_Factor.JPG


So, what he's calling the 'side masking area' - the regions to the left and right of the red box - are actually being illuminated by light from the lens. Why, then, are those regions 'black' in the RAW file? Because those regions of the sensor are electronically turned off (technically, set to an arbitrary value). Given that, I remain unconvinced that the method described by the OP is valid.
 
Upvote 0
neuroanatomist said:
3kramd5 said:
Correct me if I'm wrong: masked area = some part of the sensor that is physically blocked from light but still records brightness values. By definition, this would be the darkest part of any exposure.

Allow me to correct you. ;) Although the OP certainly implies that is the case, the 'masked area' the OP is referring to is not physically blocked from light - lenses project an image circle, and for an EF lens, a FF sensor is the largest 3:2 rectangle that can be inscribed within that circle, such as the red box below:

220px-Crop_Factor.JPG


So, what he's calling the 'side masking area' - the regions to the left and right of the red box - are actually being illuminated by light from the lens. Why, then, are those regions 'black' in the RAW file? Because those regions of the sensor are electronically turned off (technically, set to an arbitrary value). Given that, I remain unconvinced that the method described by the OP is valid.

Those regions are being illuminated, but what's there to record those values beyond the area of the sensor?

Does the sensor physically extend beyond the 36x24 area and they disable the band of pixels outside the perimeter? I guess that makes sense from a design standpoint (to ensure you get the full desired frame, you cut the sensor a little larger and hold back the outliers).

I'd never really given a lot of thought to sensor design before. Thanks for bearing with me.
 
Upvote 0
altenae said:
We are all spending hours and hours on numbers.
Very sad. Very sad.

Some time ago I saw an image taken from a MF and I was blown away.
I checked dxo and the DR was worse than the Sony Nex 10.

There are two things :
Numbers and what your eyes see.

I am going out now to take some pictures.
Or should I stay home to do some more calculations.

Please guys, use the camera for what it was made for.
altenae said:
Numbers , numbers, numbers.
Go outside take pictures.
If you are happy and your customer is happy, then who cares if the DR is 10 or 11.

The same story every time a new camera is about to be released.

Guys, I agree with both of you, it's all about the image.

Now let's get it straight.
This is the Canon Rumors forum. It's a specific brand product discussion point. It's not about the trade, it's about the tool.
If I want to talk about photography I'll go and discuss it in a photography forum. If I want to talk about the tools, then here is the right place.
 
Upvote 0
3kramd5 said:
Those regions are being illuminated, but what's there to record those values beyond the area of the sensor?

Does the sensor physically extend beyond the 36x24 area and they disable the band of pixels outside the perimeter? I guess that makes sense from a design standpoint (to ensure you get the full desired frame, you cut sensor a little larger and hold back the outliers).

No, technically the sensor is physically 36x24mm (well, approximately 36x24 as Canon states), but not all of that is active space - the edges aren't used. So, the 5DIII is '22 MP' but if you look at the detailed specs, it's actually specified as 22.3 million 'effective pixels' but 23.4 million 'total pixels'. It's those extra 1.1 million pixels (the non-effective ones) that the OP is sampling from in the 'side masking area'. The point is, though, that they are pixels which are being illuminated, but not read out in the RAW file - so, I don't see how it's valid to assume those pixels are representative of image pixels exposed to no light.
 
Upvote 0
neuroanatomist said:
No, technically the sensor is physically 36x24mm (well, approximately 36x24 as Canon states), but not all of that is active space - the edges aren't used. So, the 5DIII is '22 MP' but if you look at the detailed specs, it's actually specified as 22.3 million 'effective pixels' but 23.4 million 'total pixels'. It's those extra 1.1 million pixels (the non-effective ones) that the OP is sampling from in the 'side masking area'. The point is, though, that they are pixels which are being illuminated, but not read out in the RAW file - so, I don't see how it's valid to assume those pixels are representative of image pixels exposed to no light.

Well, for whatever reason, if they do indeed merely use software to black out border pixels that have been exposed, then it seems they have little bearing on actual DR. As they say in engineering analysis, garbage in = garbage out.

Maybe there is a correlation between actual exposed black and what value Canon chooses to apply to those pixels, however, in which case the method may translate to actual DR.

iunno.
 
Upvote 0
LetTheRightLensIn said:
(and if you look at all the Canon press they kept talking about it being better.... at mid and high isos. I was hoping they just forget to mention improved low iso, but i guess they didn't mention it because they didn't do much there this time. at least the high iso stuff should be better though.)

Just shoot at a higher ISO, moar DR, problem solved. There's more than one way to starch your nose.

TIMTOWTDI
 
Upvote 0
neuroanatomist said:
No, technically the sensor is physically 36x24mm (well, approximately 36x24 as Canon states), but not all of that is active space - the edges aren't used. So, the 5DIII is '22 MP' but if you look at the detailed specs, it's actually specified as 22.3 million 'effective pixels' but 23.4 million 'total pixels'. It's those extra 1.1 million pixels (the non-effective ones) that the OP is sampling from in the 'side masking area'. The point is, though, that they are pixels which are being illuminated, but not read out in the RAW file - so, I don't see how it's valid to assume those pixels are representative of image pixels exposed to no light.

This is exactly correct. All Canon sensors have two areas of "black masked pixels" at the left and right edges of the sensor. Based on the CR2 format, there are three columns of pixels on each side. There have been many reports that the base value of these pixels is fixed at 1024, with a deviation of up to about 6 units, so you might see values here between 1018 and 1030. The deviation around 1024 may be due to read noise, can't say for sure. Either way... using the masked pixels to determine black level will most likely not produce valid results. (Why Canon does this or exactly how these masked pixels are intended to be used is not entirely known; the general assumption is that they are for calibration purposes. If one were willing to dig into Canon's public RAW processing algorithms, the reason could probably be determined... I have not done so myself, and none of the resources I've found have any definitive information about exactly how these masked pixels are used.)
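To make the 1024-pedestal point concrete: a raw value only becomes signal after the black-level offset is subtracted, and a band of pixels pinned near a fixed value tells you almost nothing about real read noise. A hedged sketch of both points on synthetic data; the pedestal and noise figures here are assumptions, not measured Canon values:

```python
import numpy as np

rng = np.random.default_rng(0)

BLACK_LEVEL = 1024  # widely reported Canon 14-bit black pedestal (assumed here)

# 'Real' optically black pixels: pedestal plus Gaussian read noise.
optically_black = rng.normal(loc=BLACK_LEVEL, scale=3, size=5_000)

# Firmware-set masked pixels: pinned to the pedestal with no spread at all,
# as would happen if the camera simply wrote an arbitrary constant there.
firmware_masked = np.full(5_000, float(BLACK_LEVEL))

# Pedestal subtraction: only after removing the offset is the mean near zero.
signal = optically_black - BLACK_LEVEL

print(f"optically-black std dev: {optically_black.std():.2f}")   # ~3: real read noise
print(f"firmware-masked std dev: {firmware_masked.std():.2f}")   # 0: tells us nothing
print(f"mean signal after pedestal subtraction: {signal.mean():.2f}")
```

If the 'side masked area' behaves like the second array, any noise figure read from it reflects the firmware's clamping, not the sensor's read noise, which is the crux of the objection above.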
 
Upvote 0
neuroanatomist said:
3kramd5 said:
Correct me if I'm wrong: masked area = some part of the sensor that is physically blocked from light but still records brightness values. By definition, this would be the darkest part of any exposure.

Allow me to correct you. ;) Although the OP certainly implies that is the case, the 'masked area' the OP is referring to is not physically blocked from light - lenses project an image circle, and for an EF lens, a FF sensor is the largest 3:2 rectangle that can be inscribed within that circle, such as the red box below:

220px-Crop_Factor.JPG


So, what he's calling the 'side masking area' - the regions to the left and right of the red box - are actually being illuminated by light from the lens. Why, then, are those regions 'black' in the RAW file? Because those regions of the sensor are electronically turned off (technically, set to an arbitrary value). Given that, I remain unconvinced that the method described by the OP is valid.

Ok, but the sensor is 24*36mm, so whatever light hits the sensor is your image. The sensor is not round. Again, correct me if I'm wrong. ;)
 
Upvote 0