A Rundown of EOS 7D Mark II Information

dtaylor said:
neuroanatomist said:
So, in other words...no, downscaling the image would not make more of the steps on the wedge fall within the DR when the picture was taken. Thanks for your definitive answer. ::)

Now I'm not so sure Neuroanatomist. In scaling his answer down to one word the answer became bold. An increase in DR? ;D

Perhaps merely a decrease in DRivel... ;)
 
dtaylor said:
LetTheRightLensIn said:
If you shrink it down to 35mm film size, you are also shrinking everything about it: all the grain and noise and detail becomes much smaller. If you then find the finest scale the 35mm film was resolving and average all the little stuff on the shrunken 8x10" frame over that scale, you get a cleaner signal at that scale.

You do not get more detail, therefore you do not get more photographic dynamic range. Blocked up shadows and highlights will still be blocked up. On a step wedge the same number of steps will be gray, black, and white.

Your problem is trying to compare high frequency noise to noise of lower frequency as if they were the same thing. You are trying to filter out the high-frequency detail but then acting as though you still need to keep all of that higher-frequency noise. If you want to compare what you'd get from each film printed to the same size and viewed from the same distance, you have to normalize your measured numbers.
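A toy version of that normalization, with purely made-up numbers (this assumes numpy; the grain levels and sampling sizes are just stand-ins for the two formats):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: uniform mid-gray patches with per-sample "grain" noise
large_format = 0.5 + rng.normal(0, 0.04, size=(2048, 2048))   # finer sampling, 8x10-like
small_format = 0.5 + rng.normal(0, 0.02, size=(512, 512))     # coarser sampling, 35mm-like

# Shrink the large-format capture to the small format's sampling grid (4x4 block average)
large_at_small_scale = large_format.reshape(512, 4, 512, 4).mean(axis=(1, 3))

print(small_format.std())          # ~0.020
print(large_at_small_scale.std())  # ~0.010, cleaner once both are measured at the same scale
```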


And see the Fred Miranda site, where a long-time Canon fan and site owner compared the 5D3 and D800, found the same large difference, and went out and bought an A7D to supplement his 5D3.

I would rather see his RAW files. My bet is that the difference is not as dramatic as whatever processing led him to believe it was.

That's not to say I fault anyone for wanting an Exmor sensor. Heaven knows I've spent extra $$$ for small gains. And processing can be easier with Exmor. But Canon is not that far behind, and this DxO nonsense is out of hand.

Well his files show results reasonably similar to what directly measuring the RAW shows. Maybe there is some difference cooked in by the RAW converter used for each, but the DxO-measured difference is large and the typical converter-based comparison looks large too. Maybe that throws it off a bit one way or the other, but if you were to program a RAW converter to handle the Nikon and Canon files in exactly the same way, you'd see the same difference DxO measures, which is pretty large.

So no, the DxO measurements for SNR and DR and such are not out of hand at all; only the fanboy nonsense is. Now, if you want to say their OVERALL sensor scores or many of their lens scores are out of hand, OK, that is fair game. Maybe even their overall high-ISO rating is fair game to pick at, since the way they weight things is perhaps a bit suspect. But don't call all the little direct measurements made-up, out-of-hand nonsense.
 
neuroanatomist said:
dtaylor said:
neuroanatomist said:
So, in other words...no, downscaling the image would not make more of the steps on the wedge fall within the DR when the picture was taken. Thanks for your definitive answer. ::)

Now I'm not so sure Neuroanatomist. In scaling his answer down to one word the answer became bold. An increase in DR? ;D

Perhaps merely a decrease in DRivel... ;)

Yet another deep answer. When you can't trick 'em you insult 'em. One almost wonders if you don't own some huge amount of Canon stock or what your deal is.

You'd make a great politician, willing to say whatever it takes and obfuscate as needed and smart enough to know how to do so.
 
It's simple – downsampling an image does not recover data that were clipped at capture. If the scene DR exceeds the DR of the sensor, those data are lost, and no amount of downsampling will bring them back. Period.
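A quick way to see it numerically (made-up values; assumes numpy): clip a tone that's brighter than the sensor's saturation point, then downsample; the clipped pixels stay clipped, because averaging has no information about what was lost.

```python
import numpy as np

full_well = 1.0                          # normalized saturation level
scene = np.full((4, 4), 4.0)             # a patch ~2 stops brighter than saturation
captured = np.minimum(scene, full_well)  # clipping happens at capture

# 2x2 downsample by averaging
down = captured.reshape(2, 2, 2, 2).mean(axis=(1, 3))

print(captured.max(), down.max())        # both 1.0: the lost highlights never come back
```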

LetTheRightLensIn said:
You'd make a great politician

Seriously, there's no call for such grievous insults! :o
 
LetTheRightLensIn said:
Your problem is trying to compare high frequency noise to noise of lower frequency as if they were the same thing.

I'm not comparing noise at all. Clipped shadows and highlights do not magically reveal detail because the noise level went down. There is no detail to reveal at that point.

DxO's definition of dynamic range is the definition some guy looking at an oscilloscope might come up with if he had never touched a camera. It is not photographic dynamic range.

Well his files show

Where can they be downloaded?
 
jrista said:
My point, in all of my comments, is that a touch UI is not the thing Canon NEEDS to focus on, and it shouldn't be the one feature that people use to decide whether to buy the 7D II or not. If the 7D II hits the streets with the same old "classic" Canon sensor technology...that, in my honest opinion, is a MASSIVE FLUB!!

I wonder though, from a business perspective, just how groundbreaking you (as Canon) want this new sensor to be? If it is groundbreaking and truly amazing, how many sales of the 5D3 and 1Dx are sacrificed? Not because those customers would go to the 7D, but because they would hold off for the 5D3 and 1Dx's successors.

I think we'll see a toned down new tech and then a relatively quick upgrade to the full-frame bodies with the full fledged new tech.

dtaylor said:
neuroanatomist said:
LetTheRightLensIn said:
You'd make a great politician

Seriously, there's no call for such grievous insults! :o

Wow...a politician...that's worse than insulting someone's mother! :o

As an actual politician: Ouch.
 
dtaylor said:
LetTheRightLensIn said:
Your problem is trying to compare high frequency noise to noise of lower frequency as if they were the same thing.

I'm not comparing noise at all. Clipped shadows and highlights do not magically reveal detail because the noise level went down. There is no detail to reveal at that point.

DxO's definition of dynamic range is the definition some guy looking at an oscilloscope might come up with if he had never touched a camera. It is not photographic dynamic range.

Well his files show

Where can they be downloaded?

Don't know. Maybe he still has them, maybe if you PM him, maybe he erased them already or doesn't want to bother.
 
neuroanatomist said:
It's simple – downsampling an image does not recover data that were clipped at capture. If the scene DR exceeds the DR of the sensor, those data are lost, and no amount of downsampling will bring them back. Period.

LetTheRightLensIn said:
You'd make a great politician

Seriously, there's no call for such grievous insults! :o

all we are trying to do is make a fair relative comparison between two cameras

This is a bit different, but imagine a 32-bit CD that had a bunch of noise, tons of it, in the least significant bit and some in the next two least significant bits, and was perfect otherwise, and they measured the noise above the floor there at the 32-bit scale. Then they measured an 8-bit CD that had a little noise in its LSB and none otherwise. Then they pretended the least significant bits of the 32-bit CD were the same as the 3 least significant bits of the 8-bit CD and compared them directly; that's kinda like ScreenDR. And PrintDR is kinda like realizing that you can't lay the three lowest bits of the 32-bit CD over the 3 lowest bits of the 8-bit CD and then say, "Man, the lowest three bits on that 32-bit CD are looking so random compared to the lowest three on that 8-bit CD; man, that 32-bit CD stinks!" If you first applied a bit shift to align like bit to like bit, to normalize it, you'd see the 32-bit CD was giving a perfect signal over all 8 of the bits the 8-bit CD was storing, while the 8-bit CD was putting noise on its least significant bit.

How is it fair to compare the LSBits of the 32bit CD to the 8bit CD's 3 least sig bits as if they were the same scale and same thing?

I mean, if all you want to use to relatively compare cameras is DxO ScreenDR, I can go grab some Canon 10D tech and make a 1000 PIXEL (not mega, just pixel) sensor out of it, and it will get a much better ScreenDR score than the 1DX will. So gee, I guess that 1DX really stinks! Such terrible DR: I open a file from both and view each at 100%, and I see so much more noise over 1000 pixels in the 1DX file, but that 10D-tech camera, man, those 1000 pixels look perfect! But no, because why are you trying to compare a 1000-pixel crop from the 1DX image to the complete 1000-pixel image from the 10D-derived monster?
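To put rough numbers on that (these figures are invented purely for illustration, and I'm ignoring fixed-pattern noise so read noise averages down as the square root of the pixels combined): compare per-pixel and the tiny old-tech sensor wins; compare at the same output size and the modern sensor wins by a mile.

```python
import math

# Invented per-pixel figures, in electrons
sensors = {
    "1000-pixel 10D-tech toy": {"pixels": 1_000,      "full_well": 40_000, "read_noise": 10.0},
    "modern 18 MP sensor":     {"pixels": 18_000_000, "full_well": 40_000, "read_noise": 25.0},
}

target = 1_000   # judge both at the same 1000-pixel output size

for name, s in sensors.items():
    screen_dr = math.log2(s["full_well"] / s["read_noise"])                # per-pixel ("Screen"-style)
    k = s["pixels"] / target                                               # pixels averaged per output pixel
    print_dr = math.log2(s["full_well"] * math.sqrt(k) / s["read_noise"])  # normalized ("Print"-style)
    print(f"{name}: per-pixel {screen_dr:.1f} stops, normalized {print_dr:.1f} stops")
```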
 
dtaylor said:
... Clipped shadows and highlights do not magically reveal detail because the noise level went down. There is no detail to reveal at that point.

DxO's definition of dynamic range is the definition some guy looking at an oscilloscope might come up with if he had never touched a camera. It is not photographic dynamic range.

You are both right and wrong at the same time ???.

Yes, downsizing does nothing for clipped shadows and highlights.

OTOH, though, the one and only definition of DR in signal processing is basically the ratio of max signal vs noise floor.
By downsizing, you are lowering the noise floor - and voila, DR gets improved too.
That's per the one and only definition of DR, that is 8).
Clipped shadows and highlights remain clipped.
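That definition is easy to demonstrate on a synthetic black frame (assumes numpy; the noise level is arbitrary): downsizing halves the noise floor, so the measured DR gains a stop, while anything clipped would of course stay clipped.

```python
import numpy as np

rng = np.random.default_rng(0)
full_scale = 1.0
noise_floor = 0.01                                         # arbitrary per-pixel noise, 1% of full scale

dark = rng.normal(0.0, noise_floor, size=(2000, 3000))     # black frame: noise only

def dr_stops(frame):
    """Engineering DR: max signal over measured noise floor, in stops."""
    return np.log2(full_scale / frame.std())

binned = dark.reshape(1000, 2, 1500, 2).mean(axis=(1, 3))  # 2x downsize by averaging

print(round(dr_stops(dark), 1))    # ~6.6 stops
print(round(dr_stops(binned), 1))  # ~7.6 stops: noise floor halved, one extra stop of "DR"
```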
 
The way DR is measured here, you simply compare max signal to the noise floor, and the max here is just perfect white: the channel 100% blown, the max well value. And we just want to compare the standard deviation about the black point at the same energy scale when comparing the cameras relative to each other. In this case they chose something like an 8MP scale. The DR a sensor can deliver this way at a 100MP scale is lower than what it can deliver at a 10MP scale. The max white point always stays the same value, and the noise about the black point gets lower as you filter out the higher energy scales of noise, normalizing to a lower and lower target resolution.
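If you want the normalization as a formula rather than prose, this is the usual reconstruction of it (my paraphrase, not DxO's actual procedure): per-pixel DR plus half the log of the resolution ratio, assuming the noise is uncorrelated between pixels.

```python
import math

def normalized_dr(per_pixel_dr_stops, sensor_mp, target_mp=8.0):
    """Adjust per-pixel DR to a common output size.

    Averaging sensor_mp pixels down to target_mp lowers the noise floor by
    sqrt(sensor_mp / target_mp), which adds 0.5 * log2(sensor_mp / target_mp) stops.
    """
    return per_pixel_dr_stops + 0.5 * math.log2(sensor_mp / target_mp)

# Invented per-pixel numbers, just to show the size of the correction:
print(round(normalized_dr(11.0, 20.0), 2))   # 11.66 stops at the 8 MP reference size
print(round(normalized_dr(13.0, 36.0), 2))   # 14.08 stops
```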
 
dtaylor said:
LetTheRightLensIn said:
Your problem is trying to compare high frequency noise to noise of lower frequency as if they were the same thing.

I'm not comparing noise at all. Clipped shadows and highlights do not magically reveal detail because the noise level went down. There is no detail to reveal at that point.

DxO's definition of dynamic range is the definition some guy looking at an oscilloscope might come up with if he had never touched a camera. It is not photographic dynamic range.

Well his files show

Where can they be downloaded?

Nobody is clipping highlights here; we take the max signal as the pure saturated channel.
 
Botts said:
jrista said:
My point, in all of my comments, is that a touch UI is not the thing Canon NEEDS to focus on, and it shouldn't be the one feature that people use to decide whether to buy the 7D II or not. If the 7D II hits the streets with the same old "classic" Canon sensor technology...that, in my honest opinion, is a MASSIVE FLUB!!

I wonder though, from a business perspective, just how groundbreaking you (as Canon) want this new sensor to be? If it is groundbreaking and truly amazing, how many sales of the 5D3 and 1Dx are sacrificed? Not because those customers would go to the 7D, but because they would hold off for the 5D3 and 1Dx's successors.

I think we'll see a toned down new tech and then a relatively quick upgrade to the full-frame bodies with the full fledged new tech.

Not really, since the 5D4 surely won't be 12fps+, and the 1DX will offer FF and who knows what else extra.
So if you want 12fps+ you can't wait on the 5D4. Maybe you could wait on the 1DX, but will you have the extra thousands to get, in effect, the 7D2 in FF format plus some more bells and whistles and bulk? Many won't.
 
LetTheRightLensIn said:
jrista said:
dtaylor said:
We are now WELL into the era of significantly improved DR.

Basically 12+ vs. 13+ stops. The DR meme is driven entirely by BS DxO tests that aren't even physically possible (i.e. claims of >14 stops from a 14-bit ADC).

Actually, it's more like 10.x stops vs. 13.x stops. I agree, DXO's PrintDR numbers are BS. Just use DXO's ScreenDR numbers, which are literal measurements taken directly from RAW, and a far more trustworthy number. Canon IS behind by about two stops. That is a FACTOR OF FOUR TIMES. DXO would have you believe it was closer to three stops, or EIGHT times...I agree, BS, and highly misleading. That doesn't change the fact that two stops is still a meaningful difference...always has been.

Once again, wrong wrong wrong, which is so bizarre, because then you flip around and say that photosite density doesn't matter for noise and only sensor size does!!!! That is like saying 1+1=2 and, at the same time, no, 1+1 does not equal 2.

I think I may begin to see part of our disconnect. Maybe a little clarification of what I think of when I say some of these things would help.

So, first off, I do believe that only sensor size really matters from a fundamental IQ standpoint. I believe that "noise" is relative to sensor size. That's a fairly general statement, maybe I've been lax in my specificity in the past. So, to clarify this first point...I believe that photon shot noise is relative to sensor size. Very specifically, I believe that the total amount of photon shot noise, which affects the signal top to bottom, from the highlights to the shadows, which is an intrinsic part of the real image signal itself, is fundamentally relative to total sensor area.

In that respect, I believe larger sensors will always outperform smaller sensors given similar technology, for identical framing. Assuming non-similar technology, I believe that it is possible, for a short period of time, for a sensor of smaller area to outperform a sensor of larger area...but only so long as the larger sensor's technology is inferior. I believe the generational gap between the small and large sensor would need to be fairly large for the smaller sensor to outperform a larger sensor...within a single generation, I honestly do not believe that any smaller sensor would outperform a larger sensor in terms of overall IQ.

I believe this, because if you frame a subject identically in frames of different physical sizes, the larger the frame, the more total light you gather. That's it. I don't really think that needs any further qualification. More light, better IQ. It's better if you don't normalize, it's better if you do normalize. More total light gathered per unit area of subject, better IQ. It's as simple as that.
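A back-of-the-envelope sketch of that "more total light" point (the photon density is invented; only the ratio matters): at the same exposure and framing, collected photons scale with sensor area, and shot-noise SNR goes as the square root of the photon count.

```python
import math

photon_density = 1_000                                  # hypothetical photons per mm^2 at this exposure
sensors_mm2 = {"full frame": 36.0 * 24.0, "APS-C": 22.3 * 14.9}

for name, area in sensors_mm2.items():
    photons = photon_density * area
    snr = math.sqrt(photons)                            # Poisson shot noise: SNR ~ sqrt(N)
    print(f"{name}: {photons:,.0f} photons, shot-noise SNR ~ {snr:.0f}")

# The SNR ratio works out to ~1.6x, i.e. roughly the crop factor.
```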
---

Alright, second: read noise. I consider read noise to be a fairly distinct form of noise, different in nature and impact from photon shot noise. I do NOT believe that read noise has anything to do with pixel size or sensor size. I believe read noise has to do with the technology itself. I believe read noise is a complex form of noise, contributed to by multiple sources, some of them electronic (e.g. a high-frequency ADC unit), some of them material in nature (e.g. sensor bias noise, which, once you average out the random noise components, is fixed...as it partly results from the physical material nature of the sensor itself, its physical wiring layout, etc.). I believe read noise affects overall image quality, but in a straight-up comparison of two images from two cameras with identical sensor sizes, read noise is an invisible quantity. It doesn't really matter how much you scale your images, whether you scale them up or down, whether you normalize or not. Before any editing is performed, read noise is an invisible deep-shadow factor; it cannot usually be seen by human eyes.

In this respect, two landscape photos of the same scene taken with different full frame cameras are all largely going to look the same. Photon shot noise is going to be the same, it may just be more finely delineated by a sensor with smaller pixels. Normalize them all, without any other edits, and you aren't going to notice much of any difference between the images. The most significant differences are likely to be firmware/setting related...a Daylight white balance setting will probably differ between cameras (one may be slightly warm, another slightly cold), small nuances of exposure may differ between cameras (one may slightly overexpose, another may slightly underexpose), there may be nuanced differences in color rendition that cater to different personal preferences.

When it comes to read noise, to me, that is all about editing latitude. Because it's a deep shadow thing, it doesn't manifest until you start making some significant exposure adjustments. You have to lift shadows at very low ISO by several stops before the differences between a camera with more sensor+ADC DR and a camera with less sensor+ADC DR really start to manifest. Those differences only matter at ISO 100 and 200, they are significantly diminished by ISO 400, and above that the differences between cameras are so negligible as to be nearly meaningless...sensor size/photon shot noise totally dominate the IQ factor.
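That editing-latitude point can be sketched numerically too (the read-noise figures are hypothetical; assumes numpy): put a tone about ten stops below clipping, push it four stops in the converter, and see how much of the result is read noise rather than signal.

```python
import numpy as np

rng = np.random.default_rng(2)
full_well = 60_000                         # electrons, hypothetical
shadow_signal = full_well / 2 ** 10        # a tone ~10 stops below clipping (~59 e-)

for name, read_noise in [("low read noise (3 e-)", 3.0), ("high read noise (30 e-)", 30.0)]:
    electrons = rng.poisson(shadow_signal, 100_000) + rng.normal(0, read_noise, 100_000)
    lifted = electrons * 2 ** 4            # +4 stop shadow push in the raw converter
    snr = lifted.mean() / lifted.std()     # the push doesn't change SNR, it just makes it visible
    print(f"{name}: shadow SNR ~ {snr:.1f}")
```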

I do believe that normalization is important to keep the frequency of photon shot noise, which is the primary visible source of noise in images that have not been edited, at the same frequency for comparison purposes. I do believe that normalization will and should show differences between larger and smaller sensors. I do not believe, however, that normalization of a non-pulled image is going to have any impact on how deep the blacks appear to an observer. I believe the only thing that can actually measure the differences in the deep shadows, where read noise exists, are software algorithms. I do believe that having lower read noise means you have better editing latitude when editing a RAW image in a RAW editor, and that for the purposes of editing, lower read noise, which leads to increased dynamic range (primarily by restoring what would have otherwise been lost to read noise in the shadows) is a good thing, and something that can and does certainly improve certain types of photography. This is the fundamental crux of my belief that DXO's PrintDR numbers are very misleading, and why I prefer to refer to their ScreenDR numbers...as the increase in DR that you gain from having lower read noise is only really of value WHEN editing a RAW image and lifting shadows. Otherwise, I really don't care about comparing cameras within a "DXO-specific context"...I care about comparing cameras based on what you can actually literally do with them in real life. (I KNOW you disagree with this one, but we should just agree to disagree here, because neither of us is ever going to win this argument. :P)

That is my stance on these things. I am pretty sure you'll disagree in one way or another, and that's ok. However I do not believe that my assessment of these things is fundamentally wrong. I believe it may be different than your assessment, or DXO's assessment for that matter. But I do not believe I have a wrong stance on this subject. I separate photon shot noise and the impact it has on overall IQ (which is significantly greater) from read noise, and the impact it has on the editing latitude you might experience when adjusting exposure of a RAW image in a RAW editor at an unscaled, native image size.
 
Botts said:
I think we'll see a toned down new tech and then a relatively quick upgrade to the full-frame bodies with the full fledged new tech.

Yup. If a 7DII is announced (the 5DIV is also a possibility, mind you), it will at best have the ISO of the 5DII.
So, the 6D and 5DIII will still remain better.
The new tech should bring more substantial DR improvements, though.

And like I said, don't rule out the possibility of a high-resolution 5DIV at Photokina.
 
LetTheRightLensIn said:
The way DR is measured here, you simply compare max signal to the noise floor, and the max here is just perfect white: the channel 100% blown, the max well value.

And this is not photographic dynamic range.

x-vision said:
OTOH, though, the one and only definition of DR in signal processing...

...is not the same as the one and only definition of DR in photography. ;)

Appreciate your post; you get why there's confusion. I'm just trying to illustrate it for those who insist DxO DR measurements have any bearing on reality.
 
LetTheRightLensIn said:
neuroanatomist said:
It's simple – downsampling an image does not recover data that were clipped at capture. If the scene DR exceeds the DR of the sensor, those data are lost, and no amount of downsampling will bring them back. Period.

LetTheRightLensIn said:
You'd make a great politician

Seriously, there's no call for such grievous insults! :o

all we are trying to do is make a fair relative comparison between two cameras

This is a bit different, but imagine a 32-bit CD that had a bunch of noise, tons of it, in the least significant bit and some in the next two least significant bits, and was perfect otherwise, and they measured the noise above the floor there at the 32-bit scale. Then they measured an 8-bit CD that had a little noise in its LSB and none otherwise. Then they pretended the least significant bits of the 32-bit CD were the same as the 3 least significant bits of the 8-bit CD and compared them directly; that's kinda like ScreenDR. And PrintDR is kinda like realizing that you can't lay the three lowest bits of the 32-bit CD over the 3 lowest bits of the 8-bit CD and then say, "Man, the lowest three bits on that 32-bit CD are looking so random compared to the lowest three on that 8-bit CD; man, that 32-bit CD stinks!" If you first applied a bit shift to align like bit to like bit, to normalize it, you'd see the 32-bit CD was giving a perfect signal over all 8 of the bits the 8-bit CD was storing, while the 8-bit CD was putting noise on its least significant bit.

How is it fair to compare the LSBits of the 32bit CD to the 8bit CD's 3 least sig bits as if they were the same scale and same thing?

I mean, if all you want to use to relatively compare cameras is DxO ScreenDR, I can go grab some Canon 10D tech and make a 1000 PIXEL (not mega, just pixel) sensor out of it, and it will get a much better ScreenDR score than the 1DX will. So gee, I guess that 1DX really stinks! Such terrible DR: I open a file from both and view each at 100%, and I see so much more noise over 1000 pixels in the 1DX file, but that 10D-tech camera, man, those 1000 pixels look perfect! But no, because why are you trying to compare a 1000-pixel crop from the 1DX image to the complete 1000-pixel image from the 10D-derived monster?

And yes, it's true that you don't capture some signal, chop off bits, and end up with more SNR, since you chopped signal and noise at the same rate. And you don't get some normalized value when looking at your full RAW; you get what it gives. But we are trying to compare things relative to one another here. Take a camera with great tech and 36MP and one with poor tech and 3MP, print both to the same size and view them from the same distance, and the great-tech 36MP print probably looks better than the one from the poor-tech 3MP camera, even though a 3MP crop from the 36MP camera would look rough in comparison. When they measure a middle-gray SNR, they are just measuring the deviations about a perfect, solid, scale-invariant middle gray, and then normalizing the scale so they are comparing the deviation at the same scale of detail. And if you sample that down, the deviation from the solid middle gray goes down and the number goes up; they just want to make sure they are measuring that deviation from the perfect gray swatch at the same noise frequency.

They are not saying that if you take a photo of some complex scene and downscale it, you reduce noise while magically keeping the same complete signal; that is something else. If you do that, it looks less and less noisy, but it also keeps less and less of the spatial detail.
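To make those two points concrete (assumes numpy; arbitrary numbers): the SNR measured on a flat gray swatch goes up when you measure at a coarser scale, but downsampling a detailed scene averages away the fine detail along with the noise.

```python
import numpy as np

rng = np.random.default_rng(3)

def down2(a):
    """2x2 block average."""
    return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))

# Flat middle-gray swatch with noise: measured SNR improves at the coarser scale
gray = 0.5 + rng.normal(0, 0.05, (512, 512))
print(round(gray.mean() / gray.std()), round(down2(gray).mean() / down2(gray).std()))  # ~10 -> ~20

# A detailed scene is not scale-invariant: a 1-pixel checkerboard vanishes entirely
detail = 0.5 + 0.25 * (np.indices((512, 512)).sum(axis=0) % 2)
print(detail.std(), down2(detail).std())   # 0.125 -> 0.0: the fine detail is averaged away
```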
 