A Rundown of EOS 7D Mark II Information

neuroanatomist said:
Marauder said:
But I think you're onto something with the water concept. The basis of this rumour is that the 7D2 will be heavily weather sealed and designed for harsh conditions, and that means exposure to rain. If the reliability of capacitive touch technology is intermittent under those conditions, that would be a definite negative for a camera explicitly designed for use by wildlife photographers in all weather conditions.

I don't think it's that big a deal. Canon could put a note in the manual, as they do for other things of that nature, such as reduced battery performance in the cold.

Possibly. Do you have an alternative explanation? Do you think a touch screen would be deemed too fragile? I'm not particularly troubled by its absence myself, but it's a desirable enough feature for many and one that has been successfully deployed on the 70D, as well as three Rebels and the M series. I would think its absence on the 7D2 is likely due to some sort of technical hurdle--be it actual durability, longevity of the capacitive touch technology or its potential issues in adverse conditions, which are more likely to be encountered by the 7D2's target photographer than by users of the Rebels or 70D. Something fairly compelling must have led them to decide to leave this feature out, given its considerable popularity on the models in which it appears.
 
PureClassA said:
Far as I know the Gorilla Glass on the iPhone is not the conductive surface but rather fused to it. I'm guessing that in making a very rugged, weather-sealed, hefty glass LCD, that heft may in turn prevent the touch sensor (which I believe for Apple is thermal) from operating properly. It may work but it may not be terribly reliable. Just a guess.

Yes, very possibly.
 
Okay, I really didn't mean to turn this into a "touchscreen thread."

But, the more comments I read the more I am coming around to the view that either this piece of the rumor pie is wrong or Canon has something quite different in mind for the 7DII.

To recap:

Touchscreen is not a new technology. It's been used commercially for 20 years or more. It's ubiquitous: anyone who has a smart phone has touchscreen technology. So it's not as though implementing it would be difficult or require major new research and development.

Canon has already implemented it in other models without any major issues.

The durability argument doesn't seem to hold water, since there should be no reason why a touchscreen is any less durable than an LCD screen.

It's a redundant interface, so if there are certain conditions where it might not work as well (rain, cold, etc.) it doesn't really matter because one can always revert to the button, joystick, click-wheel methods to accomplish the same things.

Not implementing a touchscreen will be a major drawback for video production. Canon's dual-pixel sensor technology is highly dependent on touchscreens and loses much of its functionality without a touchscreen.

Weather sealing of a touchscreen is much easier and much more reliable than weather sealing buttons, joysticks, clickwheels, hot shoes, etc. etc. These components are much more likely to fail and leak when exposed to moisture than a well-sealed touchscreen.

Touchscreens are the preferred and expected interface for many customers, especially for customers who use tablets or smart phones.

None of us has access to Canon's marketing or engineering research. So, of course, all of this is only speculation until we know for sure what technology the 7DII will actually incorporate. But, as the primary purpose of all this speculation is largely entertainment, I would say that if I were to place a bet, I would still bet on a touchscreen and if I'm wrong, it will be fascinating to learn why they did not implement this technology.

I think Marauder sums it up quite nicely:

Marauder said:
Something fairly compelling must have led them to decide to leave this feature out, given its considerable popularity on the models in which it appears.
 
unfocused said:
Okay, I really didn't mean to turn this into a "touchscreen thread."

But, the more comments I read the more I am coming around to the view that either this piece of the rumor pie is wrong or Canon has something quite different in mind for the 7DII.

To recap:

Touchscreen is not a new technology. It's been used commercially for 20 years or more. It's ubiquitous: anyone who has a smart phone has touchscreen technology. So it's not as though implementing it would be difficult or require major new research and development.

Canon has already implemented it in other models without any major issues.

The durability argument doesn't seem to hold water, since there should be no reason why a touchscreen is any less durable than an LCD screen.

It's a redundant interface, so if there are certain conditions where it might not work as well (rain, cold, etc.) it doesn't really matter because one can always revert to the button, joystick, click-wheel methods to accomplish the same things.

Not implementing a touchscreen will be a major drawback for video production. Canon's dual-pixel sensor technology is highly dependent on touchscreens and loses much of its functionality without a touchscreen.

Weather sealing of a touchscreen is much easier and much more reliable than weather sealing buttons, joysticks, clickwheels, hot shoes, etc. etc. These components are much more likely to fail and leak when exposed to moisture than a well-sealed touchscreen.

Touchscreens are the preferred and expected interface for many customers, especially for customers who use tablets or smart phones.

None of us has access to Canon's marketing or engineering research. So, of course, all of this is only speculation until we know for sure what technology the 7DII will actually incorporate. But, as the primary purpose of all this speculation is largely entertainment, I would say that if I were to place a bet, I would still bet on a touchscreen and if I'm wrong, it will be fascinating to learn why they did not implement this technology.

I think Marauder sums it up quite nicely:

Marauder said:
Something fairly compelling must have led them to decide to leave this feature out, given its considerable popularity on the models in which it appears.

You may be correct--this part of the rumour may not be accurate. CR seems pretty confident, but until we see a CR3 or an official spec, it's all still up in the air. Although it's not a key feature for me, its absence would be odd given its successful implementation on several models. I just hope the other parts of the rumour ARE true--high frame rate, superb AF and advanced new sensor tech. This camera has a LOT of people very excited--yours truly included!
 
Marauder said:
Do you have an alternative explanation? ... Something fairly compelling must have led them to decide to leave this feature out...

At this point, the postulate of the good Friar William – he hailed from Ockham, incidentally – still holds. The simplest explanation, which is usually best, is that omission of a touchscreen from the 7DII is just a rumor, and therefore quite possibly false.
 
jrista said:
Alright, second: read noise. I consider read noise to be a fairly distinct form of noise, different in nature and impact than photon shot noise. I do NOT believe that read noise has anything to do with pixel size or sensor size. I believe read noise has to do with the technology itself. I believe read noise is a complex form of noise, contributed to from multiple sources, some of them electronic (i.e. high frequency ADC unit), some of them material in nature (i.e. sensor bias noise, once you average out the random noise components, is fixed...as it partly results from the physical material nature of the sensor itself, its physical wiring layout, etc.) I believe read noise affects overall image quality, but in a straight-up comparison of two images from two cameras with identical sensor sizes, read noise is an invisible quantity. It doesn't really matter how much you scale your images, whether you scale them up or down, whether you normalize or not. Before any editing is performed, read noise is an invisible deep-shadow factor; it cannot usually be seen by human eyes.

OK, now here is where you lost me.

And let me ask you this:
You say that given the same sensor tech a larger sensor does better because it collects more light. OK, sounds good (so long as the MP difference isn't crazy extreme with current tech; if you compared a 400MP FF sensor to an 8MP sensor, the 400MP probably would show a bit of a penalty for having pixels quite that small, although at that density they might swap to BSI, which could help, it's true; but to avoid those issues, imagine they just kept the same type of sensor).

You say that the above only holds so long as the sensor tech is the same or at least not getting to be radically far off so you agree that better sensor tech can make a difference, OK.

You agree that a 36MP FF camera shouldn't do much worse than a 24MP FF camera if they use the same tech, because you have to normalize, comparing photon shot noise at the same frequency, if you are trying to figure out whether the higher MP camera could deliver a 24MP result that would be as good in terms of noise. OK.

But you say that suddenly for read noise we can only compare photosite to photosite and that you can't normalize as you did above to see if the higher MP might do as well compared at the same scale. Why? How is this noise magically behaving differently and scale invariant? It sounds like you are mixing up the relative invariance in how each photosite does for read noise with the way the noise results scale, which have nothing to do with one another.

I mean, what if sensor A has 40MP of photosites with a given read noise, and sensor B has 10MP of photosites with the same read noise per photosite? Now you say to look only at the ScreenDR and compare the tiny photosite of the 40MP camera to the big one of the 10MP camera, and you say they do about the same and get about the same score. But you are comparing photosites that are at only 1/2 the scale of the others as if they were at the same scale, so if you want to know the relative comparison between the two on an even basis, you are not doing that. You are just seeing how the 40MP camera would do if you took full advantage of the extra res of the 40MP, but nobody says you have to take advantage of the extra res. If you compare the 40MP camera at the 10MP scale, it would probably give a slightly better overall result than the 10MP camera, or call it the same to be simple. The PrintDR numbers would tell you that. But just using your ScreenDR numbers you'd think there is no way you could ever use the new camera to match the old camera's result.
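The scale argument above is easy to check numerically: give two synthetic sensors identical per-photosite read noise, then bring the higher-resolution one down to the other's scale. This is a toy model, not anyone's measured data; the 5 e- RMS read noise figure and the plain 2x2 box average standing in for normalization are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
read_noise = 5.0  # electrons RMS, assumed identical per photosite on both sensors

# Sensor B: a 10MP-style patch, read noise only (no signal, for clarity)
b = rng.normal(0.0, read_noise, size=(1000, 1000))

# Sensor A: a 40MP-style patch covering the same frame area (2x the linear res)
a = rng.normal(0.0, read_noise, size=(2000, 2000))

# Normalize A to B's scale with a simple 2x2 block average
a_small = a.reshape(1000, 2, 1000, 2).mean(axis=(1, 3))

print(b.std())        # ~5.0 e-: per-photosite read noise at native scale
print(a_small.std())  # ~2.5 e-: averaging 4 photosites halves the noise
```

Per photosite the two sensors measure identically (the "ScreenDR" view), but at a common output scale the 40MP sensor's read noise is halved (the "PrintDR" view), which is the point being argued.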



When it comes to read noise, to me, that is all about editing latitude. Because it's a deep shadow thing, it doesn't manifest until you start making some significant exposure adjustments. You have to lift shadows at very low ISO by several stops before the differences between a camera with more sensor+ADC DR and a camera with less sensor+ADC DR really start to manifest. Those differences only matter at ISO 100 and 200, they are significantly diminished by ISO 400, and above that the differences between cameras are so negligible as to be nearly meaningless...sensor size/photon shot noise totally dominate the IQ factor.

Yes (although I'd extend it to say they matter to ISO 400 and start to tail off a lot above ISO 800, but it depends on exactly what you are comparing and how picky you are). But what does this have to do with normalization?

I do believe that normalization is important to keep the frequency of photon shot noise, which is the primary visible source of noise in images that have not been edited, at the same frequency for comparison purposes. I do believe that normalization will and should show differences between larger and smaller sensors. I do not believe, however, that normalization of a non-pulled image is going to have any impact on how deep the blacks appear to an observer. I believe the only thing that can actually measure the differences in the deep shadows, where read noise exists, are software algorithms. I do believe that having lower read noise means you have better editing latitude when editing a RAW image in a RAW editor, and that for the purposes of editing, lower read noise, which leads to increased dynamic range (primarily by restoring what would have otherwise been lost to read noise in the shadows) is a good thing, and something that can and does certainly improve certain types of photography. This is the fundamental crux of my belief that DXO's PrintDR numbers are very misleading, and why I prefer to refer to their ScreenDR numbers...as the increase in DR that you gain from having lower read noise is only really of value WHEN editing a RAW image and lifting shadows.

OK, that last bit really makes no sense at all. What does Screen vs Print measurement have to do with whether one has looked into the shadows or not??? How does viewing an image at 100% view, or with both scaled to fit the same size area on the screen, have anything to do with whether you are looking at shadows or not?

And the first part too: why do you only need to normalize noise to the same frequency if it's noise that you instantly notice with any tone curve, but not apply that to noise that you notice only after looking into the shadows by changing the tone curve??? What if the RAW converter shifted the mid-point way high up? Then suddenly there is no need to normalize photon noise either, because the RAW converter defaults to a high mid-tone placement??

Otherwise, I really don't care about comparing cameras within a "DXO-specific context"...I care about comparing cameras based on what you can actually literally do with them in real life. (I KNOW you disagree with this one, but we should just agree to disagree here, because neither of us is ever going to win this argument. :P)

Not between us I guess since it's 1 vote against 1 vote, but at least DxO and 90% of the info out there agrees with me (it's just you and a handful of others on here and DPR who don't). ;)

That is my stance on these things. I am pretty sure you'll disagree in one way or another, and that's ok. However I do not believe that my assessment of these things is fundamentally wrong. I believe it may be different than your assessment, or DXO's assessment for that matter. But I do not believe I have a wrong stance on this subject. I separate photon shot noise and the impact it has on overall IQ (which is significantly greater) from read noise, and the impact it has on the editing latitude you might experience when adjusting exposure of a RAW image in a RAW editor at an unscaled, native image size.

OK, well at least in the end the final statement here is correct, if not particularly sensible in terms of comparisons, in that you admit that you are simply choosing not to normalize for read noise, and to care only about how a full RAW image does against a full RAW image, both viewed at 100%, when it comes to read noise--even though for mid-tone noise you choose not to compare at 100% view, but to compare fairly.

But I mean, you directly state that you are comparing two totally different things.

And I still wait to hear why it makes sense to compare mid-tone noise only at normalized scale and NOT at 100% view, but to compare read noise only at 100% view. Why does it make sense to compare apples to apples for mid-tone noise, but apples to oranges for deep-tone noise???

And for that matter, when you want to simply see how a full RAW does, taking advantage of the full res of each RAW, you still want to normalize mid-tone noise, when in that case you should not be normalizing either mid-tone noise or shadow noise.

I mean, either you want to see how the new camera will do with its RAWs taken to full resolution advantage, and get an idea how they will process and look if you used all the extra res, in which case you don't normalize and you do 100% view comparisons for all noise, low, mid or upper tone. Or you want to compare the cameras fairly and see if one is noisier than the other, and then you need to normalize all the noise, low, mid or high tone.

I don't get the point in comparing one always normalized and one never normalized. I mean you can do that if you want, who knows why, but whatever; that is one thing. To then say that DxO is raw BS and the PrintDR is fake, has no bearing on reality, and nobody should ever compare in any way using it, blah blah blah, is just flat out wrong.
 
Roo said:
Straightshooter said:
Makes you wonder indeed why Canon wouldn't invest time and money to be checking websites like CR full time so they can read all these 'pearls of wisdom'?! ::)

I do know a couple of local Canon reps that come on here when they need a laugh ;D

Which would explain why their low ISO quality hasn't improved since 2007. (Or, more likely, the big guys simply didn't want to spend $$$ to make new fabs, which are very expensive, it is true.)
 
In case someone doesn't get it:

You won't see more steps in a shot of a transmission step wedge by downscaling; the downscaled image will look exactly the same unless you downscale so much that you see pixelization. If you compare a shot taken with an 8mpix sensor against a shot taken with a 36mpix sensor where each pixel has exactly the same DR in both sensors, however...
 
neuroanatomist said:
Marauder said:
Do you have an alternative explanation? ... Something fairly compelling must have lead them to decide to leave this feature out...

At this point, the postulate of the good Friar William – he hailed from Ockham, incidentally – still holds. The simplest explanation, which is usually best, is that omission of a touchscreen from the 7DII is just a rumor, and therefore quite possibly false.

Well, now that we've concluded the discussion of a touchscreen – which might be defined as "merely trivial" – it appears we can get back to the discussion of dynamic range, which would fall into the category of "manifestly trivial."

Have at it guys. On this issue I couldn't care less.
 
LetTheRightLensIn said:
Not between us I guess since it's 1 vote against 1 vote, but at least DxO and 90% of the info out there agrees with me (it's just you and a handful of others on here and DPR who don't).

Have you even looked at other test sites? DxO is always the odd one out when it comes to DR tests. Their results differ from Imatest as well as straight-up transmission step wedge tests every single time. The latter two are usually in close agreement.

DxO simply has no idea what photographic dynamic range is.

Which would explain why their low ISO quality hasn't improved since 2007. (Or, more likely, the big guys simply didn't want to spend $$$ to make new fabs, which are very expensive, it is true.)

Both Canon's FF and APS-C sensors have better DR today than they did in 2007. Even their 18 MP sensor has improved with each "minor" change/iteration. I suspected this the first day I processed RAWs from my M. Looked it up and sure enough, about 1.5 stops more DR than the original 18 MP sensor in my 7D.
 
msm said:
In case someone doesn't get it:

You won't see more steps in a shot of a transmission step wedge by downscaling; the downscaled image will look exactly the same unless you downscale so much that you see pixelization. If you compare a shot taken with an 8mpix sensor against a shot taken with a 36mpix sensor where each pixel has exactly the same DR in both sensors, however...

You will see the exact same DR in both shots. No different than if you shot 8x10 Velvia 50 and 35mm Velvia 50 and compared them in terms of DR.
 
unfocused said:
Well, now that we've concluded the discussion of a touchscreen – which might be defined as "merely trivial" – it appears we can get back to the discussion of dynamic range, which would fall into the category of "manifestly trivial."

But Canon hasn't updated their sensors since 1969 and Sonikon sensors get 9,001 more stops of dynamic range and some say if the 7D mkII sensor isn't revolutionary Canon will die!

Or so I read in a forum on the Internet last Thursday ;)
 
To get us back on topic, a rundown of EOS 7D Mark II information.....

What we know so far is in the list below:

Point 1: We don't even know if it will be called the 7D Mark II

That's it! That's all we know! ...... but let's see if we can get over 1000 posts to discuss this wealth of information...
 
dtaylor said:
msm said:
In case someone doesn't get it:

You won't see more steps in a shot of a transmission step wedge by downscaling; the downscaled image will look exactly the same unless you downscale so much that you see pixelization. If you compare a shot taken with an 8mpix sensor against a shot taken with a 36mpix sensor where each pixel has exactly the same DR in both sensors, however...

You will see the exact same DR in both shots. No different than if you shot 8x10 Velvia 50 and 35mm Velvia 50 and compared them in terms of DR.

For the first of my examples, yes, you will see the same DR; for the second example, wrong, you will see different DR, and that is precisely why comparing print DR makes sense and screen DR is of limited value. If signal-to-noise performance per pixel is equal, a higher megapixel sensor will always capture more information. Why this happens is already explained in one of the other 2000 DR threads here.
 
Sporgon said:
But this isn't dynamic range, it's latitude, and these two are confused, especially when comparing Canon vs Sony Exmor. The actual dynamic range is not as dramatically different as the Exmor missionaries like to champion, but the Exmor does have considerably more latitude in the extreme low lights. However most of us never have a need to lift low lights to the extent it becomes a problem with Canon.

So I would say down sampling does have a slight advantage in latitude, but this has nothing to do with dynamic range.

I initially skimmed over this but wanted to come back to it because you nailed it.
 
msm said:
dtaylor said:
msm said:
In case someone doesn't get it:

You won't see more steps in a shot of a transmission step wedge by downscaling; the downscaled image will look exactly the same unless you downscale so much that you see pixelization. If you compare a shot taken with an 8mpix sensor against a shot taken with a 36mpix sensor where each pixel has exactly the same DR in both sensors, however...

You will see the exact same DR in both shots. No different than if you shot 8x10 Velvia 50 and 35mm Velvia 50 and compared them in terms of DR.

For the first of my examples, yes, you will see the same DR; for the second example, wrong, you will see different DR, and that is precisely why comparing print DR makes sense and screen DR is of limited value. If signal-to-noise performance per pixel is equal, a higher megapixel sensor will always capture more information. Why this happens is already explained in one of the other 2000 DR threads here.

As dtaylor previously stated, there's a difference between the generic definition of dynamic range (as applied to signals of all types) and the meaning of photographic dynamic range. Get a Stouffer step wedge and a light table and try it out...
 
I'm going to snip most of the text here, because this is getting long.

LetTheRightLensIn said:
But you say that suddenly for read noise we can only compare photosite to photosite and that you can't normalize as you did above to see if the higher MP might do as well compared at the same scale. Why? How is this noise magically behaving differently and scale invariant? It sounds like you are mixing up the relative invariance in how each photosite does for read noise with the way the noise results scale, which have nothing to do with one another.

I mean, what if sensor A has 40MP of photosites with a given read noise, and sensor B has 10MP of photosites with the same read noise per photosite? Now you say to look only at the ScreenDR and compare the tiny photosite of the 40MP camera to the big one of the 10MP camera, and you say they do about the same and get about the same score. But you are comparing photosites that are at only 1/2 the scale of the others as if they were at the same scale, so if you want to know the relative comparison between the two on an even basis, you are not doing that. You are just seeing how the 40MP camera would do if you took full advantage of the extra res of the 40MP, but nobody says you have to take advantage of the extra res. If you compare the 40MP camera at the 10MP scale, it would probably give a slightly better overall result than the 10MP camera, or call it the same to be simple. The PrintDR numbers would tell you that. But just using your ScreenDR numbers you'd think there is no way you could ever use the new camera to match the old camera's result.


LetTheRightLensIn said:
And I still wait to hear why it makes sense to compare mid-tone noise only at normalized scale and NOT at 100% view, but to compare read noise only at 100% view. Why does it make sense to compare apples to apples for mid-tone noise, but apples to oranges for deep-tone noise???

And for that matter, when you want to simply see how a full RAW does, taking advantage of the full res of each RAW, you still want to normalize mid-tone noise, when in that case you should not be normalizing either mid-tone noise or shadow noise.

I don't get the point in comparing one always normalized and one never normalized. I mean you can do that if you want, who knows why, but whatever; that is one thing. To then say that DxO is raw BS and the PrintDR is fake, has no bearing on reality, and nobody should ever compare in any way using it, blah blah blah, is just flat out wrong.

There is noise, and there is editing latitude. In pretty much every thread on here that ends up with the DR debate, what do people talk about? The amount of noise they can see, or the amount they can lift the shadows? In all the threads I've been party to, it all ultimately comes down to how much you can lift shadows. In none of the DR debates I've ever been party to has anyone ever said "You don't see as much noise with an Exmor sensor." No, the thing everyone always says, and the thing everyone always tries to demonstrate, is "Look how much I lifted the shadows! Look, I have a fully detailed sun, and detailed shadows, in this landscape photo. Oh, and look over here, the Canon sensor has tons of nasty red banding noise when I lift the shadows."

As far as I can tell, as far as most consumers are concerned, DR all boils down to EDITING LATITUDE. It means more shadow lifting.

So, why do I treat them differently? First, you are not wrong about comparing cameras...you need to normalize. And normalization affects read noise as much as it affects photon shot noise. But that is comparing final IQ. That's fine and dandy...but I believe people are misusing final IQ (of UNEDITED images) to refer to editing latitude. It's THAT, the use of normalized images to refer to editing latitude, that I believe is wrong. I believe DXO's publishing of Print DR exclusively on their ratings, completely ignoring Screen DR entirely, has led to the misinterpretation of REAL WORLD editing latitude in a RAW.

Since everyone always ultimately arrives at "pulling shadows" in DR debates at one point or another, I'm always harping on that point. More DR means less noise, and normalizing means even less noise for larger sensors, but we cannot edit normalized images. We only edit full size images. So, from an editing latitude standpoint...I believe Print DR is invalid. For one thing, the Print DR numbers at DXO are only valid if you downsample an image to exactly that size. I don't think many people actually downsample their images to exactly 8x12 @ 300PPI all the time...hell, I think it's actually probably quite rare. So always referring to 14.4 stops of DR when discussing shadow lifting ability is plainly and simply wrong in all the infinite other possibilities for image size. And that's not even mentioning that when it comes to the types of photography that tons of DR are most useful for, say landscapes, you're probably UPsampling, rather than downsampling, which makes it even more invalid.

The other thing is that we don't edit RAW images after downsampling them. They would no longer be RAW. We edit RAW images at native size. So we deal with the native noise levels and noise frequencies when we are pulling shadows. So yes, normalization will reduce all noise, including read noise, but we can't downsample our RAWs...we must edit them at native size. If we take DXO's Print DR numbers as a reference for editing latitude, they would have you believe that you have more than an additional stop of editing latitude with a D800, and nearly two additional stops of editing latitude with a D810. That is plainly and simply false. Hence the reason I treat read noise uniquely in the case of editing latitude.

Finally, as far as visible noise goes from a human perception standpoint...read noise only exists in the deep shadows. It doesn't really matter if you have 38e- worth of read noise (as in the 1D X), or ~3e- worth (D800) or ~6e- worth (D810). You can't see it at native size. You still can't see it after normalization. Perceptually, read noise is inconsequential from a visual standpoint. Photon shot noise, on the other hand, or what you have called midtone noise (I think that's a bit of a misnomer...photon shot noise exists at every level of the signal, from the highlights down to the utter depths of the shadows, well below the read noise floor), affects the entire signal, and is the prime source of noise that we actually perceive in our photographs.
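The "read noise only matters in the deep shadows" point can be put in numbers with the usual quadrature model, SNR = S / sqrt(S + R^2), where S is the signal and R the read noise, both in electrons. The specific electron counts below are illustrative assumptions, not measured specs for any camera:

```python
import math

def snr(signal_e, read_noise_e):
    """Photon shot noise is sqrt(signal); read noise adds in quadrature."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

# Midtone (~10,000 e-): shot noise dominates, read noise barely registers
print(snr(10_000, 3))   # ~99.95
print(snr(10_000, 30))  # ~95.8  (10x the read noise, SNR barely moves)

# Deep shadow (~20 e-, i.e. after a hard exposure lift): read noise dominates
print(snr(20, 3))       # ~3.7
print(snr(20, 30))      # ~0.66 (signal is buried)
```

A 10x difference in read noise is nearly invisible at midtone levels but is the difference between a usable and an unusable signal in a lifted shadow, which is the editing-latitude argument in miniature.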

So, I don't believe that read noise frequencies are consequential to normalization...you can't see them anyway. That leaves photon shot noise as the primary noise culprit we are dealing with when normalizing images for comparison. Sure, read noise frequencies get normalized as well, but only a computer algorithm can tell the difference, so as far as I'm concerned, normalization of read noise frequencies is immaterial. When it comes to photon shot noise, well, that is a part of quantization of the incoming photonic wavefront itself. Photon shot noise is ultimately determined by frame size, as you're gathering the same amount of photons regardless of what your pixel size is. If you're using a 1D X, each pixel is gathering more photons than the pixels of a D800. At native size, the D800 will appear noisier on a per-pixel basis, however after normalization there won't be any significant difference (in noise...the D800 still certainly maintains the advantage in overall detail, no question). There won't be any difference, because the amount of visible noise in those two photographs really has nothing to do with read noise...it has to do with the total quantity of light gathered. Now, the D800 has higher Q.E., so it should have an advantage...but at the same time, it also has a lower fill factor (more pixels, still an FSI design, so more die space has to be reserved for wiring and readout transistors). I think in a normalized context, where a D800 image was downsampled to the same size as a 1D X image at ISO 100, the D800 would probably have a slight edge, as I don't think its fill factor is going to entirely negate its increased Q.E. However, fundamentally, the overall amount of perceptual noise (photon shot noise) in the images has to do with total sensor area. Read noise is there, and it is less in the D800...but we can't see the difference with our eyes.
The shadows are shadows; both cameras have more dynamic range than any computer screen can handle anyway (~11 stops for Canon, ~13-some stops for Nikon)...so the stuff in the shadows is buried several stops below the limit of a computer screen regardless.

The only time read noise becomes a meaningful factor is when you're pulling shadows. THEN, and only then, does the advantage of having LESS read noise really become a meaningful issue. In that case, the D800, and any other Sony Exmor based camera, wins, hands down, no contest. However, and here is where DXO comes in again...a D800, D810, A7r, A7s, etc. don't have an 8x light gathering advantage over a Canon camera (as DXO's PrintDR numbers would have you believe). In fact, it's about HALF that, one stop less, or a 4x advantage. Personally, I think being off by 100% is a meaningful thing. If DXO was saying the D800 had a 4.1x advantage over Canon cameras, I'd have never said a peep. But saying the D800 has nearly an 8x advantage over Canon cameras...yeah, I have a problem with that.
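For reference, the gap between the two DxO numbers being argued over here is pure arithmetic: Print DR adds roughly 0.5 * log2(N / 8MP) stops to Screen DR, because downsampling to the 8 MP reference (their published 8x12" @ 300 PPI convention) averages noise down by the square root of the pixel ratio. The camera pixel counts below are approximate, and this is a sketch of the normalization math, not a reproduction of DxO's full protocol:

```python
import math

def print_dr_boost(megapixels, ref_mp=8.0):
    """Extra stops a Print-DR-style normalization adds over per-pixel
    Screen DR: noise std falls as sqrt(pixel count), i.e. 0.5*log2(N/Nref)."""
    return 0.5 * math.log2(megapixels / ref_mp)

print(round(print_dr_boost(36.3), 2))  # D800 (~36.3 MP): ~1.09 stops
print(round(print_dr_boost(18.1), 2))  # 1D X (~18.1 MP): ~0.59 stops
```

This is consistent with the "more than an additional stop" figure quoted earlier for the D800; the disagreement in the thread is over whether that stop exists when editing a RAW at native size, not over the arithmetic itself.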

So...I keep read noise levels, in the context of discussions on DR (which pretty much ALWAYS end up referring to shadow lifting ability at some point), distinct from the whole concept of normalization. Because we're talking about editing latitude, something that cannot be compared in a normalized context (at least, as far as I see it).

Well, I don't think I can explain my stance any better than that. I'm guessing you still disagree, but that's ok. Nothing either of us can do about that at this point. :P
 
neuroanatomist said:
msm said:
dtaylor said:
msm said:
In case someone doesn't get it:

You won't see more steps in a shot of a transmission step wedge by downscaling; the downscaled image will look exactly the same unless you downscale so much that you see pixelization. If you compare a shot taken with an 8mpix sensor against a shot taken with a 36mpix sensor where each pixel has exactly the same DR in both sensors, however...

You will see the exact same DR in both shots. No different than if you shot 8x10 Velvia 50 and 35mm Velvia 50 and compared them in terms of DR.

For the first of my examples, yes, you will see the same DR; for the second example, wrong, you will see different DR, and that is precisely why comparing print DR makes sense and screen DR is of limited value. If signal-to-noise performance per pixel is equal, a higher megapixel sensor will always capture more information. Why this happens is already explained in one of the other 2000 DR threads here.

As dtaylor previously stated, there's a difference between the generic definition of dynamic range (as applied to signals of all types) and the meaning of photographic dynamic range. Get a Stouffer step wedge and a light table and try it out...

Such is the fun when an analog system (film) goes digital, and confusion about measurement and calibration multiplies.....

I come from an electronics background and to me, it's SNR (Signal to Noise Ratio) to measure performance of electronics, not DR. I have always thought of DR as the equivalent of "stops", an observed/perceptual scale, not a measurable scale with electronics test equipment.
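The two views are directly convertible, though: per-pixel engineering DR in stops is just log2 of the full-well capacity over the read-noise floor, i.e. the same SNR ratio expressed in powers of two rather than dB. A sketch with hypothetical electron counts (not measured specs for any camera):

```python
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    """Per-pixel ('Screen') dynamic range in stops:
    log2 of full-well capacity over the read-noise floor."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical sensors: the low-read-noise design wins on DR
# even with half the full-well capacity.
print(round(engineering_dr_stops(90_000, 33), 1))  # ~11.4 stops
print(round(engineering_dr_stops(45_000, 3), 1))   # ~13.9 stops
```

In dB terms the same ratio is 20*log10(FWC/read noise), so one stop is about 6.02 dB; "stops" is a perceptual-sounding unit, but it is measuring the same ratio the electronics test bench would.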
 
Don Haines said:
To get us back on topic, a rundown of EOS 7D Mark II information.....

What we know so far is in the list below:

Point 1: We don't even know if it will be called the 7D Mark II

That's it! That's all we know! ...... but let's see if we can get over 1000 posts to discuss this wealth of information...

It will be called the EOS VII Mark II....because that wouldn't be confusing at all... ::)
 