Canon R6 Mark III Dynamic Range Officially Measured

The R6 III is quite sad for me. I really wanted a better implementation of pre-shooting for stills, which the Mark III did get. But beyond that I got no improvements (Fv mode is still limited for no reason, same LCD, no top LCD, etc.), and it has more megapixels (larger files I don't need), a heavier body, and shorter battery life.

I absolutely understand that this camera is amazing for most people; I'm just sharing my thoughts. The sad part is that the pre-shooting implementation could have been delivered via a firmware update (I know, Canon wouldn't do that).
 
What I don't fully understand from the article: even if the dynamic range with electronic shutter is the same between the Canon and the Sony, isn't there still a benefit to the 14-bit files the Sony delivers? In color, for example.

The article seems to dismiss this advantage as irrelevant given the DR findings discussed.
I'd like to see the difference. I think the "14-bit" is just a marketing point and doesn't do much in reality, since the DR is about 11 stops. And that much DR is already at the limit of what's visible.
 
Considering Canon pulls this all off with front-side illumination, I think that's pretty good. Now the question is: why and when are they going to move to BSI? They obviously have it, "in theory".
I can't see how FSI and BSI could be the same cost to produce, as BSI involves more fabrication steps and the potential yield issues that come with them... stacking layers even more so. "Stacked sensor" has more marketing spec-sheet clout than FSI/BSI in my opinion... even "partially stacked" gives marketing teams a useful spec-sheet tickbox.

On that basis alone, Canon may not have the volume needed to bring the cost of a higher-end sensor down enough to justify it in the R6 III, and would be content with the profitability of the existing model. Reading lines twice may also increase heat, but Sony has been good at managing heat and battery life in a smaller body for some time now.

Sony moving to DGO would mean implementations in future Canon bodies are likely, though.
 
How much difference do you realistically expect with BSI in a full frame sensor? My understanding is that the benefits of BSI are inversely related to pixel size: they are very significant for smartphone sensors with pixel sizes well under 2 µm, but there's much less (or even no meaningful) benefit with larger pixels. BSI was initially developed to enable higher pixel densities for small sensors. The observation that you can achieve similar DR with FSI and BSI full frame sensors is consistent with there being minimal benefit at these large pixel sizes. I've also read that BSI sensors are more expensive to produce, and if that's true it raises the question of why use them (other than marketing, which is certainly a valid reason).

It opens up other design criteria and allows Canon to eke out the remaining speed and efficiency its sensors can deliver. Keep in mind that Canon is doing 75 MP and 90 MP sensors right now. It also allows for more complex wiring and electronics that aren't possible with a front-side-illuminated sensor.

They are certainly more expensive: the precision with which the post-lithography tools have to etch and grind the layers off the back, and then bond on another supporting layer and so forth, is costly. But that's a relative cost; it may come to less than $50 per sensor, and less at scale (spitballing a rough figure). Also, the more Canon does this at scale, the cheaper stacked sensors will become.
 
I think you can read the sensor as many times as you want and then you reset it (with the reset transistor) before the next exposure. Am I wrong?

I think I was honestly confusing CCD with CMOS sensors in my head. Yes, it can be read more than once at different amplification levels.

After doing some more reading, I'm going to clean up that section today.

thanks!
 
Even though the mechanical-shutter results show similar dynamic range in Bill's analysis, one thing that stood out as much improved was dynamic range with the electronic shutter, where the EOS R6 Mark III shows nearly a full-stop increase. That said, when I looked at DPReview's results, they showed the EOS R6 Mark III as at best 1/3 to 1/2 stop better than the EOS R6 Mark II, and certainly not a full stop improved.

See full article...

With the R6 II, Canon did not use noise reduction (NR) at the low ISOs, as shown by those points being marked with circles on the Photons to Photos chart.

However, with the R6 III, Canon seems to be applying quite heavy NR at the low ISOs, as shown by the triangles on the Photons to Photos chart.

Considering how close the R6 II and R6 III are in DR from ISO 200 and up, it seems likely (not certain, but likely) that the "gain" in DR is due to the heavy application of NR at low ISOs on the new camera. I suspect that if there were a way to turn NR off, the R6 II and R6 III charts would closely track below ISO 200 as well.
 
How much difference do you realistically expect with BSI in a full frame sensor? My understanding is that the benefits of BSI are inversely related to pixel size: they are very significant for smartphone sensors with pixel sizes well under 2 µm, but there's much less (or even no meaningful) benefit with larger pixels. BSI was initially developed to enable higher pixel densities for small sensors. The observation that you can achieve similar DR with FSI and BSI full frame sensors is consistent with there being minimal benefit at these large pixel sizes. I've also read that BSI sensors are more expensive to produce, and if that's true it raises the question of why use them (other than marketing, which is certainly a valid reason).
I am strongly inclined to agree with you. There is also much we don't know (certainly I don't, and I have a fair bit of IC manufacturing experience from back in the day) about the details of BSI and stacked sensors. Most of the online descriptions of stacked sensors are just a regurgitation of the marketing piece Sony put out a few years ago. I can't find any good engineering information other than that BSI sensors are about 1.1 microns thick. Back-lapping a 300 mm wafer to a uniform 1.1 micron thickness sounds like a pretty good challenge. Clearly there needs to be something supporting the resulting device. Is it "glued" to a substrate before lapping, or is a huge layer of PSG built up on the "front" side before lapping?

Now to stacking. We have this scary-thin device that is over an inch long, and now we are going to attach it to several more "layers" of silicon. The attachment is some kind of solder or weld connection at many (thousands of) points across the chip, and the power levels in the layers are likely dissimilar. That leads to the likelihood of warping with such a large device. None of the online descriptions I can find discuss any of these issues, but surely Canon is taking them into consideration.

It seems to me that figuring out how to make a photosensitive layer on top of the circuitry (like Panny and Fuji were working on with the organic sensor) would make more sense. There has been some work mentioned using quantum dots in such a structure, but I haven't heard any huge success stories. This article suggests the big dogs are in the hunt: https://www.canonoutsideofauto.ca/2...volutionizing-camera-image-quality-heres-how/ . The detail is very thin and Canon is not specifically mentioned, but it's hard to believe they are not looking at this option. The next few years could be interesting.
 
Ugh, over the next year we're gonna hear nothing else from the Sony fanboys but "Well, my Sony has 1 stop more DynAmIc RaNgE. Canon lags 20 years behind."

We're back to 2018 again. If only Canon would use the dual data they already get from the dual-pixel sensor, since each sub-pixel captures a slightly different exposure, and there's software that can read both exposures and calculate a picture with higher dynamic range. Why this isn't done in-camera is a mystery to me.
I'm not a Sony fan. I've been using Canon for over twenty years, and I'm embarrassed to say that we're still lagging behind, not by one stop but by two, at ISO settings from 100 to 800. It's a huge gap. When will Canon finally do something for photographers and not filmmakers???
I don't need 40 fps or 20 fps; 10 or even 5 fps will suffice, but I want dynamic range similar to the competition's. There was so much hype before the release of the R6 Mark III, and what? Again, I'm so disappointed. I was waiting for this camera with high hopes, but it looks like I'll have to buy the old R5, which was never a leader, and nothing better has been released since its launch... :(
 
I'd like to see the difference. I think the "14-bit" is just a marketing point and doesn't do much in reality, since the DR is about 11 stops. And that much DR is already at the limit of what's visible.
My understanding is that 12-bit vs. 14-bit should not make a real difference in the image straight out of the camera. But it should have a significant impact when raising the shadows in post-production, as there are more "steps" in the blacks in a 14-bit file than in a 12-bit one.
So if you tend to boost shadows quite a lot, you shouldn't dismiss the 12-bit limitation of Canon's electronic shutter.
 
It opens up other design criteria and allows Canon to eke out the remaining speed and efficiency its sensors can deliver. Keep in mind that Canon is doing 75 MP and 90 MP sensors right now. It also allows for more complex wiring and electronics that aren't possible with a front-side-illuminated sensor.

They are certainly more expensive: the precision with which the post-lithography tools have to etch and grind the layers off the back, and then bond on another supporting layer and so forth, is costly. But that's a relative cost; it may come to less than $50 per sensor, and less at scale (spitballing a rough figure). Also, the more Canon does this at scale, the cheaper stacked sensors will become.
A 100 MP FF sensor would have pixels of ~3 µm, so probably starting to see benefit from BSI over FSI at that point. For example, Canon did put a BSI sensor in the PowerShot V1, and that 1.4-type sensor (2x crop factor) has pretty similar DR to the larger sensor in the R7 that has a similar 3.2 µm pixel size. Of course, if the goal is to increase readout speed then a stacked sensor will do that, and those have to be BSI regardless.

I suppose the best answer to the question of when Canon will switch to BSI is when the benefits outweigh the costs. It seems that was the case for the newly-developed 1.4-type sensor in the V1. It would probably make sense for a 90 MP FF sensor, maybe or maybe not for a 75 MP FF sensor (3.4 µm pixels).
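For anyone who wants to sanity-check those pitch figures, here's a quick sketch (assuming a 36 × 24 mm active area and square pixels, ignoring any non-imaging border):

```python
import math

def pixel_pitch_um(megapixels, width_mm=36.0, height_mm=24.0):
    """Approximate square-pixel pitch in microns for a given resolution."""
    area_um2 = (width_mm * 1e3) * (height_mm * 1e3)
    return math.sqrt(area_um2 / (megapixels * 1e6))

for mp in (75, 90, 100):
    print(f"{mp} MP full frame -> ~{pixel_pitch_um(mp):.1f} um pitch")
```

This reproduces the ~3.4 µm figure for 75 MP and ~2.9 µm for 100 MP quoted above.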
 
A 100 MP FF sensor would have pixels of ~3 µm, so probably starting to see benefit from BSI over FSI at that point. For example, Canon did put a BSI sensor in the PowerShot V1, and that 1.4-type sensor (2x crop factor) has pretty similar DR to the larger sensor in the R7 that has a similar 3.2 µm pixel size. Of course, if the goal is to increase readout speed then a stacked sensor will do that, and those have to be BSI regardless.

I suppose the best answer to the question of when Canon will switch to BSI is when the benefits outweigh the costs. It seems that was the case for the newly-developed 1.4-type sensor in the V1. It would probably make sense for a 90 MP FF sensor, maybe or maybe not for a 75 MP FF sensor (3.4 µm pixels).

Good to have you back. Thanks for the explanation.
 
My understanding is that 12-bit vs. 14-bit should not make a real difference in the image straight out of the camera. But it should have a significant impact when raising the shadows in post-production, as there are more "steps" in the blacks in a 14-bit file than in a 12-bit one.
So if you tend to boost shadows quite a lot, you shouldn't dismiss the 12-bit limitation of Canon's electronic shutter.
I understand that. But dynamic range is (simplified) the span between the darkest point and the brightest, so if there are 11 stops of DR, you don't need more than an 11-bit container. Yes, I know there could be more values in the middle, but that generally doesn't happen, as the values are just counts of photons and the encoding is pretty linear. And the DR measurements are already at the edge of visibility, so it would probably be even harder to see any difference between a 12-bit and a 14-bit container. The two extra bits are just random numbers = noise.
But I'd be happy if someone could show me otherwise.
 
And the DR measurements are already at the edge of visibility, so it would probably be even harder to see any difference between a 12-bit and a 14-bit container. The two extra bits are just random numbers = noise. But I'd be happy if someone could show me otherwise.
Compare the DR for the R5 with mechanical shutter (14-bit), when set to H+ for high-speed shooting (13-bit, HS in the plot below), and when set to electronic shutter (12-bit, ES below).

[Attachment: Photons to Photos DR chart for the R5 in MS, HS, and ES modes]
 
I understand that. But dynamic range is (simplified) the span between the darkest point and the brightest, so if there are 11 stops of DR, you don't need more than an 11-bit container. Yes, I know there could be more values in the middle, but that generally doesn't happen, as the values are just counts of photons and the encoding is pretty linear. And the DR measurements are already at the edge of visibility, so it would probably be even harder to see any difference between a 12-bit and a 14-bit container. The two extra bits are just random numbers = noise.
But I'd be happy if someone could show me otherwise.
I'm confused by this response. If I produce a greyscale from black to white and quantise it with a fixed number of points, say 4-bit, so 16 points on that scale, then one point is black and one is white (the minimum and maximum values), and I have 14 points of grey in between. Every value between these points will be rounded up or down to the nearest point, which means I'm losing that information in the digital conversion.
Why would increasing the number of points to round up or down to produce random numbers?
As far as I know, the signal that comes from the photodiode is analog, so there's basically infinite information available between black and white.
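That 4-bit greyscale example can be tried directly. This sketch (plain Python, nothing camera-specific) quantises a 0..1 ramp to 16 evenly spaced levels, measures the worst-case rounding error, then repeats at 14 bits:

```python
def quantize(x, bits):
    """Snap x in [0, 1] to the nearest of 2**bits evenly spaced levels."""
    steps = 2 ** bits - 1          # 4 bits -> 15 steps, 16 code points
    return round(x * steps) / steps

ramp = [i / 1000 for i in range(1001)]
for bits in (4, 14):
    worst = max(abs(quantize(x, bits) - x) for x in ramp)
    print(f"{bits}-bit worst-case rounding error on the ramp: {worst:.6f}")
```

More points means each sample rounds a shorter distance, so the quantisation error shrinks; whether you can ever see that depends on whether the error is larger than the sensor's own noise, which is the other side of this argument.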
 
As far as I know, the signal that comes from the photodiode is analog, so there's basically infinite information available between black and white.
Yes, a photodiode is analog, and the information between black and white can be quantized into any number of discrete levels. The point being made is that the relevant definition of dynamic range for photography is the difference between black (the noise floor) and white (the full well capacity of a pixel). At the pixel level, that measurement is the 'engineering DR'; at the image level, it's the 'photographic DR'. The former is determined by pixel size (and the underlying technology), the latter mainly by sensor area (and the underlying technology).
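As a hedged sketch of how engineering DR falls out of those two numbers (the electron counts below are illustrative, not measured values for any particular sensor):

```python
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    """Stops between the read-noise floor and full well capacity."""
    return math.log2(full_well_e / read_noise_e)

# Illustrative pixels: a deep-well, low-noise pixel vs a smaller one.
print(round(engineering_dr_stops(60000, 4), 1), "stops")
print(round(engineering_dr_stops(15000, 3), 1), "stops")
```

Bit depth then caps how much of that range a linear raw file can actually carry, which loops back to the 12-bit vs 14-bit discussion above.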
 
Compare the DR for the R5 with mechanical shutter (14-bit), when set to H+ for high-speed shooting (13-bit, HS in the plot below), and when set to electronic shutter (12-bit, ES below).

A good reminder for me, as I have been using MS for 14-bit but am generally shooting above ISO 800.
ES for indoor sports in the future, then, since there's no difference in dynamic range there.
:)
 
Considering Canon pulls this all off with front-side illumination, I think that's pretty good. Now the question is: why and when are they going to move to BSI? They obviously have it, "in theory".
Stacked sensors are always BSI sensors, due to the way stacked sensors work. So Canon has the ability to put BSI sensors into everything; they just choose not to.
 