New information about the upcoming Canon EOS R5C [CR3]

RunAndGun

EOS RP
CR Pro
Dec 16, 2011
481
168
You have seen the new Ronin 4D, right?
And it was designed from the outset with an internal ND system. The R5 was not. Where is the extra space they would need to slide the filters in and out? They would have to make the body bigger to accommodate the system.
 
  • Like
Reactions: 1 user

danivar

I'm New Here
Feb 14, 2021
11
8
If they can put it in the C70, then they can make it work on all RF mount bodies. The distance from the lens to the sensor doesn’t change.
1. The R5 body is smaller than the C70.
2. The R5 has a mechanical shutter and IBIS - both of which are incompatible with the C70's ND-filter system.
 
Last edited:

danivar

I'm New Here
Feb 14, 2021
11
8
Canon's sensor patent from two years ago that achieves this effect didn't seem to need more physical space. I've detailed it twice already in the last couple of days; check my recent posts for more info.
That's very interesting. I read your previous post explaining the details of the patent.

It seems promising, and I look forward to seeing it in practice. However, I doubt it will make its debut in the upcoming R5C, as I would expect it to use the same sensor as the current R5. I would be very excited to be proven wrong on this, though.
 
  • Like
Reactions: 1 user

Finn

EOS M6 Mark II
Mar 6, 2021
83
60
Dynamic range does not change with bit depth. The sensor always has the same dynamic range. All changing the bit depth does is change how many discrete levels exist between the brightest and darkest pixel levels.
Your last sentence is almost the exact definition of dynamic range.
The bit depth of the A/D on the sensor greatly impacts the dynamic range of said sensor.
 

neuroanatomist

I post too Much on Here!!
CR Pro
Jul 21, 2010
27,547
7,322
Your last sentence is almost the exact definition of dynamic range.
The bit depth of the A/D on the sensor greatly impacts the dynamic range of said sensor.
The definition of dynamic range is the ratio of the brightest to the dimmest intensities that can be captured (the latter being slightly above the noise floor). Bit depth affects the quantization of that range.

The commonly used analogy is a staircase. Dynamic range is the height of the staircase, bit depth is the number of steps. If you increase the number of steps from 12 to 14, each step is smaller but the distance between the floors doesn’t change.
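To put numbers on the staircase analogy, here's a quick Python sketch (my own toy illustration, not from this thread; the function names are made up). The full analog range stays fixed; only the step size changes with bit depth.

```python
# Toy model: quantize the same 0.0-1.0 analog range at different bit depths.
# The range (staircase height) is fixed; only the step size changes.

def quantize(signal, bits):
    """Map a 0.0-1.0 analog value to an integer code at the given bit depth."""
    levels = 2 ** bits
    return min(int(signal * levels), levels - 1)

def step_size(bits):
    """Fraction of the full range covered by one digital step."""
    return 1.0 / (2 ** bits)

print(step_size(12))  # one step out of 4096 levels
print(step_size(14))  # one step out of 16384 levels: smaller steps, same range
```

At either bit depth, a full-scale signal still maps to the top code and zero still maps to the bottom code; the endpoints (the "floors") never move.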
 
  • Like
  • Love
Reactions: 6 users

Doug7131

EOS 7D
Jul 21, 2019
45
162
Your last sentence is almost the exact definition of dynamic range.
The bit depth of the A/D on the sensor greatly impacts the dynamic range of said sensor.
As neuroanatomist said, bit depth is how many steps you have between the brightest and darkest pixels. You could get the full dynamic range of any sensor using just 1 bit. Another analogy would be a ruler: the length of the ruler is the dynamic range, and the bit depth is the markings on the ruler. Increasing bit depth is like adding mm increments to the ruler instead of cm. It makes the ruler more precise, but it does not change the length of the ruler.
Increasing bit depth just allows the camera to more precisely match the analog voltage from the sensor to a digital value.
 

Finn

EOS M6 Mark II
Mar 6, 2021
83
60
The definition of dynamic range is the ratio of the brightest to the dimmest intensities that can be captured (the latter being slightly above the noise floor). Bit depth affects the quantization of that range.

The commonly used analogy is a staircase. Dynamic range is the height of the staircase, bit depth is the number of steps. If you increase the number of steps from 12 to 14, each step is smaller but the distance between the floors doesn’t change.
Calculations for DR in A/D circuits (measured in dBs) directly factor in bit depth.
 

dirtyvu

EOS 90D
Jan 7, 2019
153
115
CineD does dynamic range tests for a lot of popular cameras. Here is the R5 test for ProRes RAW, which should allow the most dynamic range possible from the camera.
I won't go into the testing methodology. According to their tests, if you go with internal recording it has 10.8 stops.
 

neuroanatomist

I post too Much on Here!!
CR Pro
Jul 21, 2010
27,547
7,322
Calculations for DR in A/D circuits (measured in dBs) directly factor in bit depth.
That doesn’t change the definition of dynamic range, and the ‘calculation of DR’ that you mention is just the quantization step. The dynamic range is determined by the analog wells – the height of the staircase. The ADC then splits that into bits – the steps. The bit depth determines the number of those steps, regardless of the units used (stops, bits or dB), just like in the analogy by @Doug7131 you can divide your ruler into millimeters, centimeters or inches, but the length doesn’t change.

What that means in practice is that the ADC assigns the ‘brightest’ signal to the highest digital value, the ‘dimmest’ signal to the lowest digital value, and distributes the intervening signals across the digital spread.

If a scene comprises 18 EV between the detail in the sunlit clouds and the troll sitting inside the mouth of his dark cave, and the sensor captures 12 stops of dynamic range, 6 EV will be lost at the end(s). If you then run those 12 analog stops through a 20-bit ADC, do you get those 6 lost stops back, plus an extra two bonus stops? No.
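A hedged sketch of that last paragraph in Python (the helper name and the EV scale centred on middle grey are my own assumptions): clipping happens at the analog stage, before quantization, so a deeper ADC cannot bring the lost stops back.

```python
# Toy model: a sensor with 12 stops of DR clips an 18 EV scene.
# EV values are relative to middle grey (0.0).

def capture(scene_ev, sensor_stops, midpoint=0.0):
    """Clip a scene luminance (in EV) to the sensor's capture range."""
    lo = midpoint - sensor_stops / 2
    hi = midpoint + sensor_stops / 2
    return max(lo, min(hi, scene_ev))

clouds, cave_troll = +9.0, -9.0          # 18 EV apart
print(capture(clouds, 12))               # clipped to +6.0: highlights blown
print(capture(cave_troll, 12))           # clipped to -6.0: shadows crushed
# A 20-bit ADC would quantize [-6, +6] more finely,
# but the clipped values stay clipped.
```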
 
  • Like
Reactions: 1 user

LogicExtremist

Lux pictor
Sep 26, 2021
501
348
Looks like people are having a hard time getting their head around the concept of dynamic range!

There's a good explanation here - https://www.bhphotovideo.com/explora/photography/tips-and-solutions/dynamic-range-explained

The simple way to explain bit depth vs dynamic range would be as follows.

Imagine we have a tonal gradient from absolute black to absolute white, with shades of grey in between.

b-to-wh.jpg


The smoothness of the steps between absolute black and absolute white, in other words the number of shades of grey, is determined by the bit depth. If we represent the image with 8 bits, we have 2^8 = 2x2x2x2x2x2x2x2 = 256 shades of grey. With 16 bits, we have 65,536 shades of grey, which gives us a smoother transition with less obvious steps or banding.

A camera sensor won't be able to capture the whole tonal range from absolute black to absolute white; it will only be able to see so far into the dark greys before it interprets them as black, and only so far into the light greys before it interprets them as white. The range of actual exposure it can capture before losing detail in the whites or blacks is the dynamic range of the sensor, measured in exposure values (EV): a measurement of the ratio of the brightest to the darkest values it can record.

If we look once again at our gradient, the limited dynamic range of a sensor only captures really dark grey to really light grey, so it's only capturing a grey-to-grey gradient, as shown below, and not the whole absolute black to absolute white gradient.

gr-to-gr2.jpg


When the sensor digitally encodes what it sees, it interprets either end of the grey gradient as black or white respectively, because it can't discern between really dark grey and absolute black, or between really pale grey and absolute white. Essentially the ends are cut off, because every tone past a certain point is just lost in pure black or pure white.

This truncated or reduced tonal range that the sensor captures can be recorded in coarser or smoother steps, from the darkest to the lightest tones by the bit depth we use. More bit depth just gives less banding, and a smoother transition between the limited tones that were captured by the sensor.

If a scene has 18 EV of light between the darkest and brightest parts, and we use a 12 EV camera, we lose 6 EV of light: 3 EV in the shadows and 3 EV in the highlights if we expose in the middle.

We can expose for maximum details in the shadows by over-exposing. We're still capturing 12 EV of light, but this exposure will capture all the dark end, and we'll lose 6EV of the bright highlights instead, so highlight details will be blown out.

Alternately, we can expose for maximum details in the highlights by under-exposing. Once again, we're still capturing 12 EV of light, but this exposure will capture all the bright end, and we'll lose 6EV of the dark shadows instead, so shadow details will be crushed.

With HDR (high dynamic range) photos, we can take all three of these photos, select the best parts of each, and combine them to extend the dynamic range to show detail in both the shadows and highlights. That's what smartphones do, as part of their computational photography, to get skies and shadows correctly exposed in the same scene. That strategy only works with static objects though, because images don't neatly overlay when they move! Another strategy is to use image compositing in Photoshop, taking the sky from one photo and the foreground from another, and pasting them onto the middle image of a landscape photo, for example.

From this example it should be clear that bit depth is a variable value: it's essentially the digital resolution we record our data at, and it determines how smooth the transitions are between tones. It's not the same thing as the dynamic range of a scene, which can be represented by absolute values of light measured in exposure values (EV), where each step in EV is a doubling of light, an analogue phenomenon. A camera sensor's dynamic range represents only a portion of the scene's actual dynamic range when there are very bright and very dark elements in a scene. Increasing bit depth will create smoother transitions through the tones captured within the dynamic range of the camera. Using a sensor with more dynamic range is the only way to get more dynamic range from a camera.
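The bracketing arithmetic above can be sketched in Python (a toy model of my own; real HDR merging weights pixels, it doesn't just union EV windows). Shifting exposure slides the 12 EV capture window up or down the scene's 18 EV range, and the three brackets together cover all of it.

```python
# Toy model: a 12-stop sensor bracketed at -3 / 0 / +3 EV.
# EV values are relative to middle grey at the metered exposure.

def window(sensor_stops, exposure_shift_ev):
    """EV range captured at a given exposure shift.
    Overexposing (+3 EV) slides the window toward the shadows;
    underexposing (-3 EV) slides it toward the highlights."""
    lo = -sensor_stops / 2 - exposure_shift_ev
    hi = sensor_stops / 2 - exposure_shift_ev
    return lo, hi

brackets = [window(12, shift) for shift in (-3, 0, +3)]
print(brackets)  # [(-3.0, 9.0), (-6.0, 6.0), (-9.0, 3.0)]

covered = max(hi for _, hi in brackets) - min(lo for lo, _ in brackets)
print(covered)   # 18.0: the merged brackets span the full scene
```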

Wow, hope that explanation was worth it! :)
 
Last edited:
  • Like
Reactions: 6 users

sanj

EOS R5
Jan 22, 2012
4,154
1,027
Looks like people are having a hard time getting their head around the concept of dynamic range!
...
Wow, hope that explanation was worth it! :)
THANK YOU SIR!
 
  • Like
Reactions: 1 user

Gazwas

EOS RP
Sep 3, 2018
253
224
So after reading all the above comments, linked reviews and the excellent DR explanation above, do we think that if Canon simply keeps the same sensor from the R5 but lets 8K run for longer, it would be a total waste of time, as it has none of the qualities we look for in a cinema camera, like super-flat log profiles, low shadow noise and smooth highlight roll-off?
 

danivar

I'm New Here
Feb 14, 2021
11
8
So after reading all the above comments, linked reviews and the excellent DR explanation above, do we think that if Canon simply keeps the same sensor from the R5 but lets 8K run for longer, it would be a total waste of time, as it has none of the qualities we look for in a cinema camera, like super-flat log profiles, low shadow noise and smooth highlight roll-off?
I think that's the most likely scenario by some margin, unfortunately. Canon will most likely put their premium video sensors in their true cinema cameras first, not debut such a sensor in a stills body.

That said, while the R5 sensor doesn't have the best DR on the market, it still produces excellent-quality video.
 
  • Like
Reactions: 4 users

jvillain

EOS RP
Sep 29, 2018
324
269
Do you have any examples of such electronic ND that requires no physical space?

What is an 'electronic equivalent' to ND, sir?
I believe the FX6 fits the bill. I don't think we are too far away from continuous digital ND being standard on any self-respecting "cinema" camera, just as autofocus is starting to spread. Will we be able to call the R5C a real cinema camera? Until they release it, I'll reserve judgement. The addition of proper timecode is an indication that they may want to make it one.

Looks like people are having a hard time getting their head around the concept of dynamic range!
...
Wow, hope that explanation was worth it! :)
Nice write-up. Thanks for putting in the effort. The images don't do what you are hoping, though, because everyone is looking at them on 8- or 10-bit monitors, so they both look the same. The display side still lags the capture side as far as DR and steps are concerned.

I am sure you know, but for those that don't: when you pull those images into an NLE, they usually get imported into an even larger color and gamma space. You can then pull down the highlights and lift the shadows to fit all that dynamic range into what will work on a display, and render that into a file format with a smaller color space and gamut that your display can handle, like Rec.709, or Rec.2020 if the display supports it.
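As a rough illustration of that grading step (my own toy linear model; real NLEs use curves and LUTs, not a uniform squeeze), values from a wide capture range are compressed so highlights come down and shadows come up relative to the display's narrower range:

```python
# Toy model: compress a wide scene-referred EV range into a narrower
# display-referred range before rendering to something like Rec.709.

def fit_to_display(value_ev, source_range_ev=14.0, display_range_ev=8.0):
    """Linearly rescale a value (EV relative to middle grey) from the
    capture range into the display range. Real grading uses tone curves;
    this just shows highlights coming down and shadows coming up."""
    return value_ev * (display_range_ev / source_range_ev)

print(fit_to_display(+7.0))   # bright highlight pulled down into display range
print(fit_to_display(-7.0))   # deep shadow lifted above display black
```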
 

danivar

I'm New Here
Feb 14, 2021
11
8
I believe the FX6 fits the bill. I don't think we are to far away from continuous digital ND being standard on any self respecting "cinema" camera just as auto focus is starting to spread. Will we be able to call the R5C a real cinema camera? Until they release it I reserve judgement. The addition of proper time code is an indication that they may want to make it that.

The FX6 has a physical ND filter that folds down in front of the sensor. It's electronically variable in strength, but with a minimum of 2 stops.

And like other ND systems, the one in the FX6 is also incompatible with having IBIS and/or a mechanical curtain shutter.

To be able to fit ND filters in the rumoured R5C something along the lines of what SwissFrank describes in a previous post would have to be invented.
 

neuroanatomist

I post too Much on Here!!
CR Pro
Jul 21, 2010
27,547
7,322
Thanks for putting in the effort. The images don't do what you are hoping, though, because everyone is looking at them on 8- or 10-bit monitors, so they both look the same.
Either my iPhone has a display of >10 bits, or you’re not looking closely enough. (It’s the latter, of course.)

Look more carefully, and you’ll see that the second image has less black on the left and less white on the right, presumably because @LogicExtremist trimmed the ends of the first image to simulate the sensor DR being less than the scene DR. Flipping back-and-forth between the images using the site’s image viewer makes it even more obvious.
 
  • Like
Reactions: 1 user

DBounce

Canon Eos R3
May 3, 2016
475
515
And it was designed from the outset with an internal ND system. The R5 was not. Where is the extra space they would need to slide the filters in and out? They would have to make the body bigger to accommodate the system.
So you believe Canon was not farsighted enough to see the benefits of including an ND system on a mirrorless body? I think you underestimate Canon.