Canon is thinking about more lenses like the RF 600mm f/11 STM and RF 800mm f/11 STM

Quarkcharmed

EOS R5
Feb 14, 2018
1,223
1,078
Australia
www.michaelborisenko.com
You don't see anything in a raw file until it has been processed. What you see then is only one of a near infinite number of possible interpretations legitimately derived from the information in the raw file. There's no such thing as an unprocessed or unedited raw file displayed on a screen. If you're not selecting how the raw data is being converted to a viewable image, then whoever wrote the default development profile has decided it for you.

That's basically exactly what I said, but it feels like you disagree or misinterpreted my message?

To maximize image quality, one should maximize the amount of light falling on the sensor until just before the highlights begin to clip.
To maximise image quality one should maximise the information in the raw file, and maximising the exposure is one of the tools.
If it isn't possible to do that at ISO 100 (because, for example, the subject is moving and the light is dim), then the best image quality for a given amount of light entering the camera is the *highest* ISO that doesn't allow clipping!

Generally it's best to use ISO values where the camera applies analog gain. For the Canon R5 those are ISO 100 (base) and 400.
That's not practical for action/sports, though.
 

Joules

doom
CR Pro
Jul 16, 2017
1,563
1,881
Hamburg, Germany
You mentioned in your above reply that the amplification is done before ADC - I don't doubt you (as I am no expert here), but is that something that is mentioned in some technical document for the latest Canon sensors?
As stated in the equivalency thread, what I wrote there is based on my understanding and not a statement of fact. That understanding is based on various sources of information found on the internet and also my own experience and experiments. Therefore I am glad to be confronted with a different and perhaps better explanation for the effects I see in these experiences and experiments.

I unfortunately can't point you to a technical document that states as a fact that Canon sensors apply analog amplification to the image. You certainly can find information supporting this hypothesis, though. Here is an abstract technical view of a sensor clearly showing this, and here is a stack exchange post compiling a few more in-depth links on the subject.

If no form of analogue amplification took place and it was all handled digitally, I don't see how these two facts could be explained: a) a high ISO image has less dynamic range, and b) you can't replicate the image quality of a high ISO image by taking a low ISO image on my 80D and brightening it in post.

Let's say we have a sensor with 2 pixels, and an analogue-to-digital converter with 8-bit depth, so the numbers we can represent are integers between 0 and 255. Let's say our pixels are sized so that they create a voltage of 255 when fully saturated and 0 when completely free of charge. That means we don't have to bother with units or math; there's just a 1:1 relation between the physical signal we are measuring and the digital scale we measure it on. Let's say we image a scene with one dark half and one medium-bright half, so that our pixel voltages are 8 and 128.

If ISO were just a digital multiplication, our ADC would spit out (8, 128) as output. Now, to explain the loss of dynamic range, the multiplication would have to be applied before saving these numbers to the RAW file. Say we want the dark section of our image to look like a midtone, so we need to multiply by 16 (raise by 4 stops). If ISO 100 is our base ISO where the multiplier is 1, we are now at ISO 1600 and our image values are (8*16, 128*16) = (128, 2048). But due to our 8-bit depth we can't represent numbers larger than 255, so the bright part gets clipped and our image is actually (128, 255).

That would explain the loss of DR, but not why it is necessary. Why actually apply this digital multiplication? If it's a RAW file anyway, why not simply store the ISO setting (multiplier) in the EXIF and allow it to be changed in post without throwing away any data during capture, just like white balance for example? After all, high ISO shots throw away highlight data yet don't save any drive space, right? So what's the benefit of doing it this way? Also: where is the noise I'm seeing coming from? Your multiplication hardware (or algorithm) is seriously broken if it introduces noise into integer multiplications.
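The clipping arithmetic in that example can be sketched in a few lines (a toy model of the described 8-bit scenario, not a claim about Canon's actual pipeline):

```python
# Two ADC outputs at base ISO, a purely digital 16x (4-stop) push,
# and clipping at the 8-bit ceiling - the scenario described above.
def digital_gain(adc_values, gain, max_value=255):
    """Multiply digitized values and clip to the representable range."""
    return [min(v * gain, max_value) for v in adc_values]

pixels = [8, 128]                  # dark half, medium half
print(digital_gain(pixels, 16))    # [128, 255] - the bright pixel clips
```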

What I believe to be the case is that there's circuitry in front of or within the ADC step that handles the ISO multiplication in hardware on the actual voltage, rather than digitally after the fact. In that sense, you could argue that if our example were a bit less lucky and the pixels output a range from 0 to ~16, converting that to numbers ranging from 0 to 255 would itself be amplification. More likely, I believe the ADCs in use can't accurately sample input voltages as low as small pixels provide, and therefore the amplification is applied before or during the sampling. That's just what makes sense to me, so if somebody can dispute it, go right ahead please!
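That hypothesis can be illustrated with a toy comparison, assuming pixels that only swing over 0..16 on an ADC scale built for 0..255: gain applied before digitization keeps differences that digitize-then-multiply throws away.

```python
# Toy model: amplify-before-ADC vs digitize-then-multiply.
def quantize(voltage, full_scale=255.0, levels=256):
    """Round an analog level onto the ADC's integer code scale."""
    step = full_scale / (levels - 1)
    return round(voltage / step)

v_a, v_b = 3.2, 3.4                    # two distinct small pixel voltages
# Digitize first, multiply after: both collapse onto the same code.
print(quantize(v_a) * 16, quantize(v_b) * 16)    # 48 48
# Amplify (analog, 16x) first, then digitize: they stay distinct.
print(quantize(v_a * 16), quantize(v_b * 16))    # 51 54
```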

As for the jump you see in sensors with "dual native ISOs", I understand that to be the result of using two different amplifier circuits: one that samples the high voltages coming from the pixels really well by only slightly amplifying them (low gain), and one better suited to low pixel voltages that handles greater amplification well (high gain). You use the low-gain one for smaller ISOs and the high-gain one for higher ISOs; at the point where the jump appears, the switch happens. I understand that you can't expect the same level of quality from just one circuit, because real-world components don't behave as perfectly linearly and consistently as that would require.

This is mainly how I explain to myself the effect in Bill Claff's read noise chart. From his notes:
"The shape of the curve can tell you something about the amplifier circuitry of the camera.
[...]
Curved curves [...] show evidence of being dominated by ADC read noise.
Curves with a sharp drop in the analog range [...] show evidence of the use of dual conversion gain.
Quite a few cameras stop analog gain before reaching the "Hi" ISO values"
(emphasis and shortening by me)

I interpret the charts and comments like this: if no noise were added through the analog gain, the curve would look perfectly linear, as a higher ISO only multiplies the read noise already coming from the pixel readout circuit. This is not the case for the older Canon sensors: noise is added to the signal in the amplification process, and the amount of this noise is not linearly related to the ISO value. As the chart shows, Canon has improved this aspect of the noise between generations, but with the R5 they simply use two different circuits, each used only in the range where it behaves linearly and where the noise it adds is small compared to other sources of noise.
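A simplified noise model reproduces that curve shape (my own sketch, with made-up noise figures, not derived from Bill Claff's actual data):

```python
import math

# Read noise measured after the ADC, referred back to the pixel:
#   sqrt(pixel_noise^2 + (downstream_noise / analog_gain)^2)
# Raising the analog gain shrinks the downstream/ADC contribution.
def input_referred_noise(analog_gain, pixel_noise=1.0, downstream_noise=8.0):
    return math.sqrt(pixel_noise ** 2 + (downstream_noise / analog_gain) ** 2)

for gain in (1, 2, 4, 8, 16):
    print(gain, round(input_referred_noise(gain), 2))
# The curve flattens toward the pixel's own noise floor as gain rises,
# matching the shape of the ISO ranges where analog gain is in use.
```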

Unfortunately, the material I have read about this either goes into such technical depth that, with my time and background, I currently can't digest it properly, or is so surface-level (and partially wrong) that I don't regard it as proper material to quote for or against my understanding.
 
Last edited:

usern4cr

R5
CR Pro
Sep 2, 2018
964
1,308
Kentucky, USA
Thanks, Joules. I guess only the engineers inside Canon or other inside professionals will know exactly what's going on. But your insights into what the charts show do make sense. I'll assume that dual circuitry (below/above ISO 400) is used, and that within those circuits there is ISO-based amplification before the ADC, i.e. before each single colour pixel value is stored to the raw file. This would make the most sense.

I normally like to make a quick guess as to how to choose settings for a shot, so I'll usually do this:
* First choose my f-number for the desired DOF (often wide open) at ISO 100 and let the camera set the shutter speed.
* If that speed is too slow (allowing visible subject motion blur), up the ISO to 400. If still too slow, I'll raise the ISO further and stop when the speed is acceptable.
* If it's so bright that the required speed is faster than the camera (R5) can do (1/8000 s), then don't use ISO L (50), but rather stop down from wide open (e.g. f/1.2) to f/1.4 or f/1.8 etc. as needed to get back down to the maximum speed.
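The decision order above could be sketched roughly like this; the metering function, ISO cap, speed limits, and aperture ladder are illustrative placeholders, not actual R5 behaviour:

```python
# Sketch of the workflow: wide open at base ISO, raise ISO until the
# shutter speed is acceptable, stop down if even 1/8000 s is too slow.
def choose_settings(metered_time, longest=1/250, shortest=1/8000):
    """metered_time(aperture, iso) -> exposure time (s) the camera meters."""
    aperture, iso = 1.2, 100                      # wide open at base ISO
    t = metered_time(aperture, iso)
    while t > longest and iso < 51200:            # too slow: raise ISO
        iso *= 2
        t = metered_time(aperture, iso)
    while t < shortest:                           # too bright: stop down
        aperture = round(aperture * 2 ** 0.5, 1)  # one stop smaller
        t = metered_time(aperture, iso)
    return aperture, iso

# Example: a dim scene metering 1/30 s wide open at ISO 100.
dim = lambda ap, iso: (1 / 30) * (100 / iso) * (ap / 1.2) ** 2
print(choose_settings(dim))    # (1.2, 1600)
```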

I'll try to refrain from saying what "does" or "doesn't" happen with the R5 sensor exposure details, as I'm just making an educated guess based on the limited (and insufficient) data available.

Thanks again for your detailed post!
 
Last edited:

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
3,428
2,009
*EDIT* - I'm trying to get clarification on this issue (thanks Joules), so what I've mentioned below may not be correct (well, that wouldn't be the first time :ROFLMAO: ! )

I'm not following you here. Using the same shutter speed and f-number for both (which does not cause highlight clipping) will capture the same number of photons for both images. If you shoot RAW, the ISO 100 picture will display darker when viewed later (or in post), but adding +4 EV in post will give the same brightness. For non-clipped RAW images, changing ISO in post (if possible) and applying an EV offset have the same effect; they're just software values.

Now, if you shoot jpg then the encoding of the image in camera memory will be a cause of difference due to the ISO 100 encoding with 4 fewer bits (hence a darker image). But that's not caused by the ISO setting itself, but by improper encoding range of the jpg image which drops the low 4 bits of the image instead of saving it.

So the ISO is not inherently causing noise. Jpg (but not RAW) encoding is causing noise.
If I'm not understanding something, please "illuminate" me! ;)

The difference is that when the amplification is done at the sensor, the noise added by the path between the amplifier and the ADC is not also amplified. When you wait and multiply the digital numbers, you also amplify that additional noise added between the analog amplifier and the ADC.

You also lose the smaller steps between each value. If you take the numbers from the ADC when shot at ISO 100 amplification and multiply them up three stops to ISO 800, that's a multiplication factor of 8x, so all of your new digital values will be multiples of 8, eight steps apart: 0, 8, 16, 24, ... 16368, 16376. Every value in between (1-7, 9-15, and so on) is unavailable. Your tonal gradations will be much rougher, equivalent to an 11-bit ADC instead of a 14-bit ADC.
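That step arithmetic checks out numerically (a toy count, ignoring black-level offsets real raw files carry):

```python
# Pushing 14-bit codes up by 8x leaves only every eighth value
# representable - 2048 distinct levels, i.e. 11 bits of tone.
ADC_MAX = 2 ** 14 - 1                    # 16383
codes = range(ADC_MAX // 8 + 1)          # codes a 3-stops-darker exposure yields
pushed = sorted({min(c * 8, ADC_MAX) for c in codes})
print(len(pushed), pushed[:4])           # 2048 [0, 8, 16, 24]
```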
 

usern4cr

R5
CR Pro
Sep 2, 2018
964
1,308
Kentucky, USA
The difference is that when the amplification is done at the sensor, the noise added by the path between the amplifier and the ADC is not also amplified. When you wait and multiply the digital numbers, you also amplify that additional noise added between the analog amplifier and the ADC.

You also lose the smaller steps between each value. If you take the numbers from the ADC when shot at ISO 100 amplification and multiply them up three stops to ISO 800, that's a multiplication factor of 8x, so all of your new digital values will be multiples of 8, eight steps apart: 0, 8, 16, 24, ... 16368, 16376. Every value in between (1-7, 9-15, and so on) is unavailable. Your tonal gradations will be much rougher, equivalent to an 11-bit ADC instead of a 14-bit ADC.
I would have to see the actual raw data to know whether what you said (about 8x resulting in stored data exactly 8 steps apart throughout the entire range) is true. If I had to guess (which is why I'm here :ROFLMAO: ) I would not expect that to be the case. I would expect a 14-bit ADC to have slop in its readings (no matter what the manufacturer might claim), and I would expect to see an almost continuous range of values with a bell-shaped peak at each of the steps you mention. If you don't amplify the signal before the ADC, this slop will distort the stored values, and amplification afterwards in display/post would then amplify that slop, whereas amplification before the ADC would mean the ADC slop is not amplified, giving as accurate a signal as the electronics allow. I'm also guessing that what I'm suggesting is probably the very reason they chose to amplify the signal before the ADC.

Again, I'm guessing that's how it works. I've never seen the actual raw data from a photo of a proper test pattern at different exposures. In fact, has anyone ever published the exact format in which the camera stores the uncompressed .CR3 dual-pixel data, so that a programmer (which I am) could write code to decode the picture themselves and thus run an actual test?
 
Last edited:

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
3,428
2,009
I would have to see the actual raw data to know whether what you said (about 8x resulting in exactly 8 steps apart throughout the entire range) is true. If I had to guess (which is why I'm here :ROFLMAO: ) I would not expect that to be the case. I would expect a 14-bit ADC to have slop in its readings (no matter what the manufacturer might claim), and I would expect to see an almost continuous range of values with a bell-shaped peak at each of the steps you mention. If you don't amplify the signal before the ADC, this slop will distort the stored values, and amplification afterwards in display/post would then amplify that slop, whereas amplification before the ADC would mean the ADC slop is not amplified, giving as accurate a signal as the electronics allow.

Again, I'm guessing that's how it works. I've never seen the actual raw data from a photo of a proper test pattern at different exposures. In fact, has anyone ever published the exact format in which the camera stores the dual-pixel data, so that a programmer (which I am) could write code to decode the picture themselves and thus run an actual test?

Digital is digital. There are no fractional steps. Multiplying a string of integers by 8 results in integers that are all at least 8 steps apart.

Analog is continuous, there are no steps (larger than the charge created by a single electron) at all until it is digitized.

Though I do not have time right now to go hunt it down, I have seen such numbers. There is definitely the "stairstep" effect. Keep in mind that pretty much all raw data has curves applied to it before exporting as a viewable image at lower bit depth. Even if the end product is 8-bit, stretching 11-bit equivalent steps between values will cause more banding than stretching 14-bit steps before converting to an 8-bit format for output.
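The banding effect from stretching coarser data can be checked numerically (a toy curve and scaling, not an actual raw converter):

```python
# Apply a strong shadow push, then quantize to 8 bits. Codes spaced
# 8 apart (11-bit equivalent) produce far fewer distinct output tones
# than full 14-bit codes over the same shadow range.
def to_8bit(code14, stretch=16):
    boosted = min(code14 * stretch, 16383)   # crude 4-stop shadow lift
    return boosted * 255 // 16383            # scale 14-bit -> 8-bit

full   = {to_8bit(c) for c in range(0, 1024)}      # every 14-bit shadow code
coarse = {to_8bit(c) for c in range(0, 1024, 8)}   # same range, 8-code gaps
print(len(full), len(coarse))   # 255 128 - the coarse data bands visibly
```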

Your guess is wrong.
 

usern4cr

R5
CR Pro
Sep 2, 2018
964
1,308
Kentucky, USA
Digital is digital. There are no fractional steps. Multiplying a string of integers by 8 results in integers that are all at least 8 steps apart.

Analog is continuous, there are no steps (larger than the charge created by a single electron) at all until it is digitized.

Though I do not have time right now to go hunt it down, I have seen such numbers. There is definitely the "stairstep" effect. Keep in mind that pretty much all raw data has curves applied to it before exporting as a viewable image at lower bit depth. Even if the end product is 8-bit, stretching 11-bit equivalent steps between values will cause more banding than stretching 14-bit steps before converting to an 8-bit format for output.

Your guess is wrong.
I invite you to provide proof from the manufacturer that they do, or do not, provide the amplification before the ADC and storage to the .CR3 uncompressed file. Until then, we're both guessing.
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
3,428
2,009
I invite you to provide proof from the manufacturer that they do, or do not, provide the amplification before the ADC and storage to the .CR3 uncompressed file. Until then, we're both guessing.

The reason it is called analog amplification is that it is done prior to the Analog-to-Digital Converter. There are tens of thousands of sources on the internet for you to peruse that all talk about this. Try googling "digital camera analog amplification".
 

usern4cr

R5
CR Pro
Sep 2, 2018
964
1,308
Kentucky, USA
The reason it is called analog amplification is that it is done prior to the Analog-to-Digital Converter. There are tens of thousands of sources on the internet for you to peruse that all talk about this. Try googling "digital camera analog amplification".
It doesn't matter to me what they do; I'm just curious what the manufacturer officially claims they actually do. If you can get an answer from the manufacturer on this subject, I'd love to hear it. I'm done posting on this issue.
 

EOS 4 Life

EOS RP
Sep 20, 2020
374
261
The difference is that when the amplification is done at the sensor, the noise added by the path between the amplifier and the ADC is not also amplified. When you wait and multiply the digital numbers, you also amplify that additional noise added between the analog amplifier and the ADC.

You also lose the smaller steps between each value. If you take the numbers from the ADC when shot at ISO 100 amplification and multiply them up three stops to ISO 800, that's a multiplication factor of 8x, so all of your new digital values will be multiples of 8, eight steps apart: 0, 8, 16, 24, ... 16368, 16376. Every value in between (1-7, 9-15, and so on) is unavailable. Your tonal gradations will be much rougher, equivalent to an 11-bit ADC instead of a 14-bit ADC.
It is not as simple as that; digital signal amplification can be done intelligently.
It often leads to better-looking results than analog amplification.
It also often leads to worse-looking results.
Digital processing does not need to be done in-camera, so a lot of us prefer to do it ourselves, where we have more control.
This is one of the reasons I hate when people compare noise levels between cameras.
I especially hate when they do that with RAW.
The real comparison should be how much can be fixed, and the time and effort to fix it.
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
3,428
2,009
It is not as simple as that; digital signal amplification can be done intelligently.
It often leads to better-looking results than analog amplification.
It also often leads to worse-looking results.
Digital processing does not need to be done in-camera, so a lot of us prefer to do it ourselves, where we have more control.
This is one of the reasons I hate when people compare noise levels between cameras.
I especially hate when they do that with RAW.
The real comparison should be how much can be fixed, and the time and effort to fix it.

You can't take the same values and somehow miraculously recover which ones were slightly higher or lower before they were all digitized to the exact same number. That information was lost at the ADC and no longer exists unless it was recorded elsewhere in a higher bit encoding scheme.
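A minimal illustration of that information loss:

```python
# Two different analog levels that digitize to the same code cannot be
# told apart afterwards - the distinction no longer exists in the data.
def adc(voltage):
    return round(voltage)            # toy converter: one code per unit

a, b = 4.2, 4.4                      # physically different signals
print(adc(a), adc(b))                # 4 4 - identical raw values
print(adc(a) * 8 == adc(b) * 8)      # True: no multiplier can separate them
```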
 

usern4cr

R5
CR Pro
Sep 2, 2018
964
1,308
Kentucky, USA
RF 400mm f/5.6
RF 300mm f/4
RF 100-400mm f/8

Any thoughts on the idea?
I think the primes would be wonderful lenses, and much needed: the 75mm entrance pupil of the primes would be ideal, and much smaller than the big whites, though they would still be sizeable lenses. I'd hope they had L quality and a large maximum magnification (0.25x or higher), which would make them ideal for close-ups of flowers etc. with massive background blur. The Olympus 300mm f/4 Pro was a truly magnificent (M43) lens, and if Canon could match that build and close-up quality it would be wonderful.

A 100-400 f/8 would be appreciably smaller and lighter than the RF 100-500mm f/4.5-7.1, which would have value for those wanting smaller, lighter, and less expensive options. Maybe they'd make this a non-L version?
 

Lucas Tingley

Canon EOS RP
Nov 27, 2020
101
55
I have small amounts of money, so I need these. :ROFLMAO:

They could make them cheaper with a stepping motor instead of an ultrasonic motor.
 
Last edited: