As stated in the equivalency thread, what I wrote there is based on my understanding, not a statement of fact. That understanding comes from various sources of information found on the internet as well as my own experience and experiments. So I am glad to be confronted with a different, and perhaps better, explanation for the effects I see in those experiments.
I unfortunately can't point you to a technical document that states as a fact that Canon sensors apply analog amplification to the image. You certainly can find information supporting this hypothesis, though.
Here is an abstract technical view of a sensor clearly showing this, and
here is a Stack Exchange post compiling a few more in-depth links on the subject.
If no form of analogue amplification took place and it all was handled digitally, I don't see how these two facts could be explained: a) a high ISO image has less dynamic range, and b) you can't replicate the image quality of a high ISO image by taking a low ISO image from my 80D and brightening it in post.
Let's say we have a sensor with 2 pixels and an analogue-to-digital converter with 8-bit depth, so the numbers we can represent are integers between 0 and 255. Let's say our pixels are sized so that they produce a voltage of 255 when fully saturated and 0 when completely free of charge. That way we don't have to bother with units or math; there's just a 1:1 relation between the physical signal we are measuring and the digital scale we are measuring it on. Let's say we image a scene with a medium half and a dark half, so that our pixel voltages are 8 and 128.
If ISO is just a digital multiplication, our ADC spits out (8, 128) as output. Now, in order to explain the loss of dynamic range, the multiplication would have to be applied before saving these numbers to the RAW file. Let's say we want the dark section of our image to look like a midtone, so we need to multiply by 16 (raise by 4 stops). If ISO 100 is our base ISO where the multiplier is 1, we are now at ISO 1600 and our image values are (8*16, 128*16) = (128, 2048). But due to our 8-bit depth we can't represent numbers larger than 255, so the bright part gets clipped and our image is actually (128, 255). That would explain the loss of DR, but not why it is necessary. Why actually apply this digital multiplication? If it is a RAW file anyway, why not simply store the ISO setting (multiplier) as EXIF and allow it to be changed in post without throwing away any data during capture, just like white balance for example? After all, even though high ISO shots throw away highlight data, they don't save any drive space, right? So what's the benefit of doing it this way? Also: where is the noise I'm seeing coming from? Your multiplication hardware (or algorithm) is seriously broken if it introduces noise in integer multiplications.
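To put that scenario in concrete terms, here is a minimal sketch (in Python, with made-up names) of what a purely digital ISO multiplication on those 8-bit numbers would do:

```python
# Minimal sketch of the "ISO as digital multiplication" scenario above.
# The 1:1 voltage-to-number convention and the function name are made up.

def digital_iso(raw_values, multiplier, max_value=255):
    """Apply a purely digital ISO gain and clip to the 8-bit ADC range."""
    return [min(v * multiplier, max_value) for v in raw_values]

pixels = [8, 128]                    # dark half and medium half of the scene
print(digital_iso(pixels, 16))       # [128, 255] -> 16x = 4 stops, the bright
                                     # half clips and that highlight range is gone
```

The clipping reproduces the dynamic range loss, but notice that nothing in this sketch produces any extra noise, which is exactly why I don't think this is the whole story.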
What I believe to be the case is that there's circuitry in front of or within the ADC step that handles the ISO multiplication in hardware, on the actual voltage, rather than digitally after the fact. In that sense, you could argue that if our example were a bit less lucky and the pixels output a range from 0 to ~16, and we convert that to numbers ranging from 0 to 255, that would be amplification. More likely, I believe the ADCs in use can't accurately sample as low an input voltage as small pixels provide, and therefore the amplification is applied before or during the sampling. That's just what makes sense to me, so if somebody can dispute it, go right ahead please!
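Sticking with the same toy numbers, here is the difference I have in mind between gain applied after the ADC and gain applied to the voltage before it (again just a sketch, all values invented):

```python
# Toy comparison: gain after quantization vs. gain on the voltage before it.
# Same 8-bit / 0..255 convention as above; the voltages are invented.

def adc(voltage, max_value=255):
    """Quantize an analog voltage to an 8-bit integer."""
    return min(int(round(voltage)), max_value)

voltages = [0.3, 0.6, 0.9]                    # three dim pixels, slightly different
digital = [adc(v) * 16 for v in voltages]     # quantize first, multiply afterwards
analog  = [adc(v * 16) for v in voltages]     # amplify the voltage, then quantize
print(digital)   # [0, 16, 16] -> two of the three levels collapse into one
print(analog)    # [5, 10, 14] -> the small differences survive the conversion
```

If the ADC can't resolve the tiny voltages small pixels produce, amplifying before the conversion preserves detail that a digital multiplication afterwards can never recover.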
As for the jerk you see in sensors with "dual native ISOs", I understand that to be the result of using two different amplifier circuits: one that handles the high voltages coming from the pixels really well by only slightly amplifying them (low gain), and one that is better at dealing with low pixel voltages and can handle larger amplification (high gain). You use the low-gain one for lower ISOs and the high-gain one for higher ISOs. At the point where the jerk is, the switch happens. I understand that you can't expect the same level of quality from just one circuit, because real-world components don't behave as perfectly linearly and consistently as that would require.
This is mainly how I explain to myself the effect in
Bill Claff's read noise chart. From his notes:
"The shape of the curve can tell you something about the
amplifier circuitry of the camera.
[...]
Curved curves [...] show evidence of being dominated by ADC read noise.
Curves with a sharp drop in the analog range [...] show evidence of the use of dual conversion gain.
Quite a few cameras stop analog gain before reaching the "Hi" ISO values"
(emphasis and shortening by me)
I interpret the charts and comments like this: if no noise were added through the analog gain, the curve would look perfectly linear, since a higher ISO only multiplies the read noise already coming from the pixel readout circuit. This is not the case for the older Canon sensors: noise is added to the amplified signal in the amplification process, and the amount of this noise is not linearly related to the ISO value. As the chart shows, Canon has improved this aspect of the noise between generations, but with the R5 they simply use two different circuits, each used only in the range where it behaves linearly and where the noise it adds is small compared to other sources of noise.
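To make that interpretation a bit more concrete, here is a rough model of how I picture it (all noise figures, the switch ISO and the ISO-to-gain mapping are invented for illustration): noise originating before the amplifier gets multiplied by the gain, noise added at or after the ADC does not, and a dual-gain design like the one described above swaps in a quieter circuit above some switch ISO.

```python
# Rough, illustrative model of output-referred read noise vs. ISO.
# All noise figures, the switch ISO and the ISO-to-gain mapping are invented.
import math

def read_noise(iso, pixel_noise=1.0, adc_noise=3.0,
               dual_gain=False, switch_iso=800, dual_gain_pixel_noise=0.4):
    gain = iso / 100                          # treat ISO 100 as unity gain
    if dual_gain and iso >= switch_iso:
        pixel_noise = dual_gain_pixel_noise   # second circuit: quieter pixel-side path
    # pixel-side noise is amplified, ADC-side noise is added after the gain
    return math.hypot(pixel_noise * gain, adc_noise)

for iso in (100, 200, 400, 800, 1600, 3200):
    print(iso,
          round(read_noise(iso, adc_noise=0.0), 1),  # noiseless ADC: scales linearly with gain
          round(read_noise(iso), 1),                 # ADC noise dominant at low ISO: "curved curve"
          round(read_noise(iso, dual_gain=True), 1)) # dual gain: sharp drop at the switch ISO
```

With a noiseless ADC the curve is simply proportional to the gain, with a noisy ADC it flattens out at low ISO (the "curved" shape), and with the dual-gain switch it drops sharply at the switch point, which is roughly how I read the shapes in the chart.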
Unfortunately, the material I have read about this either goes into such technical depth that, with my time and background, I currently can't digest it properly, or it is so surface-level (and partially wrong) that I don't regard it as proper material to quote in support of or against my understanding.