There is noise reduction circuitry. It's called CDS, or correlated double sampling. There is usually one CDS unit per column, which samples the dark current before an exposure is made, and that sample is subtracted from the pixel charge as each row is read.
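The subtraction itself is the easy part; the point is that the reference sample cancels an offset that is common to both readings. Here's a toy numeric sketch of the CDS idea (all values are hypothetical, just to show the mechanics):

```python
import numpy as np

rng = np.random.default_rng(42)
n_cols = 6

# Hypothetical per-column offset (dark current / reset level) that CDS removes.
dark_offset = rng.normal(50.0, 5.0, n_cols)
true_signal = np.array([0, 100, 200, 400, 800, 1600], dtype=float)

reference_sample = dark_offset                # sampled before the exposure is read
signal_sample = true_signal + dark_offset     # what the pixel actually reports

corrected = signal_sample - reference_sample  # correlated double sampling
print(np.allclose(corrected, true_signal))    # True
```

The cancellation only works to the extent the offset is the same in both samples, which is why taking the two samples close together in time matters.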
Ah. I assumed that was being done in software rather than hardware.
CDS? CDS has to be done in hardware, since it requires sampling the actual dark current moving through the circuit. The closer the sample is taken to the moment it is subtracted, the more accurate the correction. This means that for shorter exposures, analog CDS is very accurate.
The first Exmor design, the ones used in still photography sensors, used only digital CDS. The later Exmor designs actually use a dual CDS design, one analog CDS stage and one digital CDS stage. The analog stage takes care of most of the dark current noise, and the digital CDS stage takes care of any residual. As far as I know, the dual-CDS Exmors are only used in video camera sensors at the moment, but I suspect that won't remain that way for long. I actually suspect that the A7s sensor uses a dual CDS approach.
Sony Exmor sensors use column-parallel ADC. They moved the ADCs onto the sensor die and hyperparallelized them. That means each ADC unit in an Exmor is only responsible for handling a few thousand pixels, instead of a few million, every fraction of a second. That allows each ADC to run at a much lower clock frequency, low enough that the conversion clock sits below the frequency of the noise in the circuit itself.
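The arithmetic behind that parallelization is easy to sketch (illustrative numbers only, not from any datasheet):

```python
# Rough, back-of-the-envelope comparison of per-ADC workload.
width, height, fps = 6000, 4000, 10   # hypothetical 24 MP sensor at 10 fps

# Column-parallel design: one ADC per column, so each ADC converts
# `height` pixels per frame.
per_adc_column = height * fps         # conversions per second per ADC

# Few-channel design: e.g. 8 readout channels sharing all pixels.
per_adc_shared = (width * height // 8) * fps

print(per_adc_column)   # 40000
print(per_adc_shared)   # 30000000
```

Three orders of magnitude less work per ADC is what buys the drastically lower clock rate.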
I knew they'd moved it onto the die. I didn't know about the parallelization. That's an interesting approach. I'd be curious whether the use of lots of ADCs causes banding problems like it does for the 5DmkIII.
From what I've read about Sony Exmor, since the ADCs are per-column, there is the potential to tune each ADC to compensate for column-to-column response differences. The response of each ADC can be normalized to eliminate vertical column banding.
In the case of both the 7D (to a fairly strong degree) and the 5D III (very slightly), there is noticeable vertical banding that correlates with each set of readout channels. In the 7D, you can clearly tell that each vertical band is 8 pixels wide, which corresponds to the 8 readout channels. In the 5D III, the effect is very subtle, so I figure Canon must have figured out a way of tuning or otherwise correcting for the readout differential for each ADC channel.
Anyway, there is potential for vertical banding with parallel ADCs, but it can always be tuned out or otherwise corrected for. With lower frequency per-column ADCs it's easier to fine-tune each ADC.
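That per-column normalization can be sketched as a simple two-point (dark frame + flat field) calibration. The readout model and all numbers here are hypothetical, just to show how per-column gain/offset mismatch produces banding and how it gets tuned out:

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 200, 8

# Hypothetical per-column ADC mismatch (gain and offset) -> vertical banding.
col_gain = rng.normal(1.0, 0.02, cols)
col_offset = rng.normal(0.0, 4.0, cols)

def read_out(scene):
    """Simulated readout: each column's ADC applies its own gain and offset."""
    return scene * col_gain + col_offset + rng.normal(0, 1, scene.shape)

# Calibration frames: a dark frame and a uniform flat field at a known level.
level = 500.0
dark = read_out(np.zeros((rows, cols)))
flat = read_out(np.full((rows, cols), level))

est_offset = dark.mean(axis=0)
est_gain = (flat.mean(axis=0) - est_offset) / level

raw = read_out(np.full((rows, cols), 300.0))   # uniform grey scene
fixed = (raw - est_offset) / est_gain          # per-column normalization

# The column-to-column spread (the banding) shrinks after correction.
print(raw.mean(axis=0).std() > fixed.mean(axis=0).std())   # True
```

In a real sensor the calibration would live in the ADC tuning itself rather than in post-processing, but the idea is the same.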
That might improve sampling accuracy, but at first glance, I would think that you could achieve similar benefits with oversampling. Maybe not.
It sounds like you understand audio signal processing. While I think some aspects of standard signal processing apply, there are a lot of differences with spatial signal processing. I don't know standard audio signal processing all that well, so I can't say how sampling techniques might apply, but my gut (based on what I do know about spatial signal processing) tells me that there really isn't going to be much in the way of multi- or over-sampling the signal. It generally comes out of the sensor "as is", with the exception of what CDS does.
Now, I do know that Sony, Nikon and a few other manufacturers do some things differently than Canon. It's often called "processing", but in general it's simple things. For example, Canon uses a bias offset in their design to handle the sensor bias signal, whereas Sony and Nikon clip the bias signal out entirely (cleaner deep shadow noise, but you lose a good chunk of deep shadow.) For normal photography, clipping seems to be better, however for astrophotography (an arena where Canon cameras are almost synonymous with "modded DSLR") a bias offset is a far better approach, as it means that with more advanced noise removal techniques, you can recover a hell of a lot more signal from DEEP within the read noise. (Since that signal is clipped in Sony and Nikon sensors, it's just gone, discarded, not recoverable.)
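The astrophotography point can be sketched numerically: if a faint signal sits well below the read noise, keeping the bias offset preserves the negative half of the noise distribution, so stacking many frames averages back to the true level, while clipping at the bias point skews the average upward. A toy simulation (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
true_signal = 2.0     # faint signal, well below the read noise (arbitrary units)
read_noise = 10.0
n_frames = 5000       # e.g. a large stack of astro subframes

frames = true_signal + rng.normal(0, read_noise, n_frames)

# Bias-offset approach: the full noise distribution survives readout, so
# averaging the stack recovers the faint signal from within the read noise.
recovered_offset = frames.mean()

# Clipped approach: everything below the bias point is discarded at readout,
# so the negative noise excursions are gone and the average is skewed upward.
recovered_clipped = np.clip(frames, 0, None).mean()

print(abs(recovered_offset - true_signal) < 0.5)    # True: signal recovered
print(recovered_clipped > recovered_offset + 1.0)   # True: clipping biases it
```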
(Sony also moved the clock and power supply themselves off to a remote corner of the Exmor die, which reduces potential thermal sources and, at least according to Sony's paper on the Exmor design, reduces noise from high frequency components within the ADC units themselves.)
Hmm. I guess that makes sense. With my audio hat on, when I hear someone talk about moving an ADC clock away from the ADC, my mind screams "Aaaah! The jitter! It burns!", but I suppose that jitter doesn't affect this use case very much, because the value isn't changing....
I don't gather, from the patents and papers, that the Exmor design was easy to achieve. When you look at the sensor layout, you can see in the upper left corner there is a clock, PLL, and a couple other components. Then you have the pixel array, with the photodiodes, per-pixel amplifiers, and the row/column activate and read wiring. Below that, along the bottom, you have the CP-ADC units, which contain a ramp ADC, the CDS/pixel register (CDS readout counts negative, pixel readout counts positive, so CDS is effectively "automatic"), and then some more electronics to ship the signal off the die. There are a few other components as well, although it's been long enough that I don't remember all of them.
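That "CDS counts negative, pixel counts positive" trick can be sketched with a toy single-slope (ramp) ADC driving an up/down counter. None of these numbers come from Sony's papers; it's just a minimal model of the mechanism:

```python
def ramp_convert(voltage, counts=1024, v_max=1.0):
    """Toy single-slope ADC: count clock ticks until the ramp crosses the input."""
    step = v_max / counts
    for tick in range(counts):
        if tick * step >= voltage:
            return tick
    return counts

def cds_readout(reset_level, signal_level):
    # Phase 1: convert the reset/reference level, counting DOWN (stored negative).
    count = -ramp_convert(reset_level)
    # Phase 2: convert the signal level, counting UP from that negative value.
    count += ramp_convert(signal_level)
    return count   # the difference: signal with the reference offset removed

# Two pixels with the same true signal but different reset offsets
# read out to the same digital value; the subtraction costs nothing extra.
print(cds_readout(0.10, 0.30))   # 205
print(cds_readout(0.15, 0.35))   # 205
```

Because the counter simply runs down during the reference conversion and up during the signal conversion, the subtraction falls out of the conversion itself, which is what makes the digital CDS "automatic".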
Anyway, however Sony did it, they seem to think that moving the high frequency components off to an isolated area of the die reduced noise and jitter in the ADC units, which is part of the reason the Exmor readout is so clean. Plus, since each ADC is only responsible for reading out a few thousand pixels they can be clocked slower (whatever the image height is, basically, so in a 6000x4000 pixel sensor, each ADC unit is only responsible for 4000 pixels per read, vs. say Canon's which are responsible for 2.5 million pixels per read).
So I wouldn't say that moving the ADC unit closer to the detectors really has anything to do with reducing noise.
Well, the more important thing is for the first gain stage to be as close as possible to the detectors. Any noise bleeding into the signal at that point is going to be massively amplified, so you would want to have as little distance there as possible. I'd expect the distance from there to the ADC to matter, albeit not nearly as much.
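That intuition is easy to check with a quick simulation: noise that enters before the gain stage gets multiplied by the full gain, noise that enters after it does not. Hypothetical numbers throughout:

```python
import numpy as np

rng = np.random.default_rng(2)
signal, gain, pickup = 1.0, 16.0, 0.5   # arbitrary units; `pickup` is stray noise
n = 100_000

# Noise picked up BEFORE the first gain stage is amplified with the signal...
before = (signal + rng.normal(0, pickup, n)) * gain

# ...noise picked up AFTER the gain stage is not.
after = signal * gain + rng.normal(0, pickup, n)

print(round(before.std() / after.std()))   # 16: worse by the full gain factor
```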
The gain is applied by the amplifiers, not the ADC. Maybe you have the two mixed up? While I'll admit I haven't read patents for every possible image sensor design, in the case of CMOS sensors, every pixel always has an amplifier. They are built into the readout logic for each and every "pixel". Now, some sensor designs use a "shared pixel" approach, where two or more photodiodes share some readout logic. Usually, in shared-pixel designs, there is one amplifier for every two pixels, connected diagonally. This allows for a larger (longer) amplifier, which I guess improves effectiveness or efficiency (this gets into a realm of CMOS transistor design that is a bit beyond my level of understanding...but I believe it falls into the same category as FinFET and Tri-Gate technology...a long, thin "fin" of a transistor with multiple source and drain connections allows for cleaner, lower-noise, lower-heat electron transfer).
Anyway, yes, every pixel is amplified right at the pixel, although not every pixel has its own dedicated amplifier. Some amplifiers are shared among pixels; sharing allows for more efficient use of die space, meaning larger amplifier transistors and larger photodiodes, so higher efficiency overall.
One caveat: Canon cameras have two amplifiers. There are of course the per-pixel amplifiers. These kick in AT read time, so they amplify the signal in the pixel directly, before anything else happens to it and before any additional noise is added to the signal. However, to achieve the highest ISO settings (usually the top two or three), Canon also uses an off-die, downstream secondary amplifier. This secondary amp is also a source of noise in Canon sensors. I don't know why they do this, however I found a rather old article somewhere a couple of years ago that indicated Canon had somehow determined that the downstream amplifier was actually less noisy. I don't know enough about the specifics to say one way or the other for sure...but I guess I'm willing to trust that Canon knows what they are doing.
Increasing the parallelism of the ADC units, allowing each one to operate at a lower frequency, has a lot to do with reducing noise. Because the ADC units are on-die with Exmor, it also means that the signal is converted from analog to digital immediately...rather than after transit across a bus and through who knows how many additional electronics.
And I suspect you can probably use less signal amplification, because you don't have to send the analog signal a long distance across a bus.
I'm not sure in this case. I'm sure that sending the signal over the bus introduces noise, however for the most part, amplification occurs in the pixels before any transfer across a bus. The one exception would be the top two ISO settings in Canon cameras (not the expanded settings, the top two native ISO settings), which use a downstream amp.
Regardless, I think digital readout is the way of the future. Digital signals can be transmitted with error correction, and at very high speeds, without having to be concerned about analog noise interfering with the signal. With transistor sizes on sensors dropping to around 65nm now, that leaves a TON of room on the die for complex logic. I really hope Canon moves to a fully on-die system soon. I know they already tested some of their patents, like their dual-scale CP-ADC and some other enhancements, on the 120mp APS-H sensor, where they were able to achieve 9.5fps "low noise" readouts. God only knows when they might actually employ the technology in the actual sensors that go into actual consumer products, though.
Canon also has patents for some other interesting technology. Such as a read-time power disconnect, which decouples pixels being read from the power source and which, at least theoretically as I understand it, could potentially eliminate dark current entirely as a contributor to read noise. That would help shadow noise performance a lot when shooting in higher temperatures...such as outdoors, in the sunlight, for birds, wildlife, landscapes, etc. (I know that my 7D can get pretty hot when I'm out in the sun trying to photograph birds or wildlife...which can take a lot of time to get close, get the right angle, etc.)
Are we talking about ringing on the power supply rails here, or something else?
No, it was a fairly specific patent about a specific transistor setup around the pixels and some other logic in the sensor to disconnect the active power supply during readout (I think there was still some power from capacitors...can't remember). I'll see if I can find the patent again. It's interesting, but it was long ago enough now that I honestly don't remember the specifics.
At one point in time, I'd found a gold mine of patents for Canon. Stuff going back to the early 2000s. I probably still have the bookmark in my old Opera 12 bookmarks file. I'll see if I can dig it up, and hopefully the site is still around. Canon has a lot of cool patents, but they don't seem to employ them. At least, not in their stills cameras (I think they have used some of these patents in their video sensors...but that's nothing unusual, it seems everyone in the CMOS sensor game these days implements all the coolest stuff in video sensors).