Here is some interesting research on Quad Pixel tech from a couple of guys at Aptina. Read about it and let me know if you think it might open up the discussion a bit more. It addresses the future demands of HDR video and the computational techniques being explored in this work by Gordon Wan, Xiangli Li, Gennadiy Agranov, Marc Levoy and Mark Horowitz.
That concept seems really interesting!
I've had thoughts about why no one seems to have adopted something similar to a logarithmic amplifier. That was something I saw in certain radar equipment, where one typically sent out a few kW and expected to get back signals of only some fW (10^-15 W). However, you couldn't be sure of the returning signal's strength, so the receivers had to cope with signals many orders of magnitude greater - without frying the entire array of discrete components/transistors/tubes.
In short, that "problem" was solved with stages of amplifiers that, when saturated, automatically opened up for the next stage to take over the signal handling, without ever hitting any ceilings or frying any components.
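For what it's worth, that cascade can be sketched in a few lines. This is a toy successive-detection model, not any real radar receiver: the stage count, per-stage gain and clip level are all made-up illustration values.

```python
def log_amp(signal, stages=6, gain=10.0, clip=1.0):
    """Toy successive-detection log amplifier: each stage amplifies by
    `gain` but saturates at `clip`; summing the stage outputs yields a
    roughly logarithmic response over stages * log10(gain) decades,
    with no single stage ever seeing more than `clip`."""
    out = 0.0
    level = signal
    for _ in range(stages):
        level = min(level * gain, clip)  # this stage saturates...
        out += level                     # ...and the next one takes over
    return out
```

With the defaults, each decade of input adds roughly the same amount to the output, which is the "never hit a ceiling" behaviour described above.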
In sensors you would have the problem of miniaturising this concept and making some 20 million photon receivers behave identically, but all that counts in the end is counting photons. Every pixel is there for the sole purpose of counting the number of photons that hit it (preferably coming in through the lens). And you don't want to fill your buckets.
Since most of us take our shots at temperatures above 0 K, we always have to deal with thermal noise. A logarithmic approach to handling our combination of signal + noise wouldn't be bad.
Sorry for sidestepping the original idea of this thread.
What you're talking about is a photomultiplier. That is actually a very different concept, similar neither to multi-bucket pixels nor to DPAF.
Photomultipliers do use multi-stage amplifiers to boost extremely weak signals by many orders of magnitude, without requiring ultra-specialized amplifiers that can do so without frying themselves. But that's just a more sophisticated means of amplifying a weak signal. It doesn't actually improve the signal itself, so it can neither reduce noise nor support something like HDR.
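The arithmetic behind that point (with hypothetical numbers) is trivial to check: a gain stage multiplies the signal and the noise riding on it by the same factor, so the signal-to-noise ratio is untouched.

```python
def amplify(signal, noise, gain):
    # A photomultiplier-style gain stage scales signal and noise equally,
    # so the ratio signal/noise is unchanged no matter how big the gain.
    return signal * gain, noise * gain

s, n = amplify(10.0, 2.0, 1_000_000.0)  # enormous gain...
print(s / n)                            # ...SNR is still 5.0
```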
The multi-bucket pixel concept in that paper effectively embeds analog memory into the pixel. Global shutter sensors already do this, but they have only a single memory (when the exposure is done, every pixel's charge is immediately pushed to its memory at once, the pixels are reset, then the memory can be read out in the background while the next exposure occurs). Multi-bucket pixels allow charge to be pushed to memory more than once, which expands the dynamic range by N times. At readout, the charge stored in each bucket is binned as the pixels are read out.
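A minimal sketch of that push-and-reset cycle, assuming an illustrative full-well capacity of 1000 e- and equal sub-exposures (both numbers are made up for illustration, not taken from the paper):

```python
FULL_WELL = 1000  # electrons a pixel holds before clipping (illustrative)

def single_exposure(photons):
    # A conventional pixel: one well, one exposure, hard clip at full well.
    return min(photons, FULL_WELL)

def multi_bucket(photons, n_buckets=4):
    # The exposure is split into n_buckets sub-exposures; after each one
    # the pixel's charge is pushed to an in-pixel memory and the pixel
    # resets, so effective capacity becomes n_buckets * FULL_WELL.
    per_sub = photons / n_buckets
    buckets = [min(per_sub, FULL_WELL) for _ in range(n_buckets)]
    return sum(buckets)  # buckets are binned at readout
```

A scene delivering 3000 photons clips to 1000 in the single-well pixel but is captured in full by four buckets, which is the N-times dynamic range expansion.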
That is significantly different from a photomultiplier: instead of amplifying the signal (which also amplifies the noise, and does not actually improve the quality of the signal itself), it allows longer exposures combined with multiple "memory pushes" to literally enhance the quality of the signal itself WITHOUT amplification. THAT....that is what is so intriguing about the multi-bucket concept.
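Assuming the noise between pushes is independent (the standard shot-noise argument; the numbers below are illustrative), the accumulated signal grows linearly with the number of pushes while the noise adds in quadrature, so SNR improves by the square root of N with no gain stage at all:

```python
import math

def snr_after_pushes(signal, noise, n):
    # n memory pushes: signal adds linearly, independent noise adds
    # in quadrature, so SNR improves by sqrt(n) without amplification.
    return (signal * n) / (noise * math.sqrt(n))

print(snr_after_pushes(10.0, 2.0, 1))  # baseline SNR: 5.0
print(snr_after_pushes(10.0, 2.0, 4))  # 4 pushes: 10.0, i.e. 2x better
```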