I'm thinking one likely avenue for future digital camera upgrades is in the way RAW data is captured and/or returned to the user.
For example, while a sensor is active, photons strike its photodiodes and build up an electrical charge. I wonder whether, for a 1/50 s or longer exposure, the end user could someday receive a series (or matrix) of intermediate readouts (binary values) recorded from the sensor, and be able to layer and manipulate each entry in that series individually.
I could see this letting us visualize how the captured image evolves as charge collects on the sensor, and, in editing, letting us pick the most appropriate readout (or combination of readouts) for any given area of the final image.
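To make the idea concrete, here's a toy sketch of what such a "matrix of readouts" might look like in software. Everything here is hypothetical: the function name, the idea of non-destructively reading the sensor several times per exposure, and the numbers themselves are all made up for illustration.

```python
# Hypothetical sketch: treat one long exposure as a series of short
# sub-readouts and keep a running (cumulative) image after each one,
# so an editor could later pick any stage for any region.

def cumulative_readouts(sub_exposures):
    """Given per-interval charge readings for each pixel, return the
    image as it would look after 1, 2, ... N intervals."""
    stages = []
    running = [0] * len(sub_exposures[0])
    for frame in sub_exposures:
        running = [r + f for r, f in zip(running, frame)]
        stages.append(list(running))
    return stages

# Four pixels, three sub-intervals of one 1/50 s exposure (made-up values):
subs = [
    [10, 5, 0, 2],   # charge collected in interval 1
    [12, 4, 1, 2],   # interval 2
    [11, 6, 0, 3],   # interval 3
]
stages = cumulative_readouts(subs)
print(stages[-1])  # final image is the sum of all intervals: [33, 15, 1, 7]
```

The final stage is just the ordinary exposure; the earlier stages are the "evolution" an editor could browse.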
I could also see changes in the way RAW data is captured and delivered eventually allowing ISO speed to be set or altered after the fact (at least to some degree) in RAW editing.
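As a rough illustration of what "ISO after the fact" might mean, here's a sketch that treats an ISO change as pure digital gain on raw values. This is an assumption on my part: real ISO usually involves analog gain applied before the analog-to-digital converter, so multiplying afterwards only approximates it and can't recover highlights that have already clipped.

```python
# Hypothetical sketch: "changing ISO in post" modeled as digital gain
# on raw sensor values, clipped at the sensor's white level.
# Real ISO also involves analog gain before the ADC, so this is only
# an approximation of pushing/pulling exposure.

def apply_iso(raw, base_iso, target_iso, white_level=16383):
    gain = target_iso / base_iso
    return [min(int(v * gain), white_level) for v in raw]

raw = [100, 2000, 16000]           # 14-bit raw values captured at ISO 100
print(apply_iso(raw, 100, 400))    # [400, 8000, 16383] -- the top value clips
```

The clipped last value shows why this only works "to some degree": information lost at capture can't be multiplied back.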
Another potential improvement is image stabilization after the shot is taken. I know there are filters that already help with this, but if it were possible to separate out exposures at various points in the capture process and then layer them in post, that would be a real advantage for post-capture stabilization.
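The stabilization idea could be sketched like this: instead of letting camera shake smear charge across the whole exposure, shift each sub-readout back into alignment and then sum. This is a toy one-dimensional example with the shake amounts assumed known; a real pipeline would have to estimate them from the frames themselves.

```python
# Hypothetical sketch: stabilize by shifting each sub-exposure to line
# up with the first, then summing -- rather than letting the motion
# blur accumulate in a single long exposure.

def shift_row(row, dx, fill=0):
    """Shift a 1-D row of pixels right by dx (left if dx is negative)."""
    n = len(row)
    out = [fill] * n
    for i, v in enumerate(row):
        j = i + dx
        if 0 <= j < n:
            out[j] = v
    return out

def stack_aligned(frames, shifts):
    """Undo each frame's known shift, then sum the aligned frames."""
    aligned = [shift_row(f, -s) for f, s in zip(frames, shifts)]
    return [sum(vals) for vals in zip(*aligned)]

# A bright edge that drifts one pixel per sub-frame (camera shake):
frames = [
    [0, 9, 0, 0],
    [0, 0, 9, 0],
    [0, 0, 0, 9],
]
print(stack_aligned(frames, shifts=[0, 1, 2]))  # [0, 27, 0, 0] -- sharp edge
```

Summing the same frames without the alignment step would smear the edge across three pixels, which is exactly the blur a single long exposure would record.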
I'll admit I know too little about the structure of current RAW files, and too little about how exposure actually works on digital imaging sensors, but I know just enough to let my imagination run wild, which is why I'm posting.