I don't need tons of megapixels, but if I can't take a picture in complete darkness and recover 24 stops of DR in post, this will be a total fail. It's 2014, Canon!
High ISO is often portrayed - even if only jokingly - like that, but a lot of us wildlife folks (outside of the sunny tropics) could make good use of clean shots in the 12800-25600 range, easily.
At some point I suspect the quantum efficiency of sensors will reach the 70-80% level (at least, I hope it happens someday). Once it does, we can expect a real-world improvement of about 2x for high ISO settings. To get any better than that, we would need larger sensors.
I'd settle for another stop - roughly what the 1D X can do (from what I've seen), but with more megapixels for cropping. I once read an article about someone using medium format for bird photography, but I don't think that would be practical for most people, given how much larger lenses would have to be for the same reach (*unless* the extra MP allowed for so much cropping as to cancel it out).
The 1D X doesn't even get one full stop. How good the 1D X looks is largely a perceptual thing; technically, it is only a fraction of a stop better, and the 5D III, when downsampled, gets similar results (not quite as good, due to less total photodiode area).
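For anyone wondering why downsampling helps at all: it's just averaging. If the noise in neighboring pixels is uncorrelated, averaging NxN blocks cuts per-pixel noise by roughly a factor of N. A minimal numpy sketch, with made-up signal and noise numbers of my own:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated flat gray patch: constant signal plus uncorrelated Gaussian noise.
signal = 1000.0                    # hypothetical "electrons" per pixel
noise_sigma = 12.0                 # hypothetical per-pixel noise (std. dev.)
patch = signal + rng.normal(0.0, noise_sigma, size=(512, 512))

def downsample(img, n):
    """Average n x n blocks of pixels (simple binning downsample)."""
    h, w = img.shape
    return img[:h // n * n, :w // n * n].reshape(h // n, n, w // n, n).mean(axis=(1, 3))

for n in (1, 2, 4):
    print(f"{n}x{n} binning: measured noise ~ {downsample(patch, n).std():.1f}, "
          f"theory says {noise_sigma / n:.1f}")
```

That averaging gain is why per-pixel (100% crop) comparisons flatter big pixels, while comparisons at equal output size narrow the gap.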
When it comes to smaller pixels and croppability, you're going to lose high ISO noise performance. I've mentioned this in other topics, but overall, high ISO performance is fundamentally a function of total sensor area and quantum efficiency. It is a higher Q.E. and a larger sensor area that make the 1D X better in the long run, not its pixel size. Once you bring cropping into the picture, especially with smaller pixels, you start to experience worsening high ISO performance. You're not only putting fewer pixels on the subject, you're using a smaller area of the sensor, which means less total light for your subject.
There really isn't any way that a FF sensor with smaller pixels will produce better results than a FF sensor with bigger pixels. It will have more detail, but per-pixel noise will be higher, so cropping means more noise. Cropping a 1D X means less per-pixel noise, but also less detail. It's a tradeoff...low noise, or more detail. For any given sensor area, the only way to improve noise performance is to improve Q.E. The 1D X has 47% Q.E., which means that to actually double high ISO noise performance with the 1D neXt, you need 94% Q.E. The 5D III actually has 49% Q.E., which means you need 98% to double its noise performance. That's not going to happen...not with consumer-grade devices. The highest-grade astro CCD sensors, the Grade 1 parts with 82% Q.E. or more, are exceptionally expensive. They also require significant cooling (usually two- or three-stage Peltier, i.e. thermoelectric, cooling), which draws SIGNIFICANTLY more power than a DSLR normally uses.
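To put rough numbers on the "total light" argument, here's a back-of-the-envelope Python sketch. The Q.E. figures are the ones quoted above; the model (total charge scales with Q.E. x area, everything else ignored) is a deliberate simplification of mine:

```python
import math

def light_advantage_stops(qe, area_mm2, qe_ref, area_ref_mm2):
    """Equivalent-exposure advantage, in stops, of one sensor setup over another.

    Total charge collected scales with Q.E. x sensor area, so the advantage is
    log2 of that ratio. Read noise, dark current, etc. are ignored -- this is
    only the 'total light' part of the argument.
    """
    return math.log2((qe * area_mm2) / (qe_ref * area_ref_mm2))

ff = 36.0 * 24.0      # full-frame sensor area, mm^2
aps_c = 22.5 * 15.0   # roughly the area an APS-C-sized crop uses, mm^2

# Q.E. figures quoted above: 1D X ~47%, 5D III ~49%, hypothetical sensor 94%.
print(f"94% vs 47% Q.E., same area:  {light_advantage_stops(0.94, ff, 0.47, ff):+.2f} stops")
print(f"5D III vs 1D X, whole frame: {light_advantage_stops(0.49, ff, 0.47, ff):+.2f} stops")
print(f"Cropping FF down to APS-C:   {light_advantage_stops(0.49, aps_c, 0.49, ff):+.2f} stops")
```

Note how the ~2% Q.E. difference between the two bodies is worth almost nothing, while cropping costs well over a stop of total light.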
Hoping for a super-high-resolution sensor that performs as well as or better than a 1D X when cropped is just a pipe dream. It will resolve more detail, but that detail will be more noisy, not less noisy.
Well that's depressing. How much would you say future improvements in software noise reduction will improve the final output?
Software is a difficult thing to discuss. The biggest reason why is: which software? There are countless ways of reducing noise, and countless algorithms for doing it. There are your basic averaging/blurring algorithms, your wavelet algorithms, your deconvolution algorithms, etc. Some denoising tools are more complex, and thus more difficult to use effectively, but when used effectively they can produce significantly better results. Some denoising tools are extremely simple, but don't produce results that are as good.
Fundamentally, though, pretty much every algorithm suffers from the same core problem, to varying degrees: they blur detail. Your most basic denoising algorithm takes high-frequency data and blurs it by a certain amount...for each pixel, it takes some component of the surrounding pixels, generates an averaged result (with some given weight, usually attenuated by a UI control somewhere), and replaces the original pixel value with the weighted average. Do that for each and every pixel, and every pixel ends up blended with its neighbors. There are varying kernel sizes (e.g. 3x3, 5x5) that can be used when performing a very basic noise reduction, which spread the effect out more or less.
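To make that concrete, here's a minimal sketch of the "weighted average of your neighbors" idea in Python/numpy. The 3x3 kernel and the strength blend are my stand-ins for the UI controls I mentioned:

```python
import numpy as np
from scipy.ndimage import convolve

def basic_denoise(img, strength=0.5):
    """Blend each pixel with the average of its 3x3 neighborhood.

    strength = 0 returns the original image; strength = 1 returns the full
    box blur. This is the most basic averaging denoiser, and it blurs real
    detail exactly as much as it blurs noise.
    """
    kernel = np.full((3, 3), 1.0 / 9.0)        # uniform 3x3 averaging kernel
    blurred = convolve(img, kernel, mode="reflect")
    return (1.0 - strength) * img + strength * blurred

# Demo: a noisy step edge. The denoiser quiets the flat areas, but it also
# softens the edge -- the fundamental tradeoff described above.
rng = np.random.default_rng(0)
edge = np.where(np.arange(64) < 32, 0.2, 0.8)
img = np.tile(edge, (64, 1)) + rng.normal(0.0, 0.05, (64, 64))
out = basic_denoise(img, strength=0.7)
print(f"flat-area noise before: {img[:, :24].std():.3f}, after: {out[:, :24].std():.3f}")
```

Even in this toy, the algorithm has no way to tell noise from detail...both get averaged by the same kernel.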
Wavelets and deconvolution tend to be more intelligent about how they reduce noise. They either try to generate a "kernel" based on the information in the image, or try to break the image up into multiple spatial frequency levels and apply a different degree of noise reduction at each wavelet level, in an attempt to preserve certain frequencies while blurring others, with the ultimate goal of preserving detail. The problem with these algorithms is that, while they can reduce noise without blurring detail as much, they often suffer from greater artifact introduction...halos, excessive acutance, blotching, things like that.
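A rough sketch of the multiscale idea, for anyone curious. I'm using differences of Gaussian blurs as a crude stand-in for real wavelet levels (actual à trous wavelet implementations are considerably more sophisticated), and the per-level strengths are arbitrary numbers of mine:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_denoise(img, level_strengths=(0.9, 0.5, 0.1)):
    """Split the image into spatial-frequency bands and attenuate each separately.

    Each band is the difference between two Gaussian blurs -- a crude stand-in
    for one wavelet level. The finest band, where most visible noise lives, is
    attenuated hardest; coarser bands, which carry larger structures, are
    mostly left alone. Overdo the attenuation and you get blotching and halos.
    """
    bands = []
    residual = img.astype(float)
    for sigma in (1.0, 2.0, 4.0):
        smoothed = gaussian_filter(residual, sigma)
        bands.append(residual - smoothed)    # detail at this scale
        residual = smoothed                  # pass the rest to the next scale
    out = residual
    for band, strength in zip(bands, level_strengths):
        out = out + (1.0 - strength) * band  # keep (1 - strength) of each band
    return out

rng = np.random.default_rng(1)
noisy = rng.normal(0.5, 0.1, (128, 128))     # pure noise field, just for the demo
print(f"std before: {noisy.std():.3f}, after: {multiscale_denoise(noisy).std():.3f}")
```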
Noise reduction is best applied in extreme moderation, and even then it has very significant limitations. It can only take you so far, and the less noisy your images start out, the better the results will be. This is one of the reasons why the "low" resolution images from the 1D X clean up so well: 1D X pixels start out with significantly more dynamic range than smaller pixels, so there is less per-pixel noise to begin with, and a minimal amount of NR is *perceived* as being more effective. It really isn't; there was less noise to remove in the first place, so a small amount of NR has a greater relative effect than it does on images that start out noisier. To ridiculously simplify things down to simple numbers: if a 1D X has noise of 7, a 5D III has noise of 12, and you reduce noise by 5, the 1D X is left with noise of 2, whereas the 5D III is left with 7...as bad after NR as the 1D X was before NR.
Noise reduction algorithms are already extremely powerful and extremely intelligent. I recently purchased software called PixInsight, which is primarily an astrophotography processing program, but its tools can be used on regular photos as well. It has a whole suite of noise reduction tools that work in different ways. Depending on the kind of noise you have, and the region of the image you wish to denoise, PixInsight's noise reduction tools can be more effective than any other tool...but as advanced as they are, they are still not perfect. Wavelets still introduce mottling and blotching, deconvolution can still introduce halos, median sharpening and denoising can still introduce sparkles and panda eyes, etc.
The best way to reduce noise is to increase the rate at which light is converted to charge in each pixel, increase the maximum charge each pixel can hold, increase the total maximum charge of the sensor, etc. The more light you can convert into charge in a given time, the less noise you will have. I don't expect to see a major jump in Q.E. any time soon...I suspect Canon's next round of sensors will be around 51-53%, maybe 56% at most, up from the current 47-49%. That will certainly help in the noise department, but it is nowhere even remotely close to supporting a true one-stop improvement. Even the best case, 47% to 56%, is only about a quarter of a stop more light, and since shot noise scales with the square root of the signal, the actual SNR improvement is closer to a tenth of a stop!

Elimination of color filters in favor of color splitting, a reduction in heat conversion (e.g. with light pipes or BSI), a reduction in reflection (e.g. with black silicon), etc. can all increase the rate at which photons convert to charge, and thereby increase Q.E. These technologies exist, and lots of patents exist, however I don't see any patents for these specific kinds of technology from Canon, so I don't expect them to show up in Canon's next sensor designs. A layered sensor is capable of converting more light to charge per pixel, however that charge is divvied up amongst the different color channels, so its effectiveness is attenuated. A Foveon-like design from Canon would be a step in the right direction, but I don't expect the impact on noise to be all that much (and we'd see a shift in which color channels are noisiest...instead of blue being noisiest, red would likely become noisiest, green would become noisier, and blue would likely see a modest drop in noise levels.)
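For concreteness, the stop math on those Q.E. numbers (my arithmetic, using the usual shot-noise relationship where SNR goes as the square root of the signal):

```python
import math

def qe_gain(qe_new, qe_old):
    """Stops of extra light, and stops of shot-noise SNR improvement.

    Signal scales with Q.E.; shot noise scales with sqrt(signal), so the
    SNR gain in stops is half the light gain in stops.
    """
    light_stops = math.log2(qe_new / qe_old)
    return light_stops, light_stops / 2.0

for qe_new in (0.51, 0.53, 0.56):
    light, snr = qe_gain(qe_new, 0.47)
    print(f"47% -> {qe_new:.0%} Q.E.: {light:.2f} stops more light, "
          f"{snr:.2f} stops better shot-noise SNR")
```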