You clearly don't understand the primary source of noise. It is impossible to have ISO 100 performance at ISO 6400 while still having comparable sensor resolution to sensors of today. "Noise" is a general term that refers to ALL noise in an image; not all noise in an image is from the camera's electronics. Noise caused by camera electronics is called read noise; however, read noise only affects the deep shadows, and it is generally only significant at lower ISO settings. You are also missing the fact that dynamic range is relative to noise. Eliminate noise, and you effectively have infinite dynamic range (or, in the case of a digitized result, you gain the maximum dynamic range up to your bit depth, whatever that may be: 14 bits/14 stops, 16 bits/16 stops, 1024 bits/1024 stops).
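The "dynamic range is relative to noise" point can be put in numbers. A common engineering definition is full-well capacity divided by read noise, expressed in stops (log base 2). A minimal sketch, using hypothetical sensor figures chosen purely for illustration:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Engineering dynamic range: ratio of the largest recordable signal
    (full-well capacity, in electrons) to the noise floor (read noise),
    expressed in stops (log2)."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical example: 45,000 e- full well, 3 e- read noise.
dr = dynamic_range_stops(45000, 3)
print(round(dr, 1))  # 13.9 stops

# Halving the read noise gains exactly one stop; as read noise approaches
# zero, DR grows without bound, until the ADC bit depth caps what can
# actually be digitized (e.g. 14 bits -> at most 14 stops recorded).
dr_low_noise = dynamic_range_stops(45000, 1.5)
print(round(dr_low_noise, 1))  # 14.9 stops
```

This is why eliminating noise effectively yields unlimited dynamic range in the analog domain, with the bit depth of the digitization becoming the practical ceiling.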
On the contrary, I am well aware of where noise is introduced, both as a consequence of sensor design and of the increased gain used to have a sensor simulate higher ISO sensitivities.
However, do not be misled into assuming that the digital sensors in modern cameras in any way represent the cutting edge of digital imaging. They do not; they are not even close. Unfortunately, real cutting-edge technologies result in million-dollar digital imaging equipment that is, of course, not cost-effective to build into a consumer product. Additionally, do not assume that what we know about physics today is all there is in the universe; our knowledge and conceptual understanding of physics has been challenged many times over through human history. Your response suggests your comprehension of imaging technology is limited to single-wafer sensor designs, and to those bounded by today's consumer technology. The Hubble telescope, for example, can resolve more detail than the D800, with greater dynamic range, and all at much higher ISO ranges, because that is what it was designed to do regardless of cost; it was never intended to be a consumer product. Yet its total megapixel count is a mere 5.1 MP. It does, however, use multiple sensors to capture the analog data, which is then put back together to produce an image, clearly showing that 'more MP' is not the only approach to image quality.
In DSLR sensor design there are several immediate approaches that could be researched. One would be a sensor designed to operate at a base signal amplification much higher than current technology (~ISO 300), resulting in a base ISO sensitivity of, say, around 3200, with the greater gain adjustment at the lower-sensitivity end, as opposed to the current implementation, and only a small increase in gain to achieve 6400-12800. Textbook physics tells us that such an approach would not leave enough signal strength at ISO 100 sensitivity to get readable data (again, assuming we know everything about physics), but that could be countered by charging and reading fewer photosites at lower sensitivity settings, then increasing the number of photosites charged and read at the higher ISO range. That would of course mean the resolution output of the camera is lower at lower ISO settings and higher at higher ISO settings, or it could simply be set to output, say, 15 MP images during ASIC processing regardless of the actual megapixel count of the sensor. There would of course be a massive number of consumers who would feel cheated in some way buying a 45 MP camera that only outputs 15 MP images, but people are buying a 36 MP camera today that has to be downsampled to 8 MP in order to generate DxO award-winning images, so that should not really have any impact as long as it produces the desired output in the end, right?
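The trade-off behind reading multiple photosites as one is the standard binning argument: summing k sites multiplies signal by k while shot noise and read noise each grow only as the square root of k, so SNR improves by sqrt(k). A minimal sketch of that arithmetic, with hypothetical electron counts and a deliberately simplified noise model (shot noise plus read noise in quadrature, ignoring dark current and pattern noise):

```python
import math

def snr(signal_e, read_noise_e):
    """SNR of one photosite: shot noise variance equals the signal (in
    electrons); read noise adds in quadrature. Illustrative model only."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

def binned_snr(signal_e, read_noise_e, k):
    """Sum k photosites: signal scales by k, but total noise variance
    also scales by k, so SNR improves by sqrt(k)."""
    return (k * signal_e) / math.sqrt(k * signal_e + k * read_noise_e ** 2)

# Hypothetical numbers: 100 e- per site, 5 e- read noise, 4-site bin.
single = snr(100, 5)
binned = binned_snr(100, 5, 4)
print(round(binned / single, 2))  # 2.0, i.e. sqrt(4)
```

That sqrt(k) gain is what would let a high-base-ISO sensor trade resolution for clean low-ISO output, as described above.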
Another method would be multiple sensors, much the same way high-end digital video camera equipment is designed. With only a small increase in camera size, multiple sensors could be utilized, each reading only a certain spectrum of light, four being the most logical array (Red, Blue, Green, and UV to measure intensity), which would yield more color and light-intensity data than is captured today by any consumer device. Data that translates to detail, color spectrum, tonal accuracy, and dynamic range.
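One hedged way to picture how a fourth, intensity-only channel could improve tonal accuracy: use the three colour sensors for ratios and the dedicated intensity sensor for luminance, rescaling the colour channels so their computed luma matches the measured value. The function name, the Rec.709-style luma weights, and the per-pixel scalar inputs are all my own illustrative assumptions, not a description of any real camera's pipeline:

```python
def fuse_pixel(r, g, b, intensity):
    """Hypothetical four-sensor fusion for one pixel: R/G/B sensors supply
    colour ratios; the dedicated intensity sensor supplies a more accurate
    luminance. Rescale the colour channels so their Rec.709-style luma
    matches the measured intensity."""
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    if luma == 0:
        return (0.0, 0.0, 0.0)
    gain = intensity / luma
    return (r * gain, g * gain, b * gain)

# Colour sensors report (0.40, 0.50, 0.30), but the intensity sensor
# measured 0.5 -- trust it for tonality and rescale the colour data.
print(fuse_pixel(0.40, 0.50, 0.30, 0.5))
```

The design choice here is that colour and luminance are decoupled, so noise or spectral limitations in the colour channels no longer degrade tonal accuracy.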
Yet another method would be a single-wafer design where one third of the photosites are dedicated to each primary color spectrum, somewhat similar to, but going further than, the approach taken by Fujifilm with their X-Trans sensors (and the original design found in the S2, S3, and S5 Pro).
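The appeal of a one-third-per-colour layout is that every output pixel gets a true measurement in each primary, with no demosaic interpolation, at the cost of one third the site count in resolution. A toy sketch of that trade-off on a one-dimensional strip of readings (the strip layout and function are purely illustrative, not any actual sensor geometry):

```python
def rgb_strip_to_pixels(sites):
    """Hypothetical layout: photosites alternate R, G, B along a strip, so
    each consecutive triple yields one full-colour pixel with no demosaic
    interpolation, at one third the photosite count."""
    assert len(sites) % 3 == 0, "strip length must be a multiple of 3"
    return [tuple(sites[i:i + 3]) for i in range(0, len(sites), 3)]

# Six photosite readings -> two true-colour pixels.
print(rgb_strip_to_pixels([10, 20, 30, 40, 50, 60]))
# [(10, 20, 30), (40, 50, 60)]
```

Compare this with a Bayer sensor, where two of the three colour values at every pixel are interpolated from neighbours rather than measured.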
Fujifilm is probably the best example of what I meant in my original post. Canon/Sony/Toshiba/Aptina are not actually pushing the boundaries of digital imaging technology; they are catering to the boundaries of consumer marketability. Fujifilm is unfortunately one of the few (if not the only) consumer imaging companies actually trying to advance the digital imaging world at this time by working outside the box. As I stated earlier, and it is to the actual detriment of the technology, it is simply a matter of dollars and cents: for Canon/Sony/Toshiba/Aptina it is cheaper to try to improve current technology than to explore and develop new technology. The major players have too much invested in current technology to explore a new approach, at least any time soon.
Regarding my ‘unfortunately’ reference to Fujifilm: I did not mean that in a bad way, quite the contrary; I love Fujifilm’s approach. What I meant is that if new technology like that were backed by the kind of money and research Canon and the other major players spend on 'old-tech' improvements, we would already be where I stated we should be in the imaging world.