You clearly don't understand the primary source of noise. It is impossible to have ISO 100 performance at ISO 6400 while still having sensor resolution comparable to today's sensors. "Noise" is a general term that refers to ALL noise in an image; not all of the noise in an image comes from the camera's electronics. Noise caused by the camera's electronics is called read noise, however read noise only affects the deep shadows, and it is generally only present to a relatively significant degree at lower ISO settings. You are also missing the fact that dynamic range is defined relative to noise. Eliminate noise, and you effectively have infinite dynamic range (or, in the case of a digitized result, the maximum dynamic range allowed by your bit depth...whatever that may be: 14 bits/14 stops, 16 bits/16 stops, 1024 bits/1024 stops.)
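To make that last point concrete: engineering dynamic range is just the ratio of full-well capacity to the noise floor, capped by the ADC's bit depth once digitized. A minimal sketch, using an assumed 60,000 e- full well and hypothetical read-noise values (not measurements of any real sensor):

```python
import math

def dr_stops(full_well_e, read_noise_e, adc_bits=None):
    """Engineering dynamic range in stops: log2(full well / noise floor),
    optionally capped by the ADC bit depth after digitization."""
    dr = math.log2(full_well_e / read_noise_e)
    return min(dr, adc_bits) if adc_bits is not None else dr

# Hypothetical pixel: 60,000 e- full well, 14-bit ADC.
for read_noise in (15.0, 3.6, 0.1):
    print(read_noise, round(dr_stops(60000, read_noise, adc_bits=14), 2))
# As read noise approaches zero, DR stops growing at 14 stops -- the bit depth.
```

Cutting read noise from 15 e- to 3.6 e- takes this hypothetical pixel from roughly 12 stops up to the 14-stop ceiling; past that point, only more bits (or a bigger well) help.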
On the contrary, I am well aware of where noise is introduced, both as a consequence of design and from the increased gain used to have a sensor simulate higher ISO sensitivities.
However, do not be misled into assuming that the digital sensors in modern cameras in any way represent the cutting edge of digital imaging - they do not - they are not even close.
Ok, first, you are largely correct, assuming a global context. Now, I assumed a video and small form factor CIS context, as that is pretty much what we deal with on this forum...and in that context, yes, we are VERY advanced, and it will not be long before we start hitting physical walls. It is because we have encroached upon several physical walls already that there are some truly radical innovations being discovered in the CIS arena.
Second, there ARE physical laws that govern how far we can take CMOS Image Sensor technology. It doesn't matter what the application is, or how big the sensor, or how big the pixels: those physical laws will always apply. Once we run into the limitations imposed by those physical laws, we will have to start doing other things...like backtracking. For example, instead of increasing pixel count, we will have to reduce it in order to gain dynamic range, once we have reached the maximum Q.E. possible with the greatest light-gathering capability per pixel (which might actually involve something fairly radical, such as monochrome sensors with some kind of piezoelectric color filter that is cycled through each color over the duration of an exposure). Once all the technological advancements are used up, the only real final option is to make pixels bigger. That will entail either reductions in megapixel count...or larger sensors. But I already mentioned all of these things...
Unfortunately, real cutting-edge technologies result in million-dollar digital imaging equipment that is of course not cost effective to build into a consumer product. Additionally, do not assume that what we know about physics today is all there is in the universe; our knowledge and conceptual understanding of physics has been challenged many times over through human history. Your response suggests your comprehension of imaging technology is limited to single-wafer sensor designs, and additionally to those bounded by today's consumer technology…
No, it is not limited to today's consumer technology. It is based on a lot of patents and research that have yet to be employed in any real-world designs at all, as well as on prototype designs and consumer technology. The context from which my response comes is much broader than simply existing consumer technology. I spend a lot of time on ChipWorks reading about the innovations found in consumer-level technology, as well as on Image Sensors World reading about all the latest and greatest innovations in the CIS world (which is pretty up to date as far as yet-to-be-used new research and patents go.)
The Hubble telescope, for example, can resolve more detail than the D800, with greater dynamic range, and all at much higher ISO ranges, because that is what it was designed to do regardless of cost, as it was not intended to be a consumer product - yet its total megapixel count is a mere 5.1mp. It does, however, use multiple sensors to capture the analog data, which is then put back together to produce an image, clearly showing that 'more mp' is not the only approach to image quality.
Comparing a DSLR with the Hubble Space Telescope is a little extreme. Again, let's try to limit our context to what is relevant to hand-holdable camera technology. Hubble's original primary CCD sensor was quite large (larger than medium format, actually about four times larger). Its low megapixel count is actually the very reason why it has much greater dynamic range. I mentioned in my earlier posts that I assumed maintaining pixel size. The most obvious and simplest approach to improving dynamic range/reducing noise levels is to increase pixel size. As a matter of fact, that is exactly what Canon did with the 1D X, and one of the reasons why its high ISO IQ is so good.
As it stands today, some of Hubble's CCD sensors have been upgraded. They use smaller pixels now (although still quite large, at 15 microns), offering more resolution. I believe current Hubble resolution is 16 megapixels, rather than 5.1 megapixels. Still, remember that Hubble's sensors are effectively supercooled (actually, since the telescope exists in space, I believe many of its electronic components are heated to keep them at relevant operating temperatures), so dark current in Hubble's CCDs is significantly lower than in an uncooled hand-held camera. I also mentioned in this very thread that cooling sensors with Peltiers can greatly reduce dark current, however again...there are physical limits to how far that will take you (especially in your average photography...it isn't like the things most people photograph will actually take advantage of the 0.0001e- of dark current you get at true supercool temperatures...only extreme low-light photographers who regularly shoot at very high ISO would see any benefit from 0.01e- dark current, and maybe aurora photographers might benefit from even lower levels at even higher ISO settings.)
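The benefit of cooling follows the usual rule of thumb that dark current roughly halves for every 5-6 °C drop in sensor temperature. A quick sketch under assumed values (the 0.1 e-/s room-temperature figure and the 6 °C doubling interval are illustrative, not data for any particular sensor):

```python
def dark_current(i_ref_e_per_s, t_celsius, t_ref=25.0, doubling_deg=6.0):
    """Rule-of-thumb thermal model: dark current doubles every ~doubling_deg C.
    i_ref_e_per_s is the dark current at the reference temperature."""
    return i_ref_e_per_s * 2 ** ((t_celsius - t_ref) / doubling_deg)

# Hypothetical sensor: 0.1 e-/s dark current at 25 C, Peltier-cooled to -35 C.
print(dark_current(0.1, -35.0))  # 60 C colder -> 2^10 = 1024x less: ~0.0001 e-/s
```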
So, yeah, Hubble gets much higher dynamic range. Its pixels also have as much as 15 times more surface area, with exceedingly low dark current relative to the average room-temperature DSLR or mirrorless sensor. Even assuming we find a way to supercool DSLR sensors...they are still going to be packing significantly more pixels into significantly smaller sensor area...so dynamic range is never going to be as good as the MONSTER CCD in the Hubble.
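The "15 times" figure is easy to sanity-check, since light-gathering area scales with the square of the pixel pitch. The 4 µm consumer pitch below is an assumed round number, not a specific camera's spec:

```python
def area_ratio(pitch_a_um, pitch_b_um):
    """Ratio of pixel areas given pixel pitches; area scales as pitch squared."""
    return (pitch_a_um / pitch_b_um) ** 2

# Hubble's 15 micron CCD pixels vs. a hypothetical 4 micron consumer pixel:
print(round(area_ratio(15.0, 4.0), 1))  # ~14x the light-gathering area
```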
In DSLR sensor design there are several immediate approaches that could be researched, one being a sensor designed to operate at a base signal amplification much higher than current technology's (~ISO 300), resulting in a base ISO sensitivity of, say, around 3200, with the greater gain adjustment at the lower-sensitivity end (as opposed to the current implementation) and only a small increase in gain to achieve 6400-12800.
First, we need to make sure we are on the same page regarding "base" ISO. Base ISO is the ISO setting at which maximum saturation fills the full well capacity (FWC). If you made ISO 3200 your "base" ISO, there simply wouldn't be lower ISO settings, or if there were, they would be something akin to ISO 50, where you lose DR to "gain" a lesser ISO setting via exposure trickery. There is also Unity Gain, the ISO setting at which one electron of signal produces one ADU. By "base signal amplification", are you referring to "unity gain"?
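Unity gain can be estimated from full-well capacity and ADC bit depth if you assume the full well maps onto the ADC's full scale at base ISO, with each ISO doubling halving the electrons per ADU. The 60,000 e- well and 14-bit ADC below are illustrative assumptions, not a particular sensor's figures:

```python
def unity_gain_iso(full_well_e, adc_bits, base_iso=100):
    """Estimate the ISO at which one electron yields one ADU, assuming the
    full well spans the ADC's full scale at base ISO."""
    e_per_adu_at_base = full_well_e / 2 ** adc_bits
    # Each ISO doubling halves e-/ADU, so unity gain sits at base * gain.
    return base_iso * e_per_adu_at_base

print(round(unity_gain_iso(60000, 14)))  # ~ISO 366 for this hypothetical sensor
```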
Textbook physics tells us that such an approach would not leave enough signal strength at ISO 100 sensitivity to get readable data (again, thinking we know everything about physics), but that could be countered by charging and reading fewer photosites at lower sensitivity settings, then increasing the number of photosites charged and read at the higher ISO range. That would of course mean the resolution output of the camera is lower at lower ISO settings and higher at higher ISO settings,
Why would you have lower resolution at lower ISO settings, and higher at higher ISO settings? That seems inverted to me. When you have a lot of light, it is easy to get more resolution...you don't have to amplify the signal as much. It is when you have LITTLE light that you have to amplify the signal more... OR, you could bin pixels at higher ISO to increase real-world sensitivity, which indeed would reduce resolution for a gain in signal strength. Sure, that is an option...try selling it to the average consumer, though. Dynamic resolution is a quirky feature.
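The binning trade-off is easy to quantify in the shot-noise-limited case, where photon noise is the square root of the signal (Poisson statistics): averaging a 2x2 block quadruples the collected signal but only doubles the noise. A minimal sketch with an assumed 400 e- per-pixel signal:

```python
import math

def shot_limited_snr(signal_e):
    """SNR when photon shot noise dominates: signal / sqrt(signal)."""
    return signal_e / math.sqrt(signal_e)

single = shot_limited_snr(400)      # one pixel collecting 400 e-
binned = shot_limited_snr(4 * 400)  # 2x2 charge-binned superpixel
print(single, binned)  # 20.0 40.0 -- double the SNR, half the linear resolution
```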
or it could simply be set to output, say, 15mp images during ASIC processing regardless of the actual mp count of the sensor. There would of course be a massive number of consumers who would feel cheated in some way buying a 45mp camera that only outputs 15mp images, but hey, people are buying a 36mp camera today that has to be downsampled to 8mp in order to generate DxO award-winning images, so that should not really have any impact as long as it produces the desired output in the end, right…
The average camera buyer doesn't know that DxO downsamples 36mp images to 8mp. All the average camera buyer knows is that, at least according to DxO, their D800 gets 14.4 stops of DR. Never mind the fact that as far as RAW editing is concerned, the unscaled Screen DR is the measure that provides the correct DR, which is 13.2 for the D800...most consumers would never know that, instead thinking they have an extra 1.2 stops of DR that simply doesn't exist in their images. That's a detrimental state of affairs if a landscape photographer decides to leave their GND filters at home when they go out to photograph a 14+ stop sunset...oops.
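The size of that normalization bonus follows from simple noise averaging: combining N pixels into one reduces uncorrelated noise by sqrt(N), which adds half a stop of measured DR per doubling of pixel count. A back-of-the-envelope check for a 36.3mp-to-8mp downsample (DxO's exact methodology differs, so treat this as an estimate):

```python
import math

def dr_gain_stops(src_mp, dst_mp):
    """Stops of DR gained by averaging uncorrelated noise when downsampling:
    noise drops by sqrt(src/dst), i.e. 0.5 * log2(src/dst) stops."""
    return 0.5 * math.log2(src_mp / dst_mp)

print(round(dr_gain_stops(36.3, 8.0), 2))  # ~1.09 stops
```

That ~1.1-stop figure lands in the same ballpark as the 1.2-stop Screen-vs-Print gap for the D800.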
That would be the inherent problem with dynamic resolution (at least, as anything but a niche camera)...few would actually know that at point of sale. They would only discover it through use, assume something was broken, and create a customer service nightmare in their ignorance. Keeping technology viable for consumers does matter in the grand scheme of things. I think a sensor with dynamic resolution that maintained real-world sensitivity is interesting, for sure. I wonder if it is practical, though. I guess for night sky and aurora photographers who downsample and publish on the web and never do anything else with their work, such a camera would be a dream.
In relation to the part of your answer I was originally responding to, dynamic resolution with a sensor that automatically binned pixels at any ISO setting above 100 (in order to maintain actual sensitivity, and achieve the same levels of noise at any ISO) wouldn't be a practical consumer product. You said you thought "we all" would best be served if Canon produced a sensor where ISO 6400 looked the same as ISO 100 in terms of noise. Sorry, but if dynamic binning and dynamic resolution is the only real-world solution to that, I don't really agree...and Canon will never do it anyway. Nikon might do it, they love getting their hands dirty with niche technology that doesn't help their bottom line, but even for Nikon, it seems like a bit of a stretch. The technology has to be viable to the consumer before any manufacturer would really touch it.
Another method would be multiple sensors, much the same way high-end digital video equipment is designed. With only a small increase in camera size, multiple sensors could be utilized to each read only certain spectrums of light, four being the most logical array (red, green, blue, and UV to measure intensity), which would yield more color and light intensity data than is captured today by any consumer device. Data that translates to detail, color spectrum, tonal accuracy, and dynamic range.
I thought I mentioned Three-CCD in my answer (although I may be conflating conversations, as much the same conversation is occurring in multiple threads on this forum.) I agree, Three-CCD would definitely be intriguing as a means of improving both sensitivity and resolution. However, it does NOT solve the problem of making ISO 6400 look like ISO 100. It actually solves the resolution problem...it would let us push resolution more (for a while) without incurring further losses in pixel size, noise, etc.
Yet another method would be a single-wafer design where one third of the photosites are dedicated to each primary color spectrum, somewhat similar to, but going further than, the approach taken by Fujifilm with their X-Trans sensors (and the original design found in the S2, S3, S5 Pro).
Fujifilm is probably the best example of what I meant in my original post.
Again, I am taking from a conversation in another thread. X-Trans was just brought up in a thread about AA filters and moire. Technically speaking, FujiFilm, while they are innovative, have not actually brought us anything significantly better than Canon/Sony/Toshiba/Aptina. Fuji once had extra pixels in "dead space" on the sensor die. These extra pixels were monochrome, and were simply used to increase dynamic range. It was slightly effective. It was also completely blown away by Sony with their Exmor technology. COMPLETELY BLOWN AWAY.
X-Trans is another great example of an indiscriminate "improvement". It is intriguing, for sure...however it doesn't actually do a better job at anything than standard Bayer sensor designs from C/S/T/A. X-Trans claims to be moire-free. Indeed, it is...however that comes at a cost. It uses a 6x6 pixel grid for interpolation...which inevitably results in a greater degree of blurring. The problem with a 6x6 pixel grid is that it is less discriminating than a classic OLPF about which frequencies NEED to be blurred in order to avoid moire. Interpolating 6x6 rather than 2x2 inherently requires greater overlap, so more blur than standard Bayer interpolation as well. I spent a lot of time researching X-Trans when Fuji first released the technology, and I have looked at quite a few images from those cameras. High ISO performance is great, thanks to the greater degree of pixel averaging offered by a 6x6 grid, however you will never see the same kind of high-fidelity image detail from any X-Trans camera that you get from standard Bayer sensors. You sometimes also get a bit of haloing around sharp edges that are either particularly dark or particularly bright.
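The interpolation-neighborhood point is easiest to see by putting the two repeating color-filter tiles side by side. The 6x6 layout below is the commonly documented X-Trans pattern (reproduced here from memory, so treat it as illustrative); the key property is simply how much larger its repeating unit is than Bayer's 2x2:

```python
# 2x2 Bayer tile: repeats every 2 pixels in each direction.
BAYER = [
    "RG",
    "GB",
]

# 6x6 X-Trans tile: repeats every 6 pixels, forcing demosaicing algorithms
# to interpolate over a much wider neighborhood.
XTRANS = [
    "GBGGRG",
    "RGRBGB",
    "GBGGRG",
    "GRGGBG",
    "BGBRGR",
    "GRGGBG",
]

greens = sum(row.count("G") for row in XTRANS)
print(greens, 6 * 6)  # 20 of 36 sites are green (vs. 2 of 4 for Bayer)
```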
Fuji has some interesting ideas, and they definitely know how to think out of the box. But again...you can't beat physics. All Fuji has done with X-Trans is find an alternative way to blur higher-frequency image detail, same as an OLPF. The difference is that the OLPF is more discriminating, and it only blurs frequencies RIGHT around Nyquist, whereas X-Trans is less discriminating, and will blur relatively evenly in whatever radius is imposed by their 6x6 pixel interpolation. Personally, I'll keep my OLPF, thanks!
Canon/Sony/Toshiba/Aptina are not actually pushing the boundaries of digital imaging technology, they are catering to the boundaries of consumer marketability.
I'm not so sure about that. Excluding Canon (they do a lot of innovation, but admittedly the percentage of it that is dedicated to sensors seems rather small as of late), Sony, Toshiba, Aptina, and quite a number of other CIS authorities like Omnivision, SiOnyx, Panasonic, etc. are indeed pushing boundaries. You should read Image Sensors World...there are some pretty amazing innovations being created by die-hard consumer companies, including Sony. They may not be breaking from a standard Bayer as much as Fuji has, however that doesn't diminish the fact that they have made some significant strides for products that most definitely find their way into consumers' hands. Sony's Exmor is nothing short of phenomenal, and it is still "just another bayer sensor", albeit with a very innovative approach to digital low-noise readout.
Just because you produce products that sell to the consumer doesn't mean you can't be innovative.
Fujifilm is unfortunately one of the few (if not the only) consumer imaging company actually trying to advance the digital imaging world at this time by working outside the box..
Again, I think this is an ill-informed opinion. Fuji has a knack for pixel arrangements. They just recently applied for a patent on a bayer-type sensor with different sized pixels for green, red, blue, and white. It's quirky, it's different, certainly out of the standard box...but...given Fuji's track record of making SIGNIFICANT breakthroughs...I suspect it is also really just more of the same. I don't suspect that Fuji's latest patent will really make any major waves in the long run.
Now, if Fuji keeps pushing this technology, they may be on the right track to creating a sensor with a truly random "retina-style" distribution of pixels in a sensor. THAT would be an intriguing innovation, and one that could truly eliminate moire without any real cost to detail. We'll see, though...Fuji has had other non-standard bayer pixel arrays in the past, and again...none of them really produced IQ that was significantly better (or even better at all) than the competition.
Speaking of the competition, Fuji is not the only one exploring non-standard pixel layouts. Several of the rumors about Sony's supposed 54mp sensor indicate that it will not use a standard bayer layout. Not only are they targeting non-standard layouts, but Sony also filed patents for triangular and hexagonal pixels as well (although I'm honestly not sure how that improves photodiode area, which is the single most significant factor when it comes down to literal sensitivity...so only time will tell if such pixels are actually better.) So it isn't JUST Fuji who is thinking outside the box.
Just being more radical in your designs does not necessarily mean they are better.
As I stated earlier, and to the actual detriment of the technology, it is simply a matter of dollars and cents - for Canon/Sony/Toshiba/Aptina it is cheaper to try to improve current technology than to explore and develop new technology. The major players have too much invested in current technology to explore a new approach, at least any time soon.
Again, ill-informed opinion. All of these companies have a certain amount of their R&D budget dedicated to more extreme innovation. Most of these companies, and others, have made more significant discoveries than Fuji, ones that have demonstrated very significant real-world benefits. Seriously, read Image Sensors World...some of the innovations are pretty cool, and many will indeed change the imaging world.