Very interesting, but recently you said that if Canon relies on dual ISO, that's only a bandaid, and might not yield enough of a DR increase, at least not without the combined benefit of a lower noise floor. Obviously you meant something more akin to what ML did, rather than starting from quasi-scratch, as this link hints at.
Using the existing downstream amplifier on half the pixels, which is what ML is doing, is a bandaid (and not ideal, as it costs you in resolution). What Canon has patented here is MUCH better...the way I would expect it to be done. Since they are reading the sensor with two different gain levels, I really don't see why there would be any reasonable limits on DR for the foreseeable future...ML is only limited to 14 stops because the ADC is 14-bit. Technically, the potential for very scalable DR is there in Canon's patent (assuming I've understood it correctly, that is.)
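Roughly speaking, an ideal N-bit ADC can only encode about N stops between its smallest nonzero step and its largest code value, which is why ML's pipeline tops out around 14 stops. A quick back-of-the-envelope sketch (just the arithmetic, nothing Canon- or ML-specific):

```python
import math

def adc_dr_ceiling_stops(bits):
    """Upper bound on dynamic range (in stops) an ideal N-bit ADC can encode:
    the ratio of its largest code value to its smallest nonzero step."""
    return math.log2(2 ** bits - 1)

print(f"14-bit ADC: ~{adc_dr_ceiling_stops(14):.2f} stops")  # ~14.00
print(f"16-bit ADC: ~{adc_dr_ceiling_stops(16):.2f} stops")  # ~16.00
```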
It seems to me there will be a lot of lossless compression necessary for the large RAW files (and a lot of processing power). Also, though, does this not make it likely that the 2014 1-series camera, assuming it's in the 40MP range, may not use the above process? If so, it might just "only" have 14-bit RAW capability. I too was hoping it was actually going to be 16-bit, whether it actually got much over 14 stops of "real" DR or not. That would really be something, if Canon suddenly introduced a camera that could actually do 16 stops.
Are you planning on buying the new camera, early on?
Agreed, normally a RAW file will have lossless compression. Still, a gigabit of information is a lot...you can't compress the read stream, really...you have to process it all in order to compress the output file. So, while from a storage space standpoint it wouldn't be all that bad, from an image processing standpoint...you would need much faster processors.
Canon, or someone, mentioned around a year ago (maybe not quite that long) that Canon might push a bit depth increase with the Big MP camera. Who knows if that is the case; it was a CR1, but still, interesting nevertheless. I can't imagine anyone pushing bit depth until there is a definitive reason to do so. For all of DXO's claims about the Nikon D800 and D600 offering more than 14 stops of DR, they are talking about downscaled output images. The native DR of the hardware itself is still less than 14 stops...13.2 for the D800 IIRC.
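For what it's worth, the downscaled numbers aren't magic: averaging pixels cuts noise by roughly the square root of the pixel ratio, and DXO normalizes to an 8MP print. A rough sketch of that normalization (the 8MP output size and the square-root rule are my assumptions about their method; 13.2 stops at 36.3MP is the native figure):

```python
import math

def print_dr(native_dr_stops, native_mp, output_mp=8.0):
    """Approximate print-normalized DR: downsampling to output_mp averages
    native_mp / output_mp pixels, cutting noise by the square root of that
    ratio, i.e. adding 0.5 * log2(native_mp / output_mp) stops."""
    return native_dr_stops + 0.5 * math.log2(native_mp / output_mp)

# D800: ~13.2 stops native at 36.3MP -> ~14.3 stops at an 8MP print size
print(f"{print_dr(13.2, 36.3):.1f} stops")
```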
That's with 3e- of read noise...which is INSANELY LOW (usually, you don't see that kind of read noise until you start Peltier-cooling sensors to sub-freezing temperatures). There are a few new ideas floating around regarding how to reduce read noise. There have been a number of patents and other things circulating lately about "black silicon", a structural modification of silicon that gives it extremely low reflectivity. It supports a natural read noise level of around 2e- and some of the best low-light sensitivity known, and it is being researched for use in extreme low-light security cameras that can see by starlight (which blows my mind). Theoretically, this can greatly improve DR at what would be high ISO settings.
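Read noise matters so much because per-pixel DR is just the ratio of full well capacity to the noise floor. A quick illustration (the 75,000e- full well is a ballpark assumption, not any particular sensor's spec):

```python
import math

def dr_stops(full_well_e, read_noise_e):
    """Per-pixel engineering dynamic range in stops:
    log2(full well capacity / read noise floor)."""
    return math.log2(full_well_e / read_noise_e)

FULL_WELL = 75_000  # assumed ballpark full well capacity, in electrons
for rn in (3.0, 2.0):
    print(f"read noise {rn}e-: {dr_stops(FULL_WELL, rn):.1f} stops")
# 3e- -> ~14.6 stops; 2e- -> ~15.2 stops
```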
Canon's approach with dual scaling is potentially another way to get a lot more dynamic range, on average, at low or high ISO out of a single read: using two separate signals with different gain and sampling (I guess) to effectively do a low ISO and a high ISO read at the same time for each pixel, then blending the results together using on-die CP-ADC.
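In principle, merging a high-gain and a low-gain read of the same frame is straightforward: take the shadows from the high-gain sample and the highlights from the low-gain one, after rescaling to a common exposure. A toy sketch of that kind of blend (the 16x gain ratio and clip threshold are invented for illustration; this is NOT a claim about Canon's actual patented method):

```python
import numpy as np

def merge_dual_gain(low, high, ratio=16.0, clip=0.98):
    """Merge two linear reads of the same frame, each normalized to [0, 1].
    'high' is the high-gain read (cleaner shadows, clips 'ratio' times
    sooner); 'low' is the low-gain read (keeps the highlights)."""
    # Where the high-gain sample hasn't clipped, its rescaled value has the
    # better shadow SNR; everywhere else, fall back to the low-gain read.
    return np.where(high < clip, high / ratio, low)

# Toy usage: a 1D "scene" spanning several stops of brightness
scene = np.geomspace(1e-4, 1.0, 8)
low = np.clip(scene, 0.0, 1.0)
high = np.clip(scene * 16.0, 0.0, 1.0)
print(merge_dual_gain(low, high))  # recovers the scene across the range
```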
As for new cameras...all that is on hold until I can get my business started and start making some money again. I don't have any plans to purchase anything at the moment, outside of possibly a 5D III if the price is right. I certainly won't be buying a 1D MPM (megapixel monster) any time soon if it hits with a price over $5k. Besides, I like to wait and see how things settle first...I am still interested in the 7D II, and want to wait for both cameras to hit the street and demonstrate their real-world performance before I make a decision.
Very informative points, thank you. And I think it was you who first mentioned "black silicon" on here earlier this year. I recall trying to read more about it, probably a link you posted. I think I read something on Wikipedia about it as well, for what little that is worth.
Thanks for pointing out that the compression would be useless during the read and processing stage. I knew that but hadn't even considered it...I was just thinking of the large files being written to storage media of some kind. It almost seems like the high processing power is more achievable than the speed required to write and store the files, say at 5 frames per second or more. You would need a large internal buffer capacity. I suppose some kind of wireless technique could be used to write very large files quickly to an external computer, or a watch phone or something...haha! I guess it would all get designed to work, if the need for really large files came to the fore...or rather, when it does.
When it comes to the processing power required to process the image on the sensor, it has to be uncompressed data. But not only that, it has to be uncompressed data PLUS overhead...there is always a certain amount of overhead: additional data, additional processing to combat one problem or another, etc. So while the data size may be a gigaBIT (125MB per image), the actual total amount of data read is going to be larger, maybe closer to 160MB per image. If one wanted a high readout rate...say 9.5fps, then the total throughput rate would need to be just over 1.5 gigaBYTES per second!
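Putting numbers on that (the ~160MB overhead figure is the same rough guess as above):

```python
GBIT_PER_FRAME = 1.0                          # raw read, per the estimate above
BYTES_PER_FRAME = GBIT_PER_FRAME * 1e9 / 8    # 125,000,000 bytes = 125MB
OVERHEAD = 160e6 / 125e6                      # assumed ~28% readout overhead
FPS = 9.5

throughput = BYTES_PER_FRAME * OVERHEAD * FPS
print(f"{throughput / 1e9:.2f} GB/s")         # ~1.52 GB/s
```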
And what type of device or computer is currently capable of that kind of throughput?
The original SATA standard was capable of 1.5Gbit/s, SATA2 of 3.0Gbit/s, and SATA3 is currently capable of 6.0Gbit/s. That would be one of the SLOWEST data transfer rates in modern computing devices. A modern CPU is capable of around 192Gbit/s of data throughput on the chip itself and along its primary buses. A modern GPU is capable of even higher transfer rates in order to redraw graphics up to 144 times per second (on 144Hz computer screens): hundreds of millions of pixels per second, trillions of operations per second, and data throughputs of hundreds of gigabits per second.
In order to handle 120 or 144 frames per second on modern high-framerate gaming and 3D screens at 2560x1600 or even 3840x2160 (4k) with 10-bit precision, you would need at least 11,943,936,000bit/s of throughput from video card to screen (that figure counts just 10 bits per pixel, so it's a floor). This is, BTW, the next generation of hardware, already trickling onto the market...high-end gaming and graphics computing hardware, running on next-generation GPUs and on early 4k SuperHD screens, using interfaces like Thunderbolt, which happens to operate via a single channel at 10Gbit/s for v1, and 20Gbit/s for v2 via "aggregate" channels.
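The arithmetic behind that figure, for anyone who wants to check it (note that it counts only 10 bits per pixel as a deliberate floor; 10 bits per color channel would triple it):

```python
def display_bandwidth_bps(width, height, bits_per_pixel, hz):
    """Uncompressed bits per second needed to feed a screen."""
    return width * height * bits_per_pixel * hz

# 4k at 144Hz, counting 10 bits per pixel (the lower bound quoted above)
print(f"{display_bandwidth_bps(3840, 2160, 10, 144):,} bit/s")  # 11,943,936,000
# The same screen at 10 bits per channel (30 bits per pixel)
print(f"{display_bandwidth_bps(3840, 2160, 30, 144) / 1e9:.1f} Gbit/s")  # ~35.8
```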
General computing is currently capable of very significant data throughput. With the next generation of GPUs paving the way for high-performance 4k 3D gaming (and even multi-screen 3D gaming, at that!), the average desktop of 2015 and beyond should be able to handle 150mp image files as easily as desktops handle the 20/30/50/80mp image files from DSLR and MFD cameras today.
Assume a 120mp camera operating at 9.5fps with 16-bit image frames (just speculating; Canon has at least demonstrated a sensor in that class). The raw per-frame size at 16 bits is 1,920,000,000 bits (1.92Gigabit); divide by 8 for bytes, which comes to 240,000,000 bytes (240Megabytes). Multiply by 9.5 frames per second, and you have a total data throughput of 2.28GB (2.28Gigabytes) per second, or 18.2Gbit/s. A single Thunderbolt v2 aggregate channel would be sufficient to handle that kind of data throughput, and would be capable of transferring a full 240MB RAW frame onto a computer in well under a second...assuming you had comparable memory card technology that could keep up (which certainly doesn't seem unlikely, given the rate at which memory card speed is improving).
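The same math in a few lines, for anyone following along:

```python
PIXELS = 120e6        # speculative 120mp sensor
BITS_PER_PIXEL = 16   # speculative 16-bit readout
FPS = 9.5

bits_per_frame = PIXELS * BITS_PER_PIXEL       # 1.92e9 bits (1.92Gigabit)
bytes_per_frame = bits_per_frame / 8           # 240,000,000 bytes (240MB)
gbytes_per_sec = bytes_per_frame * FPS / 1e9   # 2.28GB/s
print(f"{bytes_per_frame / 1e6:.0f}MB/frame, {gbytes_per_sec:.2f}GB/s "
      f"({gbytes_per_sec * 8:.1f}Gbit/s)")
# 240MB/frame, 2.28GB/s (18.2Gbit/s) -- under Thunderbolt v2's 20Gbit/s
```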
The real question is whether onboard graphics processors and DSPs (or rather computing packages, as they are today...usually a DSP stacked with a general-purpose processor like ARM and a specialized ultra-high-speed memory buffer) will be able to reach the necessary data throughput rates. As a matter of fact, they already operate at fairly decent speeds. A single 120mp frame is 240MB. With a pair of DIGIC5+ chips, you would be able to process two 120mp frames per second. The DIGIC5+ chip was about seven times faster than its predecessor, DIGIC4. If we assume a similar jump for the next DIGIC part, it would be capable of processing 3.36GB/s, more than the necessary 2.28GB/s to process 120mp at 9.5 frames per second, and quite probably enough to handle around 11 frames per second (and still have room for the necessary overhead).
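Extrapolating that out (the 7x generational jump is the assumption here, carried over from the DIGIC4 -> DIGIC5+ transition, along with the same rough overhead factor as before):

```python
FRAME_MB = 240.0                          # one 120mp frame at 16 bits
DIGIC5P_PAIR_MBPS = 2 * FRAME_MB          # pair of DIGIC5+: two frames/s
NEXT_GEN_MBPS = DIGIC5P_PAIR_MBPS * 7     # assumed 7x jump -> 3,360MB/s

OVERHEAD = 160 / 120                      # same rough overhead guess as before
fps = NEXT_GEN_MBPS / (FRAME_MB * OVERHEAD)
print(f"{NEXT_GEN_MBPS / 1000:.2f}GB/s -> ~{fps:.1f}fps with overhead")
# 3.36GB/s -> ~10.5fps, roughly the "around 11fps" ballpark above
```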
Given that release cycles for interchangeable-lens cameras are usually on the order of several years, we probably wouldn't see next-generation memory card performance until the 1D X Next and 5D Next, ca. 2016 or 2017. Sadly, at least historically, DSLRs have lagged even farther behind in data transfer standards support, so it could very likely be that we don't see comparable interface support in DSLRs and other interchangeable-lens parts until 2019/2020. :\ Which means, instead of being able to transfer our giant 100mp+ images in about one second each, we will still have to slog through imports at about a quarter of the speed our desktop computer technology is capable of...but we're all used to that already. ;P (Which, BTW, is one of the key reasons I believe desktop computers are a LONG way from being dead...they are still the pinnacle of computing technology, and no matter how popular ultra-portable tablets and convertibles are, I think most people still have and use a desktop computer with a trusty old keyboard and mouse for their truly critical work. Tablets, convertibles, phablets, and phones simply augment our computing repertoire.)