jrista, I don't see how this press release quantifies performance. It says nothing about SNR, DR, etc.; it doesn't even try to claim that the sensor is competitive in those regards. It may also require supporting hardware that is neither feasible nor practical in today's cameras. For instance, reading out and processing all that data would require far more readout channels and processing power than what you see in a 1DX today.
Press releases are also typically written by PR or marketing personnel, for purposes other than scientifically describing the findings.
As for patents, I don't read them, but they don't actually give any data about how well the actual implementations perform, do they? Without that data we cannot tell whether it's awesome or not. What seems great on paper might be bad in practice.
SNR and DR aren't the epitome of sensor performance, though. They are only factors of sensor performance. Both are heavily affected by readout noise, and it's been demonstrated that column-parallel ADC designs produce less read noise, by at least two companies now (Sony and Toshiba, and I believe other high end sensor manufacturers have similar designs in the works as well). Canon described some kind of hyperparallel on-die ADC for the 120mp APS-H.
Assuming the silicon process was the same generation as the cameras of the time it was announced, it's logical to assume it has the same fundamental characteristics as the 1D IV. The 1D IV had around 45% Q.E. and the same DR limitations as all Canon cameras (due to read noise). I see no reason to assume this sensor's fundamentals would be significantly different: the same at worst, and better if its highly parallelized readout offers improvements similar to Sony's and Toshiba's. Canon silicon hasn't really changed much over the years...the most significant improvement each generation is a few-percent jump in Q.E.
You're also misunderstanding the point of using a column-parallel ADC. You actually DON'T need as much processing horsepower to read out more pixels faster when you hyperparallelize the ADC units. The problem with having too few units is that each one MUST be powerful enough to handle the hundreds of thousands or millions of pixels it has to process. That means a higher operating frequency, and it also means more attention must be paid to the design of those units to limit the amount of noise they add to the signal (and even then, they are noisy parts because of the high frequency).
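To put some rough numbers on that argument: here's a back-of-the-envelope sketch (my own arithmetic and a hypothetical column count, not Canon's published figures) comparing the conversion rate each ADC must sustain with a handful of shared ADCs versus one ADC per column, for a ~120 MP sensor read out at the ~9.5 fps Canon quoted for the prototype.

```python
# Back-of-the-envelope: per-ADC conversion rate for serial vs column-parallel
# readout. COLUMNS is a hypothetical grid (13280 x 9184 ~ 122 MP), chosen only
# for illustration -- Canon's actual layout may differ.

PIXELS = 120_000_000   # ~120 MP APS-H prototype
FPS = 9.5              # readout rate Canon cited for the demo sensor
COLUMNS = 13_280       # assumed column count for illustration

def per_adc_rate(num_adcs):
    """Conversions per second each ADC must sustain to hit the frame rate."""
    return PIXELS * FPS / num_adcs

shared = per_adc_rate(8)        # e.g. 8 shared high-speed ADCs
column = per_adc_rate(COLUMNS)  # one ADC per column

print(f"8 shared ADCs  : {shared / 1e6:8.1f} Msps each")
print(f"column-parallel: {column / 1e3:8.1f} ksps each")
```

The shared ADCs each need well over a hundred megasamples per second, while the column-parallel ADCs get by on tens of kilosamples per second each, which is exactly why they can run at a much lower (and quieter) frequency.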
By using one ADC unit per column, each ADC can operate at a lower frequency. The lower frequency immediately offers a benefit in terms of read noise. Other techniques, such as moving the clock and driver off to a remote area of the die (as Exmor does), reduce noise even further. (Exmor took it one step farther and used a digital form of CDS, which Sony claimed was better than analog CDS...ironically, they added analog CDS back into the mix in later versions of Exmor for video cameras, so now they do both analog and digital CDS.) You trade die space for the ability to operate at lower frequency and power. With a 180nm process, that's a no-brainer. This HAS BEEN DONE...both Sony and Toshiba have working CP-ADC designs built into CMOS sensors that are actually used in consumer products. Sony has a number of technical documents explaining how they achieved exactly what Canon describes in its 120mp APS-H papers and patents: low-power, high-speed readout of high-resolution sensors via hyperparallel ADC.
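For anyone unfamiliar with CDS: the idea is to sample each pixel twice, once at its reset level and once after exposure, and subtract the two so that fixed offsets (and much of the reset noise) cancel. In digital CDS, both samples are digitized first and the subtraction happens in the digital domain. A minimal sketch, with made-up numbers purely for illustration:

```python
# Minimal illustration of digital correlated double sampling (CDS): both the
# reset level and the signal level are digitized per pixel, then subtracted
# digitally. All values below are invented for illustration.

def digital_cds(reset_samples, signal_samples):
    """Subtract per-pixel digitized reset (offset) levels from signal levels."""
    return [s - r for r, s in zip(reset_samples, signal_samples)]

reset  = [102, 98, 101, 100]   # digitized reset levels for four columns
signal = [612, 433, 251, 900]  # digitized post-exposure levels, same columns

print(digital_cds(reset, signal))  # prints [510, 335, 150, 800]
```

The analog version does the same subtraction with a sample-and-hold circuit before the ADC; doing it digitally is what lets designs like Exmor fold offset correction into the column-parallel readout path.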
So, even though Canon's 120mp APS-H isn't in an actual consumer grade product that we can buy, it uses technology that mirrors products from other brands that we can buy, and that have been tested. The most telling are Sony security video cams that use Exmor sensors, which can operate at very high frame rates in very low light...they are not only doing high speed readout with very, very low noise and relatively high DR, they are also doing processing with image processors that are packaged to the bottom of the sensor, and wired directly to it.
To be straight, I am speculating a bit, but it's very educated speculation. It isn't as though it's 100% unfounded drivel.
For instance, Foveon sensors seem like a great technology on paper, do they not? No CFA wasting 2/3rds of the light, and no demosaic algorithm interpolating data and making images soft at 100% view. Yet in real life Foveon is outperformed by standard CFA sensors; it delivers the resolution but does not perform well in other respects. Real-life performance is what counts, and Foveon sensors don't have it (yet; I would like to see that change).
As for Foveon, I think you're incorrect in your assessment. Foveon only "fails" at ONE thing: resolving power. There have been debates in the past on these forums where Foveon fans claim that because it has a 100% fill factor for all colors, it has as much or higher RESOLUTION than Bayer sensors. Those claims are wrong: Bayer sensors get largely the full benefit of the raw sensor resolution in terms of luminance; they only really suffer in color resolution and color fidelity (both areas where Foveon excels).
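The per-channel sampling argument above can be sketched with simple counts (my own simplification, not a measured comparison; the grid sizes are arbitrary examples): a Bayer mosaic samples green at half the photosites and red/blue at a quarter each, but demosaicking recovers luminance detail near the full grid, while a layered Foveon-style sensor samples all three colors at every site of a (typically smaller) grid.

```python
# Rough per-channel sample counts: Bayer mosaic vs layered (Foveon-style)
# sensor. "luma_sites" reflects the claim above that Bayer luminance detail
# approaches the full photosite grid after demosaicking.

def bayer_samples(width, height):
    total = width * height
    return {"green": total // 2, "red": total // 4, "blue": total // 4,
            "luma_sites": total}

def layered_samples(width, height):
    total = width * height
    return {"green": total, "red": total, "blue": total, "luma_sites": total}

print(bayer_samples(4000, 3000))    # 12 MP Bayer grid (example size)
print(layered_samples(2000, 1500))  # 3 MP layered grid (example size)
```

So a layered sensor wins decisively on per-pixel color sampling, while the Bayer sensor's larger grid keeps the edge in luminance resolution, which matches the point about where each design actually suffers.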
For what Foveon is, at its REAL spatial resolution, it is actually very good. The red channel is a little noisier, but the blue channel is less noisy than Bayer's. No surprise, given the layering order of the color photodiodes in the Foveon. Even though image dimensions/resolving power for Foveon are lower than for Bayer sensors, those smaller images usually exhibit high quality. I do think that color fidelity with Foveon cameras is superior to what I get with my Canon DSLRs (I just like my resolution too much to give it up). So I think it's unfair to claim that the real-life performance of Foveon is bad or even poor. For what it is, its real-life performance is very good.
The only drawback of Foveon is its resolving power...and I truly believe that Sigma has done Foveon a big disservice by trying to upsell it as having more resolution than it really does, or by claiming that because it gathers full color information per pixel, upsampling it somehow beats Bayer sensors for resolution and detail. Actual real-world examples that do exactly that have proven otherwise. Foveon's problem isn't that it's bad technology; it's that Sigma owns it, and Sigma has neither the marketing power nor the R&D budget to really make Foveon shine and become a highly competitive alternative. Sigma is much more a lens company than a camera or sensor company, IMO. I do believe Foveon COULD be highly competitive in the hands of a wealthier corporation that could fund its development more richly.