EOS-1D X Mark II Claims of 15 Stops of DR [CR3]

jrista said:
Out of curiosity, what do you do for a living?

I am a senior research scientist at NASA (28+ years at NASA and have a PhD from Univ of Colorado Boulder, 1985). I design and develop remote sensing space flight hardware as well as develop the analysis algorithms for that HW. I am currently the project scientist for SAGE III - headed to the ISS in August of this year (hopefully) - and manage a team of over-achiever science padawans (but I'll never tell them that they know more than I ever did). In my spare time, I am a (former) coauthor of the AviStack v2 (now open source) software.

I live, breathe, and eat noise. I have turned crap into gold and cannot wait for the next adventure to begin.

Thanks for asking!
 
Upvote 0
JMZawodny said:
jrista said:
Out of curiosity, what do you do for a living?

I am a senior research scientist at NASA (28+ years at NASA and have a PhD from Univ of Colorado Boulder, 1985). I design and develop remote sensing space flight hardware as well as develop the analysis algorithms for that HW. I am currently the project scientist for SAGE III - headed to the ISS in August of this year (hopefully) - and manage a team of over-achiever science padawans (but I'll never tell them that they know more than I ever did). In my spare time, I am a (former) coauthor of the AviStack v2 (now open source) software.

I live, breathe, and eat noise. I have turned crap into gold and cannot wait for the next adventure to begin.

Thanks for asking!

I hate the transatlantic time gap... I keep missing the middle of conversations. I see nothing that Jrista has said that I really disagree with.

It also sounds like we both have to concern ourselves with noise professionally, in my case almost a quarter century of radio development. The astronomy is pure hobby, but parts do occasionally spill over.

Given your background I'd be interested in fully understanding why you think 1e/ADU is important in the noisy systems we have today (I agree in a noiseless system 1e/ADU is optimal). My understanding is that the readout noise alone will disturb the measurement sufficiently that we don't have to concern ourselves; as Jrista has already stated, we normally adjust the length of shots to maximise overall performance, and that usually means skyglow > K*RN^2, where K is an arbitrary constant. I use K=2 as an absolute minimum.
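To put that rule of thumb in concrete terms, here is a quick sketch (the read-noise and sky-glow values are made up purely for illustration):

```python
def min_sub_length(sky_e_per_s, read_noise_e, k=2.0):
    """Shortest sub-exposure (seconds) for which the accumulated
    sky-glow signal exceeds k * RN^2, i.e. the point at which a sub
    becomes sky-limited rather than read-noise-limited."""
    return k * read_noise_e ** 2 / sky_e_per_s

# With 5 e- read noise, 0.5 e-/s/pixel of sky glow, and k = 2
# (all made-up numbers), subs shorter than ~100 s would still be
# read-noise-limited:
print(min_sub_length(0.5, 5.0))  # -> 100.0
```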

On the subject of 5D read noise, have you seen http://www.astrosurf.com/buil/50d/test.htm ? Buil does seem to know his stuff, but then he started with CCDs in the 80s, so he ought to.
 
Upvote 0
I think it boils down to the difference between art and science in the end.

If you're a scientist doing science with sensors and imaging, then getting the most accurate replication of the original signal possible matters. In that case, you're probably not going to be using a DSLR. You're not even going to be using an average CCD...you're going to be using a Grade 0 CCD camera that has minimal if any defects, with extremely deep cooling, probably deep depletion, and you'll effectively be counting photons. You might even be using an emCCD or something similar, where you literally count photons and get as close as possible to perfectly replicating every signal.

At the other end of the spectrum is me. I'm an artist, not a scientist (although I DO love the science, that's not really why I do what I do.) I am not using my images for the purposes of analysis and discovery...I use them to share my view of the universe with the people around me, hopefully with my own artistic bent. There is certainly an inextricable scientific aspect to astrophotography...it's just the nature of the hobby. I could put a lot of extra time into gathering every single photon as efficiently as humanly possible, but it wouldn't be an efficient use of my time, not for the goals I have. It might give me marginally better results; however, few would actually notice, and even then, they would probably only notice if I had a comparison image produced with my normal techniques to compare with.
 
Upvote 0
rfdesigner said:
I hate the transatlantic time gap... I keep missing the middle of conversations. I see nothing that Jrista has said that I really disagree with.

When we were doing the AviStack development our team of three had the lead in Germany, me in the US and the third fellow in Australia. Somehow we managed to get everyone together for regular discussions. It was great fun.

rfdesigner said:
It also sounds like we both have to concern ourselves with noise professionally, in my case almost a quarter century of radio development. The astronomy is pure hobby, but parts do occasionally spill over.

Radio is quite a different beast from simply collecting and estimating the number of photons that fall into a bucket. Imagers do not have to worry about phase jitter/recovery, bandwidth, or proper temporal sampling of the signal. These days RF work is a mix of analog front ends and digital (de)modulation. With some of the new direct synthesis chips out there, the analog bits may soon disappear altogether.

rfdesigner said:
Given your background I'd be interested in fully understanding why you think 1e/ADU is important in the noisy systems we have today (I agree in a noiseless system 1e/ADU is optimal). My understanding is that the readout noise alone will disturb the measurement sufficiently that we don't have to concern ourselves; as Jrista has already stated, we normally adjust the length of shots to maximise overall performance, and that usually means skyglow > K*RN^2, where K is an arbitrary constant. I use K=2 as an absolute minimum.

A noiseless system is never optimal unless the detection system is analog (continuous, not quantized). Digital systems require noise to work properly (specifically when combining samples or doing any sort of statistical manipulation). While we can measure and characterize the various sources of noise, in the end noise is noise. Knowing their origins does allow us to take corrective action and/or design the system properly. At the point of digitization, noise should never be less than 0.5 ADU (1-sigma). Below that, quantization artifacts emerge and can be quite annoying. Maximizing performance is very subjective and necessarily implies a pre-selected course of action for processing the signal to extract the desired information. As I stated originally, 1e/ADU maximizes both the DR and the amount of information (available levels of signal, if you will). Alter that gain and you will give up one or the other. While it has been a while since I've gone shopping for imager HW, back when I bought my FLI camera the standard was to build HW that operated very near 1e/ADU. I believe that is still standard practice. So, don't just take my word on this approach.
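The 0.5 ADU point is easy to demonstrate numerically. In this small simulation (all values hypothetical), noise acts as natural dither, letting averaging recover a sub-ADU signal level that near-noiseless quantization destroys:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_LEVEL = 100.25  # constant scene level in ADU (hypothetical)

def mean_of_quantized(noise_adu, n=200_000):
    """Quantize many noisy samples of a constant signal and average
    them; with too little noise, the sub-ADU part is lost entirely."""
    samples = np.round(TRUE_LEVEL + rng.normal(0.0, noise_adu, n))
    return samples.mean()

# Nearly noiseless: every sample rounds to 100, the 0.25 ADU is gone.
print(mean_of_quantized(0.01))  # -> 100.0
# ~0.5 ADU of noise dithers the quantizer; averaging recovers ~100.25.
print(mean_of_quantized(0.5))
```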

Depending upon what your measurement requirements are, it is certainly reasonable to set the gain lower so that it takes more electrons per ADU, although there are practical limits relating to pixel size and the associated well capacity. The SAGE III detector (developed in the mid 1990s) has a gain of 75e/ADU, but we have a reasonably bright source and a high SNR requirement.

rfdesigner said:
On the subject of 5D read noise, have you seen http://www.astrosurf.com/buil/50d/test.htm ? Buil does seem to know his stuff, but then he started with CCDs in the 80s, so he ought to.

Interesting, the 5D and 5D2 numbers look very familiar. I can't find my original characterization data and results, but my recollection is that the read noise was relatively constant in ADU - not electrons. His data do show that. It is a shame that he did not report the full well capacity at ISO 100. Characterizing these sensors is not difficult. It takes only a handful of exposures and some simple code. I am a bit surprised that he did not specify the values for the R, G, and B channels separately. Perhaps he does, but I did not dig into the details on his site.
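For anyone curious, the "handful of exposures and some simple code" is essentially the photon-transfer method. A sketch, using simulated flat frames from a sensor with a made-up gain:

```python
import numpy as np

def estimate_gain(flat1, flat2, bias=0.0):
    """Photon-transfer gain estimate (e-/ADU) from two flat frames:
    differencing the pair cancels fixed-pattern noise, and the shot
    noise left in the difference reveals the gain."""
    diff = flat1.astype(float) - flat2.astype(float)
    signal_adu = flat1.mean() + flat2.mean() - 2.0 * bias
    return signal_adu / diff.var()

# Simulate a sensor with a made-up gain of 4.3 e-/ADU:
rng = np.random.default_rng(1)
GAIN, LEVEL_E = 4.3, 20_000.0
f1 = rng.poisson(LEVEL_E, 500_000) / GAIN
f2 = rng.poisson(LEVEL_E, 500_000) / GAIN
print(round(estimate_gain(f1, f2), 1))  # -> 4.3
```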

I have to run and will edit this later if needed. Cheers!
 
Upvote 0
JMZ, I do have a question for you. You note that 1e-/ADU maximizes both DR and signal simultaneously. However, does that not imply that your output buffer for amplification is also limited to the same capacity as the pixels themselves? What about more innovative sensor and pixel architectures that have a larger output buffer? You could theoretically get at least as much dynamic range (possibly more), with better sampling of each electron in the pixel, if you could amplify the entire pixel range (say up to 65ke- for a largish-pixel sensor with a 16-bit ADC) with an output buffer of 3x the capacity of the pixels themselves, at 0.33e-/ADU. I've read some papers about prototype sensors (or just patentable ideas) that cover such things. There are existing CCD cameras that achieve this to some degree: at 1x1 binning, the extra output buffer capacity can be used for better gain/DR characteristics (although for some reason it rarely seems ideal, so the DR gain is not as significant as in some of the prototype ideas out there; it is not possible when binning, as the extra buffer capacity is needed to store the combined charges of the binned pixels).
 
Upvote 0
jrista said:
JMZ, I do have a question for you. You note that 1e-/ADU maximizes both DR and signal simultaneously. However, does that not imply that your output buffer for amplification is also limited to the same capacity as the pixels themselves?

Typically the output buffer is reasonably matched to the pixel full well. There are always exceptions to this. We did a custom sensor development quite some time ago (15 years or so) where we needed tiny pixels that also needed to have a very large full well. It was a linear array, not 2D, so we simply made the pixels out of photo-diodes that drained to huge, un-illuminated CCD pixels. Performance was excellent, but we had to deal with some non-linearities arising from the FET capacitance. Signal chain design is very important and compromises must be made when you alter specifications (such as the size of the output buffer).

jrista said:
What about more innovative sensor and pixel architectures that have a larger output buffer? You could theoretically get at least as much dynamic range (possibly more), with better sampling of each electron in the pixel, if you could amplify the entire pixel range (say up to 65ke- for a largish-pixel sensor with a 16-bit ADC) with an output buffer of 3x the capacity of the pixels themselves, at 0.33e-/ADU. I've read some papers about prototype sensors (or just patentable ideas) that cover such things.

I'm still baffled by the perceived need to crank the gain to 1e/3ADU. It seems to me that one should focus on managing the undesirable signal/noise source that is driving you to want each electron to be 3 counts. The current state-of-the-art on-chip digitization methods have no issue reliably counting electrons with reasonable speed. There most certainly are applications where a very large effective full well would improve performance, but it is not immediately clear to me whether such a system would also perform well in signal-limited applications such as are found in some (most?) aspects of photography.

jrista said:
There are existing CCD cameras that achieve this to some degree: at 1x1 binning, the extra output buffer capacity can be used for better gain/DR characteristics (although for some reason it rarely seems ideal, so the DR gain is not as significant as in some of the prototype ideas out there; it is not possible when binning, as the extra buffer capacity is needed to store the combined charges of the binned pixels).

We are getting to the point where there will be little advantage to analog binning vs digital summation.
 
Upvote 0
Canon have very clearly, and exceptionally openly, demonstrated exactly how they measure 15 stops of DR from the C300 II, I see no reason why they would change their methodology for another camera.

https://www.cinema5d.com/canon-measured-15-stops-dynamic-range-c300-mark-ii/

Basically they removed the entirely subjective idea of 'how much noise is too much' and counted everything above background noise. So those expecting 15 stops of noiseless DR are going to be disappointed. As always I don't believe that is the entire picture, I believe there will be good improvements in the actual usable RAW data and I am pretty confident I will be happy with the malleability and workability of those RAW files.

I also expect to piss myself laughing at the arguments, lectures and condescending technical micro deconstructions the absence of a 'traditional 15 stops' will create.

What I would point out is that many were very disappointed on the release of the 5DS/R on the expectation of 'better' sensor performance and those expectations not being realised on paper spec sheets, however when people actually started working the files they all seemed exceptionally happy with the real world output. I expect the exact same thing with the 1DX MkII, people will argue it is crap and the end of Canon, until the next distraction comes along, meanwhile those that actually use it will be surprised at how much better than its predecessors it actually is.
 
Upvote 0
rfdesigner said:
On the subject of 5D read noise, have you seen http://www.astrosurf.com/buil/50d/test.htm ? Buil does seem to know his stuff, but then he started with CCDs in the 80s, so he ought to.

I dug up the characterization I did of my 5D back in 2007. Since this is now slightly off topic I'll keep this brief. Compared to Buil's results, I measured (in the green pixels) a "full well" of 15,380 at ISO 400, a read noise of 6.7e, gain of 4.32e/ADU. I put the full well in quotes since the actual full well is used only at ISO 100. Gain was slightly lower in the blue pixels and less than half in the red. Read noise in the red and blue pixels was virtually identical to that in the green. It also varied with ISO in a similar manner. At ISO 100, I calculated a DR of 11.7 stops (3330) with a peak SNR of 260. At ISO 400 the DR was still a healthy 2300 although SNR dropped by half. I apparently never did characterize my 5D2, but I did do my 300D.
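For anyone wanting to reproduce the DR/SNR arithmetic above, a quick sketch. Note the inputs are back-calculated, not directly quoted: the ISO 100 full well of ~67,600e- is implied by the stated peak SNR of 260, and the ~20.3e- read noise by the stated DR of 3330:

```python
import math

def dr_stops(full_well_e, read_noise_e):
    """Dynamic range in stops: log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

def peak_snr(full_well_e):
    """Shot-noise-limited SNR at saturation: sqrt(full well)."""
    return math.sqrt(full_well_e)

# Back-calculated ISO 100 values for the 5D (see lead-in above):
print(round(dr_stops(67_600, 20.3), 1))  # -> 11.7 stops
print(round(peak_snr(67_600)))           # -> 260
```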
 
Upvote 0
JMZawodny said:
jrista said:
JMZ, I do have a question for you. You note that 1e-/ADU maximizes both DR and signal simultaneously. However, does that not imply that your output buffer for amplification is also limited to the same capacity as the pixels themselves?

Typically the output buffer is reasonably matched to the pixel full well. There are always exceptions to this. We did a custom sensor development quite some time ago (15 years or so) where we needed tiny pixels that also needed to have a very large full well. It was a linear array, not 2D, so we simply made the pixels out of photo-diodes that drained to huge, un-illuminated CCD pixels. Performance was excellent, but we had to deal with some non-linearities arising from the FET capacitance. Signal chain design is very important and compromises must be made when you alter specifications (such as the size of the output buffer).

Yeah, they call those multi-bucket or memory-backed pixels these days. Same idea, and from the patents I've read, some of the same problems.

JMZawodny said:
jrista said:
What about more innovative sensor and pixel architectures that have a larger output buffer? You could theoretically get at least as much dynamic range (possibly more), with better sampling of each electron in the pixel, if you could amplify the entire pixel range (say up to 65ke- for a largish-pixel sensor with a 16-bit ADC) with an output buffer of 3x the capacity of the pixels themselves, at 0.33e-/ADU. I've read some papers about prototype sensors (or just patentable ideas) that cover such things.

I'm still baffled by the perceived need to crank the gain to 1e/3ADU. It seems to me that one should focus on managing the undesirable signal/noise source that is driving you to want each electron to be 3 counts. The current state-of-the-art on-chip digitization methods have no issue reliably counting electrons with reasonable speed. There most certainly are applications where a very large effective full well would improve performance, but it is not immediately clear to me whether such a system would also perform well in signal-limited applications such as are found in some (most?) aspects of photography.

I agree that reducing the read noise (which is what creates the desire for higher gain) is the better solution...however, sometimes we really do work with EXTREMELY faint details. A decent OIII signal is maybe up to 0.125 photons/second, meaning you get maybe one photon every eight seconds, and if your Q.E. is say around 50% (as is the case for most KAF CCDs, not even that good), then you really only get an electron every 16 seconds. To get a reasonable SNR with 5e- read noise (which really isn't a lot in the grand scheme of things), you would need to wait through at least five 16-second periods just for the signal to match the read noise, and many more for the signal to reach a reasonable SNR (which personally I consider to be no less than 7:1, and some NB imagers I know prefer higher than that). It would take 10 minutes just to reach that minimal desired SNR, and 20-30 minutes at least to get a reasonably strong SNR (per sub, BTW) across the plenum of pixels (thanks, Poisson!). There are much fainter objects out there, like OU4, which I think is at least an order of magnitude fewer photons/second (0.0125), so you would only get a photon every 80 seconds and an electron every 160 seconds. It would take about 20 minutes just to reach the minimal SNR in a single pixel, and an hour to reach a reasonably strong SNR, on a target like OU4, for all pixels. (I know some fellas who have actually done that, 60-90 minute subs to get a rather faint signal on OU4 with IIRC a KAF-8300).
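The time-to-SNR arithmetic can be sketched with shot noise and read noise alone (sky background and dark current are ignored here, so the exact minutes will differ from a fuller accounting):

```python
import math

def time_to_snr(target_snr, e_rate, read_noise_e):
    """Seconds for a single sub to reach target SNR on a source
    delivering e_rate electrons/s, counting only shot noise and
    read noise:  SNR = S / sqrt(S + RN^2)  with  S = e_rate * t.
    Solves the quadratic S^2 = SNR^2 * (S + RN^2) for S."""
    q = target_snr ** 2
    s = (q + math.sqrt(q * q + 4.0 * q * read_noise_e ** 2)) / 2.0
    return s / e_rate

# OIII example from above: one electron every ~16 s, 5 e- read
# noise, aiming for the 7:1 SNR floor:
print(round(time_to_snr(7, 1 / 16, 5) / 60))  # -> 18 (minutes)
```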

How easy is it, really, to get your read noise, for reasonably sized pixels in the 5-6 micron range, below 3-5e-? I gather it isn't as easy as it sounds, as most of the sensors I know of that have 1-2e- also have really tiny pixels (2-3 micron tops) and more limited FWC and DR.
 
Upvote 0
jrista said:
There are much fainter objects out there, like OU4, which I think is at least an order of magnitude fewer photons/second (0.0125), so you would only get a photon every 80 seconds and an electron every 160 seconds. It would take about 20 minutes just to reach the minimal SNR in a single pixel, and an hour to reach a reasonably strong SNR, on a target like OU4, for all pixels.

This is why Hyperstar makes a lot of sense, especially for fainter and bigger targets. Switch from f/10 or f/7 to f/2 or so.
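For extended objects like nebulae, the speed gain from a faster focal ratio is just the square of the ratio change; a one-liner makes the point:

```python
def focal_ratio_speedup(n_slow, n_fast):
    """Relative exposure speed for extended objects when moving to a
    faster focal ratio: flux per unit sensor area scales as 1/N^2."""
    return (n_slow / n_fast) ** 2

print(focal_ratio_speedup(10, 2))  # -> 25.0 (f/2 vs f/10)
print(focal_ratio_speedup(7, 2))   # -> 12.25 (f/2 vs f/7)
```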
 
Upvote 0
jrista said:
How easy is it, really, to get your read noise, for reasonably sized pixels in the 5-6 micron range, below 3-5e-? I gather it isn't as easy as it sounds, as most of the sensors I know of that have 1-2e- also have really tiny pixels (2-3 micron tops) and more limited FWC and DR.

The answer depends upon how much money you are willing to spend. I think your real question is whether a company can produce these devices at commercial scale and feed them into reasonably priced consumer goods. Let's wait and see what the 1Dx2 performance looks like. If they are going to keep the full well size in the vicinity of 60ke-, they are going to have to produce sensors with ~2e- read noise if the claims of 15 stops of DR are to be realized. I'll predict here that the 1Dx2 will have 16-bit RAW files. Canon has increased the bit depth every time they increased DR. 16-bit files aren't unusual; in fact, they are more "normal" than 12-bit or 14-bit files are. We'll know a lot more in 2 weeks.
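The 60ke-/2e- arithmetic, and the bit depth it implies at roughly 1e-/ADU (per the earlier discussion), works out like so:

```python
import math

def stops_and_bits(full_well_e, read_noise_e):
    """DR in stops, plus the minimum ADC bit depth needed to span
    the full well at ~1 e-/ADU."""
    stops = math.log2(full_well_e / read_noise_e)
    bits = math.ceil(math.log2(full_well_e))
    return stops, bits

# ~60 ke- full well with ~2 e- read noise:
stops, bits = stops_and_bits(60_000, 2.0)
print(round(stops, 1), bits)  # -> 14.9 16
```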
 
Upvote 0
JMZawodny said:
jrista said:
How easy is it, really, to get your read noise, for reasonably sized pixels in the 5-6 micron range, below 3-5e-? I gather it isn't as easy as it sounds, as most of the sensors I know of that have 1-2e- also have really tiny pixels (2-3 micron tops) and more limited FWC and DR.

The answer depends upon how much money you are willing to spend. I think your real question is whether a company can produce these devices at commercial scale and feed them into reasonably priced consumer goods. Let's wait and see what the 1Dx2 performance looks like. If they are going to keep the full well size in the vicinity of 60ke-, they are going to have to produce sensors with ~2e- read noise if the claims of 15 stops of DR are to be realized. I'll predict here that the 1Dx2 will have 16-bit RAW files. Canon has increased the bit depth every time they increased DR. 16-bit files aren't unusual; in fact, they are more "normal" than 12-bit or 14-bit files are. We'll know a lot more in 2 weeks.

Certainly, it boils down to money. I'm concerned about what is accessible to a middle class to upper middle class consumer. The price for extremely high end equipment skyrockets, tens of thousands to hundreds of thousands of dollars. There already are amazing cameras on the market, like this one:

http://www.andor.com/scientific-cameras/ikon-xl-and-ikon-large-ccd-series/ikon-xl-231

That thing is a superbeast of a camera. It's got MONSTER pixels with a massive FWC, but a mere 2.1e- read noise. MASSIVE dynamic range (17.4 stops!!) with 18-bit encoding. Insane cooling (better than an emCCD) at -100C dT. I mean, that's it, right there. Basically the holy grail. But it almost costs as much as a house! :P

The 1DX II won't have 15 stops of DR. It won't even come close. It'll have about 12.3 stops, like the C300 II. If anyone expects more than that, they are deluding themselves. I won't be disappointed when that's all it delivers. I don't even bother looking to Canon for cutting edge sensor technology anymore...I've had my hopes pumped and dashed far too many times. Canon cameras are fine, but they are far from world shattering when it comes to sensor technology and core image quality.

If anyone is going to deliver a class leading consumer-affordable sensor that has great FWC, low read noise, and excellent dynamic range, something that could really be a powerhouse CMOS sensor for astro, it'll be Sony. I am actually curious why no one has stuck a FF Exmor in a CCD-class camera housing with cooling and all that (sans bayer array, of course) yet...but perhaps it is just a matter of companies like QSI and FLI learning how to use CMOS sensors rather than CCD sensors.
 
Upvote 0
Lee Jay said:
jrista said:
There are much fainter objects out there, like OU4, which I think is at least an order of magnitude fewer photons/second (0.0125), so you would only get a photon every 80 seconds and an electron every 160 seconds. It would take about 20 minutes just to reach the minimal SNR in a single pixel, and an hour to reach a reasonably strong SNR, on a target like OU4, for all pixels.

This is why Hyperstar makes a lot of sense, especially for fainter and bigger targets. Switch from f/10 or f/7 to f/2 or so.

Yeah, a fast aperture can help. It isn't without its own difficulties, though. Collimating an f/2 scope is extremely difficult...the focal plane is like 5 microns thick. It becomes more difficult with larger sensors. With a giant aperture like that, you also have problems exposing stars...because star flux and nebula flux are a ratio of ratios, stars saturate EXTREMELY quickly with such a fast scope, and by the time your nebula data is deep enough, you have massively clipped stars riddled with reflections. My original plan when I got into astrophotography was to get the 11" EdgeHD with Hyperstar...and while it's still a future goal, I passed on it because of the challenges.

Narrow band imaging also already has star halo challenges. You will usually experience a discrepancy in OIII halo size vs. Ha halo size (vs. SII halo size, if you gather SII as well). Hyperstar exacerbates those problems several fold. That often makes NB channel combinations very challenging, leaving behind funky star halos. It's led to much more advanced processing techniques like starless tonemapping and RGB star replacement, both of which are not easy and very tedious.

It's always six of one, half a dozen of the other with astrophotography. You make a gain in one area by trading off in another. (That is, unless you are independently wealthy and dropping a couple hundred grand on a personal observatory with top-of-the-line gear is also a drop in the bucket. ;) )
 
Upvote 0
Bingo. I've cited that same piece over and over and I think that's what we will see. However, that's a Super 35 sensor. Obviously this will be FF, so we have to assume more light gathering and less noise compared to the C300 II. I'm thinking the internet blogger standard will show something in the 13-14 range. The 5DSR is in the 12 range and that's 50MP. We're talking FF, 22MP, on-sensor ADC here.... It'll be 13-14, I'd wager. Canon will claim 15 based on their measure.

And yes, the 5DSR, for all the silly (pardon my French) bitching and moaning out there by people who don't own one, I have been very, very pleased with my push/pull latitude. It's notably better than my 5D3, with noise on par with or better than (in real-world appearance) even my 6D. And I mean at 1600-6400 ISO. I fully expect the 1DX2 to be 1-2 stops better than that, which should put it in the 13-14 range.


privatebydesign said:
Canon have very clearly, and exceptionally openly, demonstrated exactly how they measure 15 stops of DR from the C300 II, I see no reason why they would change their methodology for another camera.

https://www.cinema5d.com/canon-measured-15-stops-dynamic-range-c300-mark-ii/

Basically they removed the entirely subjective idea of 'how much noise is too much' and counted everything above background noise. So those expecting 15 stops of noiseless DR are going to be disappointed. As always I don't believe that is the entire picture, I believe there will be good improvements in the actual usable RAW data and I am pretty confident I will be happy with the malleability and workability of those RAW files.

I also expect to piss myself laughing at the arguments, lectures and condescending technical micro deconstructions the absence of a 'traditional 15 stops' will create.

What I would point out is that many were very disappointed on the release of the 5DS/R on the expectation of 'better' sensor performance and those expectations not being realised on paper spec sheets, however when people actually started working the files they all seemed exceptionally happy with the real world output. I expect the exact same thing with the 1DX MkII, people will argue it is crap and the end of Canon, until the next distraction comes along, meanwhile those that actually use it will be surprised at how much better than its predecessors it actually is.
 
Upvote 0
PureClassA said:
Bingo. I've cited that same piece over and over and I think that's what we will see. However, that's a Super 35 sensor. Obviously this will be FF, so we have to assume more light gathering and less noise compared to the C300 II. I'm thinking the internet blogger standard will show something in the 13-14 range. The 5DSR is in the 12 range and that's 50MP. We're talking FF, 22MP, on-sensor ADC here.... It'll be 13-14, I'd wager. Canon will claim 15 based on their measure.

And yes, the 5DSR, for all the silly (pardon my French) bitching and moaning out there by people who don't own one, I have been very, very pleased with my push/pull latitude. It's notably better than my 5D3, with noise on par with or better than (in real-world appearance) even my 6D. And I mean at 1600-6400 ISO. I fully expect the 1DX2 to be 1-2 stops better than that, which should put it in the 13-14 range.


privatebydesign said:
Canon have very clearly, and exceptionally openly, demonstrated exactly how they measure 15 stops of DR from the C300 II, I see no reason why they would change their methodology for another camera.

https://www.cinema5d.com/canon-measured-15-stops-dynamic-range-c300-mark-ii/

Basically they removed the entirely subjective idea of 'how much noise is too much' and counted everything above background noise. So those expecting 15 stops of noiseless DR are going to be disappointed. As always I don't believe that is the entire picture, I believe there will be good improvements in the actual usable RAW data and I am pretty confident I will be happy with the malleability and workability of those RAW files.

I also expect to piss myself laughing at the arguments, lectures and condescending technical micro deconstructions the absence of a 'traditional 15 stops' will create.

What I would point out is that many were very disappointed on the release of the 5DS/R on the expectation of 'better' sensor performance and those expectations not being realised on paper spec sheets, however when people actually started working the files they all seemed exceptionally happy with the real world output. I expect the exact same thing with the 1DX MkII, people will argue it is crap and the end of Canon, until the next distraction comes along, meanwhile those that actually use it will be surprised at how much better than its predecessors it actually is.

I don't think it'll even hit 13 stops. Canon would have to do a radical rearchitecture, changing their entire design for both the sensor and the off-die components. That would be an extremely expensive endeavor. I think we would have seen patents for other components that indicated they were going down that path by now, if they were. Given the nature of the C300 II, I see Canon...pushing it...with measurements (even though they have been forthcoming with why they call the C300 II a 15 stop camera, they are still purposely measuring differently from everyone else and counting stops where SNR is less than 0dB. NO ONE does that. Such measures are in negative decibels for a reason.)
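Just to show how much the choice of SNR cutoff matters to a quoted DR figure, here is a sketch using a hypothetical 60ke-/2e- sensor and a simple shot-plus-read-noise model (not Canon's actual methodology, which they describe only loosely):

```python
import math

def dr_at_cutoff(full_well_e, read_noise_e, snr_cutoff):
    """Stops between saturation and the faintest signal whose SNR
    (shot + read noise only) still reaches the chosen cutoff."""
    q = snr_cutoff ** 2
    s_min = (q + math.sqrt(q * q + 4.0 * q * read_noise_e ** 2)) / 2.0
    return math.log2(full_well_e / s_min)

# Hypothetical 60 ke- full well, 2 e- read noise:
print(round(dr_at_cutoff(60_000, 2.0, 1), 1))   # SNR = 1 (0 dB) -> 14.5
print(round(dr_at_cutoff(60_000, 2.0, 10), 1))  # SNR = 10       -> 9.2
```

Counting everything down to (or below) 0 dB inflates the stop count enormously relative to a stricter "usable" threshold, which is exactly the disagreement here.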
 
Upvote 0
jrista said:
I don't think it'll even hit 13 stops. Canon would have to do a radical rearchitecture, changing their entire design for both the sensor and the off-die components. That would be an extremely expensive endeavor.

But they have already said as much. They said they are switching to on die ADC and that these would be coming to market very soon. That was several months ago. As for expense, can they afford not to invest in this?
 
Upvote 0
JMZawodny said:
jrista said:
I don't think it'll even hit 13 stops. Canon would have to do a radical rearchitecture, changing their entire design for both the sensor and the off-die components. That would be an extremely expensive endeavor.

But they have already said as much. They said they are switching to on die ADC and that these would be coming to market very soon. That was several months ago. As for expense, can they afford not to invest in this?

I would have figured it was an expense Canon couldn't have afforded not to invest in years ago...but they persisted with their previous overall architecture for years since. I guess it is possible that they have moved the 1D X II to an on-die ADC, but they would have had to do that years ago as well, not months ago, as months ago the product would have had to be well into field testing already.

I guess we'll see, but I'm so skeptical of Canon these days, between things like the C300 II "15 stops" negative dB measurements, sketchy comments from Maeda in several interviews, and Canon having one of the lowest sensor patent filing/granted rates of any of the major imaging companies out there. I'm totally in "believe it when I see it" mode with Canon. I can't get my hopes up again. :P Been getting my hopes up since 2009, tired of having them dashed. (Although in this case, I'd be quite happy to have my pessimism thwarted. ;D)
 
Upvote 0
jrista said:
JMZawodny said:
jrista said:
I don't think it'll even hit 13 stops. Canon would have to do a radical rearchitecture, changing their entire design for both the sensor and the off-die components. That would be an extremely expensive endeavor.

But they have already said as much. They said they are switching to on die ADC and that these would be coming to market very soon. That was several months ago. As for expense, can they afford not to invest in this?

I would have figured it was an expense Canon couldn't have afforded not to invest in years ago...but they persisted with their previous overall architecture for years since. I guess it is possible that they have moved the 1D X II to an on-die ADC, but they would have had to do that years ago as well, not months ago, as months ago the product would have had to be well into field testing already.

Obviously they would have had to make the move on a small developmental scale years ago. It should also be obvious that the company would not say anything publicly until the technology was ready for production. I see nothing wrong with the timing of things. We'll know very soon. Prepare to be happy.
 
Upvote 0