New Sensor Tech in EOS 7D Mark II [CR2]

jrista said:
9VIII said:
jrista said:
As for the double layer of microlenses...sure, you could read a full RGBG 2x2 pixel "quad" and have "full color resolution". Problem is, that LITERALLY halves your luminance spatial resolution...

Thus you start with an 80MP sensor to get a nice 20MP image.

No, that is fundamentally incorrect. You start with a 20mp sensor, which has 40 million PHOTODIODES. The two are not the same. Pixels have photodiodes, but photodiodes are not pixels. Pixels are far more complex than photodiodes. DPAF simply splits the single photodiode for each pixel, and adds activation wiring for both halves. That's it. It is not the same as increasing the megapixel count of the sensor.

And, once again...I have to point out. There is no such thing as QPAF. The notion that Canon has QPAF is the result of someone seeing something they did not understand. Canon does not have QPAF. Their additional post-DPAF patents do not indicate they have QPAF technology yet...however there have been improvements to DPAF.

Sorry, maybe I should have communicated that better. I wasn't referring to dual pixel technology, just normal sensors at high resolution.
(If superpixel debayering is really as simple as it sounds, dual pixel technology is completely unnecessary in this context.)

jrista said:
Well, someday we may have 128mp sensors...but that is REALLY a LONG way off. DPAF technology, or any derivation thereof, isn't going to make that happen any sooner.

http://www.gizmag.com/canon-120-megapixel-cmos-sensor/16128/

I'm still of the opinion that Canon is only limiting resolution because of either the lack of user infrastructure (flash memory needs to drop in price), or the lack of a practical processor to pair with the sensor (problems with size, battery life, heat, etc...).

My bet is they will ramp up resolution as surrounding technology allows.
 
9VIII said:
jrista said:
9VIII said:
jrista said:
As for the double layer of microlenses...sure, you could read a full RGBG 2x2 pixel "quad" and have "full color resolution". Problem is, that LITERALLY halves your luminance spatial resolution...

Thus you start with an 80MP sensor to get a nice 20MP image.

No, that is fundamentally incorrect. You start with a 20mp sensor, which has 40 million PHOTODIODES. The two are not the same. Pixels have photodiodes, but photodiodes are not pixels. Pixels are far more complex than photodiodes. DPAF simply splits the single photodiode for each pixel, and adds activation wiring for both halves. That's it. It is not the same as increasing the megapixel count of the sensor.

And, once again...I have to point out. There is no such thing as QPAF. The notion that Canon has QPAF is the result of someone seeing something they did not understand. Canon does not have QPAF. Their additional post-DPAF patents do not indicate they have QPAF technology yet...however there have been improvements to DPAF.

Sorry, maybe I should have communicated that better. I wasn't referring to dual pixel technology, just normal sensors at high resolution.
(If superpixel debayering is really as simple as it sounds, dual pixel technology is completely unnecessary in this context.)

Superpixel sounds like what you want. I actually wish that mainstream RAW editors like Lightroom would offer that as an option, honestly. Some people care more about color fidelity and tonal range than resolution, and having LOTS of pixels with superpixel debayering would be a huge bonus for those individuals.
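
For anyone curious, superpixel debayering really is just 2x2 binning of the Bayer mosaic: take each RGGB quad, average the two greens, and emit one full-color pixel. A minimal numpy sketch (my own illustration, not code from any actual RAW converter):

```python
import numpy as np

def superpixel_debayer(mosaic):
    """Collapse an RGGB Bayer mosaic into a half-resolution RGB image.

    Each 2x2 quad (R, G, G, B) becomes one output pixel: R and B are
    taken directly, the two G samples are averaged. No interpolation,
    so no demosaicing artifacts -- at the cost of half the linear
    resolution.
    """
    r  = mosaic[0::2, 0::2]                  # top-left of each quad
    g1 = mosaic[0::2, 1::2]                  # top-right
    g2 = mosaic[1::2, 0::2]                  # bottom-left
    b  = mosaic[1::2, 1::2]                  # bottom-right
    return np.dstack([r, (g1 + g2) / 2.0, b])

# A 4x4 mosaic becomes a 2x2 RGB image.
mosaic = np.arange(16, dtype=np.float64).reshape(4, 4)
print(superpixel_debayer(mosaic).shape)      # (2, 2, 3)
```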

9VIII said:
jrista said:
Well, someday we may have 128mp sensors...but that is REALLY a LONG way off. DPAF technology, or any derivation thereof, isn't going to make that happen any sooner.

http://www.gizmag.com/canon-120-megapixel-cmos-sensor/16128/

I'm still of the opinion that Canon is only limiting resolution because of either the lack of user infrastructure (flash memory needs to drop in price), or the lack of a practical processor to pair with the sensor (problems with size, battery life, heat, etc...).

My bet is they will ramp up resolution as surrounding technology allows.

I am guessing it is more than that. Let's say Canon's next move would be to 3.5µm pixels. With a 500nm process, the actual photodiode, assuming a non-shared pixel architecture, would then be barely 2.5µm across at most (once you throw wiring and readout logic transistors around it.) With a shared pixel architecture you might be able to make it a little larger. On the other hand, if you drop from a 500nm process to a 180nm process, the photodiode could be close to 3.14µm across. (This assumes that wiring and transistors only require a single transistor's width of border around the photodiode...it's usually not quite that simple, at least based on micrograph images of actual sensors and patent diagrams.) With a 90nm process, the photodiode could be up to 3.3µm.
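
As a back-of-the-envelope check of those numbers, under the same simplifying assumption (a border one transistor wide on each side of the photodiode):

```python
# Rough check of the photodiode sizes quoted above. Assumes the stated
# simplification: a border one transistor wide on each side of the
# photodiode (real layouts are messier, as noted).
PIXEL_PITCH_UM = 3.5

for process_nm in (500, 180, 90):
    border_um = process_nm / 1000.0              # one transistor width
    diode_um = PIXEL_PITCH_UM - 2 * border_um    # border on both sides
    fill = (diode_um / PIXEL_PITCH_UM) ** 2      # areal fill factor
    print(f"{process_nm:3d} nm: {diode_um:.2f} um photodiode, "
          f"~{fill:.0%} fill factor")
# 500 nm -> 2.50 um (~51%), 180 nm -> 3.14 um (~80%), 90 nm -> 3.32 um (~90%)
```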

I think the 500nm process is really limiting for Canon now. They COULD do it; there is nothing preventing them from creating a 3.5µm pixel sensor with 2.5µm photodiodes...but I don't think it would be competitive. The smaller photodiode area wouldn't gather as much light as competitors' sensors fabricated on 180nm or 90nm processes, and the resulting sensors would just be a lot noisier.

I am really, truly hoping Canon has moved to a significantly more modern fabrication process with the 7D II sensor. I think that alone would improve things considerably for Canon's IQ.
 
jrista said:
Superpixel sounds like what you want. I actually wish that mainstream RAW editors like Lightroom would offer that as an option, honestly. Some people care more about color fidelity and tonal range than resolution, and having LOTS of pixels with superpixel debayering would be a huge bonus for those individuals.

Using it in post sounds nice, but what we're after is a way to save space on the memory card. Could a camera use superpixel debayering as a part of the image capture process and still save the file in RAW format?

jrista said:
I am guessing it is more than that. Let's say Canon's next move would be to 3.5µm pixels. With a 500nm process, the actual photodiode, assuming a non-shared pixel architecture, would then be barely 2.5µm across at most (once you throw wiring and readout logic transistors around it.) With a shared pixel architecture you might be able to make it a little larger. On the other hand, if you drop from a 500nm process to a 180nm process, the photodiode could be close to 3.14µm across. (This assumes that wiring and transistors only require a single transistor's width of border around the photodiode...it's usually not quite that simple, at least based on micrograph images of actual sensors and patent diagrams.) With a 90nm process, the photodiode could be up to 3.3µm.

I think the 500nm process is really limiting for Canon now. They COULD do it; there is nothing preventing them from creating a 3.5µm pixel sensor with 2.5µm photodiodes...but I don't think it would be competitive. The smaller photodiode area wouldn't gather as much light as competitors' sensors fabricated on 180nm or 90nm processes, and the resulting sensors would just be a lot noisier.

I am really, truly hoping Canon has moved to a significantly more modern fabrication process with the 7D II sensor. I think that alone would improve things considerably for Canon's IQ.

I'm guessing the only reason you mention 90nm and not 30nm is that in this application the cost/benefit ratio favours slightly larger circuits rather than smaller? (you'd only gain minimal surface area but potentially make production much more difficult)
 
9VIII said:
jrista said:
Superpixel sounds like what you want. I actually wish that mainstream RAW editors like Lightroom would offer that as an option, honestly. Some people care more about color fidelity and tonal range than resolution, and having LOTS of pixels with superpixel debayering would be a huge bonus for those individuals.

Using it in post sounds nice, but what we're after is a way to save space on the memory card. Could a camera use superpixel debayering as a part of the image capture process and still save the file in RAW format?

Nope. Once you debayer, or do any kind of processing to the data, you're no longer RAW. Canon does offer the sRAW and mRAW settings. Those are what, at best, you could call semi-RAW. They are closer to a JPEG in terms of actual storage format (YCbCr encoding, or luminance + Chrominance Blue + Chrominance Red), but everything is stored in 14-bit precision.

It's also encoded such that you have full luminance data, basically a luminance value for every single OUTPUT pixel, but the Cb and Cr data is sparse: it's encoded from multiple pixels (I forget if it is a 1x2 short row, or a full 2x2 quad), and that encoded value is stored as a single pair of 14-bit Cb/Cr values for every 2 or 4 luminance pixels. (I think exactly how many color pixels are encoded per luminance pixel depends on whether you're using sRAW or mRAW.)

Now, the luminance is encoded per output pixel. With mRAW, I think that's basically 1/2 the area of the full sensor, and with sRAW it's basically 1/4 the area of the full sensor. So your luminance information is encoded from however many source pixels are necessary to produce the right output pixels...I think 2x2 for sRAW, something along the lines of 1.5x1.5 for mRAW. (There is a spec on the formats somewhere, and it's been a long time since I've read it...my description above is not 100% accurate, but that's the general gist: basically, a 4:2:1 or 4:2:2 encoding of the image data.)

You definitely save space with these formats, but I have experimented with them on multiple occasions, and your editing latitude is nowhere remotely close to a full RAW. You can shift exposure around a moderate amount, but you have limits to how far down you can pull highlights, how far up you can push shadows, how far you can adjust white balance, etc.
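
If it helps, here is the general shape of a luma-plus-sparse-chroma encoding in numpy. To be clear, this is NOT Canon's actual sRAW/mRAW bitstream (I don't have the spec in front of me); it just illustrates storing full-resolution luminance with one chroma pair shared per 2x2 block:

```python
import numpy as np

def encode_luma_chroma(rgb):
    """Toy luma + subsampled-chroma encoding, in the spirit of sRAW/mRAW.

    Keeps one luma (Y) value per pixel, but shares one (Cb, Cr) pair
    across each 2x2 block, so chroma is stored at quarter resolution
    (as in 4:2:0 subsampling). Purely illustrative.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b      # BT.601 luma weights
    cb = (b - y) * 0.564
    cr = (r - y) * 0.713
    h, w = y.shape
    cb4 = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr4 = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, cb4, cr4

y, cb, cr = encode_luma_chroma(np.random.rand(4, 4, 3))
print(y.shape, cb.shape, cr.shape)              # (4, 4) (2, 2) (2, 2)
```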

9VIII said:
jrista said:
I am guessing it is more than that. Let's say Canon's next move would be to 3.5µm pixels. With a 500nm process, the actual photodiode, assuming a non-shared pixel architecture, would then be barely 2.5µm across at most (once you throw wiring and readout logic transistors around it.) With a shared pixel architecture you might be able to make it a little larger. On the other hand, if you drop from a 500nm process to a 180nm process, the photodiode could be close to 3.14µm across. (This assumes that wiring and transistors only require a single transistor's width of border around the photodiode...it's usually not quite that simple, at least based on micrograph images of actual sensors and patent diagrams.) With a 90nm process, the photodiode could be up to 3.3µm.

I think the 500nm process is really limiting for Canon now. They COULD do it; there is nothing preventing them from creating a 3.5µm pixel sensor with 2.5µm photodiodes...but I don't think it would be competitive. The smaller photodiode area wouldn't gather as much light as competitors' sensors fabricated on 180nm or 90nm processes, and the resulting sensors would just be a lot noisier.

I am really, truly hoping Canon has moved to a significantly more modern fabrication process with the 7D II sensor. I think that alone would improve things considerably for Canon's IQ.

I'm guessing the only reason you mention 90nm and not 30nm is that in this application the cost/benefit ratio favours slightly larger circuits rather than smaller? (you'd only gain minimal surface area but potentially make production much more difficult)

Well, I mention 180nm and 90nm because I am pretty sure Canon has the fab capability to manufacture transistors that small. In the smallest sensors, transistor sizes are a lot smaller than that...I think they are down to 32nm for the latest stuff, with pixels around 1µm (1000nm) in size. I think that some of Canon's steppers and scanners can handle smaller transistors, 65nm using subwavelength etching, but I don't know if that stuff has been/can be used for sensor fabrication. I know for a fact that Canon already uses a 180nm Cu fab process for their smaller sensors, so I know for sure they are capable of that. Their highest resolution fabs are around 90nm natively, but again, most of what I've read about them indicates IC fabrication...I've never heard of them being used to manufacture sensors (but there honestly isn't that much info about Canon's fabs...nor who owns them...)
 
jrista said:
9VIII said:
jrista said:
Superpixel sounds like what you want. I actually wish that mainstream RAW editors like Lightroom would offer that as an option, honestly. Some people care more about color fidelity and tonal range than resolution, and having LOTS of pixels with superpixel debayering would be a huge bonus for those individuals.

Using it in post sounds nice, but what we're after is a way to save space on the memory card. Could a camera use superpixel debayering as a part of the image capture process and still save the file in RAW format?

Nope. Once you debayer, or do any kind of processing to the data, you're no longer RAW. Canon does offer the sRAW and mRAW settings. Those are what, at best, you could call semi-RAW. They are closer to a JPEG in terms of actual storage format (YCbCr encoding, or luminance + Chrominance Blue + Chrominance Red), but everything is stored in 14-bit precision.

It's also encoded such that you have full luminance data, basically a luminance value for every single OUTPUT pixel, but the Cb and Cr data is sparse: it's encoded from multiple pixels (I forget if it is a 1x2 short row, or a full 2x2 quad), and that encoded value is stored as a single pair of 14-bit Cb/Cr values for every 2 or 4 luminance pixels. (I think exactly how many color pixels are encoded per luminance pixel depends on whether you're using sRAW or mRAW.)

Now, the luminance is encoded per output pixel. With mRAW, I think that's basically 1/2 the area of the full sensor, and with sRAW it's basically 1/4 the area of the full sensor. So your luminance information is encoded from however many source pixels are necessary to produce the right output pixels...I think 2x2 for sRAW, something along the lines of 1.5x1.5 for mRAW. (There is a spec on the formats somewhere, and it's been a long time since I've read it...my description above is not 100% accurate, but that's the general gist: basically, a 4:2:1 or 4:2:2 encoding of the image data.)

You definitely save space with these formats, but I have experimented with them on multiple occasions, and your editing latitude is nowhere remotely close to a full RAW. You can shift exposure around a moderate amount, but you have limits to how far down you can pull highlights, how far up you can push shadows, how far you can adjust white balance, etc.


Darn.
After reading a bit about the various file formats (TIFF is high fidelity, but both huge and still damaging even in 16-bit), it sounds like the best that could be done would be to "prep" the RAW file for superpixel debayering by saving it with the two green pixels already averaged. It would be useless for anything else, but you'd use 25% less space. Not nothing, but not great.
People just need to get used to handling large files.
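
For what it's worth, the storage arithmetic checks out (ignoring compression, which real CR2 files do use, so treat the absolute sizes as rough):

```python
# Averaging the two greens stores 3 samples per RGGB quad instead of 4.
samples_raw, samples_prepped = 4, 3
print(f"{1 - samples_prepped / samples_raw:.0%} smaller")   # 25% smaller

# At 14 bits per sample, a 20 MP sensor's worth of uncompressed samples:
bits_raw = 20_000_000 * 14
bits_prepped = bits_raw * samples_prepped // samples_raw
print(bits_raw // 8 // 2**20, "MiB ->", bits_prepped // 8 // 2**20, "MiB")
# 33 MiB -> 25 MiB (before any compression)
```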
 
ScottyP said:
I'll toss it out there.......


APS-H?

Drool...

We can only hope.

Honestly, if Canon wants to win the hearts of entry-level consumers and make everyone else look bad, they should scrap EF-S and move the entire Rebel line to APS-H. Low light is still king, and the only guaranteed way to get more of it is with bigger sensors.
 
9VIII said:
jrista said:
9VIII said:
jrista said:
Superpixel sounds like what you want. I actually wish that mainstream RAW editors like Lightroom would offer that as an option, honestly. Some people care more about color fidelity and tonal range than resolution, and having LOTS of pixels with superpixel debayering would be a huge bonus for those individuals.

Using it in post sounds nice, but what we're after is a way to save space on the memory card. Could a camera use superpixel debayering as a part of the image capture process and still save the file in RAW format?

Nope. Once you debayer, or do any kind of processing to the data, you're no longer RAW. Canon does offer the sRAW and mRAW settings. Those are what, at best, you could call semi-RAW. They are closer to a JPEG in terms of actual storage format (YCbCr encoding, or luminance + Chrominance Blue + Chrominance Red), but everything is stored in 14-bit precision.

It's also encoded such that you have full luminance data, basically a luminance value for every single OUTPUT pixel, but the Cb and Cr data is sparse: it's encoded from multiple pixels (I forget if it is a 1x2 short row, or a full 2x2 quad), and that encoded value is stored as a single pair of 14-bit Cb/Cr values for every 2 or 4 luminance pixels. (I think exactly how many color pixels are encoded per luminance pixel depends on whether you're using sRAW or mRAW.)

Now, the luminance is encoded per output pixel. With mRAW, I think that's basically 1/2 the area of the full sensor, and with sRAW it's basically 1/4 the area of the full sensor. So your luminance information is encoded from however many source pixels are necessary to produce the right output pixels...I think 2x2 for sRAW, something along the lines of 1.5x1.5 for mRAW. (There is a spec on the formats somewhere, and it's been a long time since I've read it...my description above is not 100% accurate, but that's the general gist: basically, a 4:2:1 or 4:2:2 encoding of the image data.)

You definitely save space with these formats, but I have experimented with them on multiple occasions, and your editing latitude is nowhere remotely close to a full RAW. You can shift exposure around a moderate amount, but you have limits to how far down you can pull highlights, how far up you can push shadows, how far you can adjust white balance, etc.


Darn.
After reading a bit about the various file formats (TIFF is high fidelity, but both huge and still damaging even in 16-bit), it sounds like the best that could be done would be to "prep" the RAW file for superpixel debayering by saving it with the two green pixels already averaged. It would be useless for anything else, but you'd use 25% less space. Not nothing, but not great.
People just need to get used to handling large files.

The fundamental problem arises when you encode the color information. It doesn't seem to matter if it's a chrominance pair, an RGB triplet, or anything else. Once you encode the color information...take it out of its separate storage values, and bind those discrete red, green, and blue values together into a conjoined value set (i.e. RGB sub-pixel values for a full TIFF pixel, for example), you lose editing latitude.
 
jrista said:
The fundamental problem arises when you encode the color information. It doesn't seem to matter if it's a chrominance pair, an RGB triplet, or anything else. Once you encode the color information...take it out of its separate storage values, and bind those discrete red, green, and blue values together into a conjoined value set (i.e. RGB sub-pixel values for a full TIFF pixel, for example), you lose editing latitude.


That sounds like it's just a problem with the way the software is handling the data. A TIFF is still assigning full RGB values to each pixel; the debayering is done, and I'm assuming the original values are lost. Whereas if you store the information in a pre-debayered state, even with one pixel averaged out (which was next on the to-do list anyway), it shouldn't be any different from reading the original RAW...with the slight exception that adjusting the value after averaging would be different than adjusting the values of two pixels and then averaging them (I assume that when you adjust things in post, it's playing with the RAW numbers before debayering).
But that still sounds like a fairly inconsequential concession to make compared to storing data after debayering.
 
9VIII said:
jrista said:
The fundamental problem arises when you encode the color information. It doesn't seem to matter if it's a chrominance pair, an RGB triplet, or anything else. Once you encode the color information...take it out of its separate storage values, and bind those discrete red, green, and blue values together into a conjoined value set (i.e. RGB sub-pixel values for a full TIFF pixel, for example), you lose editing latitude.


That sounds like it's just a problem with the way the software is handling the data. A TIFF is still assigning full RGB values to each pixel; the debayering is done, and I'm assuming the original values are lost. Whereas if you store the information in a pre-debayered state, even with one pixel averaged out (which was next on the to-do list anyway), it shouldn't be any different from reading the original RAW...with the slight exception that adjusting the value after averaging would be different than adjusting the values of two pixels and then averaging them (I assume that when you adjust things in post, it's playing with the RAW numbers before debayering).
But that still sounds like a fairly inconsequential concession to make compared to storing data after debayering.

There are some things in a RAW editor that must be done before debayering (i.e. white balance), and some that are usually done after debayering. It's just that some things are more effectively performed on the original digital signal, and others on a full RGB color image. Exposure and white balance are the two main things that benefit most from being processed in the original RAW, where the signal information is pure and untainted by any error introduced by conversion to RGB.

You also have to realize that RGB binds the three color components together...they cannot be shifted around much in an independent way, not like you can with RAW, without introducing artifacts. At least, not with real-time algorithms. There are other tools, like PixInsight (an astrophotography editor), that have significantly more powerful, mathematically intense, and often iterative processes that put most of the tools in something like Lightroom to shame. One example is TGVDenoise...which is capable of pretty much obliterating noise without affecting larger-scale structures or stars at all. Problem is, at an ideal iteration count (usually around 500) on a full RGB color image, running TGVDenoise can take several minutes to complete. And that is just one small step in processing a whole image.

So sure, with the right tools, you can probably do anything with a 16-bit TIFF. It's just that with the lower-precision but significantly faster algorithms often found in standard tools like Lightroom, you either end up with artifacts, or run into limitations with the data or the algorithm that won't let you push the data around as much.
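
To make the "white balance before debayering" point concrete, here's a minimal numpy sketch (my own illustration, not any editor's actual pipeline). Because each photosite holds exactly one color sample, the per-channel gains apply cleanly before demosaicing mixes neighboring samples together:

```python
import numpy as np

def white_balance_mosaic(mosaic, r_gain, b_gain):
    """Apply white-balance gains directly to an RGGB Bayer mosaic.

    Each photosite holds exactly one color sample, so the red and blue
    gains can be applied independently, before demosaicing blends
    neighboring samples -- which is why raw converters white balance
    at this stage.
    """
    out = mosaic.astype(np.float64).copy()
    out[0::2, 0::2] *= r_gain      # red photosites
    out[1::2, 1::2] *= b_gain      # blue photosites
    # the two green photosites per quad stay at gain 1.0 (the reference)
    return out

print(white_balance_mosaic(np.ones((4, 4)), r_gain=2.0, b_gain=1.5))
```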
 
9VIII said:
ScottyP said:
I'll toss it out there.......


APS-H?

Drool...

We can only hope.

Honestly, if Canon wants to win the hearts of entry-level consumers and make everyone else look bad, they should scrap EF-S and move the entire Rebel line to APS-H. Low light is still king, and the only guaranteed way to get more of it is with bigger sensors.

LOL
I'm so glad someone else said it
now where's my popcorn...
 
Let's hope they come up with something new this time; I have already bought a Fuji X-T1 to complement my 5D3. I hope that the 7D2 motivates me to keep my lenses and keep on using my Canon gear instead of taking the complete Fuji train. Can we see a new mirrorless camera as well, Canon?

/ Stolpe
 
jrista said:
9VIII said:
jrista said:
Superpixel sounds like what you want. I actually wish that mainstream RAW editors like Lightroom would offer that as an option, honestly. Some people care more about color fidelity and tonal range than resolution, and having LOTS of pixels with superpixel debayering would be a huge bonus for those individuals.

Using it in post sounds nice, but what we're after is a way to save space on the memory card. Could a camera use superpixel debayering as a part of the image capture process and still save the file in RAW format?

Nope. Once you debayer, or do any kind of processing to the data, you're no longer RAW. Canon does offer the sRAW and mRAW settings. Those are what, at best, you could call semi-RAW. They are closer to a JPEG in terms of actual storage format (YCbCr encoding, or luminance + Chrominance Blue + Chrominance Red), but everything is stored in 14-bit precision.

It's also encoded such that you have full luminance data, basically a luminance value for every single OUTPUT pixel, but the Cb and Cr data is sparse: it's encoded from multiple pixels (I forget if it is a 1x2 short row, or a full 2x2 quad), and that encoded value is stored as a single pair of 14-bit Cb/Cr values for every 2 or 4 luminance pixels. (I think exactly how many color pixels are encoded per luminance pixel depends on whether you're using sRAW or mRAW.)

Now, the luminance is encoded per output pixel. With mRAW, I think that's basically 1/2 the area of the full sensor, and with sRAW it's basically 1/4 the area of the full sensor. So your luminance information is encoded from however many source pixels are necessary to produce the right output pixels...I think 2x2 for sRAW, something along the lines of 1.5x1.5 for mRAW. (There is a spec on the formats somewhere, and it's been a long time since I've read it...my description above is not 100% accurate, but that's the general gist: basically, a 4:2:1 or 4:2:2 encoding of the image data.)

You definitely save space with these formats, but I have experimented with them on multiple occasions, and your editing latitude is nowhere remotely close to a full RAW. You can shift exposure around a moderate amount, but you have limits to how far down you can pull highlights, how far up you can push shadows, how far you can adjust white balance, etc.

9VIII said:
jrista said:
I am guessing it is more than that. Let's say Canon's next move would be to 3.5µm pixels. With a 500nm process, the actual photodiode, assuming a non-shared pixel architecture, would then be barely 2.5µm across at most (once you throw wiring and readout logic transistors around it.) With a shared pixel architecture you might be able to make it a little larger. On the other hand, if you drop from a 500nm process to a 180nm process, the photodiode could be close to 3.14µm across. (This assumes that wiring and transistors only require a single transistor's width of border around the photodiode...it's usually not quite that simple, at least based on micrograph images of actual sensors and patent diagrams.) With a 90nm process, the photodiode could be up to 3.3µm.

I think the 500nm process is really limiting for Canon now. They COULD do it; there is nothing preventing them from creating a 3.5µm pixel sensor with 2.5µm photodiodes...but I don't think it would be competitive. The smaller photodiode area wouldn't gather as much light as competitors' sensors fabricated on 180nm or 90nm processes, and the resulting sensors would just be a lot noisier.

I am really, truly hoping Canon has moved to a significantly more modern fabrication process with the 7D II sensor. I think that alone would improve things considerably for Canon's IQ.

I'm guessing the only reason you mention 90nm and not 30nm is that in this application the cost/benefit ratio favours slightly larger circuits rather than smaller? (you'd only gain minimal surface area but potentially make production much more difficult)

Well, I mention 180nm and 90nm because I am pretty sure Canon has the fab capability to manufacture transistors that small. In the smallest sensors, transistor sizes are a lot smaller than that...I think they are down to 32nm for the latest stuff, with pixels around 1µm (1000nm) in size. I think that some of Canon's steppers and scanners can handle smaller transistors, 65nm using subwavelength etching, but I don't know if that stuff has been/can be used for sensor fabrication. I know for a fact that Canon already uses a 180nm Cu fab process for their smaller sensors, so I know for sure they are capable of that. Their highest resolution fabs are around 90nm natively, but again, most of what I've read about them indicates IC fabrication...I've never heard of them being used to manufacture sensors (but there honestly isn't that much info about Canon's fabs...nor who owns them...)

It's magic!

When a bunch of tech stuff flies by way above my head, if I try to watch for a while (with only my eyes, because my brain can't keep up)...it turns into a sleeping pill! :o

But I go to sleep thinking that it's good to know that SOMEBODY knows these things. :)
 
Quackator said:

The organic compound patent is only patenting the molecule, an electrochromic molecule (molecules or polymers that change color in the presence of an electric current), but the implications are very interesting. It's an adjustable organic filter...which has very high transmittance in one mode, then lower transmittance in the other, the "color" mode. In color mode, it looks like the color would be more blueish or cyan, as some light is transmitted but most IR is blocked. If they could refine the capability...it might lead to adjustable color filters. This was something I hypothesized a couple of years ago...sensors with a high refresh rate during exposure, with dynamic color filters. You would get 100% fill factor for all colors, without the need to layer the sensor. That means you get higher sensitivity.

(Note, this is not what the patent describes...however it is something it could imply:) Assuming this compound leads to a more controllable transmittance, it might be possible for Canon to create a full-color sensor with a single layer of organic film that changes for red, green, and blue channels during the length of the exposure. You could theoretically even switch the filter to full transparency mode for a "luminance" channel. This would be WAY better than a Foveon-style sensor, which has problems with noise in the red and, to a lesser degree, the green channels due to their depth within the silicon. With a dynamic color filter, especially one with high transmittance, you could gather far more light. You could even shorten the sub-exposures in each color channel and expose for longer in luminance to get more detail (for any given duration of exposure...say you choose a 1/500th second exposure...the RGB sub-exposures might each be 1/3000th of a second long, while the luminance sub-exposure might be 1/1000th of a second long. Amplify, then slightly blur, the color channels...that reduces noise...then integrate with the luminance, which adds back the detail.)
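
The exposure budget in that example actually adds up exactly, which you can verify with exact fractions:

```python
from fractions import Fraction

# Three color sub-exposures plus one clear "luminance" sub-exposure
# should sum to the chosen total exposure of 1/500 s.
rgb_sub = Fraction(1, 3000)      # per-channel R, G, B sub-exposure
lum_sub = Fraction(1, 1000)      # clear/luminance sub-exposure
print(3 * rgb_sub + lum_sub)     # 1/500
```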

No idea if such a sensor will ever materialize, but it's the ultimate implication of electrochromic compounds.
 
Quackator said:
It could as well be a global shutter based on this technology.
No more X-sync barrier, no more rolling shutter problems.

I don't think so. There are two modes...fully transparent, and "colored". The material does not become completely opaque; it takes on a lumpy transmission curve that peaks near the blue end and ultimately trails off by IR. The material would need to be 100% opaque to operate as a global shutter. On the other hand, you wouldn't necessarily want it 100% opaque when using it as a color filter...you would want it to have some kind of response curve that peaks in a given range of wavelengths and bottoms out in the others. Even a standard color filter is not 100% opaque to other colors; at the very least there is usually a few percent of red and green getting through a blue filter, a few percent of blue getting through a red filter, etc. As a filter material, it sounds pretty amazing.

As a shutter, I don't know that it's capable of becoming opaque enough...even if Canon ultimately got it down to 0.1% or even 0.01% transmittance, that is still allowing light through. Even if you weren't taking pictures...at that low transmittance level, you're actually exposing the sensor...which would ultimately lead to higher levels of noise.
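
To put numbers on that: even tiny residual transmittance is only a handful of stops of attenuation, which matters when the "shutter" is closed far longer than it is open (a quick check, my own arithmetic):

```python
import math

# Residual light leak through a nearly-opaque filter, expressed in stops.
for transmittance in (0.001, 0.0001):        # 0.1% and 0.01%
    stops = math.log2(1 / transmittance)
    print(f"{transmittance:.2%} transmittance = {stops:.1f} stops down")
# 0.10% is only ~10 stops down; 0.01% is ~13.3 stops down.
```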
 
jrista said:
And back to our regularly scheduled programming.

The 7D II's new technology. What will it be? Here are my thoughts, given past Canon announcements, hints about what the 7D II will be, interviews with Canon uppers, etc.

1. Megapixels: 20-24mp
2. Focal-plane AF: Probably DPAF, with the enhancements in the patents Canon was granted at the end of 2013. *
3. New fab process: Probably 180nm, maybe on 300mm wafers. **
4. A new dedicated AF system: I don't know if it will get the 61pt AF, probably too large for the APS-C frame. A smaller version...41pts would be my hope. Same precision & accuracy of 61pt system on 5D III, with same general firmware features.
5. Increased Q.E.: Canon has been stuck below 50% Q.E. for a long time now. Their competitors have pushed up to 56% and beyond, a couple have sensors with 60% Q.E. at room temperature. Higher Q.E. should serve high ISO very well.
6. Faster frame rate: I suspect 10fps. I don't think it will be faster than that, 12fps is the reserved territory of the 1D line.
7. Dual cards: CF (CFast2) + SD. I hate that, personally, but I really don't see the 7D line getting dual CF cards. (I'll HAPPILY be proven wrong here, though!)
8. No integrated battery grip. Just doesn't make sense, the 7D was kind of a smaller, lighter, more agile alternative to the 1D, a grip totally kills that.
9. New 1DX/5DIII menu system. Personally, I would very much welcome this! LOVE the menu system of the 5D III.
10. GPS and WiFi: I think both should find their way into the 7D II, what with the 6D having them. Honestly not certain, though...guess it's a tossup.
11. Video features: Video has always been core to the 7D II rumors. 60fps 1080p; 120fps 720p (?); HDMI RAW output; External mic jack; 4:2:2; I think DIGIC 7 would probably arrive with enhancements on the DIGIC 6 image and video processing features. Maybe on par with Sony's Bionz X chip.

* Namely, split photodiodes, but with different sizes...one half is a high sensitivity half, the other half is a lower sensitivity half. The patents are in Japanese, and the translations are horrible, so I am not sure exactly WHY this is good, but Canon's R&D guys seem to think it will not only improve AF performance and speed, but "reduce the negative impact to IQ"....which seems to indicate that the use of dual photodiodes has some kind of impact on IQ, a negative impact.

** We know Canon has been using a 180nm process for their smaller form factor sensors for a while. Not long ago, a rumor came through, I think here on CR, indicating Canon was building a new fab and would be moving to 300mm wafers. That should greatly help Canon's ability to fabricate large sensors with complex pixels for a lot cheaper. A smaller process would increase the usable area for photodiodes, as transistors and wiring would be a lot smaller than they are today on Canon's 500nm process. That would be a big benefit for smaller-pixel sensors. If they moved to a 90nm process, all the better. I don't suspect we'll see any kind of BSI in the 7D II...but, who knows.
I agree with everything... but I would like to make three comments.

It's not very cost effective to just upgrade fabrication by one step.... they are going to have to live with the new facility for a long time. I would expect that they would jump over 180nm to 90nm... or even smaller.

And video: I think 4K video on this camera is a flip of the coin... I wouldn't bet one way or the other, but if they do use CFast cards the odds of 4K video go up.

And finally, I think it will have dual processors.... but I am wondering if the time has come for one processor to be optimized for stills and the other optimized for video.
 
Don Haines said:
jrista said:
And back to our regularly scheduled programming.

The 7D II's new technology. What will it be? Here are my thoughts, given past Canon announcements, hints about what the 7D II will be, interviews with Canon uppers, etc.

1. Megapixels: 20-24mp
2. Focal-plane AF: Probably DPAF, with the enhancements in the patents Canon was granted at the end of 2013. *
3. New fab process: Probably 180nm, maybe on 300mm wafers. **
4. A new dedicated AF system: I don't know if it will get the 61pt AF, probably too large for the APS-C frame. A smaller version...41pts would be my hope. Same precision & accuracy of 61pt system on 5D III, with same general firmware features.
5. Increased Q.E.: Canon has been stuck below 50% Q.E. for a long time now. Their competitors have pushed up to 56% and beyond, a couple have sensors with 60% Q.E. at room temperature. Higher Q.E. should serve high ISO very well.
6. Faster frame rate: I suspect 10fps. I don't think it will be faster than that, 12fps is the reserved territory of the 1D line.
7. Dual cards: CF (CFast2) + SD. I hate that, personally, but I really don't see the 7D line getting dual CF cards. (I'll HAPPILY be proven wrong here, though!)
8. No integrated battery grip. Just doesn't make sense, the 7D was kind of a smaller, lighter, more agile alternative to the 1D, a grip totally kills that.
9. New 1DX/5DIII menu system. Personally, I would very much welcome this! LOVE the menu system of the 5D III.
10. GPS and WiFi: I think both should find their way into the 7D II, what with the 6D having them. Honestly not certain, though...guess it's a tossup.
11. Video features: Video has always been core to the 7D II rumors. 60fps 1080p; 120fps 720p (?); HDMI RAW output; External mic jack; 4:2:2; I think DIGIC 7 would probably arrive with enhancements on the DIGIC 6 image and video processing features. Maybe on par with Sony's Bionz X chip.

* Namely, split photodiodes, but with different sizes...one half is a high sensitivity half, the other half is a lower sensitivity half. The patents are in Japanese, and the translations are horrible, so I am not sure exactly WHY this is good, but Canon's R&D guys seem to think it will not only improve AF performance and speed, but "reduce the negative impact to IQ"....which seems to indicate that the use of dual photodiodes has some kind of impact on IQ, a negative impact.

** We know Canon has been using a 180nm process for their smaller form factor sensors for a while. Not long ago, a rumor came through, I think here on CR, indicating Canon was building a new fab and would be moving to 300mm wafers. That should greatly help Canon's ability to fabricate large sensors with complex pixels for a lot cheaper. A smaller process would increase the usable area for photodiodes, as transistors and wiring would be a lot smaller than they are today on Canon's 500nm process. That would be a big benefit for smaller-pixel sensors. If they moved to a 90nm process, all the better. I don't suspect we'll see any kind of BSI in the 7D II...but, who knows.
I agree with everything... but I would like to make three comments.

It's not very cost effective to just upgrade fabrication by one step.... they are going to have to live with the new facility for a long time. I would expect that they would jump over 180nm to 90nm... or even smaller.

I think that for larger sensors, 180nm is still used by other major manufacturers, like Sony. It's only with the much smaller sensors that have pixels smaller than 2µm that you start seeing smaller transistors. Even a move to 180nm for large (APS-C, FF) sensors would be HUGE. I mean, compared to 500nm, that's about 1/3rd the size. If pixel count only goes up to 24mp in the 7D II, a move to 180nm would mean that Q.E. actually increases on a per-pixel basis relative to, say, 20mp at 500nm. That's how significant it would be.

I'd have to check, but I would expect that Canon is probably already on a smaller process, 90nm or even 65nm, for the 1/2", 1/3", and smaller sensors.


Don Haines said:
And video: I think 4K video on this camera is a flip of the coin... I wouldn't bet one way or the other, but if they do use CFast cards the odds of 4K video go up.

Agreed. I really hope they do move to C-Fast, and I also hope they move to USB 3.0. If we see both of those, then 4k should be a given.

Don Haines said:
And finally, I think it will have dual processors.... but I am wondering if the time has come for one processor to be optimized for stills and the other optimized for video.

That's an interesting thought. I guess it depends on the frame rate. If they only bump up to 10fps, I think a single DIGIC 7 could handle it (easily...with room to spare for a whole ton of other stuff). That would especially be the case if they move the ADC onto the sensor die and make it column-parallel.

They do have a patent for CP-ADC with a dual-scale ramp ADC. (The dual-scale just allows the ADC to operate at different rates based on some trigger factor...say sensor heat: more heat means higher noise, while a slower readout means less noise...switch from high-speed readout to low-speed readout when possible under higher heat, and you could counteract the increase in dark current noise. I have no idea what the trigger factor would be to switch from the higher speed to the lower speed or vice versa in an actual product, though.)

Parallelizing the ADC and putting that logic on the die also reduces load in the DIGIC itself...it would then solely be responsible for digital pixel processing, in which case a DIGIC 7 that has no ADC units could theoretically have even more processing power than a DIGIC 7 that did include the ADC units. So instead of being 7x faster than a DIGIC 6, it might end up being 12x or 14x faster.
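
As a toy model of that dual-scale idea (entirely my own illustration; the actual trigger logic, bit depths, and ramp rates in the patent are unknown to me), a single-slope ramp ADC with a selectable coarse/fine ramp looks like this:

```python
def ramp_adc(v_in, v_ref=1.0, steps=16384, high_speed=True):
    """Toy single-slope ramp ADC with two selectable ramp scales.

    A counter runs while a reference ramp climbs toward the input
    voltage; the count at the crossing is the digital code. The
    high-speed mode coarsens the ramp (fewer, larger steps), trading
    precision for readout time -- the fine ramp would be chosen when
    conditions (e.g. sensor temperature) allow.
    """
    n = steps // 8 if high_speed else steps   # coarse vs fine ramp
    step = v_ref / n
    ramp, count = 0.0, 0
    while ramp < v_in and count < n:
        ramp += step
        count += 1
    return count, n                           # code and full-scale count

print(ramp_adc(0.3, high_speed=True))         # coarse: (615, 2048)
print(ramp_adc(0.3, high_speed=False))        # fine: (4916, 16384)
```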

If you had one of those dedicated to stills/af/metering, and one dedicated to video, you could really do a hell of a lot with the video. Canon should be able to surpass what the Bionz X in the A7s does easily, achieving ultra low noise ISO 400k, maybe even 800k.
 