IBIS and 100mp coming to an EOS R camera? [CR2]

Jun 6, 2016
300
16
The reason no one is using Foveon-style sensors is precisely because they are so terrible in low light. None of the layers is "trapping" all of the photons falling on that layer, or there wouldn't be anything left for the layer(s) underneath. There is nothing intrinsically different about a photon of "green" light and a photon of "blue" light except its frequency (and thus its energy).
===

Actually, the greatest issue within the Foveon sensors is the pass-through capability of the bandpass filtering used in the sensors themselves. Each layer SHOULD BE sensitive ONLY to the specified frequencies (i.e. wavelengths) for each of the red, green and blue colours, and then REJECT AND/OR ALLOW NEARLY FULL PASS-THROUGH of all others. A Foveon stacked photosite uses chemical dopants that allow the EM waves within specific R, G and B colour bands to penetrate to different depths within the CMOS layers (silicon).

Unfortunately, Sigma has manufacturing issues SPECIFICALLY with the red colour channel material, which is cross-contaminating the upper green layer (green being the brightest colour as seen by the human eye) and the bottom blue layer, such that low-light colour rendition is sloppy. They need to add indium, or maybe copper, so that the specific RGB layers can be made to emit a greater electrical signal, making each layer "more sensitive". Since Foveon is a MATURE technology, the dopants used for specific EM-band "colour sensitivity" have stayed the same while NEWER materials are now available!

I would say Canon OR Sony should simply BUY Sigma and update the dopants to allow higher photon sensitivity for each RGB layer. The Foveon technology, not requiring de-Bayering, is more colour accurate (i.e. more true-to-life). It's the closest to what we see in real life when it comes to colour rendition, BUT the actual photon-to-electrical-signal conversion process is flawed because of OUTDATED dopants that could be redone to increase luminance sensitivity (based upon what an engineer I have talked to has indicated!) by roughly 25%, and maybe even as much as 40%!

If that change were made, I would say we are getting into Sony A7S II equivalencies at a photosite size of around 6 to 8 microns, which is SMALLER than the roughly 8.4 microns per photosite that the Sony uses! Not only is the photosite size SMALLER, but the actual RGB pixel-layer sensitivity itself IS GREATER, so you could plausibly be doubling the overall sensitivity of the entire sensor. Ergo, an APS-C sensor image using the re-made Foveon technology at the same resolution would be BRIGHTER with LESS NOISE than a normal Bayer sensor image at full-frame size! NOW THAT would be a treat to see, AND I understand Panasonic and Sony are actually pursuing those very strategies of using newer, variable dopants in their own versions of stacked-RGB-layer image sensors!
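As a sanity check on the sensitivity claim above, here is a back-of-envelope sketch comparing a hypothetical improved stacked photosite against a larger Bayer-style photosite. Every number is an illustrative assumption, not a measured value, and captured light is modeled as simply scaling with photosite area times conversion efficiency:

```python
# Back-of-envelope sketch of the sensitivity claim above.
# All numbers are illustrative assumptions, not measured values.

def relative_light_gathering(pitch_um: float, efficiency: float) -> float:
    """Model light captured as photosite area times conversion efficiency."""
    return (pitch_um ** 2) * efficiency

# Hypothetical improved stacked photosite: 8 um pitch, +40% efficiency gain
stacked = relative_light_gathering(8.0, 1.40)

# Reference large Bayer photosite: ~8.4 um pitch, baseline efficiency
bayer = relative_light_gathering(8.4, 1.00)

print(f"stacked / bayer = {stacked / bayer:.2f}")  # ~1.27x under these assumptions
```

Under these assumptions the gain is closer to 1.3x than 2x, so an outright "doubling" would need either larger photosites or a bigger per-layer efficiency improvement than the 40% assumed here.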

ON a technical note, I DO SUSPECT that within each square area of the individual stacked RGB layers, the dopants ARE NOT being adequately and/or fully diffused throughout the width, height and depth of the individual layer. This means incoming photons of a given wavelength are NOT being passed through to the next layer, OR are NOT BEING ABSORBED PROPERLY into the current photosensitive layer as a cumulatively emitted electrical charge; instead they are being converted into heat (i.e. into thermal rather than electrical energy)! A more precise doping mechanism may be needed for the individual red, green and blue photosensitive layers to increase overall charge accumulation!
 
Last edited:
Aug 16, 2012
4,567
912
Multinational companies can't afford geniuses, know-it-alls and psychics. Those are all employed by the fact-manufacturing plants anyway.
Harry is not claiming to have solved the problems, he is simply stating what one of the major problems with Foveon technology is and is suggesting that it is an opportunity for Canon or Sony to use their capabilities to improve the technology. Harry often has interesting points to make, and I have learned some things from him. Canon does file patents in Foveon technology.
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
359
105
===

Actually, the greatest issue within the Foveon sensors is the pass-through capability of the bandpass filtering used in the sensors themselves. Each layer SHOULD BE sensitive ONLY to the specified frequencies (i.e. wavelengths) for each of the red, green and blue colours, and then REJECT AND/OR ALLOW NEARLY FULL PASS-THROUGH of all others. A Foveon stacked photosite uses chemical dopants that allow the EM waves within specific R, G and B colour bands to penetrate to different depths within the CMOS layers (silicon).

If you do that, you lose the ability to synthesize most of the colors we expect. The way the human eye/brain system works is due to the amount of overlap between the sensitivities of the three types of cones in our retinas. If there were no overlap, we'd be able to perceive three and only three different colors, rather than the millions of colors our eye/brain systems are capable of perceiving. The reason debayering information from sensors with CFAs works so well is because it mimics the way our brain processes information from our retinas.
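The overlap argument can be illustrated with a toy model. The Gaussian response curves and peak wavelengths below are assumptions chosen loosely after the L/M/S cones, not measured data:

```python
import math

# Toy model: three cone-like spectral responses. With overlapping curves,
# nearby wavelengths map to distinct response triplets; with non-overlapping
# "brick wall" bands, every wavelength inside a band gives the same triplet.

def gaussian_response(wavelength: float, peak: float, width: float = 60.0) -> float:
    return math.exp(-((wavelength - peak) / width) ** 2)

def overlapping_triplet(wl: float) -> tuple:
    # Peaks loosely modeled on L/M/S cones (~565, 535, 445 nm); illustrative only
    return tuple(round(gaussian_response(wl, p), 3) for p in (565, 535, 445))

def brickwall_triplet(wl: float) -> tuple:
    # Non-overlapping bands: each wavelength excites exactly one channel
    return (1, 0, 0) if wl >= 550 else (0, 1, 0) if wl >= 490 else (0, 0, 1)

# 580 nm vs 600 nm: distinguishable with overlap, identical without
print(overlapping_triplet(580), overlapping_triplet(600))
print(brickwall_triplet(580), brickwall_triplet(600))
```

With overlapping curves, 580 nm and 600 nm produce different response triplets and are therefore distinguishable; with non-overlapping bands, both fall in the same band and produce identical responses, collapsing everything in a band to one perceived color.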
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
359
105
Google is your friend here. Canon's highest-density sensor has a pixel size of 4.1 microns. Sony has an entire line of sensors with much smaller pixels, all the way down to 1.0 microns. How many 1.0-micron pixels could you put on a full-frame sensor? I'm not going to do the math, but it's a lot more than 100 million. I like Canon's system and I'm no Sony fan, but you are kidding yourself if you think Canon is ahead of Sony in sensor development.

edit: sorry, forgot about the sensor in the 80D. That one is 3.7 microns.
Also important to remember that it's a linear measurement, so sixteen 1-micron pixels fit into the same area as one 4-micron pixel. Fair to say that Sony is pretty good at small pixels.
You also seem to have forgotten many of the sensors in Canon's studio broadcast cameras and other types of cameras with sensors significantly smaller than APS-C.
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
359
105
Anything involving multiple images for dynamic range improvements will result in motion artifacts in some situations.

Canon could also just make access to the extra stop of dynamic range from the dual pixel sensors that is currently discarded easier.

...

And also, can a high resolution sensor not do everything a low res one can, if it has some good hardware binning implemented?
In terms of full-well capacity, no it cannot. Specifically, if the intensity of the light falling on the sensor is highly variable from one pixel to the next (such as when doing astrophotography), at least one of four smaller pixels will clip before a single pixel with 4X the area would.
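A minimal numeric sketch of that clipping argument. The well depths and photon counts are invented for illustration, with the small pixel's well assumed to be a quarter of the large one's:

```python
# Sketch of the full-well argument: under a highly non-uniform field (e.g. a
# star on dark sky), one of four small pixels clips before a single large
# pixel covering the same footprint would. Numbers are illustrative only.

LARGE_FULL_WELL = 40_000   # electrons, hypothetical large pixel
SMALL_FULL_WELL = 10_000   # 1/4 the area -> roughly 1/4 the well depth

# Charge landing on the four quadrants of one large-pixel footprint:
quadrants = [12_000, 500, 500, 500]   # a point source hits one quadrant

small_clipped = any(q > SMALL_FULL_WELL for q in quadrants)
large_clipped = sum(quadrants) > LARGE_FULL_WELL

print(small_clipped, large_clipped)  # True False
```

The point source saturates one small pixel even though the total charge over the same footprint (13,500 electrons here) is well within the large pixel's well, so binning the four small pixels afterwards cannot recover the clipped highlight.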
 

Normalnorm

EOS 7D MK II
Dec 25, 2012
438
52
With bird and wildlife photos?
It's done with birds all the time. More importantly, my point is that to get maximum sharpness from a high-MP camera you need a sturdy tripod, a high shutter speed, and also strobe if possible.
Some cases do not allow for strobe, so then, absent a tripod, your high-MP camera is not delivering the res you paid for.
Just physics.
Sorry you are sad.
 
Aug 16, 2012
4,567
912
It's done with birds all the time. More importantly my point is that to get maximum sharpness from a high MP camera you need a sturdy tripod, high shutter speed and also Strobe if possible.
Some cases do not allow for strobe so then, absent a tripod your high MP camera is not delivering the res you paid for.
Just physics.
Sorry you are sad.
The sadness is that all too many people think it is acceptable behaviour to make gratuitous comments like that rather than simply present a logical case.
 
Jun 6, 2016
300
16
I am going to suggest to you that it is actual converted PHOTON COUNT, rather than overlap of the RGB sensitivities of the cones in the human retina, that determines overall colour information. It seems that the chemicals that handle the luminance sensitivity (i.e. brightness) of the human eye's RODS only get activated when a threshold level of photons is no longer being converted in the CONES (colour receptors), on a not-enough-photons-activating-an-area basis. Basically, if not enough parts of each CONE (colour receptor) emit a signal ready for transmission into the optic nerve, then the RODS get activated and a more sensitive photon-counting layer does the work of brightness detection. This process takes TIME, which is WHY we humans need some minutes to adjust our daylight vision to nighttime in order to perceive objects at low light levels.

While on a wavelength basis there is overlap, it seems our brains have built-in software that inherently mixes the emitted red, green and blue photon counts to create any given PERCEIVED colour, which will vary among individuals depending upon base genetics and the underlying chemistry within a person's cones and rods. It seems that specific amounts of the base Photopsin (i.e. cone opsin) proteins will determine HOW sensitive your eye is to specific wavelengths. And based upon what I am seeing in the organic-chemistry portion of my mind, I could swear I am seeing chemicals not far removed from OLED-based photon-emissive chemistry and CCD-based photon-sensor chemistry. It seems we humans have done basic bio-mimicry of evolved genetic processes that produce a photosensitive layer in an organic and/or inorganic substrate.

What this means is that it takes a certain number of photons to excite the proteins in each cone type (i.e. long-, middle- and short-wavelength opsins, aka the red, green and blue colours), and it is the BRAIN (visual cortex) which does the higher-level mixing to give us an optimal rendition of the currently viewed scene. It seems our visual cortex does live, realtime RGB colour correction, luma and gamma correction, and even a bit of edge detection to optimize our local viewing environment. We humans now have the ability to IMPROVE upon the human eye by changing the chemicals (dopants) and substrates (CMOS/GaAs/GaN/BN) used to form modern image sensors to ones that are more sensitive to specific bands of light.

---

It does seem that Sigma is having issues manufacturing the red portion of the photosensitive Foveon layers, which is causing more noise and lower overall image luminance levels. What Sigma HAS NOT YET DONE is change the actual chemistry to reflect modern photosensitive and transmissive CMOS layer chemistry, which would RESTORE overall image brightness and leave less noise contaminating the dark areas of an image. In a stacked-RGB-layer modality, photosites could be made LARGER or MORE NUMEROUS per unit of sensor area, which would result in a more sensitive image sensor OR a higher-resolution image sensor!

Sigma NOW HAS the modern manufacturing infrastructure and organic/inorganic chemistry expertise to make a sensor that is BOTH more sensitive overall AND higher resolution! Ergo, it wouldn't be all that hard for them to make 24 to 30 megapixel image sensors at APS-C and full-frame sizes that are on par with the Sony A7S II's light-gathering prowess, at a price that would allow them to be put on a $2000 camera body!

It's Just Chemistry!
 
Last edited:
Jun 6, 2016
300
16
You also seem to have forgotten many of the sensors in Canon's studio broadcast cameras and other types of cameras with sensors significantly smaller than APS-C.
---

Canon has GREAT one-inch, 2/3-inch and even 1/2-inch sensor cameras with some pretty high pixel counts, but I have noticed that when they wanted more pixels they went to a larger sensor, such as APS-H for a 250 megapixel camera, and to an 8.1 by 8.1 inch (200 mm+ PER side!) wafer for their 448 megapixel 60 fps sensor, which I have talked about in earlier posts! Canon can DEFINITELY MAKE a proper high-resolution sensor since they ALREADY HAVE MADE 120, 250 and even 448 megapixel CMOS sensors! AND add in a 4-MILLION-ISO ultra-low-light-sensitive image sensor! Right now they CHOOSE NOT to showcase that image-sensor manufacturing prowess in a consumer or prosumer DSLR or mirrorless camera product.