Canon 3 Layer Sensor (Foveon Type?) Patent

It'd be great to see this done well: I don't think anybody would complain if a camera of the quality of the 5D Mark II were able to read RGB at each photosite. I don't know that I'd settle for a 5D Mk III that only includes a Foveon-type sensor but doesn't improve other basic aspects of the camera like autofocus and continuous shooting rate. High ISO on the SD15 frankly looks terrible, based on the images from Photography Blog. There's no better noise handling than a regular APS-C cam, nor is there better color integrity.

I'm not so worried about the sharpness; the sample images for the SD15 are crisp enough, and they're shot with Sigma lenses, which I have used for a long time but have to admit don't compare to L glass.

The other question is price. I'm not willing to pay 1Ds-series prices for a 5D-series camera just because it's got a Foveon-type sensor. I don't know if the SD15's ridiculous price was due to real production cost or a last gasp by Sigma's DSLR division, but I simply won't pay that sort of premium for a Foveon sensor.
 
FranciscoDurand said:
kirillica said:
Well, can someone translate into readable language what this patent is for and why it is (a way?) better than current stuff? ::)
Well, the Foveon sensor has three independent layers (blue, green, red), while conventional CMOS sensors have a Bayer grid. The difference between them is that the Foveon captures a full blue, green, and red image at every photosite, while the Bayer sensor mixes them, capturing only one colour per site. You can read more on Wikipedia about this superb sensor and about the Sigma SD1.

And maybe this sensor will be in the new Canon 5D Mark III? :P

http://en.wikipedia.org/wiki/Foveon_X3_sensor

The Foveon actually *doesn't* have red, green, or blue layers. It has layers that are sensitive to white, yellow, and red, in that order from top to bottom. Blue and green are determined by math (white - yellow = blue, yellow - red = green). Of course it's not quite that simple, because you have to do a lot of math as the light becomes less energetic the further down the sensel it gets, so it ends up being horribly ugly math like G = Y - R*1.3243, and so on...

That's one reason you sometimes see weird colors in the Foveon output... depending on the energy of the light and the exact frequency, it can screw up that math.

Foveon/Sigma's illustrations oversimplify the imaging pipeline for the general public. It's interesting that Canon refers specifically to RGB... I would be very surprised if they could actually draw those colors out of a single sensel without resorting to Foveon-like math.
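To make that "ugly math" concrete, here's a rough Python sketch of the kind of layer-to-RGB unmixing involved. The coefficients are invented purely for illustration - they are not Foveon's actual calibration:

```python
import numpy as np

# Raw signals from the top, middle and bottom layers of one sensel.
layers = np.array([0.82, 0.55, 0.30])   # top, middle, bottom

# Hypothetical unmixing matrix: each output channel is a weighted
# combination of all three layer signals. The negative terms are the
# "white - yellow = blue" idea, extended with crosstalk corrections.
# These numbers are made up; real coefficients come from per-sensor
# spectral calibration.
unmix = np.array([
    [-0.20, -0.35,  1.55],   # R: mostly the bottom layer, minus bleed
    [-0.30,  1.60, -0.45],   # G: middle layer minus top/bottom bleed
    [ 1.40, -0.50, -0.10],   # B: top layer minus deeper-layer bleed
])

rgb = unmix @ layers
print(rgb)   # approximate linear R, G, B for this sensel
```

Any error in those assumed coefficients (or any wavelength dependence they don't capture) comes straight out as a colour error, which is the point being made above.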
 
dr croubie said:
The text is (as always) mostly unintelligible, and I wish the diagram were a bit larger.

OK, I've changed what I originally thought. It didn't look like a sensor on first inspection, but now I've convinced myself it is.

With the 'Set' FET off, nothing happens.
Turn the Set FET on, and one of the TxR/G/B FETs on, and the voltage at the filled-in dot at the right will change in proportion to the voltage at the '101' layer.
Turn that FET off, turn on the next TxR/G/B FET, and read the voltage.

The line:
"Therefore, in reading the charge of B, it becomes difficult to receive light G, to prevent mixing"
means that only one colour can be read at a time, sequentially. If you turn two or three FETs on at once, the voltage will be proportional to the total charge on all three combined (added or averaged, I'm not sure).
So you expose your sensor using the shutter, black it off, then read every pixel's colour sequentially before the charge dissipates (switching FETs on and off can be done on the order of nanoseconds).

Turn the 'Res' FET on to reset the charges on everything.

Open all 3 FETs and you have a true monochromatic sensor! I'll take 12 :)
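If it helps, here's the readout sequence above written out as a rough Python sketch. The FET and layer names are just my labels from the diagram, not Canon's wording, and the behaviour is my reading of the patent text:

```python
# Toy model of the stacked-pixel readout described above (illustrative only).
class StackedPixel:
    def __init__(self):
        self.charge = {"R": 0.0, "G": 0.0, "B": 0.0}   # charge stored per layer

    def expose(self, r, g, b):
        # Shutter open: each layer accumulates charge for its band.
        self.charge["R"] += r
        self.charge["G"] += g
        self.charge["B"] += b

    def read(self, tx_on):
        # Set FET on plus one (or more) Tx FETs on: the output node sees a
        # voltage proportional to the total charge on the selected layers.
        return sum(self.charge[c] for c in tx_on)

    def reset(self):
        # 'Res' FET on: clear all three layers for the next frame.
        for c in self.charge:
            self.charge[c] = 0.0

px = StackedPixel()
px.expose(r=0.3, g=0.5, b=0.2)                    # expose, then black off the shutter

r, g, b = px.read({"R"}), px.read({"G"}), px.read({"B"})   # one Tx FET at a time
mono = px.read({"R", "G", "B"})                   # all three at once: monochrome sum
px.reset()
print(r, g, b, mono)
```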
 
kubelik said:
High ISO on the SD15 frankly looks terrible, based on the images from Photography Blog. There's no better noise handling than a regular APS-C cam, nor is there better color integrity.

You guys do realize the SD15 is nearly prehistoric as far as sensor age goes, and is a 1.7x crop (smaller than the APS-C we're all used to in Canon land). The new SD1 is a much better sensor (still not great), but let's compare apples to apples.
 
Osiris30 said:
http://en.wikipedia.org/wiki/Foveon_X3_sensor
The Foveon actually *doesn't* have red, green, or blue layers. It has layers that are sensitive to white, yellow, and red, in that order from top to bottom. Blue and green are determined by math (white - yellow = blue, yellow - red = green). Of course it's not quite that simple, because you have to do a lot of math as the light becomes less energetic the further down the sensel it gets, so it ends up being horribly ugly math like G = Y - R*1.3243, and so on...
This is wrong; do you have actual information that counters the link you quoted?
Case in point: white includes yellow, and blue, and red, and green. Take away the "white" and you're left with nothing for the other lower layers to detect!

I believe (and I could be wrong) that the Foveon X3's issues were a result of the variability of the thickness of the silicon layers. It is this depth that determines the wavelength cutoff.



This proposed Canon sensor will be inherently better because it won't throw away 67% of the incoming light.
I'm sure folks would appreciate an exposure time shorter by 1.5 stops, for free (all else equal).

This won’t be any better for video unless the data buses are clocked off faster to cater for the greater number of sub-pixels, or there is on-die pixel binning.

Anyway, all this is great news for photography. A microlensed, 10MP, true-RGB-per-pixel imager would suit me fine.
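For the numerically inclined, the 1.5 stops figure is just the stop equivalent of a threefold light gain, taking the "discards two-thirds of the light" premise at face value:

```python
import math

# Back-of-envelope arithmetic: if a full-RGB photosite collects ~3x the
# light of an equal-sized Bayer photosite (nothing lost to the colour
# filter), the gain expressed in stops is:
light_gain = 3.0
stops = math.log2(light_gain)
print(f"{stops:.2f} stops")   # ~1.58, i.e. roughly the 1.5 stops claimed above
```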
 
Osiris30 said:
kubelik said:
High ISO on the SD15 frankly looks terrible, based on the images from Photography Blog. There's no better noise handling than a regular APS-C cam, nor is there better color integrity.

You guys do realize the SD15 is nearly prehistoric as far as sensor age goes, and is a 1.7x crop (smaller than the APS-C we're all used to in Canon land). The new SD1 is a much better sensor (still not great), but let's compare apples to apples.

Well, I was comparing it to Canon APS-C sensors, not FF, if you read my post. Also, I dare say even an XSi's sensor doesn't look a whole lot worse at ISO 1600, if you really want to compare old apples to old apples.

Canon's definitely got a much better chance than Sigma at pulling off this sort of new technology. I agree that it's hard to tell how much of this is limitations in the technology itself and how much is limitations in Sigma's budget.
 
smeggy said:
Osiris30 said:
http://en.wikipedia.org/wiki/Foveon_X3_sensor
The Foveon actually *doesn't* have red, green, or blue layers. It has layers that are sensitive to white, yellow, and red, in that order from top to bottom. Blue and green are determined by math (white - yellow = blue, yellow - red = green). Of course it's not quite that simple, because you have to do a lot of math as the light becomes less energetic the further down the sensel it gets, so it ends up being horribly ugly math like G = Y - R*1.3243, and so on...
This is wrong; do you have actual information that counters the link you quoted?
Case in point: white includes yellow, and blue, and red, and green. Take away the "white" and you're left with nothing for the other lower layers to detect!

I believe (and I could be wrong) that the Foveon X3's issues were a result of the variability of the thickness of the silicon layers. It is this depth that determines the wavelength cutoff.



This proposed Canon sensor will be inherently better because it won't throw away 67% of the incoming light.
I'm sure folks would appreciate an exposure time shorter by 1.5 stops, for free (all else equal).

This won’t be any better for video unless the data buses are clocked off faster to cater for the greater number of sub-pixels, or there is on-die pixel binning.

Anyway, all this is great news for photography. A microlensed, 10MP, true-RGB-per-pixel imager would suit me fine.

Here you go, one pretty picture to show what actually happens:

http://en.wikipedia.org/wiki/File:Foveon_rgb.png

If you go *read* the Foveon white papers (especially the pre-Sigma ones, from when they were shopping the tech around), you'll see that the sensors don't output RGB at all. RGB is mathed out of the Foveon sensel output.

(edit) Oh, and re: your response to someone else's 67% comment. There won't be any change in exposure times; it will just be a change in the gain applied to the chip if you gather more light. In theory you could net more DR.
 
kubelik said:
Osiris30 said:
kubelik said:
High ISO on the SD15 frankly looks terrible, based on the images from Photography Blog. There's no better noise handling than a regular APS-C cam, nor is there better color integrity.

You guys do realize the SD15 is nearly prehistoric as far as sensor age goes, and is a 1.7x crop (smaller than the APS-C we're all used to in Canon land). The new SD1 is a much better sensor (still not great), but let's compare apples to apples.

Well, I was comparing it to Canon APS-C sensors, not FF, if you read my post. Also, I dare say even an XSi's sensor doesn't look a whole lot worse at ISO 1600, if you really want to compare old apples to old apples.

Canon's definitely got a much better chance than Sigma at pulling off this sort of new technology. I agree that it's hard to tell how much of this is limitations in the technology itself and how much is limitations in Sigma's budget.

Sorry I picked your post to reply to, but I could have picked any of a myriad of others. It wasn't directed at you specifically, but at the discussion as a whole, where people are comparing and contrasting output. 'Only goes to ISO 6400, my 5D Mk II does 25,600.' Comments like that abound in this thread.

I'm not saying I support the Foveon concept, or that it doesn't have flaws, but a new take on it (like Canon's) might prove interesting.
 
Osiris30 said:
Here you go, one pretty picture to show what actually happens:

http://en.wikipedia.org/wiki/File:Foveon_rgb.png
Blue, then Green, then Red - just like I said. Thank you for again proving my point.

I don't care what the subsequent processing does. I'm much more interested in the source transducer, as this is where the magic (light to electron/voltage conversion) happens. That conversion is done for the R, G, and B wavelengths at each pixel (for this sensor).

Practically all colour video camera processing is done in YUV (saving bandwidth), but their sensors still convert the R & G & B wavelengths at the pixel photo sites.
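For anyone unfamiliar with YUV, this is roughly what that conversion looks like - a small Python sketch using the standard BT.601 coefficients; nothing here is specific to any particular camera:

```python
def rgb_to_yuv(r, g, b):
    # Standard BT.601 RGB -> YUV conversion (inputs in the 0..1 range).
    # Y carries the luminance; U and V carry the colour-difference
    # signals, which can be subsampled to save bandwidth.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

print(rgb_to_yuv(1.0, 0.5, 0.25))
```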

I return to my simple logic: what do your yellow and red layers 'see' if the top layer extracts the "white"?

Osiris30 said:
(edit) Oh, and re: your response to someone else's 67% comment. There won't be any change in exposure times; it will just be a change in the gain applied to the chip if you gather more light. In theory you could net more DR.
If you read what I actually said "(all else equal)", you will realise that my wording was spot on. To get the same image spatial detail, you need only one third of the pixel resolution; therefore each resulting pixel can be three times larger to achieve the same detail; therefore three times the light gathering capability for the same spatial detail.

Or from an alternative angle: you have the same exposure time for three times the detail; or you could reduce image noise by 1.5 stops.

All of this makes perfect sense when you realise the simple fact that all current Bayer-based colour image sensors discard (i.e. waste) two-thirds of the incoming light. Do you dispute this?

Like I also said: 10MP with true RGB per pixel is enough for me (the image details would be on par with my 21MP 5DII).
 
smeggy said:
Osiris30 said:
Here you go, one pretty picture to show what actually happens:

http://en.wikipedia.org/wiki/File:Foveon_rgb.png
Blue, then Green, then Red - just like I said. Thank you for again proving my point.

Your point is incorrect. The first layer is sensitive to all visible light. The blue just doesn't travel any deeper. Seriously go read some Foveon white papers. There's no secret sauce, no color filter, between layers. You must allow all light to enter the sensel in order to record more than one wavelength. This light can get picked off in the wrong layers, and a sizable portion does. That's why blue isn't 'blue' in the sensel.

I don't care what the subsequent processing does. I'm much more interested in the source transducer, as this is where the magic (light to electron/voltage conversion) happens. That conversion is done for the R, G, and B wavelengths at each pixel (for this sensor).

Practically all colour video camera processing is done in YUV (saving bandwidth), but their sensors still convert the R & G & B wavelengths at the pixel photo sites.

I return to my simple logic: what do your yellow and red layers 'see' if the top layer extracts the "white"?

I didn't say it extracts the white. I said it was sensitive to it. Big difference between the two. And that's why the Foveon concept is difficult from a color perspective. There is a lot of color bleed between the layers. Certain red and green wavelength photons get picked off in the 'blue' layer, so you end up doing really nasty math to *try* and compensate for it. It's the exact reason you sometimes get those greenish casts in skin tones with a Foveon.
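To give a feel for the kind of compensation I mean, here's a toy Python sketch - the bleed numbers are invented, not measured Foveon data. You model the crosstalk as a mixing matrix and invert it; any noise, or any error in the assumed bleed, gets amplified along with the signal:

```python
import numpy as np

# bleed[i][j] = fraction of true colour j (R, G, B) that ends up in
# layer i (top, middle, bottom). Invented numbers, for illustration only.
bleed = np.array([
    [0.25, 0.35, 0.80],   # top layer: some R, lots of G, most of the B
    [0.30, 0.50, 0.15],   # middle layer
    [0.45, 0.15, 0.05],   # bottom layer
])

true_rgb = np.array([0.6, 0.4, 0.2])                        # scene colour at one sensel
measured = bleed @ true_rgb + np.random.normal(0, 0.01, 3)  # layer signals plus read noise

recovered = np.linalg.solve(bleed, measured)                # invert the mixing
print(true_rgb, recovered)

# The inversion works on average, but the noise (and any mismatch in the
# assumed bleed matrix, which in reality shifts with wavelength) is
# amplified along with the signal - hence the occasional colour weirdness.
```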

Osiris30 said:
(edit) Oh, and re: your response to someone else's 67% comment. There won't be any change in exposure times; it will just be a change in the gain applied to the chip if you gather more light. In theory you could net more DR.
If you read what I actually said "(all else equal)", you will realise that my wording was spot on. To get the same image spatial detail, you need only one third of the pixel resolution; therefore each resulting pixel can be three times larger to achieve the same detail; therefore three times the light gathering capability for the same spatial detail.


I wasn't disagreeing with you. I was adding to what you said for the point of clarity.

Or from an alternative angle: you have the same exposure time for three times the detail; or you could reduce image noise by 1.5 stops.

All of this makes perfect sense when you realise the simple fact that all current Bayer-based colour image sensors discard (i.e. waste) two-thirds of the incoming light. Do you dispute this?

No I don't, nor did I. I merely stated that exposure times wouldn't change, only applied gain. Not quite sure where the chip on your shoulder came from, but you're picking arguments that don't even exist.

Like I also said: 10MP with true RGB per pixel is enough for me (the image details would be on par with my 21MP 5DII).

Possibly. It would depend on how you use your Bayer sensor, and what wavelengths are dominant in your images. The nature of current Foveon sensors is such that the blue output is really pretty murky. Some of what Canon is trying to do is deal with some of those issues.
 
Osiris30 said:
There is a lot of color bleed between the layers.

FWIW, there's a lot of color bleed with the Bayer mask on current Canon dSLRs, too. For example, if you illuminate the sensor with red light, both the red and green 'channels' are activated (the green channel slightly more than the red channel, in fact). The RAW conversion engine has to sort all that mixing during the demosaicing process. See the DxOMark article on this issue.
 
Osiris30 said:
The blue just doesn't travel any deeper.
This is true, but it doesn't mean the top layer is comparatively as sensitive to red and green wavelength photons as it is to blue; hence your statement about it being sensitive to 'white' is very misleading.

Osiris30 said:
Seriously go read some Foveon white papers.
I already did. I will quote from them so that you (and the reader) can read the relevant passages:

SIGMA_WHITE_PAPER_SD14:
As a result, light in the blue wavelengths, which have the highest energy, tend to be absorbed by the silicon very quickly, generating image-forming electrons in the top layer. Light in the lower-energy red wavelengths tends to penetrate further, to the bottom layer, before generating electrons, and intermediate-energy green light tends to produce electrons in the middle layer.

Color_Alias_White_Paper_FinalHiRes:
Foveon X3 sensors take advantage of the natural light absorbing characteristics of silicon. Light of different wavelengths penetrating the silicon is absorbed at different depths -- high energy (blue) photons are absorbed near the surface, medium energy (green) photons in the middle, and low energy (red) photons are absorbed deeper in the material.
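That depth dependence is ordinary Beer-Lambert absorption in silicon. A quick sketch with ballpark absorption lengths (approximate figures for illustration, not Foveon's data) shows why the top, middle and bottom layers end up weighted toward blue, green and red - and also how much overlap there is between them:

```python
import math

# Beer-Lambert: the fraction of light surviving to depth z is exp(-z / L),
# where L is the absorption length in silicon. These lengths and layer
# boundaries are rough, illustrative values, not Foveon's design data.
absorption_length_um = {"blue (450nm)": 0.4, "green (550nm)": 1.5, "red (650nm)": 3.5}
boundaries_um = [0.0, 0.4, 1.8, 5.0]   # hypothetical top / middle / bottom layer extents

for colour, L in absorption_length_um.items():
    shares = [math.exp(-top / L) - math.exp(-bottom / L)
              for top, bottom in zip(boundaries_um, boundaries_um[1:])]
    print(colour, [f"{s:.2f}" for s in shares])   # fraction absorbed in each layer

# Blue deposits most of its energy in the top layer and red mostly in the
# deeper ones, but the overlap between layers is substantial - which is the
# 'bleed' being argued about in this thread.
```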

And from the horse's mouth:
http://www.foveon.com/article.php?a=69
The bottom layer records red, the middle layer records green, and the top layer records blue.

And to polish off:
http://www.foveon.com/files/X3_Illustration.jpg

Osiris30 said:
There's no secret sauce, no color filter, between layers.
I didn't say there was any filter. I did say the depth of the silicon layers determines their spectral sensitivity.

Osiris30 said:
You must allow all light to enter the sensel in order to record more than one wavelength. This light can get picked off in the wrong layers, and a sizable portion does. That's why blue isn't 'blue' in the sensel.
Yes, there is colour bleed, but I put it to you that the top blue layer is considerably more sensitive to blue wavelengths than red ones – do you agree? If not, can you quote white papers supporting your position?

Osiris30 said:
I didn't say it extracts the white. I said it was sensitive to it.
It amounts to the same thing (both being wrong).
Why: because it is not possible to measure photon counts without them being absorbed. Does silicon non-invasively measure light intensity?

All the white papers match my claim. Thus far you have not shown any link or paper that supports yours, despite my request.
There is no point continuing with you unless you post something of substance. Until then, I think it best that we leave the reader to ponder the direct evidence and explanations that have been given, so that they can draw their own conclusions.
 
neuroanatomist said:
Osiris30 said:
There is a lot of color bleed between the layers.

FWIW, there's a lot of color bleed with the Bayer mask on current Canon dSLRs, too. For example, if you illuminate the sensor with red light, both the red and green 'channels' are activated (the green channel slightly more than the red channel, in fact). The RAW conversion engine has to sort all that mixing during the demosaicing process. See the DxOMark article on this issue.

Oh, agreed, and it's going to get worse... but it's a different, more consistent sort of thing with the Bayer process. What's really funny is that Bayer is closer to how our eyes *actually* work than Foveon. The holy grail (in terms of replicating our eyes) would be not needing a CFA and having the sensels 'naturally' sensitive to specific spectra, but manufacturing that would be a nightmare.
 
smeggy said:
Osiris30 said:
The blue just doesn't travel any deeper.
This is true, but it doesn't mean the top layer is comparatively as sensitive to red and green wavelength photons as it is to blue; hence your statement about it being sensitive to 'white' is very misleading.

In the interest of helping those following this thread have an accurate understanding of the Foveon concept, I'll grant you that the use of 'white' on my part is not the best term to describe the spectrum captured at that layer. It was lazy. However, it's not a blue-only sensing 'site' (ugh, we need a whole new lexicon for Foveon discussions). Unlike with a CFA-based filter, there is nothing to *stop* other wavelengths being sensed at that 'location/depth', and weirdness ensues.

Osiris30 said:
Seriously go read some Foveon white papers.
I already did. I will quote from them so that you (and the reader) can read the relevant passages:

SIGMA_WHITE_PAPER_SD14:
As a result, light in the blue wavelengths, which have the highest energy, tend to be absorbed by the silicon very quickly, generating image-forming electrons in the top layer. Light in the lower-energy red wavelengths tends to penetrate further, to the bottom layer, before generating electrons, and intermediate-energy green light tends to produce electrons in the middle layer.

Color_Alias_White_Paper_FinalHiRes:
Foveon X3 sensors take advantage of the natural light absorbing characteristics of silicon. Light of different wavelengths penetrating the silicon is absorbed at different depths -- high energy (blue) photons are absorbed near the surface, medium energy (green) photons in the middle, and low energy (red) photons are absorbed deeper in the material.

I'll agree with what's written there for obvious reasons. However, both of those are still really generalized. Have you (honest question, not an attack) looked at the response curve for a Foveon sensel? I used to have a link to a spectral response graph... it was ugly.

And from the horse's mouth:
http://www.foveon.com/article.php?a=69
The bottom layer records red, the middle layer records green, and the top layer records blue.

And to polish off:
http://www.foveon.com/files/X3_Illustration.jpg

Both of those are for marketing purposes, and dumbed down, which is why I said white papers. Foveon isn't incorrect (I suppose) in saying the top layer records blue, but it doesn't do so by only capturing blue photons. A lot of work is done to get just the blue signal (or an approximation thereof). Foveons have worse color accuracy than Bayer sensors if you look at the response charts and gamut.

Osiris30 said:
There's no secret sauce, no color filter, between layers.
I didn't say there was any filter. I did say the depth of the silicon layers determines their spectral sensitivity.

Osiris30 said:
You must allow all light to enter the sensel in order to record more than one wavelength. This light can get picked off in the wrong layers, and a sizable portion does. That's why blue isn't 'blue' in the sensel.
Yes, there is colour bleed, but I put it to you that the top blue layer is considerably more sensitive to blue wavelengths than red ones – do you agree? If not, can you quote white papers supporting your position?

I will find a paper this weekend that details what the percentages are. From memory, more than half the red light is lost in the other two layers. That is significant. Obviously the 'green' layer is a bigger mess than the 'blue' layer, but there is significant 'green' contamination in the 'blue' layer. It may take me a while because, frankly, I bookmarked it so long ago I think it was three PCs back :)

Osiris30 said:
I didn't say it extracts the white. I said it was sensitive to it.
It amounts to the same thing (both being wrong).
Why: because it is not possible to measure photon counts without them being absorbed. Does silicon non-invasively measure light intensity?

All the white papers match my claim. Thus far you have not shown any link or paper that supports yours, despite my request.
There is no point continuing with you unless you post something of substance. Until then, I think it best that we leave the reader to ponder the direct evidence and explanations that have been given, so that they can draw their own conclusions.

As I said above, I will provide you a link to scientific documentation. However, in the interest of discussing this, I would appreciate it if you didn't insinuate that I'm putting forth concepts I haven't (such as non-invasive conversion of photons to electrons, as that is pretty much impossible... although it might be possible in some extremely weird quantum cases I would rather not ponder right now).

Furthermore, I maintain the statement isn't wrong, as there is nothing stopping the top 'layer' from absorbing too much bleed from the other layers. There's a pretty heavy amount of 'bleed', if we want to use that term, between the layers... and there has to be, by the nature of it.
 
Osiris30 said:
In the interest of helping those following this thread have an accurate understanding of the Foveon concept, I'll grant you that the use of 'white' on my part is not the best term to describe the spectrum captured at that layer. It was lazy. However, it's not a blue-only sensing 'site'

...

Foveon isn't incorrect (I suppose) in saying the top layer records blue, but it doesn't do so by only capturing blue photons.

...

Furthermore, I maintain the statement isn't wrong, as there is nothing stopping the top 'layer' from absorbing too much bleed from the other layers. There's a pretty heavy amount of 'bleed', if we want to use that term, between the layers... and there has to be, by the nature of it.
As has been said already: this and other colour systems (Bayer) have an amount of bleed. The fact is that the top layer is intended to capture the blue, and it generally does. Sure, it's not a perfect blue cutoff, but what is? Our eyes aren't great either!

Osiris30 said:
...which is why I said white papers.

...

I'll agree with what's written there for obvious reasons.
White papers demanded, given and seemingly accepted. That job's done?

You gotta admit: the wiki link you posted earlier during our little skirmish perfectly matched what was said in the white papers - R & G & B; nothing said about white, yellow and red. Therefore, your earlier claim that my "point is incorrect" was itself incorrect. Hence you might want to be more careful with what proof you post, too.

Osiris30 said:
Have you (honest question, not an attack) looked at the response curve for a Foveon sensel? I used to have a link to a spectral response graph... it was ugly.
Yup, and before I posted my previous response, too.
I grant the colour bands for the layered imager are not as well defined as those for a filtered Bayer system, but as has been said: that one bleeds too.
Either way, "white" implies luminosity (a la Y[UV]), which is not the same as 'blue with some green and a bit of red'.
If this is the root cause of our difference (and I suspect it is), then I think this issue can now be closed.

Osiris30 said:
However, in the interest of discussing this, I would appreciate it if you didn't insinuate that I'm putting forth concepts I haven't (such as non-invasive conversion of photons to electrons, as that is pretty much impossible...).
I don't see how else you can reconcile how a silicon photosite can be 'sensitive' to a certain range of wavelengths without extracting those photons.
 
smeggy said:
As has been said already: this and other colour systems (Bayer) have an amount of bleed.

Foveon color-bleeding is much worse than Bayer.
Bayer sensors use optical color filters, which are much, much better than silicon at filtering 'undesired' wavelength bands.

In a Bayer sensor there's color bleeding because of 'cross-talk'.
Cross-talk is when light passing through the color filter of one pixel excites electrons in neighboring pixels.
This kind of bleeding is perfectly correctable, and there are different techniques that do it.

In contrast, the color bleeding in a Foveon sensor is uncorrectable.
Like I said, silicon is much worse at filtering light of certain wavelengths compared to an optical filter.

(It's another matter entirely that manufacturers use wider bands for the color filters in order to improve sensitivity - which, of course, results in poor color separation.)

You gotta admit: the wiki link you posted earlier during our little skirmish perfectly matched what was said in the white papers - R & G & B; nothing said about white, yellow and red.

Osiris is actually right about this one.
What you see on the Foveon web site is a logical diagram of how light is filtered.
The diagram certainly does not represent how the filtering is performed in practice.

As Osiris said, all of the layers in a Foveon sensor are equally sensitive to all wavelengths.
But based on the depth of a layer, only light of a certain wavelength band is supposed to be absorbed by this particular layer.
In practice, though, the absorption is far less than ideal, so color separation is (much) worse compared to using an optical filter.
 
Hello,

did anybody find the original patent? [...]

Best regards

TK

Edit:
Sorry, freepatentsonline did not find the Japanese patents that have not yet been translated. The patent exists, and a machine-translated version is available through the website of the Japanese Industrial Property Digital Library at http://www.ipdl.inpit.go.jp/homepg_e.ipdl . How to use the search service there is described at http://www.jpaa.or.jp/english/patent/how_to_search.html .
As far as I can understand the claims, the patent describes a back-side illuminated CMOS sensor with stacked detectors to detect different wavelengths. It is a kind of back-side illuminated Foveon sensor for which - quoting the machine translation of the claim - 'concerning this invention, it is possible to improve the color separation characteristic.'
 
x-vision said:
Foveon color-bleeding is much worse than Bayer.
I have never disputed this. However, my issue is with the claimed “white”.

x-vision said:
You gotta admit: the wiki link you posted earlier during our little skirmish perfectly matched what was said in the white papers - R & G & B; nothing said about white, yellow and red.

Osiris is actually right about this one.
What you see on the Foveon web site is a logical diagram of how light is filtered.
I have cause to disagree.
By that reasoning, the white papers Osiris referred to showed the 'logical' diagrams too, yet he accepted these as R & G & B. And the wiki link he gave to support his argument showed exactly the same (R & G & B).

x-vision said:
As Osiris said, all of the layers in a Foveon sensor are equally sensitive to all wavelengths.
I am looking at the spectral sensitivity of the top layer of an early X3 – it is very much blue-weighted; it certainly isn’t what anyone could call “equally sensitive to all wavelengths”. I have hosted the response of the top layer, from the Foveon Inc. document:

http://i231.photobucket.com/albums/ee266/smeggyhead/X3top.png

If all the layers really are as equally sensitive as each other (spectrum-wise), then I would say Foveon missed a trick, as we know wavelength absorption depends on the thickness of the silicon layers – it makes perfect sense to make the top layer (for blue) thinner than the next (for green), which in turn would be much thinner than the next (for red).
So if the X3 design really did use layers of equal depth (one of the wiki links given earlier in this thread weakly indicates otherwise), then Canon has considerable scope for improvement - which is good news!
 