More Sensor Technology Talk [CR1]

arco iris said:
Nothing indicates that Canon has a new sensor ready.
I do not understand all the rumors that are flourishing.
The idea that a Foveon-like sensor is about to launch is a joke; what is the probability that Canon could do anything better than the five major sensor manufacturers? Sony alone has over 50% of the entire worldwide sensor market.

Someone stated that color accuracy would be Foveon's strength; this is completely untrue.
It takes a lot of processing power to get the colors tuned on a Foveon-based sensor; many articles have been written about this topic and its problems.
The desire for a new sensor from Canon is larger than Canon's current ability to manufacture one.
But we can all dream.

And we have lots more processing power available to the average user, even in camera, to say nothing of the systems available for design simulations. I'm not saying that's automatically a solution, just pointing out that we have a LOT more CPU horsepower now, and that a properly designed camera chip can implement a few specialist bits as ASICs.
 

jrista
dilbert said:
jrista said:
...
A sharper lens used with the 6D, with the raw files demosaiced in something like Lightroom, will produce superior sharpness compared to the Foveon (even the 15MP Foveon).

People who have actually used 20+ MP APS-C cameras and the Sigma DP2M would disagree with you and say that the Sigma is the sharper camera/lens combination, so it goes without saying that the Sigma DP2M (15 MP) should also be considered sharper than the 6D.

"so it goes without saying"? LOL, love that. ::)

Oh, and...prove it! (I have already proven the opposite with visual examples on multiple occasions, so the burden of proof is on you this time.)
 

jrista
dilbert said:
jrista said:
...
Software is a difficult thing to discuss. The biggest reason why is: which software? There are countless algorithms for reducing noise: basic averaging/blurring algorithms, wavelet algorithms, deconvolution algorithms, etc. Some denoising tools are more complex, and thus more difficult to use effectively, but when used effectively they can produce significantly better results. Some denoising tools are extremely simple, but don't produce results that are as good.
...

How is it that in your treatise on software and noise reduction you didn't include the method used by Canon cameras?

Because Canon's in-camera NR (excluding LENR), which only applies to JPEG, really sucks. It blurs the crap out of the data, so IMO it is not a viable option. It isn't particularly advanced, either, given the limited horsepower of Canon's DIGIC processors, so it can't be much more than a tweaked averaging algorithm anyway.
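To make the difference concrete, here's a minimal toy sketch (my own illustration, assuming NumPy/SciPy; nothing to do with Canon's actual pipeline) contrasting a cheap averaging blur, roughly the class of algorithm a low-power in-camera pipeline can afford, with an edge-preserving median filter:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Toy "image": a sharp vertical edge plus Gaussian noise.
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0
noisy = clean + rng.normal(0.0, 0.2, clean.shape)

# Cheap averaging NR: suppresses noise but smears the edge.
box = ndimage.uniform_filter(noisy, size=5)

# Edge-preserving NR (median): costlier, keeps the edge crisp.
med = ndimage.median_filter(noisy, size=5)

def edge_steepness(img):
    # Mean step between the columns flanking the edge:
    # ~1.0 means the edge survived, lower means it was blurred away.
    return float(img[:, 33].mean() - img[:, 30].mean())

for name, img in [("noisy", noisy), ("box", box), ("median", med)]:
    print(f"{name:6s} edge steepness: {edge_steepness(img):.2f}")
```

Both suppress the random noise; only the cheap one also destroys the detail.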
 
neuroanatomist
dilbert said:
How is it that in your treatise on software and noise reduction you didn't include the method used by Canon cameras?

Probably because he doesn't shoot JPG, which is the only situation where the method used by Canon cameras is relevant, and if you're shooting SOOC JPG images, then achieving the best IQ of which your camera is capable is certainly not your priority.
 
x-vision
Canon Rumors said:
We’re told by a few other people that Canon is working on a “foveon like” sensor for their next generation of full frame cameras.

Any sensor expert can testify that Foveon is an impractical technology.

At the same time, Canon has its own 'dual pixel' tech, which has a lot of potential and room to improve.
So why would Canon go chasing Foveon when they already have a new, promising technology in-house?

This Foveon rumor is totally fake.
 

jrista
x-vision said:
Canon Rumors said:
We’re told by a few other people that Canon is working on a “foveon like” sensor for their next generation of full frame cameras.

Any sensor expert can testify that Foveon is an impractical technology.

Why would you say Foveon-type sensors are impractical? Sigma doesn't even rank within throwing distance of the radar, let alone on the radar, when it comes to sensor design and manufacture. Foveon's struggles have far less to do with the technology being "impractical" and more to do with the fact that it's Sigma, a company that doesn't have the resources to bring Foveon to bear and make it as competitive as it has the potential to be.

Fundamentally speaking, gathering full color information at each and every pixel is a superior means of imaging with digital image sensors. It's a more complicated sensor design, requiring advanced techniques and the proper silicon compounds in both the substrate and the photodiodes to allow enough light to penetrate deeply enough to work. Given that you end up with three times as many photodiodes to read for any given pixel count, you need a faster readout system, and that generally requires higher-frequency components to support a reasonable frame rate. Sigma just hasn't demonstrated that they have the R&D budget, the technological resources, or the prowess to develop the technology that would allow them to build a truly high-resolution Foveon-style sensor with a high-speed readout that doesn't junk the signal with a ton of read noise.
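Some rough arithmetic (illustrative numbers only, not any real sensor's specs) on why the readout burden triples:

```python
# Back-of-the-envelope readout load for a layered sensor.
pixels = 15_000_000    # e.g. a 15 MP output image
layers = 3             # one photodiode per color layer
fps = 5                # a modest burst rate

bayer_reads = pixels * fps             # one photodiode per pixel
layered_reads = pixels * layers * fps  # three photodiodes per pixel

print(f"Bayer:   {bayer_reads / 1e6:.0f} M photodiode reads/s")
print(f"Layered: {layered_reads / 1e6:.0f} M photodiode reads/s")  # 3x
```

Triple the reads per frame means triple the readout bandwidth at the same frame rate, or the same bandwidth at a third of the frame rate.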

Canon, on the other hand...they very possibly DO have the resources to make a Foveon-style sensor that is both high resolution and a truly viable alternative to Bayer. I would actually bet on Sony as the most likely to have the resources to do it, but Sony hasn't shown any interest, whereas Canon actually has patents for the technology.

At the same time, Canon has its own 'dual pixel' tech, which has a lot of potential and room to improve.
So why would Canon go chasing Foveon when they already have a new, promising technology in-house?

Because DP tech has nothing to do with base image quality. DP adds an ALTERNATIVE option for performing autofocus, one that has not yet proven better than the tried-and-true approach of a dedicated AF unit and a mirror. A Foveon-style sensor design, on the other hand, if done right, with new techniques that increase parallelism and readout rate without increasing read noise, improve pixel structure, and increase pixel count, has the potential to radically improve image quality.

So...the real question is, why wouldn't Canon pursue the technology?
 

Don Haines
dilbert said:
neuroanatomist said:
dilbert said:
How is it that in your treatise on software and noise reduction you didn't include the method used by Canon cameras?

Probably because he doesn't shoot JPG, which is the only situation where the method used by Canon cameras is relevant, and if you're shooting SOOC JPG images, then achieving the best IQ of which your camera is capable is certainly not your priority.

I didn't know that dark frame subtraction was limited to JPEG.
I thought (assumed) that dark frame subtraction for JPEGs was done by taking a raw image, subtracting a raw dark frame, and then creating the JPEG. Wouldn't it be harder to do this by subtracting JPGs?
 
neuroanatomist
dilbert said:
neuroanatomist said:
dilbert said:
How is it that in your treatise on software and noise reduction you didn't include the method used by Canon cameras?

Probably because he doesn't shoot JPG, which is the only situation where the method used by Canon cameras is relevant, and if you're shooting SOOC JPG images, then achieving the best IQ of which your camera is capable is certainly not your priority.
I didn't know that dark frame subtraction was limited to JPEG.

Canon's in-camera LENR doesn't really remove noise (defined as such) from images; in many cases it actually adds noise. It's effective at removing hot/stuck pixels, but that's about it.
 

jrista
x-vision said:
jrista said:
Because DP tech has nothing to do with base image quality.

It may not be obvious, but the dual-pixel tech has a lot to do with image quality.

First of all, let's clarify that the dual-pixel tech is actually a quad-pixel tech.
Canon is not advertising it, but Chipworks has a photo of the sensor die which shows a quad-pixel design.

I've been through every free Chipworks article they have ever published. They do not have any images of Canon's 70D sensor that show anything like a quad-pixel design. Their full teardown of the 70D sensor costs $16,000, so I highly doubt anyone on these forums has seen those documents, but I have little doubt they show a dual-pixel design, not a quad-pixel one.

I've also read Canon's actual patents on the technology, which show two separate photodiodes, not four. Even their revised patent, which includes high- and low-sensitivity halves, is still just HALVES, so there are only two photodiodes.

So sorry, but I'm calling bulls*it on this one, :p unless you can furnish an actual image of the actual sensor die itself that proves otherwise (and believe me, I spent a lot of time looking for that image after the 70D was released). I've actually found real sensor images for prototype DPAF sensors from Omnivision and Panasonic, which basically copy Canon's design...however, again, they clearly show two photodiodes per pixel, not four. (It also means Canon won't be the only company with DPAF technology in the relatively near future...so their potential advantage in this area will probably disappear. Omnivision's patents put DPAF in 100% of the sensor's pixels, vs. Canon's 80%.)

The term "dual PIXEL" is also very misleading terminology. A pixel is a single image element; in Canon's designs, each single pixel has a split photodiode. The photodiode sits below the microlens and CFA. As such, there is nothing special that can be done to make it magically provide higher resolution or anything like that, and changing the design to actually allow higher resolution would mean giving up the split-photodiode AF function entirely.

x-vision said:
So, what does this have to do with image quality?

The quad-pixel tech will eventually allow Canon to use non-Bayer color filters.
I can easily see them using dichromatic or polychromatic filters for each sub-pixel, and then deriving the overall pixel color from the values of the different sub-pixels.
In principle, this would be the same as Foveon, as each pixel would have full color information (rather than a single R/G/B color as in Bayer).
But it would be better than Foveon, as the implementation complexity and poor color separation of Foveon are avoided.

This is the same old logical fallacy that everyone seems to be making. DPAF is not a magic bullet for higher image resolution. What happens if you turn a green quad-pixel into green, red, blue, and, say, white (luma) sub-pixels? You no longer have a quad pixel!! You have separate, smaller pixels with one quarter of the area...that's all! You probably also lose the ability to do focal-plane autofocus, as having a SINGLE microlens over the split photodiodes is essential to how focal-plane AF works. If you try to keep one microlens over a "quad pixel", then you have problems distributing the right amount of light over each of the RGBW "sub-pixels", which would increase noise.

There is nothing special here about this technology. It has ONE purpose: To support autofocus. That's it. Anything else, and you simply have smaller color pixels.

x-vision said:
Using non-Bayer color filters has the advantage of both better resolution and better light sensitivity, as a Bayer filter throws away two thirds of the incoming light vs. one third for a dichromatic filter, for example.

In addition to allowing the use of non-Bayer color filters, the quad-pixel tech also has the known ability to read sub-pixels at different ISO/amplification.
Magic Lantern has shown that Canon already implements this in the 5DIII - but they are not using it.

Just these two improvements to the dual-pixel tech (quad-pixel, as noted) have the potential to improve sensor performance by maybe ~2 stops of ISO.
That's 4x more sensitivity, which is huge.

Again, this is a lot of wishful thinking. For one, it has NEVER been demonstrated that the pixel halves can be read at different ISO settings. Even if they could, I've debunked the notion that it would improve anything on several occasions...in the end, assuming you read one half at ISO 100 and the other half at ISO 800 or 1600, you have a net-neutral result: you're jacking up the ISO on HALF the amount of light (since it's half the photodiode), which does NOT get you the same result as what Magic Lantern does with current Canon sensors (they use a downstream amplifier, not the per-pixel amplifiers, to achieve their results...plus they use full pixels, not half pixels). I highly doubt that we will ever see Canon's DPAF technology used in a way that improves dynamic range. Not even from Magic Lantern.
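A back-of-the-envelope shot-noise model makes the point (toy numbers of my own, read noise ignored): amplifying half a photodiode harder doesn't create information.

```python
import math

# Shot-noise-limited SNR is sqrt(N) for N collected photons.
# Analog gain scales signal and shot noise together, so raising
# the ISO on one half changes neither half's SNR.
N = 10_000  # photons collected by a full pixel (toy figure)

snr_full = math.sqrt(N)       # full photodiode, single gain
snr_half = math.sqrt(N / 2)   # either half, at ANY ISO setting

print(f"full pixel SNR: {snr_full:.0f}")   # 100
print(f"half pixel SNR: {snr_half:.0f}")   # ~71
print(f"penalty: {math.log2(snr_full / snr_half):.1f} stop(s) of SNR per half")
```

Each half starts half a stop behind on shot noise before any gain trickery even begins.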

Sensitivity has to do with total photodiode (or rather sensor) area and quantum efficiency. Splitting the photodiode does nothing to improve quantum efficiency...Q.E. is a fixed trait of the silicon, how it is doped, and the design of the photodiodes. We're already at about 50% Q.E., so to gain ONE stop of improved ISO, we would need to double that to 100% Q.E. That isn't ever going to happen in a consumer-grade device.

Reducing the amount of light filtered is also a means of increasing the rate at which light fills the sensor; however, again, total sensor area is the real limiting factor here. A reduction in filtering simply means you fill the sensor with charge faster. That lets you use a lower gain, but it also means you could end up clipping your signal for any given exposure length...and the only way to avoid that is to increase the total sensor area (i.e. move from APS-C to FF, which is the only way to really improve dynamic range). Switching to some kind of dichromatic filter still means you're using a filter, and still means you're losing light. You're actually losing about 50% of the light per pixel color, which isn't anywhere close to a two-stop improvement in high ISO (it's actually only about half a stop at best).
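Running the numbers behind that "half a stop" (using the transmission fractions quoted above):

```python
import math

bayer_pass = 1 / 3    # a Bayer CFA passes roughly 1/3 of the light
dichro_pass = 1 / 2   # a dichromatic filter passes roughly 1/2

gain_stops = math.log2(dichro_pass / bayer_pass)
print(f"improvement: {gain_stops:.2f} stops")  # ~0.58 -- about half a stop
```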

The only real improvement on a Bayer sensor design is the use of MCS, or Micro Color Splitting. This concept replaces filtration entirely with light splitting, using a special prism-like component between the microlens and the photodiodes that exploits the diffractive nature of light to channel one part of the light downward and deflect the other part to the neighboring pixels. You end up with White-Red and White+Red light in this kind of sensor. MCS preserves nearly 100% of the light. This does indeed improve low-light sensitivity, but again, the gain is at most one stop, and you still benefit most from it by using a larger sensor. And I doubt we could ever actually achieve 100% Q.E.: the only sensors I know of that achieve higher than 70% Q.E. are Grade 1 CCDs, extremely expensive commercial and scientific grade devices, and generally speaking their Q.E. curve peaks somewhere in the greens or yellows and falls off again to around 50-60% for all other wavelengths.


x-vision said:
So, the dual pixel tech has a lot to do with image quality.

Note that I have no inside info or anything like that.
I'm just making informed guesses about where Canon might take this technology.

You're making wild assumptions, not informed guesses. ;) I spend a considerable amount of time reading actual sensor patents and everything Chipworks and Image Sensors World publish, and while I do not have inside info, I do have a lot of references I can point to that back up my conclusions.

Canon's dual "pixel" technology is not dual pixels. It is dual photodiodes within each pixel, and it is quite LITERALLY "dual", not "quad". Even if they had "quad" photodiodes, that still wouldn't improve image quality. The photodiodes are split below the microlenses and color filters, by necessity (as that is required for the AF functionality to work). All you've really talked about is Canon making the pixels (the whole pixels, microlenses and filters and all) smaller...which really isn't anything special, and it precludes the option of having split photodiodes (which must sit below the microlens and CFA in order to function properly for AF purposes).
 

jrista
dilbert said:
jrista said:
dilbert said:
jrista said:
...
A sharper lens used with the 6D, with the raw files demosaiced in something like Lightroom, will produce superior sharpness compared to the Foveon (even the 15MP Foveon).

People who have actually used 20+ MP APS-C cameras and the Sigma DP2M would disagree with you and say that the Sigma is the sharper camera/lens combination, so it goes without saying that the Sigma DP2M (15 MP) should also be considered sharper than the 6D.

"so it goes without saying"? LOL, love that. ::)

Oh, and...prove it! (I have already proven the opposite with visual examples on multiple occasions, so the burden of proof is on you this time.)

Sigma DP2M review:
http://www.luminous-landscape.com/reviews/cameras/sigma_dp2m_review.shtml

This doesn't prove anything. For one, the reviewer is highly biased. He talks about a resolution advantage for the Foveon over the M9 even though the M9 image CLEARLY suffers from camera-shake blur; despite that, the M9 still has the resolution advantage. The guy is comparing 100% results rather than normalizing the image size. The M9, for example, has about a 30% advantage in spatial resolution, meaning its image should have been downsampled a bit. Once downsampled, any perception of a resolution advantage for the Foveon disappears entirely. (And, of course, if the guy had actually used an appropriately stable tripod, the M9 would have trounced the Foveon hands down, with or without downsampling.)
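For anyone who wants to repeat the comparison fairly, normalizing output size is trivial. A minimal sketch (hypothetical filenames standing in for the review's shots; assumes Pillow):

```python
from PIL import Image

# Hypothetical files standing in for the review's test shots.
m9 = Image.open("m9_shot.tif")        # higher pixel count
foveon = Image.open("dp2m_shot.tif")  # lower pixel count

# Downsample the higher-resolution frame to the other's dimensions,
# so "100%" views compare at the same output magnification.
m9_matched = m9.resize(foveon.size, resample=Image.LANCZOS)
m9_matched.save("m9_matched.tif")
```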

The NEX is in the same boat as the M9...it has a significant true (spatial) resolution advantage over the Foveon. The reviewer was also pretty clear that he used a sucky lens on the NEX, but used it anyway because he was more interested in framing parity (which is naive; you can achieve that by moving the camera). The NEX comparison had what appears to be an intentional handicap vs. the Foveon, not because the sensor isn't as sharp...but because a soft lens was used. Despite that, downsample the NEX image and most of the Foveon's apparent resolution advantage disappears. Use an appropriately sharp lens, and the NEX will best the Foveon both at native size and downsampled.

The Sony RX100? MASSIVELY diffraction limited, as it's only a 1" sensor. The Fuji X100? That sensor has never been one for the sharpest images. It uses shifted microlenses, which help reduce the vignetting that occurs when you place the lens so close to the sensor, but they don't do anything to improve resolution. The X100 is softer than pretty much any Canon camera except the Rebels.

And, finally, that article does NOTHING to prove that the DP2M offers more resolution than the Canon 6D.
 

jrista
dilbert said:
neuroanatomist said:
dilbert said:
How is it that in your treatise on software and noise reduction you didn't include the method used by Canon cameras?

Probably because he doesn't shoot JPG, which is the only situation where the method used by Canon cameras is relevant, and if you're shooting SOOC JPG images, then achieving the best IQ of which your camera is capable is certainly not your priority.

I didn't know that dark frame subtraction was limited to JPEG.

Dark Frame Subtraction (LENR, Long Exposure Noise Reduction, which I explicitly excluded from my response) is not the standard JPEG noise reduction. LENR is user-togglable for either RAW or JPEG, and its sole purpose is to remove hot pixels. However, because of how LENR works, it actually tends to make deep-shadow random noise worse. This is why astrophotographers generally do not use in-camera LENR, and instead take 30-50 "dark frames" which are then averaged together. The averaging greatly reduces the random noise component, to the point where it is practically non-existent, and makes the hot-pixel information more accurate. That master dark frame is then subtracted from each light frame, which again is superior to using in-camera LENR.
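A minimal sketch of that workflow (hypothetical filenames; assumes the frames were already decoded to linear NumPy arrays):

```python
import glob
import numpy as np

# Average 30-50 dark frames: random noise shrinks by ~sqrt(N)
# while the fixed hot-pixel pattern stays put.
dark_files = sorted(glob.glob("dark_*.npy"))
master_dark = np.mean([np.load(f) for f in dark_files], axis=0)

# Subtract the master dark (mostly fixed pattern now) from each light.
for f in sorted(glob.glob("light_*.npy")):
    calibrated = np.load(f) - master_dark
    np.save(f.replace("light", "calibrated"), calibrated)
```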
 
x-vision
jrista said:
I've been through every free Chipworks article they have ever published.

Hmm. Obviously not, because Chipworks has a free partial die photo of the 70D sensor:
https://chipworks.secure.force.com/catalog/ProductDetails?sku=CAN-EOS-70D_Pri-Camera&viewState=DetailView

Take a careful look and consider the geometry of a dual-photodiode pixel:
- you can have two rectangular photodiodes that form a square pixel
- or, you can have two square photodiodes that form a rectangular pixel
- finally, you can have two square photodiodes plus wasted space on the die that form a square pixel

Now, as I said, take a careful look at the partial sensor die and tell me if you see:
a) any rectangular features in this photo
b) any apparently wasted space

A partial die photo is certainly not definitive proof.
It's a very good clue, though, that the 70D sensor is in fact using a quad photodiode design, not a dual one.
Again, just think of the geometry of a dual pixel design and make your own conclusions.

As for the resolution of a non-bayer filter: I should have been more clear.
The 70D sensor is a Bayer sensor, where each pixel has a monochromatic R/G/B color filter.
Thus, each of the four constituent photodiodes of that pixel lies under a single, common monochromatic filter - that happens to throw away 2/3 of the incoming light.

Now, imagine if each of the photodiodes had their own, individual color filters.
You still have a single pixel with a single microlens.
Underneath, however, there are four individual color filters - one for each photodiode.
Here's the thing about the individual color filters: they don't have to be monochromatic R/G/B filters anymore.
Instead, you can use a combination of di/poly-chromatic filters, from which you can derive the overall pixel color.
And instead of deriving a single R/G/B color, as in a Bayer sensor, you derive all three primary colors.

In summary, if you have a single, monochromatic filter for the entire pixel, you can only get one color per pixel (either R, G, or B).
But if you use individual di/poly-chromatic filters for each photodiode, you can derive all three primary colors per pixel (R+B+G).
Plus, you have a more sensitive/efficient pixel, as di/poly-chromatic filters by definition pass more light than a monochromatic filter.

Hopefully I'm able to communicate the point.

Back to the topic of extra resolution:
The increase in resolution comes from the fact that you have all three primary colors per pixel vs. the single color per pixel in a Bayer sensor.
Admittedly, the resolution increase is not all that big - but it's still an increase.
(Sigma/Foveon fans will tell you that it is a significant increase 8) ).

Think about all those things.
You seem to be dismissing the quad-photodiode tech - seemingly without fully realizing its potential.
If you believe that Foveon is better than Bayer, just consider that a quad-photodiode design with individual non-Bayer color filters (one per photodiode) is a better solution than Foveon.

Finally, even if you are still not convinced that the 70D sensor is a quad-photodiode sensor, consider that going from dual-photodiode to quad-photodiode is the next evolutionary step of this design - for all the reasons outlined above.

The simple fact is that a Bayer sensor throws away 2/3 of the incoming light.
And the seemingly low-hanging fruit for improving sensor efficiency is to throw away less light.
Foveon is just one solution to the problem. There will be others soon.

Regards
 

jrista
x-vision said:
jrista said:
I've been through every free Chipworks article they have ever published.

Hmm. Obviously not, because Chipworks has a free partial die photo of the 70D sensor:
https://chipworks.secure.force.com/catalog/ProductDetails?sku=CAN-EOS-70D_Pri-Camera&viewState=DetailView

Take a careful look and consider the geometry of a dual-photodiode pixel:
- you can have two rectangular photodiodes that form a square pixel
- or, you can have two square photodiodes that form a rectangular pixel
- finally, you can have two square photodiodes plus wasted space on the die that form a square pixel

The image you are referring to does not even have any of the dual pixels in it, assuming those are pixels at all (on the contrary, they look like readout pins in a land grid array, which would be on the BOTTOM of the sensor, the opposite side from the actual pixels; and even if they are not readout pins, I would know a CMOS sensor pixel if I saw one...those are not even remotely close to what CMOS pixels look like...they don't even have microlenses or color filters, it's just wiring and bare silicon substrate). That image is from the outer periphery of the sensor die, which is usually riddled with power-regulation transistors and other non-pixel logic. Canon's DPAF pixels only occupy the center 80% of the part of the die that actually contains pixels...so even if the image WERE of pixels (which it is not), they wouldn't be DPAF pixels...they would be standard single-photodiode pixels.


x-vision said:
Now, as I said, take a careful look at the partial sensor die and tell me if you see:
a) any rectangular features in this photo
b) any apparently wasted space

A partial die photo is certainly not definitive proof.

It isn't proof, because you are gravely mistaken about what that photo actually shows. There is even some kind of stamp on top of the electronics in the region of the photo that Chipworks has shared. You don't stamp the actual pixels...and usually such stamps are, again, on the back side or the very outer periphery of the sensor, not the side with the pixels. This photo is either of peripheral logic on the top side of the sensor, or of circuitry or pinning on the bottom side.

x-vision said:
It's a very good clue, though, that the 70D sensor is in fact using a quad photodiode design, not a dual one.
Again, just think of the geometry of a dual pixel design and make your own conclusions.

Again, you're completely misinterpreting what that image is.

x-vision said:
As for the resolution of a non-bayer filter: I should have been more clear.
The 70D sensor is a Bayer sensor, where each pixel has a monochromatic R/G/B color filter.
Thus, each of the four constituent photodiodes of that pixel lies under a single, common monochromatic filter - that happens to throw away 2/3 of the incoming light.

Now, imagine if each of the photodiodes had their own, individual color filters.

I don't need to imagine, as that is exactly what a sensor with split photodiodes WITHOUT DPAF or QPAF would be...each photodiode would have its own color filter...because each photodiode would be a pixel! :p Thus, what you are proposing is the removal of DPAF technology, a factor-of-two reduction in pixel size, and higher resolution. That's it! There really, truly, honestly isn't anything special about giving each smaller photodiode its own filter. That just means you have a sensor with four times as many pixels, which is pretty much what each new generation of sensors gets anyway. (Well, not four times as many, but a pixel-size reduction and an increase in pixel count is a pretty consistent fact of just about every new stills camera release.)

x-vision said:
You still have a single pixel with a single microlens.

If you do this, then you are going to have problems properly distributing light into each photodiode. The entire purpose of the microlens is to guide as much light as possible onto the photodiode. If you try to increase the pixel resolution below the microlens, the problem is that one of those four subpixels gets more light than the rest, as the microlens, just like any other lens, FOCUSES LIGHT. The focal point, where the majority of the light is concentrated, is rarely dead center underneath the microlens (the farther from the center of the sensor you go, the more off-center the focal point will be). So, if you split the color filter and photodiode underneath the microlens, you'll greatly increase noise levels...one of the four subpixels will get most of the light, and the others will get significantly less. Your idea effectively trades noise for resolution.

Your counter might be: well, just use more layers of microlenses, one for each photodiode. If you throw in more layers of microlenses, then you further screw with the AF capability of the subpixels, as you would be mucking with the phase of the light below the initial microlens. Muck with phase, and you can no longer "phase detect" (PD), or at least not as well or as accurately. So again, as I said before, all you are proposing is a factor-of-two reduction in pixel size, or a factor-of-four increase in pixel count. In other words, a standard (non-AF-capable) sensor with higher resolution...and more noise.

x-vision said:
Underneath, however, there are four individual color filters - one for each photodiode.
Here's the thing about the individual color filters: they don't have to be monochromatic R/G/B filters anymore.
Instead, you can use a combination of di/poly-chromatic filters, from which you can derive the overall pixel color.
And instead of deriving a single R/G/B color, as in a Bayer sensor, you derive all three primary colors.

Look up the Micro Color Splitting sensor. Panasonic's design is vastly superior to any kind of di/poly-chromatic filter, because it simply doesn't filter. It splits light, but directs all of it into photodiodes.

x-vision said:
In summary, if you have a single, monochromatic filter for the entire pixel, you can only get one color per pixel (either R, G, or B).
But if you use individual di/poly-chromatic filters for each photodiode, you can derive all three primary colors per pixel (R+B+G).
Plus, you have a more sensitive/efficient pixel, as di/poly-chromatic filters by definition are wasting less light than a monochromatic filter.

And, by definition, MCS wastes zero light. Why invest time, money, and effort in a very complicated pixel design, one that is prone to being much noisier due to improper use of a microlens, when there are proven techniques that eliminate filtration entirely?

x-vision said:
Back to the topic of extra resolution:
The increase in resolution comes from the fact that you have all three primary colors per pixel vs. the single color per pixel in a Bayer sensor.
Admittedly, the resolution increase is not all that big - but it's still an increase.

What you're proposing is a significant increase in resolution. The fact that you don't see even that suggests that you don't understand sensor technology all that well, and that you're just speculating and dreaming. Nothing wrong with dreaming, but you should be aware that's what you're doing. ;) You're DOUBLING resolution in both the horizontal and the vertical by making each photodiode one quarter the size. The D800 clearly has a lot more resolution than the 1D X, and that's only twice the pixel count...you're proposing four times.

You are really just talking about a pixel-size reduction. Again...there isn't anything special here, and because you're proposing that one single microlens be used for multiple pixels, you're going to have an increase in noise for the reasons I described above. That increase in noise would be a severe drag on IQ, so again...you're talking about, at the very best, a net-neutral difference, and at worst WORSE IQ from your sensor design because of the increased noise.

x-vision said:
Think about all those things.
You seem to be dismissing the quad-photodiode tech - seemingly without fully realizing its potential.
If you believe that Foveon is better than Bayer, just consider that a quad-photodiode design with individual non-Bayer color filters (one per photodiode) is a better solution than Foveon.

I fully understand what DUAL-photodiode technology is, how it works, why it's designed the way it is, and I also understand that it isn't some magical technology that will suddenly slingshot Canon ahead of the competition. You are dreaming, pure and simple, that Canon has somehow solved its IQ problems with an AF invention. It's just a dream, though. It's the same dream a lot of Canon users have, because they all want better IQ out of Canon sensors, but it's still just a dream. It's an ill-educated dream, I am sorry to say, and you're misinterpreting a lot of information (such as the Chipworks photo of the OUTER PERIPHERY of the 70D sensor...anyone who knows anything about die fabrication understands that the outer periphery of any CMOS die, be it sensor, CPU, or memory, is the domain of power regulation, control circuitry, wiring, and pin solder points, not core logic, memory cells, or pixels).

Canon does not have quad-pixel technology. If they had already used it in the 70D, they would have received patents for it years ago. I've read all of Canon's photography-related patent releases for the last three years. They have several for DPAF technology, including some new ones since the 70D that have not been implemented anywhere. Their patents, being patents, MUST be extremely precise and explicit about the design (that's what patents are: specific details about specific implementations of a concept). Not one single patent Canon has ever filed for DPAF has detailed quad photodiodes. Nor would Canon have sold themselves short by announcing DUAL pixel technology if in reality they had QUAD pixel technology...if they had QPAF, they would have told the world. It would be big news.

Finally, Canon also already has patents for layered sensor technology that really, truly DOES have the potential to increase image quality. Given some of the things those patents discuss, such as applying what is basically akin to the nano-coating technology they use on some of their lenses to the second and third photodiode layers, Canon has the potential to improve the total amount of light their red and green photodiodes are sensitive to by reducing the chance of reflection at those lower layers, thereby increasing Q.E. Canon's Foveon-like technology has the potential to be superior to Sigma's Foveon technology, and with Canon's R&D budget, they certainly have the power to bring it to market and keep improving it.

If you want to root for Canon, and really want better image quality (which has less to do with photodiode count, and more to do with pixel design quality, quantum efficiency, etc.), then you should look into their layered sensor patents and root for them to actually make a DSLR that uses them. If Canon is indeed using nano-crystal technology to reduce reflection and increase the Q.E. of the photodiodes in their layered sensors, I think they really have something that could outdo Sigma's Foveon, and outdo it enough that Canon could produce a 30 or 40 megapixel layered sensor that has not only the benefit of higher color fidelity, but also higher native, non-Bayer spatial resolution. THAT is where a meaningful increase in IQ for Canon DSLRs will come from...not DPAF.
 

jrista
CarlTN said:
Don't pretend you don't have your own biases, though. You are proud of, and trumpet often, your bias against an entire company, Sigma.

I've never pretended. I'm pretty straight up about what I think of Sigma, and I am not against the entire company. I've said on many occasions that I think their lenses from the last couple of years are excellent, and that I appreciate the competitive force they bring in that arena.

I have NEVER hidden my feelings about how Sigma has handled Foveon. I have been quite open about it. I think they do Foveon, which I believe is technology with a lot of potential, a severe disservice by misleadingly selling it as having some magical power to increase resolution, when it does nothing of the sort. Spatial resolution is determined by pixel size, plain and simple. Foveon's strengths lie in areas other than spatial resolution, and they are good strengths: no color moiré, good sharpness (for the resolutions Foveon sensors come in), and excellent color fidelity.

Sigma wastes far too much time, money, and effort trying to trick potential customers into thinking they will get more resolution with a Foveon than with a Bayer, which is just a blatant, outright lie. I don't appreciate that, and yes, I fault Sigma for it. If Sigma would take a big chunk of their false-advertising budget and inject it into their R&D department instead, I think they could make Foveon viable on both the color fidelity and spatial resolution fronts, and actually have a real competitor on their hands. But sadly, they keep pushing the misleading advertising.

CarlTN said:
Your bias and the need to feel proud of it somehow, is rather juvenile, don't you think?

Bait. Hmm. I'll let another fish bite.

CarlTN said:
Since you are very concerned about having the highest image quality, you should never use an APS-C camera, yet you do, very often. Practice what you preach.

I use an APS-C camera because I haven't had the money to buy a full-frame camera. I spent over ten grand on a lens last year. No one who isn't independently wealthy spends that kind of money and then turns right around and spends thousands more on MORE equipment. I do practice what I preach. As soon as I have the funds, I'll be using a full-frame camera. Until then, my 7D has more reach, thanks to its higher spatial resolution, and that's a fact I greatly appreciate. Oh, and it's a fact I preach, too. ;P
 

jrista
CarlTN said:
Lol, well I know you like the higher spatial resolution...but that only works if the lens is up to the task (at least for "spatial" resolution of the image itself, not for comparing the final or effective resolution of a larger sensor to a smaller, denser one at the same focal length; obviously ultimate image quality is less of a factor in that case). Not many lenses are up to the task. I'm not saying the 600 II can't pull it off; obviously it can. For astro imaging, would you not still need a similar multi-shot NR process, even with a 6D, 5D3, or 1DX sensor? How about the 24MP or 36MP Exmors? Wouldn't the A7R be an interesting option (since it can be adapted for EF lenses)? Or is the shorter flange distance enough to discourage trying that, due to the higher ghosting? I assume in that process you are not using ISO settings above 1000 or so (meaning the Exmors would have a clear advantage).

If you are talking about astrophotography (honestly, I'm not really sure what you're trying to get at here), then the answer would really be "none of the above". I use my 7D for AP only because it's what I have right now. As for the best sensors for AP, one doesn't use a camera built for normal photography. Every normal photography camera "cooks" its images. Even Canon's, though they cook them less than the competitors do, are always modifying the raw signal in some way, more than enough that it can make it difficult to properly calibrate and integrate a stack of images into a low-noise, easily stretched astro image.

Astro CCD imagers tend to be vastly superior to any CMOS sensor from a normal photography camera. They are usually monochrome, so their spatial resolution, particularly for color-filtered frames, is higher despite the fact that they often have slightly larger pixels. They use higher-grade silicon and fabrication processes, and usually have higher Q.E. (55-65% is common for low-end CCDs; 70-96% is what you get from the higher-end ones). They also usually have considerably higher dynamic range. About the best DR for a modern CMOS sensor in a normal photography camera is around 40-43 dB. Even a lower-end astro CCD gets about 55 dB, and midrange and higher-end CCDs get anywhere from 70-105 dB (with sensor DR quoted as 20·log10 of the full-well-to-read-noise ratio, every ~6 dB is one stop). Most of the nice high-end astro CCDs that use the Kodak KAF-16803 (a 4096x4096, roughly 36.7x36.7mm full-frame-transfer CCD with 9µm pixels) or similar variants get between 79 and 91 dB of dynamic range, depending on the grade. FWC is around 100,000e-, read noise is about 9-11e-, and dark current (when fully cooled) is around 0.02e-/s or less. Factoring in read noise, that's roughly 13-15 stops of single-frame dynamic range, and combined with the near-zero dark current of a cooled sensor over long exposures, that beats the D800 and any other Sony Exmor based imager on the market.
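The conversion, using the full-well and read-noise figures above (a quick sanity check anyone can run):

```python
import math

fwc = 100_000     # full well capacity, e-
read_noise = 10   # e- (midpoint of the 9-11 e- quoted above)

ratio = fwc / read_noise
dr_db = 20 * math.log10(ratio)  # sensor DR is specified as 20*log10
dr_stops = math.log2(ratio)     # one stop = a factor of 2 = ~6.02 dB

print(f"{dr_db:.0f} dB = {dr_stops:.1f} stops")  # ~80 dB = ~13.3 stops
```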

When it comes to core technology, a lot of what matters for normal photography doesn't matter a whit for astrophotography. Spatial resolution is an important factor for normal photography, though not the single most important (you should know me well enough by now to know I don't believe in the concept of a single most important feature for IQ :p). For astrophotography, it's a very keen balancing act: getting enough resolution, but not so much that you're dramatically oversampling your subject. A number of factors go into producing a "spot size", the size of a diffraction-limited star at the sensor. When you factor in seeing (atmospheric turbulence), most of the time it's difficult for amateur astrophotographers to find seeing good enough that stars are less than 2-3" (arcseconds) in diameter. For nebulae, galaxies, clusters, basically anything non-planetary, you want your sensor resolution fairly close to your spot size, neither oversampling too much nor undersampling. For the most part, a pixel size around 5-6µm is pretty ideal for this purpose, but most astro CCDs allow pixel binning, so you can make your effective pixels larger as necessary when adding barlows or focal reducers, to match your pixel size to your seeing/spot size. Astrophotography also depends on sensitivity to wavelengths of light that are either unimportant for normal photography or may even hurt color accuracy (i.e. deep reds, near-IR, and near-UV), while being averse to other wavelengths that are often important to normal photography (i.e. the bands in which sodium and mercury vapor lighting emit...yellows, greens, and violets, which contribute to light pollution in cities and are often filtered out with light-pollution reduction filters).
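On the sampling point above, the matching is simple arithmetic. A small sketch using the standard plate-scale formula (the example pixel pitch and focal length are mine, just for illustration):

```python
def image_scale(pixel_um: float, focal_mm: float) -> float:
    """Plate scale in arcsec/pixel: 206.265 * pitch(um) / focal(mm)."""
    return 206.265 * pixel_um / focal_mm

seeing = 2.5  # arcsec FWHM, a typical amateur-site figure (per above)
scale = image_scale(pixel_um=5.4, focal_mm=1000)  # example setup
print(f"{scale:.2f} arcsec/px")                   # ~1.11
# Rough rule of thumb: ~2-3 px across the seeing disc is well sampled.
print(f"pixels across the seeing disc: {seeing / scale:.1f}")  # ~2.2
```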

What I need for astrophotography is very different from what I need for stills photography. There is nothing wrong with more spatial resolution for normal photography; more of it certainly doesn't hurt. Total sensor area is also important for normal photography, for VERY different reasons than it is important for astrophotography: in normal photography it leads to higher real sensitivity, and a larger sensor will always trump a smaller one when it comes to high-ISO performance.

With astrophotography, most of what you're imaging is point light sources. This makes full well capacity, quantum efficiency, and having a low gain setting far more important than high-ISO performance, as the higher you crank the gain (or ISO), the faster your stars saturate and "bloom" (clip, then begin to spill over into neighboring pixels, which also eventually clip). Physical aperture size is vastly more important than relative aperture in astrophotography, as it doesn't matter so much how fast you image as how much light you get from each definable point of sky that you are resolving. Physical aperture is also the primary factor in determining limiting magnitude, so a larger physical aperture, even if the telescope is effectively only f/8 or f/10, is important if your goal is to resolve very small details of very distant objects, or very small, dim stars.

It's generally illogical to compare normal photography needs with astrophotography needs. They are very different. What I argue for here on CR is very different from what I may argue for in the astrophotography threads here, or on astrophotography forums. Conflating what I've said about CMOS image sensors for normal photography with what I may have said about astrophotography is generally pointless, as there is no real correlation between the two types of photography.

CarlTN said:
Have you seen Sigma's internal balance sheets and accounting? You claim you know where their money goes. I admit their Foveon sensor is obviously still very much in its infancy, which is a shame. However, they did buy the rights to the design from the American company. And they are the only ones producing a sensor like it (so far). They even have a new one (which you were quick to trash, without ever having tried it).

You're kind of missing the point of what I was saying. It doesn't matter how much money is involved. My point is that if they dumped their Foveon advertising budget into Foveon R&D, the money would be better spent, regardless of how much they actually spend. A truly competitive Foveon (one that has BOTH the color fidelity advantage AND competitive spatial resolution) would speak for itself, in images and through a much larger community and word of mouth.

CarlTN said:
I see nothing wrong with giving Sigma credit for trying, for being different...it seems like it works for the segment of the market they have laid claim to.

I've never faulted Sigma for trying. Ever. I've only faulted them for lying, for being misleading, and for creating this mistaken notion that Foveon's layered pixels somehow have the magical ability to create more resolution out of nothing. Sigma has a misleading, fallacious advertising agenda for Foveon. They seem to think they NEED to falsely trump up Foveon's resolution capabilities in comparison to Bayer sensors, when they really don't. That's my beef with them. If they were truthful and sold Foveon on its REAL strengths, I'd have nothing to call Sigma out for, and we wouldn't be having this discussion.

CarlTN said:
Primarily they make lenses, after all. The cameras are a very small niche. Why would you expect them to be able to spend the funds necessary to develop the sensor to your liking, when Canon and Sony have not (as yet) been able to do it? Canon is trying, and they are the largest camera company in the world, yet theirs is still not even for sale.

Based on the earliest patents from Canon for similar technology, they haven't been at it for even half as long as Sigma (or the prior owner of the technology). Hence my quip about Sigma better spending their money on R&D...it shouldn't take this long for such an intriguing sensor technology to go...almost nowhere. It sat at 4-5MP for years, then jumped to higher resolution in the last couple of years, but it still lags behind Bayer sensors. Foveon still suffers from noise problems, so it has never been viable at high ISO (which immediately makes it a non-option for a LOT of photographers). Some of the technology in Canon's patents already surpasses what Sigma has put into Foveon.

I sincerely hope that as more cash flows into Sigma from their lens division, they will be able to prioritize more funds for Foveon R&D. I do like the core concept. I just don't believe Sigma has done Foveon justice (so far). Things could change, and if/when they do, I'll applaud Sigma for it...but to date, the snail is still losing the race.

CarlTN said:
(And let's face it, if Sigma spent $1 billion to develop it, it would still be a failure in your opinion, no matter how good it ultimately was...how is that fair or unbiased?)

Now you're just assuming things. If you had actually learned anything about me over my time on these forums, you would understand how ludicrous that assumption is. :p

I couldn't care less, really, about how much money Sigma spends. What matters to me is whether the money they spend results in progress that produces real value, and whether they HONESTLY sell the thing or resort to misleading factoids and spurious claims. If Sigma could make Foveon truly competitive TECHNOLOGICALLY (and it certainly has the potential; there's nothing wrong with the technology itself), it wouldn't matter if it cost $1,000,000 or $1,000,000,000...so long as in the end they turned enough profit to keep investing in the technology and keep it competitive. And if they failed in the end, it still wouldn't matter whether they had spent a hundred grand or a hundred billion; it would all be a waste either way.

CarlTN said:
It will be both interesting and amusing to see your criticism of Canon's new camera (assuming it even uses this technique...for all we know, the next full frame model may not use it after all. It's just rumors...)

Again, your disgust with Sigma for simply existing is juvenile, misplaced, and unnecessary. As is your harsh view of those who use, or have used, their products. If we state our opinion of the images we got from using the camera, who are you to say we don't have the right to state it?

And we're back to the personal insults. You and I do indeed have a mutual loathing of each other, and I have no interest in being friends with you...but I'm really trying to keep it off the public forum. No one else wants to see us fight, so I respectfully ask that if you want to insult me, please use PMs. Then you can get as nasty and hateful as you want.
 

jrista
dilbert said:
jrista said:
...
Sigma wastes far too much time, money, and effort trying to trick potential customers into thinking they will get more resolution with a Foveon than with a Bayer, which is just a blatant, outright lie. I don't appreciate that, and yes, I fault Sigma for it. If Sigma would take a big chunk of their false-advertising budget and inject it into their R&D department instead, I think they could make Foveon viable on both the color fidelity and spatial resolution fronts, and actually have a real competitor on their hands. But sadly, they keep pushing the misleading advertising.

If Canon come out and say that their 15MP layered sensor is in fact 45MP, how are you going to respond? 15 is just an example; maybe it will be 20, maybe some other number. But the challenge will be how to market it as superior to a 36MP Nikon or a 36MP Sony.

If Canon comes out and makes spurious claims about how their 15MP layered sensor is really a 45MP sensor, I'll be the first to call them out for using the same misleading tactics as Sigma. I almost hope they do, and if they do, I really hope you're still around, because I would love to prove to you that I stick to the facts and the physics, regardless of brand.

How many times have you heard me say the D800 has a superior sensor at low ISO, or in terms of resolution (hell, just a couple of posts ago I stated that the D800 has twice the pixel count of the 1D X)? I only dispute what's wrong. The Foveon, like Canon's DPAF, is not a magic bullet. It cannot give you more resolution than it actually has. Canon DPAF cannot give you more resolution, because DPAF isn't about resolution. The D800 cannot give you better high-ISO performance, because high-ISO performance is physics-limited. I couldn't care less about the brand...all I really care about are the facts, the engineering, and the physics when it comes to what a sensor or camera is capable of.

I would have thought my tirade against the mistaken notion that Canon's DPAF is a magic bullet for better IQ would be an indication of how little I care about brand when debating the facts.

dilbert said:
I spent over ten grand on a lens last year.

Why should we care about this?

Well, if you're going to intentionally miss the point, you shouldn't. :p
 
3kramd5
CarlTN said:
Actually I did read it, but I'm not wasting my time reading this new one. Try to help some people out, and they bite your head off. Irrational? Indeed...and you're extremely guilty of outright insulting me and trolling me, many times over. Again...I asked a simple question; how about a simple answer of less than 100 words, with no insults and no whining? What is the reason you would not buy an A7R to try for astrophotography? Is it the ghosting? I could understand that, if that's what it is. It can't be the cost, because we both know how upset you got when I suggested you would not "buy a $30k lens right this moment"...and you said you would, if you thought it would help your photography achieve new heights of greatness. You could also use the A7R for static bird photography, something a $1995 CCD imager couldn't do.

Glancing at his gear wish list, it looks like he's more into action than astro. An A7R is $2500 less in the budget (camera + EF adapter). Personally, I would love one for portrait and landscape work, but I cannot justify the expense. I suspect I'd get more use from that Tamron 150-600 and a new tripod.
 

jrista
3kramd5 said:
Glancing at his gear wish list, it looks like he's more into action than astro. An A7R is $2500 less in the budget (camera + EF adapter). Personally, I would love one for portrait and landscape work, but I cannot justify the expense. I suspect I'd get more use from that Tamron 150-600 and a new tripod.

I'm actually pretty heavily into astrophotography; it splits my budget now. The A7R, along with pretty much every Sony camera, every Nikon camera (with the exception of a couple that use different sensors), and a lot of other cameras that use Sony sensors (i.e. Pentax), is a pretty poor choice for astrophotography. Those manufacturers all mess with the image signal pretty heavily.

They clip the black point rather than using a bias offset (Canon uses a bias offset). That causes two problems for astrophotography: clipping to the black point simply eliminates a lot of the dimmer background stars entirely, gone from the signal and unable to be retrieved; and it makes it difficult to use standard bias-frame calibration techniques to remove noise caused by sensor bias and recover those dim stars (which IS possible with Canon cameras).

Sony/Nikon/Pentax/etc. also tend to apply noise reduction to the RAW signal in hardware...an unconfigurable noise reduction that is always applied. Having total control over noise is a pretty critical facet of astrophotography...the vast majority of astro images have image data only in the lowest echelons of the signal; stars are the only things with levels throughout the signal. While you can do some pretty amazing things with the D800 at ISO 100 when it comes to lifting shadows, that's nothing compared to the kind of lifting you do in astrophotography. The D800 can be lifted about six stops. In astrophotography, you're often lifting by a lot more than that...to really pull out dust-lane detail and dark-nebula detail, it's common to lift by the equivalent of 10-15 stops! Not even the great D800, or any other Exmor DSLR, can handle that, in part because of the black-point clipping, which throws away a couple or a few stops of potentially recoverable information in the first place.
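A toy demonstration (made-up numbers of my own) of why the bias offset matters for faint signals:

```python
import numpy as np

rng = np.random.default_rng(1)

# A faint star: 2 e- of signal buried in 5 e- of read noise.
frames = 2.0 + rng.normal(0.0, 5.0, size=100_000)

# Bias offset (Canon-style): negative excursions survive, so
# averaging many frames recovers the true 2 e- mean.
offset_mean = (frames + 1024.0).mean() - 1024.0

# Black point clipped at zero: the negative half of the noise is
# destroyed, the mean is biased upward, and the faint star is gone.
clipped_mean = np.clip(frames, 0.0, None).mean()

print(f"with bias offset:    {offset_mean:.2f} e-")   # ~2.0
print(f"clipped black point: {clipped_mean:.2f} e-")  # ~3.2, unrecoverable
```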

A proper astro CCD camera has on the order of 13-15 stops of single-frame dynamic range, extended well past that in practice by stacking. They are thermally regulated (anywhere from -40°C to -80°C delta-T from ambient), which nearly eliminates dark-current noise; they generally have relatively low read noise, usually much higher Q.E., and usually larger pixels (smaller astro CCD sensors usually have around 5-6µm pixels, larger ones 9-24µm; FF DSLRs tend to have pixels in the 6-7µm range, and APS-C DSLRs are now around 3.5-4.5µm). Since astro CCD sensors are also most often monochrome, and you usually image in LRGB (luminance + RGB), you can produce images with much stronger signals than you can with Bayer-filtered DSLRs.

So, while I'd like an A7R for my landscape photography, it is actually one of the worst possible choices for astrophotography. I do landscapes sometimes, wildlife and birds most of the time, and astrophotography every time there is a clear night. Canon cameras don't mess with the image signal nearly as much as the other manufacturers do (they do some response-curve tweaks at certain higher ISO settings, but I usually image at ISO 400, which Canon pretty much leaves alone), and the 5D III can be used for landscapes (it has a very respectable pixel count and frame size for that), for wildlife and birds (it meets my minimum frame-rate expectation at 6fps), AND for astrophotography, so it's a far better investment in the interim (especially with prices hitting $2700 pretty regularly now). It may not have the DR of the A7R, but it is a vastly more versatile device.

If it weren't for the astrophotography, I'd get a 1D X. Getting a 5D III instead leaves me plenty of cash to invest in a proper astro CCD, a filter wheel and filter system, and a few other accessories.

So...given how versatile Canon's DSLRs already are...do they really need to become a Sony clone with their new sensors? ;D :p
 