vscd said:
@jrista
Yep, you've mixed a few things up. The first Foveons were 5 MP on three layers, which could be summed to 15 MP. The next generation was the Merrill, where about 15 MP on 3 layers can be counted as 45 MP. The new Quattro design is again 3-layered, but only the blue layer gets 19 MP, while the other two are only about 5 MP each. Now happy counting.
In the end, it's the results that matter.
Actually, the results aren't all that crucial. You don't have a 19 MP sensor just because the blue layer is higher resolution. You get something around the average of the spatial resolutions of all three colors. Red has the lowest weight; green actually has the highest weight, because it is where the bulk of the light entering a camera usually comes from. Blue has the second-highest weight. You can increase luminance detail in blue, but since blue is inherently a lesser component of visible light, and since our eyes are less sensitive to blues, green dominates. The bulk of the luminance detail is going to come from green, and since that is a lower resolution than the blue layer, you don't have a 20 MP sensor. If you just take the simple average, you get 9.8 MP. You might have somewhere between 10-15 MP, depending on exactly how the Foveon color information is processed and, for lack of a better word, interpolated, to produce a final image. Either way, you still aren't getting any more spatial resolution than the SD1 had years ago, and honestly I'd prefer the SD1 design over the Quattro design (because at least with the SD1, your spatial resolution was exact, not some blend of higher- and lower-frequency pixel spacing).
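A rough way to see why the high-resolution blue layer cannot carry luminance detail by itself is to weight each layer's resolution by a standard luma coefficient. This is a sketch of my own, not anyone's published method, and the Rec. 709 weights are an illustrative assumption:

```python
# Rec. 709 luma coefficients: roughly how much each channel contributes
# to perceived luminance (an illustrative choice; other standards such
# as Rec. 601 use slightly different weights).
LUMA_WEIGHTS = {"red": 0.2126, "green": 0.7152, "blue": 0.0722}

# Per-layer spatial resolutions of the Quattro, in megapixels.
quattro_mp = {"red": 4.9, "green": 4.9, "blue": 19.6}

# Crude luma-weighted "effective" resolution.
effective_mp = sum(LUMA_WEIGHTS[c] * quattro_mp[c] for c in LUMA_WEIGHTS)
print(round(effective_mp, 2))  # 5.96
```

This weighting is far cruder than any real Foveon processing pipeline, but it makes the point: whatever blend you choose, the 4.9 MP green layer, which dominates luminance, drags the effective resolution far below 19.6 MP.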
Sigma is still being very misleading by saying that you get 39 MP. They are working some quirky imaginary mathematical magic as well, because if you simply add up the resolutions of each layer, you get 19.6 + 4.9 + 4.9, which is 29.4 MP. How they get to 39 MP is beyond me; I suspect they are using some arbitrary means of measuring an upscaled image against bayer images, as they have done in the past. The simple fact of the matter is that upscaling and bayer interpolation (especially with AHD) are NOT the same thing, and do NOT produce the same results. Sigma is probably comparing images demosaiced with a basic 2x2 intersection-based algorithm to upscaled Foveon images, which intentionally puts bayer at a significant disadvantage and ignores the most common and effective means of demosaicing.
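The arithmetic is easy to check. A quick sketch, using the per-layer megapixel figures quoted above:

```python
# Quattro layer resolutions in megapixels:
# top (blue), middle (green), bottom (red).
layers_mp = [19.6, 4.9, 4.9]

total = sum(layers_mp)            # naive sum of all photodiodes
average = total / len(layers_mp)  # simple per-layer spatial average

print(round(total, 1))    # 29.4 -- still short of Sigma's claimed 39 MP
print(round(average, 1))  # 9.8  -- the simple spatial average
```

Neither the naive photodiode sum nor the per-layer average gets anywhere near 39 MP.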
vscd said:
It may have 45 million photodiodes, but that is not the same as megapixels, and I really wish Sigma would stop being so misleading.
This is of course confusing, but it's not a lie, because... let's define a pixel. You refer to a pixel as something in the picture that comes out of the camera. The pixels on the sensor are something different... you could also count each layer as a single pixel, because it has its own wired output and the information is encapsulated within this *single* light trap. Remember the Nikon D2X (or was it the D1X?), where the pixels were half-sized; so what do you count there?
It's a matter of definition. The Sigma people have the same "problem" Intel had 10 years ago... recognizing that megahertz has nothing to do with speed, but people don't know that. So you have to reach them with numbers they understand.
A pixel is a spatial measure: two-dimensional, not three-dimensional. You can define pixels in many ways, but as far as bayer is concerned, it's all the same. You can measure the individual r, g, and b pixels in a sensor. Assuming you ignore the masked pixels, you will usually get one extra row and column at the edges of the RAW image data as compared to the interpolated image. So, if you have a camera with 5184x3456 pixels (i.e. the 1D X), that is the EXACT pixel count as far as exported TIFF or JPEG images go. The actual RAW pixel count, ignoring the masked border pixels, would be 5186x3458, as you need that extra set of rows and columns on the outer edge in order to perform interpolation. The true RAW pixel dimensions are greater still, around 5212x3466, when you do include the masked border pixels (which are used for sensor black and white point calibration).
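The pixel-count relationships above can be sketched as follows (the 1D X numbers are the ones quoted; the masked-border dimensions are approximate):

```python
# Exported image dimensions for the Canon 1D X, as quoted above.
effective = (5184, 3456)

# RAW interpolation needs one extra row/column on each outer edge,
# i.e. +2 per axis relative to the exported image.
raw_unmasked = (effective[0] + 2, effective[1] + 2)

# Full RAW including masked border pixels (approximate figures),
# which are used for black/white point calibration.
raw_full = (5212, 3466)

print(raw_unmasked)                             # (5186, 3458)
print(round(effective[0] * effective[1] / 1e6, 1))  # 17.9 effective MP
```

The exported pixel count is the only one of the three that describes actual image detail; the others exist to support interpolation and calibration.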
Regardless of how you slice it, a "pixel" in bayer is a direct unit of two-dimensional SPATIAL measure. A "pixel" in Foveon, the way Sigma defines it, is a three-dimensional measure of both spatial detail and color depth. If you want to compare Foveon to bayer, you have to remove that third color-depth dimension, otherwise you are comparing apples to oranges. Spatially, Foveon sensors have historically been significantly lower resolution than bayer sensors. This is no myth, no trickery; there isn't even any anti-Foveon sentiment here. As I've said, I love the Foveon concept, I just think that Foveon in the hands of Sigma is in the wrong hands, and I think the way Sigma markets Foveon is so misleading that it ramps up prospective buyers' hopes to levels that simply cannot be met. (Either that, or you get gullible saps who buy so fully into Sigma's misleading concept that they are missing the forest for the trees, and therefore missing out on the kind of raw, unmitigated resolving power you can get with some current bayer sensors... which actually includes both the 5D III and D800, probably also the 6D, and for sure all current medium format sensors on the market, without question.)
vscd said:
Furthermore, the D800 and 645D both have more information to start with. They are resolving details that are not even present in the SD1 image at all, despite its sharpness
No, they DON'T, and that's what the image should have told you. I could upscale the Sigma picture 4 times and have more resolution, but not more information.
You're conflating two separate concepts. Resolution is an overloaded word, and some of its "overloads" are invalid. I try to be very specific when I use words like resolution. When I say resolution in this context, I try to always make it very clear that I am talking about *resolving power* and *spatial resolution*. These terms refer to very well understood concepts in the world of imaging, and describe a very specific process whereby something with a given area is divided into certain discrete elements... such as a real image projected onto a sensor by a lens being "resolved" by each pixel.
What you are referring to is one of the invalid uses of resolution, the one that refers to image dimensions. Simply upscaling an image does not give you more resolution... it gives you more pixels, but your resolution has not actually increased. By upscaling, you enlarge everything, including the smallest discernible elements of detail, such that those smallest elements are also larger. That is not increasing resolution... it is simply increasing the total number of pixels and enlarging your image dimensionally. I rarely use the word "resolution" to refer to changes in image dimensions; I usually use the term "image dimensions", or refer to concepts like upscaling and downsampling, instead.
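The point that upscaling adds pixels but not information can be demonstrated with a toy nearest-neighbour upscaler. This is a minimal sketch; real resamplers interpolate rather than repeat, but they add no new detail either:

```python
def upscale_nearest(image, factor):
    """Nearest-neighbour upscale of a 2-D list of pixel values."""
    out = []
    for row in image:
        # Widen each row by repeating every pixel `factor` times...
        wide = [px for px in row for _ in range(factor)]
        # ...then repeat the widened row `factor` times vertically.
        out.extend([wide] * factor)
    return out

tiny = [[10, 200],
        [200, 10]]  # a 2x2 "image" with two distinct values

big = upscale_nearest(tiny, 4)  # now 8x8: 16x the pixels...

# ...but the information content is unchanged: the same distinct values,
# and the smallest resolvable element is now a 4x4 block instead of 1x1.
unique_before = {px for row in tiny for px in row}
unique_after = {px for row in big for px in row}
print(len(big), len(big[0]))          # 8 8
print(unique_before == unique_after)  # True
```

The pixel count goes up 16x, yet nothing new can be discerned; the smallest element of detail simply got bigger along with everything else.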
The resolution I am talking about is not the "resolution" you are talking about. Upscaling an image does not give you more resolution... it simply gives you more pixels, and changes the ratio of pixels to detail. Luminance detail, I might add... when you upscale a Foveon image, you aren't just blurring chrominance information (as is the case with bayer interpolation)... you are ALSO blurring luminance information (which is NOT the case with bayer interpolation, where you keep your full luminance information at each pixel).
So you are correct about not having more information after upscaling.
vscd said:
A light sharpening filter can deal with the softness in a few seconds, and then the SD1 is at a real disadvantage.
Please try to prove me wrong; the RAW data is available for download @dpreview.com
By the way, the size of the photodiodes is of course really important, especially in low light, but technology solves some of the problems. On paper no one could beat my old 5D with its ~8.2 micron pixels, but in reality your 1DX would run circles around it 8)
Your argument is a classic fallacy...to claim that technological improvements will only benefit one type of technology. Technological improvements can indeed help Foveon, but at the same time, MASSIVE strides have been and will continue to be made for bayer type sensors as well. Foveon isn't going to be gaining technological advancements in leaps and bounds and suddenly end up well ahead of bayer...it just isn't going to happen.
In this case, the reason the 1D X would run circles around the 5D does not actually have anything to do with pixel size. The 5DC is actually still an excellent performer. I know a few wedding photographers who LOVE their 5DCs; they still produce wonderful images. Technologically, they have high read noise (actually quite high), so the images from a 5DC cannot be pushed around like those from a 1D X, or even a 5D III or 6D. The CDS (correlated double sampling) technology used in the 5DC isn't as good as what we have today. The individual color filters in the bayer CFA are stronger in the 5DC, which improves native color fidelity, but reduces total sensor Q.E.
So yes, technology does solve some problems. If Foveon were in the hands of Canon or Sony, I believe it could rapidly become a major contender in the sensor market. I do not believe it would ever offer as much spatial resolution (i.e. true resolving power) as bayer... as Foveon improves, so too will bayer sensors, and bayer will always have the lead in terms of spatial resolution, assuming your aim is to keep Foveon noise levels as low as bayer levels. Spatially, Foveon could compete directly with bayer if you simply ignored noise levels; however, because the red layer is at the bottom, despite silicon's greater transparency to red, you're still losing a lot of light by the time the red photodiode senses anything. A spatially-equivalent Foveon is going to be a very noisy sensor.
I think the only way you're going to get a true "full color fidelity per pixel" sensor that is actually better than bayer would be if something like the 3CCD (three-chip) design came along again: three separate sensors with single-color filters on them, which receive light from a special prism where each sensor gets a FULL complement of light of its given color. You then have full sensitivity and full spatial resolution in three (or, as should be possible, more) full colors. You would then simply need to convert each RAW color layer into R, G, and B pixels in an output image, no interpolation required (like Foveon, but without the sensitivity and noise issues). Such a system would be rather bulky, but I do think it would be ideal for those who want everything to be the absolute best. Foveon is just another compromise... spatial resolution for color fidelity, just like bayer is a compromise: color fidelity for spatial resolution.
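For what it's worth, the recombination step for such a three-sensor system really would be trivial, since no demosaicing is needed. A hypothetical sketch, with each prism-fed sensor modeled as a full-resolution single-channel plane:

```python
def merge_planes(red, green, blue):
    """Zip three full-resolution color planes into RGB pixels.
    No interpolation step is required: every output pixel has a
    directly measured value for all three colors."""
    return [
        [(r, g, b) for r, g, b in zip(rrow, grow, brow)]
        for rrow, grow, brow in zip(red, green, blue)
    ]

# Toy 2x2 planes standing in for three prism-fed sensors.
R = [[255, 0], [0, 255]]
G = [[0, 255], [0, 255]]
B = [[0, 0], [255, 255]]

rgb = merge_planes(R, G, B)
print(rgb[0][0])  # (255, 0, 0) -- full color fidelity at every pixel
```

Contrast this with bayer demosaicing, where two of every pixel's three color values must be estimated from neighbors, or with Foveon, where the stacked layers cost sensitivity.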