
Show Posts



Messages - jrista

1021
EOS Bodies / Re: Canon's Medium Format
« on: April 02, 2014, 05:51:41 PM »
If you convert a bayer sensor's data to monochrome, you effectively have just the full detail luminance.

Could you expand on that a little, Jon? When you say 'you', are you referring to the manufacturers setting it up this way (like the Leica Monochrom), or the user converting the RAW to B&W?

I mean you as in the "you" who is reading my words. ;)

You can use astrophotography editors to read RAW images directly. If you use something like LR or ACR, converting to grayscale happens post-demosaicing, so you really wouldn't gain the same benefit. With something like Iris, you can simply read out a RAW image as monochrome data. You might get slight artifacting this way...silicon really is not very sensitive to blue at all, so depending on the exact camera you are using, the blue pixels might end up a bit darker. I recently purchased PixInsight, an exceptionally powerful astrophotography processing tool. PixInsight has something called PixelMath, which allows you to run just about any algorithm you can imagine on your images. If you have a blue-darkening problem when converting a RAW image directly to luminance, you could easily apply some pixel math to reweight the luminance information, stealing a little bit from green and adding it to blue. Or you could apply some digital amplification to just the blue pixels, which would make them a little noisier, but normalize the brightness.
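For anyone who wants to experiment outside of PixInsight, here's a rough numpy sketch of that second approach (this is not PixelMath syntax; the RGGB layout and the gain figure are assumptions you'd have to adjust for your own camera):

```python
import numpy as np

def raw_to_luminance(raw, blue_gain=1.3):
    """Treat an RGGB Bayer mosaic as straight monochrome luminance,
    applying a digital gain to the blue photosites to offset silicon's
    weaker blue response. The gain value is illustrative, not calibrated."""
    mono = raw.astype(np.float64).copy()
    mono[1::2, 1::2] *= blue_gain  # blue sits at odd rows/columns in RGGB
    return mono
```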

Regardless of how you correct any blue deficiencies (which, BTW, would also be present in a Foveon sensor, as silicon is silicon), Bayer sensors generally gather roughly the same average amount of light at every sensor pixel. Absent any color, that is your full-resolution DETAIL...and it really doesn't need any interpolation; all it might need is some massaging to normalize luminance levels in post. Blue is just a noisy channel because of its lower natural sensitivity...we've all been living with that fact ever since we started using digital cameras. Everyone knows about it from the noise in their blue skies, or the blue paint on that car, or the blue dye in that girl's hair.

1022
EOS Bodies / Re: Canon's Medium Format
« on: April 02, 2014, 05:38:58 PM »
@jrista

Yep, you mixed some things up. The first Foveons were 5MP on three layers, which (could) be summed up to 15MP. The next generation was the Merrill, where about 15MP on 3 layers can be counted as 45MP. The new Quattro design is again 3-layered, but only the blue layer gets 19MP, while the other 2 are just about 5MP each. Now happy counting ;)

In the end, the results are crucial.

Actually, the results aren't all that crucial. You don't have a 19mp sensor just because the blues are higher resolution. You get something around the average of the spatial resolutions of all three colors. Red has the lowest weight; green actually has the highest weight, because it is where the bulk of light entering a camera usually comes from. Blue has the second highest weight. You can increase luminance detail in blue, but since blue is inherently a lesser component of visible light, and since our eyes are less sensitive to blues, green dominates. The bulk of the luminance detail is going to come from green, and since that is a lower resolution than the blues, you don't have a 20mp sensor. If you just take the averages, you have 9.7mp. You might have somewhere between 10-15mp, depending on exactly how the Foveon color information is processed and, for lack of a better word, interpolated, to produce a final image. Either way, you still aren't getting any more spatial resolution than the SD1 had years ago, and honestly I'd prefer the SD1 design over the Quattro design (because at least with the SD1, your spatial resolution was exact, not some blend of higher and lower frequency pixel spacing.)

Sigma is still being very misleading by saying that you get 39mp. They are working some quirky imaginary mathematical magic as well, because if you just add up the resolutions of each layer, you get 19.6+4.9+4.9, which is 29.4mp. How they get to 39mp is beyond me; I suspect they are using some arbitrary means of measuring an upscaled image in relation to bayer images, like they have done in the past. Simple fact of the matter is, upscaling and bayer interpolation (especially with AHDD) are NOT the same thing, and do NOT produce the same results. Sigma is probably comparing images demosaiced with your standard 2x2 intersection-based demosaicing to upscaled Foveon images, which intentionally puts bayer at a significant disadvantage and ignores the most common and effective means of demosaicing.
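For what it's worth, the arithmetic is trivial to check (a back-of-the-envelope sketch using Sigma's published Quattro layer counts; the processing-dependent 10-15mp range above is my estimate, not a computed figure):

```python
layers_mp = [19.6, 4.9, 4.9]  # Quattro top (blue), middle, bottom layer counts

print(sum(layers_mp))         # 29.4 -- the naive photodiode sum
print(sum(layers_mp) / 3)     # ~9.8 -- the straight per-layer average
                              # (29mp effective / 3 gives the 9.7mp figure)
```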

Quote
It may have 45 million photodiodes, but that is not the same as megapixels, and I really wish Sigma would stop being so misleading.

This is of course confusing, but it's not a lie, because... let's define a pixel. You refer to a pixel as what comes out in the picture from the camera. The pixels on the sensor are something different... you could also count each layer as a single pixel, because it has its own wired output and the information is encapsulated within this *single* light trap. Remember the Nikon D2X (or was it the D1X?), where the pixels were half-sized, so what do you count? ;) It's a matter of definition. The Sigma people have the same "problem" Intel had 10 years ago... recognizing that megahertz has nothing to do with speed, but people don't know this. So you have to catch them with numbers they understand.

A pixel is a spatial measure, two dimensional, not three dimensional. You can define pixels in many ways, however as far as bayer is concerned, it's all the same. You can measure the individual r, g, and b pixels in a sensor. Assuming you ignore the masked pixels, you will usually get one extra row and column at the edges of the RAW image data as compared to the interpolated image. So, if you have a camera with 5184x3456 (i.e. 1D X) pixels, that is the EXACT pixel count as far as exported TIFF or JPEG images go. The actual RAW pixel count, ignoring the masked border pixels, would be 5186x3458, as you need that extra set of rows and columns on the outer edge in order to perform interpolation. The actual true RAW pixel dimensions are greater, around 5212x3466 when you do include the masked border pixels (which are used for sensor black and white point calibration).

Regardless of how you slice it, a "pixel" in bayer is a direct unit of two-dimensional SPATIAL measure. A "pixel" in Foveon, the way Sigma defines it, is a three-dimensional measure of both spatial detail and color depth. If you want to compare Foveon to Bayer, you have to remove that third color-depth dimension, otherwise you are comparing apples to oranges. Spatially, Foveon sensors have historically been significantly lower resolution than bayer sensors. This is no myth, no trickery; there isn't even any anti-Foveon sentiment here. As I've said, I love the Foveon concept, I just think that Foveon in the hands of Sigma is in the wrong hands, and I think the way Sigma markets Foveon is so misleading that it ramps up prospective buyers' hopes to levels that simply cannot be met. (Either that, or you get gullible saps who buy so fully into Sigma's misleading concept that they are missing the forest for the trees, and therefore missing out on the kind of raw, unmitigated resolving power you can get with some current bayer sensors...which actually includes both the 5D III and D800, probably also the 6D, and for sure all current medium format sensors on the market, without question.)

Quote
Furthermore, the D800 and 645D both have more information to start with. They are resolving details that are not even present in the SD1 image at all, despite its sharpness

No, they DON'T; that's what the image should have told you. I could resize the Sigma picture 4 times and have more resolution, but not more information.

You're conflating two separate concepts. Resolution is an overloaded word, and some of its "overloads" are invalid. I try to be very specific when I use words like resolution. When I say resolution in this context, I try to always make it very clear that I am talking about resolving power and spatial resolution. These terms refer to very well understood concepts in the world of imaging, and describe a very specific process whereby something with a given area is divided into certain discrete elements...such as a real image projected onto a sensor by a lens being "resolved" by each pixel.

What you are referring to is one of the invalid uses of resolution, which refers to image dimensions. Simply upscaling an image does not give you more resolution...it gives you more pixels, but your resolution has not actually increased. By upscaling, you enlarge everything, including the smallest discernible element of detail, such that those smallest elements are also larger. That is not increasing resolution...it is simply increasing the total number of pixels and enlarging your image dimensionally. I rarely ever use the word "resolution" for changes in image dimensions; I usually say "image dimensions", or refer to concepts like upscaling or downsampling, instead.

The resolution I am talking about is not the "resolution" you are talking about. Upscaling an image does not give you more resolution...it simply gives you more pixels, and changes the ratio of pixels to detail. Luminance detail, I might add...when you upscale a Foveon image, you aren't just blurring chrominance information (as is the case with bayer interpolation)...you are ALSO blurring luminance information (which is NOT the case with bayer interpolation...you keep your full luminance information at each pixel.)

So you are correct about not having more information after upscaling. ;)
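A toy demonstration of that point, for anyone who wants to see it in code (nearest-neighbor scaling on random data, chosen purely for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((100, 100))                  # stand-in for a 100x100 image
up = img.repeat(2, axis=0).repeat(2, axis=1)  # "upscale" to 200x200 pixels
down = up[::2, ::2]                           # discard the extra pixels again

print(np.array_equal(img, down))              # True: zero information gained
```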

Quote
A light sharpening filter can deal with the softness in a few seconds, and then the SD1 is at a real disadvantage.

Please try and prove me wrong; the RAW data is available for download @ dpreview.com  ;)

By the way, the size of the photodiodes is of course really important, especially in low light, but the technology solves some of the problems. On paper no one could beat my old 5D with ca. 8.2 micron pixels, but in reality your 1D X would run circles around it  8)

Your argument is a classic fallacy...to claim that technological improvements will only benefit one type of technology. Technological improvements can indeed help Foveon, but at the same time, MASSIVE strides have been and will continue to be made for bayer type sensors as well. Foveon isn't going to be gaining technological advancements in leaps and bounds and suddenly end up well ahead of bayer...it just isn't going to happen.

In this case, the reason the 1D X would run circles around the 5D does not actually have anything to do with pixel size. The 5DC is actually still an excellent performer. I know a few wedding photographers who LOVE their 5DCs; they still produce wonderful images. Technologically, the 5DC has high read noise (actually quite high), so its images cannot be pushed around like those from a 1D X or even a 5D III or 6D. The CDS technology used in the 5DC isn't as good as what we have today. The individual color filters in the bayer CFA are stronger in the 5DC, which improves native color fidelity but reduces total sensor Q.E.

So yes, technology does solve some problems. If the Foveon were in the hands of Canon or Sony, I believe it could rapidly become a major contender in the sensor market. I do not believe it would ever offer as much spatial resolution (i.e. true resolving power) as any bayer...as Foveon improves, so too will bayer sensors, and bayer will always have the lead in terms of spatial resolution, assuming your aim is to keep Foveon noise levels as low as bayer levels. Spatially, Foveon could compete directly with bayer if you simply ignored noise levels; however, because the red layer is at the bottom, despite silicon's greater transparency to red, you're still losing a lot of light by the time the red photodiode senses anything. A spatially-equivalent Foveon is going to be a very noisy sensor.

I think the only way you're going to get a true "full color fidelity per pixel" sensor that is actually better than bayer would be if something like 3CCD came along again. Three separate sensors with single-color filters on them, which receive light from a special prism where each sensor gets a FULL complement of light of its given color. You then have full sensitivity and full spatial resolution in three (or, as should be possible, more) full colors. You would then simply need to convert each RAW color layer into R, G, and B pixels in an output image, no interpolation required (like Foveon, but without the sensitivity and noise issues.) Such a system would be rather bulky, but I do think it would be ideal for those who want everything to be the absolute best. Foveon is just another compromise...spatial resolution for color fidelity, just like bayer is a compromise: color fidelity for spatial resolution.

1023
EOS Bodies / Re: Canon's Medium Format
« on: April 02, 2014, 04:55:03 PM »
So arguing that the DP2, which itself is still just a 4.7mp camera (or even the SD1, which is a much higher resolution Foveon), is potentially equivalent to a 39mp camera, is gravely missing the point of having a truly higher resolution sensor (in luminance terms...luminance is where detail comes from; color CAN be of much lower spatial resolution so long as your luminance information is high...as a matter of fact, this is standard practice in astrophotography: you image at high resolution in luminance, then when you switch to RGB filters, you bin 2x2 or 3x3, which increases your sensitivity and reduces your resolution by 4x or 9x...and you're none the wiser when looking at the final blended result). It buys into the very misleading hype that Sigma spews, which I believe is ultimately, in the long term, going to damage their reputation and hurt Foveon (because as more people try to produce images with a 4.7mp or 15mp Foveon sensor that compare to even the regular old D800, let alone the D800E or the 645D, and realize they simply cannot...they are either going to ditch Foveon and go back to bayer type sensors, or they are going to begin badmouthing Foveon.)

Nobody said the first generation Foveon sensor is equal to 39 MP.  Jrista, again you learn about what you're interested in, but this leaves a lot of facts for you to miss. 

When I mentioned the "new DP2", I was referring to this...it's called the Quattro.

http://www.sigma-global.com/en/cameras/dp-series/

...And it's most definitely more resolution than the SD1...it's a new sensor with more pixels.  Just exactly how many pixels it is, is kind of unclear.  I think Sigma don't mind that it is unclear...lol.  The actual pixel dimensions of the RAW image might be 19 MP, or might be more.  For some reason it can produce JPEGs that are 7680 x 5120 = 39.3 MP.

To argue about what outresolves what, on such a new product, is a waste of time in any case.

I try to speak about what I have had experience with.  I've owned the original DP2, and it most certainly had more resolution than its native 4.6 MP image.  As I said, it could easily scale to about 25 MP, and still look sharp enough to me for a print at 300 ppi. 

So there's no reason to start bashing Sigma, and talking about what "TROUNCES" what.  Nobody thinks a crop sensor is ever going to be "better" than a full frame sensor...other than you and your 7D :P.  Everybody knows nothing compares to the mighty 7D!

I don't know where you guys are getting your info. On your own site, the DP2 is listed as having 29mp effective (non-masked) "photo detectors", which are the same thing as a photodiode. From the dp-series link:

Color Photo Detectors    Total Pixels: Approx.33MP, Effective Pixels: Approx.29MP

That is 29 million PHOTODIODES. That means, from a SPATIAL standpoint (actual resolving power), you have 29/3 million PIXELS (actual square areas on the sensor that are light sensitive), or 9.7mp. The DP2 that you are referring to is a TEN MEGAPIXEL sensor. Not only that, it is a 10mp APS-C sized sensor, so we're talking pretty small pixels.

I'm sorry, but it doesn't matter how good those pixels are...there is no way, physically, that they could ever compare to the 36.3mp of a D800 nor the 40mp of the 645D. Spatially, from a luminance (detail) perspective, there is no loss of data or resolution in a bayer array. There is only, ONLY, a loss of color data or color spatial resolution. The loss of spatial color detail is a bit of a detractor for bayer type sensors, it hurts their color fidelity a little bit, however it is not enough of a detractor to warrant calling a 9.7mp Foveon as good as a 39mp bayer. The FULL detail luminance from a bayer is more than enough to offset the loss in color detail.

Neuro has explained how a properly designed OLPF (which is usually the case these days, even leaning towards the slightly weak side more often than not), despite blurring high frequency data, is not a huge detractor for bayer sensors: OLPFs blur predictably and consistently across the area of the sensor, meaning a light sharpening filter in post usually reverses the softening impact of an OLPF.

The whole "eqivalent megapixels" deal that Sigma uses is also very misleading. Currently, today, megapixel counts are based on output image widthxheight. A 15mp Sigma Foveon is 15mp, in terms of actual megapixels stored in the output JPED image or a JPEG that you can create from RAW. It may have 45 million photodiodes, but that is not the same as megapixels, and I really wish Sigma would stop being so misleading.

No more misleading than stating a sensor has so many megapixels, when each photodiode samples one color, and the other two are interpolated in the JPEG.

You're misunderstanding. Every bayer pixel may have only one color, but regardless of color, every pixel receives "light". This is why the spatial resolution of a bayer sensor is so high, and why a D800 is capable of resolving so much detail. If you convert a bayer sensor's data to monochrome, you effectively have just the full detail luminance. Advanced demosaicing algorithms like AHDD are explicitly designed to preserve as much luminance detail as possible, while effectively distributing color data to avoid mazing artifacts and other demosaicing quirks. A bayer sensor needs no interpolation from a luminance standpoint; it only needs interpolation from a color standpoint. Bayer sensors have nearly their full resolution in terms of luminance, and since luminance is really what carries your fine detail, they DO have FAR more resolution than any Foveon on the market today, including the SD1.

This isn't misleading, it's how the physics and mathematics of interpolation work. Interpolation algorithms like AHDD are actually capable of producing crisper, smoother, sharper results with a bayer than your standard, basic demosaicing algorithm, and AHDD is pretty ubiquitous these days (LR/ACR use it, Apple Aperture uses it, and it's a demosaicing option in most Linux RAW editors like RawTherapee and Darktable.) AHDD is even used in lower level tools, often used for astrophotography, like DeepSkyStacker, Iris, and PixInsight.
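To make the interpolation point concrete, here's what the *basic* end of that spectrum looks like: a minimal bilinear demosaic sketch in numpy/scipy (deliberately the simple approach, not AHDD; the algorithms in the tools above are considerably more sophisticated):

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Naive bilinear demosaic of an RGGB Bayer mosaic (sketch only).
    Each missing color sample is the average of its nearest neighbors."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    # Kernels that average the 2 or 4 nearest same-color neighbors.
    k_rb = np.array([[.25, .5, .25], [.5, 1., .5], [.25, .5, .25]])
    k_g  = np.array([[0., .25, 0.], [.25, 1., .25], [0., .25, 0.]])
    rgb = np.zeros((h, w, 3))
    for ch, (mask, k) in enumerate([(r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)]):
        rgb[..., ch] = convolve(raw * mask, k, mode='mirror')
    return rgb
```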

The only loss with a bayer type sensor is in terms of color spatial resolution and color fidelity. The most obvious of those is really color fidelity, as when chrominance is blended with luminance, our eyes can't really tell the difference, or at least the difference is small enough that it isn't an issue unless you are directly comparing, side-by-side, a Foveon and Bayer image with the same image dimensions (in other words, if you had a 10mp bayer and a 10mp Foveon, then you would be able to tell that the Foveon had slightly better color microcontrast and better color fidelity...however when comparing a 35 or 40mp bayer to a 10mp Foveon, the only visible difference MIGHT be sharpness...that would depend on the strength or presence of an AA filter.)

1024
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: April 01, 2014, 06:30:33 PM »
Regarding Don's problem with a beat frequency in bird wings, if you have software capable of doing it, you could probably record at 60fps, then interpolate that sequence back down to 24fps for inclusion with other 24fps sequences, WITHOUT resulting in the 60fps sequences playing back as "slow motion".

That's EXACTLY what I did....

Still playing and learning.... and having lots of fun doing it.

BTW... GooseCreek2 on Vimeo is what happens when you shoot geese at 30HZ....

Have you tried shooting at 24fps instead of 30fps? Just curious if that would help. BTW, unless you've configured things differently, if you are shooting at 30fps, I would expect the actual shutter speed to default to 1/60th of a second. Given that, it isn't all that surprising that you're getting some stutter...it's that whole timing issue (which will be present regardless of whether you are playing back at 60Hz, or down-interpolating your video to 30fps for playback at 30Hz). If you drop to 24fps, your shutter speed would be 1/48th of a second (by default, at least I would expect), and that offset might help avoid the stutter problem when playing back.

I'd also offer that NOT harassing the geese while in your canoe would probably avoid wing beats entirely. ;P

Also, what do you use to process your videos? Premiere?

The 60D allows a shutter speed of 1/30 second when shooting 30FPS. Obviously, they cheat and it isn't really 1/30th but is close to it...

The editing software I have for video is Pinnacle Studio. I do NOT recommend it.

Oh yes, I've encountered Pinnacle Studio. Bleh.

I am a bit surprised that at a shutter speed of 1/30th, you're getting stutter like that. Maybe it's still just a timing issue, but I would have expected 1/30th to produce a bit more motion blur, which should smooth out the issue a bit. Of course, these are lower end DSLRs...they aren't exactly designed for video, it's more of an afterthought, and they all still have rolling shutters and the like as well. (Even the GoPros have rolling shutter issues...a friend of mine builds and flies model airplanes, and he often sticks his GoPro on the plane itself for some cool videos...but wow, the rolling shutter and stutter effects are BAAAD.)

1025
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: April 01, 2014, 05:49:01 PM »
Regarding Don's problem with a beat frequency in bird wings, if you have software capable of doing it, you could probably record at 60fps, then interpolate that sequence back down to 24fps for inclusion with other 24fps sequences, WITHOUT resulting in the 60fps sequences playing back as "slow motion".

That's EXACTLY what I did....

Still playing and learning.... and having lots of fun doing it.

BTW... GooseCreek2 on Vimeo is what happens when you shoot geese at 30HZ....

Have you tried shooting at 24fps instead of 30fps? Just curious if that would help. BTW, unless you've configured things differently, if you are shooting at 30fps, I would expect the actual shutter speed to default to 1/60th of a second. Given that, it isn't all that surprising that you're getting some stutter...it's that whole timing issue (which will be present regardless of whether you are playing back at 60Hz, or down-interpolating your video to 30fps for playback at 30Hz). If you drop to 24fps, your shutter speed would be 1/48th of a second (by default, at least I would expect), and that offset might help avoid the stutter problem when playing back.
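Those defaults follow the old 180-degree shutter convention (exposure is half the frame period); a quick sketch of the arithmetic, assuming the camera behaves conventionally:

```python
def default_shutter(fps):
    """180-degree shutter rule of thumb: exposure = half the frame period."""
    return 1.0 / (2 * fps)

for fps in (24, 30, 60):
    print(f"{fps} fps -> 1/{round(1 / default_shutter(fps))} s")  # 1/48, 1/60, 1/120
```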

I'd also offer that NOT harassing the geese while in your canoe would probably avoid wing beats entirely. ;P

Also, what do you use to process your videos? Premiere?

1026
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: April 01, 2014, 05:24:03 PM »

Also keep in mind that higher frame rates, without further camera handling or processing technique, NORMALLY result in slow motion video when played back at normal speed (your output frame rate is always going to be 24 or 30 frames per second. The high frame rates on TVs, like 60fps, 120fps, etc. are artificial, produced within the TV hardware, which does additional inter-frame processing to blend two separate source frames into four or six output frames.) Your video does not play back at 60fps if you recorded it at 60fps...it still plays back at 30fps. So don't think of 60fps as the solution to your problems...ultimately, shooting at 60 or 120 fps just means you're creating a slow motion sequence. This is especially true if you are interested in shooting at mixed frame rates and producing a video that contains normal rate and slow motion sequences...you have no choice but to use the same output frame rate...anything shot slower than that will speed up, anything shot faster than that will slow down.


Ummm.......that is not true. Most modern TVs will handle at least 60 fps, after all, that is what computers put out, and all of them can be used as computer monitors. My TV can handle frame rates up to 240 fps, if it does get that, THEN it interpolates frames. It automatically matches frame rate to the source, and if you have the feature turned on, it interpolates frames to make up the 240 fps if your source does not go that high.

Typical sources would be BluRay or AVCHD output (which may be either 30p or 60p), or a computer (usually 60p)

Just because your typical movie is 24 fps does not mean that all other video is treated the same way.

There is refresh rate, and there is progressive video output. There are also interpolated motion rates, such as Samsung's CMR (Clear Motion Rate). There ARE some high end premium HDTV channels (e.g. ESPN) that offer 60p progressive frame rates, however for the most part, the majority of HDTV is 30p. My Samsung non-3D TV has a 120Hz "refresh rate" and supports 60p output, but as I've never paid for premium HDTV channels (they are extremely expensive, as they use up a hell of a lot of bandwidth), I've only ever seen 30p content on it.

The modern 240Hz refresh rates for TVs were originally created to support interpolated motion rates for 3D video up to 120Hz per eye, or 240Hz in total. Blu-ray video frame rate is still 24fps as far as I know, and the supported Blu-ray playback rates (according to the current Blu-ray standards) are either 720p @ 60Hz progressive, or 1080p @ 30Hz progressive or 50/60Hz interlaced. The higher refresh rates on TVs avoid timing issues between the video rate and the playback rate (the refresh rate)...i.e. if you had a 24fps video on a 24Hz screen, you have to get the timing exact to ensure that each frame is ready to play exactly every 1/24th of a second...if your timing is off (there is always noise, so it's likely), then you end up re-rendering the same frame for another 1/24th of a second, causing stutter. With higher refresh rates, timing becomes less and less of an issue. At 60Hz, you play back ~0.4 frames per refresh cycle, at 120Hz ~0.2, and at 240Hz ~0.1. You still can't get timing 100% exact, which is where interpolation like CMR comes into play, smoothing things out if you end up having to re-render the same frame for an extra refresh cycle.
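The ratios are easy to verify (a trivial sketch of the arithmetic above):

```python
content_fps = 24.0
for refresh_hz in (24, 60, 120, 240):
    print(f"{refresh_hz:3d} Hz display: {content_fps / refresh_hz:.2f} "
          f"content frames per refresh cycle")  # 1.00, 0.40, 0.20, 0.10
```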

The key is to note that even though Blu-ray and HDTV channels support playback (officially) at 30p or 60i (or even 60p, as in the case of some premium HDTV channels), the VIDEO ITSELF is still usually filmed at 24fps for standard motion. I am also quite certain that even the 1080p @ 60Hz channels are still filmed at 24fps (I think if they were filmed at 60fps, people would have started complaining about it like they have with The Hobbit, and I've not read about any such thing.) Filming rate and playback rate are very different things. Filming rates higher than 24fps for standard motion are still VERY new, and not many productions have used them yet. If you want fast motion, you would have to film at lower than 24fps (or do timelapse), and if you want slow motion you would have to film at higher than 24fps. This is IRRESPECTIVE of the playback rate, whether it is interlaced or not, etc.

Regarding Don's problem with a beat frequency in bird wings, if you have software capable of doing it, you could probably record at 60fps, then interpolate that sequence back down to 24fps for inclusion with other 24fps sequences, WITHOUT resulting in the 60fps sequences playing back as "slow motion". I'm not sure if something like Adobe Premiere can do that, or whether you would need higher end software. Either way, there isn't any mixing and matching of frame rates in a single final output video. If you record at different frame rates and simply string them together in a tool like Premiere, anything filmed faster than your output rate ends up "slow motion", and anything filmed slower than your output rate ends up "high speed motion".
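Here's a rough numpy sketch of that retiming idea (nearest-frame resampling only; real editors blend or motion-interpolate between source frames rather than just picking them, so treat this as a sketch of the index math):

```python
import numpy as np

def retime_indices(n_src, src_fps=60.0, dst_fps=24.0):
    """Map output frame numbers to source frame numbers so a 60fps clip
    plays at normal speed on a 24fps timeline (no slow motion)."""
    n_dst = int(n_src * dst_fps / src_fps)
    t = np.arange(n_dst) / dst_fps  # timestamp of each output frame
    return np.clip(np.round(t * src_fps).astype(int), 0, n_src - 1)

# One second of 60fps footage becomes 24 output frames covering the same second.
print(retime_indices(60))
```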

1027
EOS Bodies / Re: Canon's Medium Format
« on: April 01, 2014, 04:50:11 PM »
@jrista

Yep, but one *large* advantage of a Foveon is that you don't need a Zeiss Otus to get your 36MP sensor served. "Normal" sharp lenses, even consumer ones, get 15MP without problems... so the whole Canon lens lineup, or at least most of the better L lenses, would be able to outperform the D800E.

Of course the layers constrict the light... until someone invents something new and proves the old wrong.

It's not just the layers or well depth that constrict the light. If you look at the Foveon design (and, for that matter, Canon's own layered sensor patents), they have a LOT more activation and readout wiring per pixel. It's really complicated stuff, which further restricts the actual light-sensitive photodiode area.

The whole "eqivalent megapixels" deal that Sigma uses is also very misleading. Currently, today, megapixel counts are based on output image widthxheight. A 15mp Sigma Foveon is 15mp, in terms of actual megapixels stored in the output JPED image or a JPEG that you can create from RAW. It may have 45 million photodiodes, but that is not the same as megapixels, and I really wish Sigma would stop being so misleading.

I like the Foveon sensor design, it has SO much potential. It's just in the wrong hands with Sigma...they can't seem to develop it and bring it to bear on the market in a form that would make it a truly viable competitor with higher MP bayer type sensors. I think there are some innovations that have been developed for video sensor technology that could greatly increase the transparency of the silicon that surrounds the layered photodiodes and improve Q.E., reduce noise, improve dynamic range, etc. I've been hoping that Canon was working with some of those technologies on their own layered sensor design.

I would also dispute the whole "need for high resolution lenses" argument. Output resolution, in spatial terms, is the convolution of both sensor and lens resolution...AND, a most important point here, is LIMITED by the weaker of the two. The Sigma DP2, for example, is a 4.7 megapixel camera!!! Spatially, that is VERY low resolution. It is not a 15mp camera. It has richer, more complete color information per pixel, however its luminance resolution is extremely low. Its pixel pitch is 7.85µm. Those are nice, big pixels, however because of the wiring requirements, the photodiode area is a lot smaller than 7.85µm (I don't know exactly off the top of my head...I would have to find the patents again...but I'd say at least a third of the area is lost, so maybe around 6.3µm, which is about the same as the 5D III.)
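A common rule-of-thumb model for that convolution, if you want rough numbers (an approximation I'm assuming for illustration, not an exact optics result: independent blur sources add roughly in quadrature):

```python
import math

def system_blur(lens_blur_um, pixel_pitch_um):
    """Combined blur of lens + sensor, treating each as an independent
    blur source added in quadrature; the system is always limited by
    the worse of the two contributors."""
    return math.hypot(lens_blur_um, pixel_pitch_um)

print(system_blur(4.0, 7.85))  # coarse pitch dominates a decent lens
print(system_blur(4.0, 4.3))   # finer pitch: lens and sensor comparable
```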

The biggest benefit for the Foveon is the lack of an AA filter. You still experience moire, but because full color information is gathered at each pixel, you only have monochrome moire. Mono moire in most "natural" cases in photography is often not that bad. The lack of an AA filter makes it SHARPER, but it does not really increase the resolution of the sensor. This is very obvious from VCD's comment:

Quote
Have you seen the new DP2?  They claim up to 39 MP...

I have, and I'm waiting for the reviews; I think I may get one if the results are good. 39 MP is realistic... of course you just have to let go of defining "pixels" purely by x/y resolution. The detail of a Foveon (@ low ISO) is outstanding, above a Nikon or even a Pentax 645D!

For example (picture from dpreview.com):


This was the first sensor which really caught my attention after buying my 5D back then. Everything else is just "evolution" here and there: half a stop more dynamic range, more resolution, ISO 25600. Hooray, you invented the holy grail. Nikon blah, Canon blah... no real leap anywhere. A 5D is still awesome and able to serve my needs. The Sigma is again something worth spending time with.

I think VCD has radically misinterpreted this comparison. The Sigma SD1 does not have anything even remotely close to the same resolution as the D800 or 645D. It isn't even a contest. The Sigma SD1 appears to be sharper...but that is only in a non-normalized comparison like this, and sharpness alone does not translate into more resolution. If one were to downsample the D800 and 645D images to the same dimensions as the SD1 image, they would likely TROUNCE the SD1. They have significantly more information in total, and while they may seem slightly soft at the pixel level, on a normalized basis, all that extra information gets interpolated into fewer, but much more accurate, sharper, richer and less noisy pixels.

Furthermore, the D800 and 645D both have more information to start with. They are resolving details that are not even present in the SD1 image at all, despite its sharpness. Even if those details aren't as crisp as the LESSER details of the SD1, it's still more detail. A light sharpening filter can deal with the softness in a few seconds, and then the SD1 is at a real disadvantage. You can sharpen the SD1 image in post to your heart's content...that will never create information that was never there to begin with, and since it's already sharp, you're probably doing yourself a disservice by sharpening SD1 images.

So arguing that the DP2, which itself is still just a 4.7mp camera (or even the SD1, which is a much higher resolution Foveon), is potentially equivalent to a 39mp camera, is gravely missing the point of having a truly higher resolution sensor (in luminance terms...luminance is where detail comes from; color CAN be of much lower spatial resolution so long as your luminance information is high...as a matter of fact, this is standard practice in astrophotography: you image at high resolution in luminance, then when you switch to RGB filters, you bin 2x2 or 3x3, which increases your sensitivity and reduces your resolution by 4x or 9x...and you're none the wiser when looking at the final blended result). It buys into the very misleading hype that Sigma spews, which I believe is ultimately, in the long term, going to damage their reputation and hurt Foveon (because as more people try to produce images with a 4.7mp or 15mp Foveon sensor that compare to even the regular old D800, let alone the D800E or the 645D, and realize they simply cannot...they are either going to ditch Foveon and go back to bayer type sensors, or they are going to begin badmouthing Foveon.)
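For anyone unfamiliar with binning, here's a minimal numpy sketch of the software version (hardware CCD binning happens on-chip, which is even better for noise, but the resolution trade-off is the same):

```python
import numpy as np

def bin2x2(frame):
    """Sum each 2x2 block of pixels: ~4x the signal per output pixel,
    in exchange for half the linear resolution (1/4 the pixel count)."""
    h, w = frame.shape[0] // 2 * 2, frame.shape[1] // 2 * 2  # trim to even
    f = frame[:h, :w]
    return f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2]
```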

1028
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: March 31, 2014, 02:45:55 AM »

At the time, I didn't care too much as I was really looking at high video frame rates. For slow motion video.
If you are going to shoot BIF, you need a high frame rate or the wings look goofy... I have shot geese at 30FPS and when you view it the wings just seem to jump around at random....

I'm more interested in 2K at 60FPS, or preferably 120FPS, than I am in 4K. The fastest frame rate I can get on my 60D is 60Hz, while my P&S can film the same resolution at 240Hz...

You don't need a high frame rate, you need a slower shutter. Stutter and jumping like that is caused by having a shutter speed that is too high relative to your frame rate. Cinematographers have been shooting wildlife documentaries for decades at 24fps, and wing stutter has never been a problem for the pros. Even with high end digital, you still see high quality documentaries like Planet Earth, Life, etc. shot at standard cinematic frame rates, because you can see the motion blur in the birds' wings.

Those same productions also used high frame rates to GREAT effect...for example, when they pan through landscapes, the slower frame rates tended to cause stutter when nearby landscape moved faster than distant landscape. This happened in Planet Earth. In the more recent Frozen Planet, they solved the landscape panning stutter by shooting at a higher frame rate, coupled with a faster transition rate. Slowed down, the higher frame rate produced EXCEPTIONALLY smooth landscape panning, while the same old standard 24fps produced beautiful wing motion blur for scenes with animal life. They also used very slow time lapse imaging in all of those series to produce the most incredible, smooth, and intriguing time lapse sequences I've ever seen.

You have to choose the right frame rate (and even camera handling technique) for the situation. Faster frame rates are not the solution to all cinematography problems. They actually present more problems than they solve at the moment (can anyone say: The Hobbit? *sigh*) If you are getting stutter, your shutter speed is faster than your frame rate by too great a degree. You should be lowering the shutter speed closer to the frame rate, allowing the shutter to be open longer in order to create that pleasing motion blur in the birds' wings. You don't necessarily want a 1/24th second shutter at 24fps, but neither do you want a 1/60th second shutter.

Also keep in mind that higher frame rates, without further camera handling or processing technique, NORMALLY result in slow motion video when played back at normal speed (your output frame rate is always going to be 24 or 30 frames per second. The high frame rates on TVs, like 60fps, 120fps, etc. are artificial, produced within the TV hardware, which does additional inter-frame processing to blend two separate source frames into four or six output frames.) Your video does not play back at 60fps if you recorded it at 60fps...it still plays back at 30fps. So don't think of 60fps as the solution to your problems...ultimately, shooting at 60 or 120 fps just means you're creating a slow motion sequence. This is especially true if you are interested in shooting at mixed frame rates and producing a video that contains normal rate and slow motion sequences...you have no choice but to use the same output frame rate...anything shot slower than that will speed up, anything shot faster than that will slow down.

I don't suspect 48fps or 60fps will become standard playback rates until Hollywood has made them mainstream. There are only a small handful of films shot at those rates (maybe only 48fps; I'm not sure anything at 60fps has actually been finished yet.) The Hobbit Blu-ray discs are processed to play back at 30fps, and I have to say, I love that, because I don't get motion sickness watching them at home. You can still see some of the other consequences of the higher shooting frame rate, but the funky motion and panning issues don't seem to be present, or at least not as bad, in the Blu-ray versions as they were in the theaters.

1029
EOS Bodies / Re: Canon's Medium Format
« on: March 31, 2014, 02:31:39 AM »
Maybe Canon's answer will be the "Canon Foveon". If they brought out a better and faster full-frame Foveon like Sigma's Merrill, a resolution of maybe 24MP would be comparable to maybe 60MP from a Bayer sensor. So they would beat the Nikon D800 easily, with the full power of the EOS lens lineup. The patents for this were filed a few months ago... I REALLY WOULD LOVE IT.

If I go out making pictures with my Sigma DP3 Merrill, and forget about all those disadvantages in speed, flexibility, or autofocus, I would say the Foveon is head and shoulders above anything else. A Nikon D800 is nothing against this small jewel. But I have to admit, my Canon 5D mostly gets way more use ;)

A small example from a "poor" Compact Sigma DP3M, imagine this with FullFrame:
http://tf.weimarnetz.de/downloads/SDIM0175.jpg

Remember, no one said "medium format" from Canon, just the quality and resolution of medium format ;)

It isn't really fair to say a 24mp Foveon is the same as a 60mp Bayer. The problem with that argument is that Bayer sensors have much higher luminance resolution than chrominance resolution. It might be that a 30mp-35mp Foveon is like a 55-60mp Bayer, it would really depend on the exact design specifics (of both sensors...many bayer sensors come without a low pass filter these days, or with very weak ones.)

Layered sensors have a harder time with high ISO as well: since all three colors are sensed at each pixel, the deeper layers get less light anyway. Throw in less light through the lens, and the problem with the green and red layers is exacerbated.


1030
EOS Bodies / Re: Canon's Medium Format
« on: March 29, 2014, 05:54:20 PM »
Making things larger doesn't just affect one thing, it affects everything, hence the exponentially higher cost of quality MFD systems.

I think the 'exponentially higher cost of quality MFD systems' is primarily an effect of something small, not something large - namely, market size.  The MF market size is minuscule compared to the dSLR market.  How minuscule?  Exact figures aren't available for MF.  But…in 2013, there were close to 14,000,000 dSLRs sold worldwide.  Stephen Shulz, head of Leica's photo division, estimated that the annual worldwide market, all brands, is just 6,000 MF cameras.  14 million vs. 6 thousand.

Anyone want to argue that a difference in production cost is the reason for the >$5K higher cost of 1D C compared to the 1D X?  An MF digital back probably doesn't cost all that much more than a 1-series body to produce, but if you're only going to sell ~1,000 units per year, you need a high price to realize a return on investment.

But which is the cause, and which is the effect? Do they only sell 6000 units a year because of the high cost, or is the cost high because they only sell 6000 units a year? I don't think there is necessarily enough data to determine that either way. Kind of a chicken and egg problem. I think we could only tell based on the sales of a much "cheaper" entrant to the MFD market. Not saying Canon will be that entrant...Sony might be...but until it occurs, I don't think anyone can say, definitively, which is the cause and which is the effect here.

And there is no question that the cost of an MF sensor is (in its own right) exponentially higher than a FF sensor, which is in turn quite a bit more expensive than an APS-C sensor, which in turn is more expensive than the small form factor sensors found in just about everything else these days. The radically lower yield isn't the only reason for the higher cost of MFD. It's part of the whole ball of wax, though. Larger sensors. Larger lenses. Bigger bodies. The interchangeable back option. Etc.

Now, most DSLRs cost on average around $1200 (maybe $800-$1500 for low end to lower midrange). A medium format camera that cost $15,000-$18,000 would still be exponentially more expensive. We still cannot say that the reason they cost $40,000 is because the market is small...the market could be small because they cost so much.

1031
EOS Bodies / Re: Canon's Medium Format
« on: March 29, 2014, 04:40:02 AM »
The costs of larger sensors are unlikely to come down significantly. When 450mm wafers become commonplace, that might help, but overall, the problem with larger sensors isn't just how many you can fit on a wafer. With the increased area comes an exponential increase in the chance that a devastating defect renders the entire sensor useless. With smaller sensors, you still lose the whole sensor, but you have so many more on the area of the wafer. With FF, one large defect still kills the whole sensor. With MF, same deal, only now you're losing something closer to a fifth of the wafer, rather than a 20th or 30th.
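The textbook way to put numbers on that is a Poisson die-yield model (the defect density below is a made-up figure purely for illustration, not foundry data):

```python
import math

D0 = 0.001  # assumed killer defects per mm^2 (illustrative only)
for name, area_mm2 in [("APS-C", 22.3 * 14.9),
                       ("Full frame", 36.0 * 24.0),
                       ("44x33 MF", 44.0 * 33.0)]:
    survival = math.exp(-D0 * area_mm2)  # Poisson yield: exp(-D0 * area)
    print(f"{name:10s} {survival:6.1%} of dies defect-free")
```

With the same defect density, roughly 72% of APS-C dies survive versus 42% of full frame and 23% of 44x33 dies, and that's before any of the other scaling costs.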

Etching a larger sensor also requires more advanced fabrication technology that can handle larger templates and etch the whole area of that template. Remember, fabrication of CMOS circuitry still uses lenses. They may work in the extreme UV range, but it's still light, and that light is still being bent, so it's still susceptible to aberrations and diffraction effects.

As Don said earlier, it's a global problem. Making things larger doesn't just affect one thing, it affects everything, hence the exponentially higher cost of quality MFD systems.

1032
EOS Bodies / Re: Canon's Medium Format
« on: March 25, 2014, 11:28:05 PM »
I wrote 0.005mm^2
the ^2 means square
1mm^2 = 1 mm² = 1 000 000 μm²

http://www.aqua-calc.com/what-is/area/square-millimeter

Ah, yes, I do understand what ^2 means. ;P I've been spitting out this kind of math on these forums for years now.
After all these years of spitting out math on these forums, one would think you understood the basics….

BTW, 0.005mm^2 is 5µm^2. Same thing, it's just a scale factor of 1000.
It seems to me you don’t understand the basics, so let me explain.

A square with sides of 1 millimeter has a surface area of 1mm * 1mm  = 1mm²
Agreed?
1mm = 1,000μm (no ^2 in this, that’s important)
 
A square with sides of 1,000μm (=1mm) has a surface area of…
1,000μm * 1,000μm = 1,000,000μm² (here we do have the ^2)

So 1mm² = 1,000,000μm² (a factor of a million, not a thousand due to the ^2)
Once you understand this basic concept you know that 0.005mm² = 5000μm² (and not 5μm²)

One side of a square of 5000μm² is equal to the square root of 5000μm², which is just over 70μm.
You reached that conclusion already yourself, in a very complex way, in your previous post, but failed to see the relation with the 0.005mm² surface area.

Oh, sorry, you are correct. I pretty much always work with just linear pixel pitch. I guess I implicitly dropped the square when running the math.

1033
EOS Bodies / Re: Canon's Medium Format
« on: March 25, 2014, 08:01:33 AM »
I wrote 0.005mm^2
the ^2 means square
1mm^2 = 1 mm² = 1 000 000 μm²

http://www.aqua-calc.com/what-is/area/square-millimeter

Ah, yes, I do understand what ^2 means. ;P I've been spitting out this kind of math on these forums for years now.

BTW, 0.005mm^2 is 5µm^2. Same thing, it's just a scale factor of 1000.

So, you said in your previous answer that this photographer's 8x10 sensor had five micron pixels. That is incorrect (assuming the sensor does indeed have only 10 megapixels). If the sensor had five micron pixels, that would mean the number of rows and columns of pixels is calculated by the width and height of the sensor, in millimeters, divided by 0.005mm. Based on my math in my prior post, 0.005mm pixels would mean the sensor had TWO GIGAPIXELS, or 40640x50800 pixels, a far cry from 10 megapixels.

Keep in mind, 0.005mm pixels are SMALLER than the 1D X's (which has 0.00695mm pixels) and the 5D III's (which has 0.00625mm pixels). Based on my math, this guy has a sensor with 0.072mm pixels, which is 72 microns...not 7.2, but 72. It's highly unlikely this guy's sensor has pixels that are smaller than the 1D X's, let alone the 5D III's. Hell, at 5µm, they would be a mere 0.1µm bigger than the D800's pixels! Imagine the D800 sensor scaled to 8x10...that's how many pixels this guy's sensor would have if he really had a 0.005mm/5µm pixel pitch.

They have to be 0.072mm/72µm pixels...it's the only size that fits a 10mp total pixel count. Those are VERY big pixels. I'd really love to have that kind of sensor for my astrophotography.



To demonstrate the error in your math another way: you took the squared area of the 8x10 sensor and divided it by the LINEAR megapixel count:

203.2mm*254mm / 10000000px = ~0.005mm^2/px

We can use a known quantity to check this math. The 7D, for example, has a 22.3x14.9mm sensor, with a full output image size of 5184x3456, which comes out to 17,915,904 pixels. We also know that the 7D has a very well known 4.3 micron pixel pitch, the same pixel size as ALL of Canon's 18mp APS-C sensors. If we run these numbers through your formula:

22.3mm*14.9mm / 17915904px = ~0.0000185mm^2/px

By your squared over linear formula, the 7D should have 0.0185 micron pixels, or 18.5 NANOmeter pixels!!! We know for sure that is not correct, as 18nm is smaller than the wavelengths of all infrared, visible, and ultraviolet light.

1034
EOS Bodies / Re: Canon's Medium Format
« on: March 25, 2014, 12:20:04 AM »
Actually, it probably has really huge pixels. There are Kodak astro CCDs that have 9µm and 24µm square pixels. If we figure that the pixel sizes for this 8x10 sensor are somewhere around there, the guy has ~640mp @ 9µm, and ~90mp @ 24µm. I figure, just from a space and processing standpoint, the pixels would have to be gargantuan. I think 24µm pixels sounds more reasonable, and I guess it's possible they were larger than that. So this guy is taking maybe 70-90 megapixel photos with a giant 8x10 sensor, with pixels that probably have about 12 times the sensitivity of the 1D X's. That would make full well capacity per pixel around 1.1Me- to 1.5Me-...WOW. Dynamic range on that sucker must be like, 150dB! :P

In the comments he says it takes photos of about 10 mpix. Yup, ten megapixels.

In that case, I think the guy got ripped off. :P That means the pixels are 20mm in size. That's just a waste of space and fabrication power.

He just needed something to replace his large format Polaroid. He shot 7 or 8 large format Polaroids before taking the "real" pictures; for him that's about $50,000 a year in Polaroids alone...

By the way, 8 by 10 inch is 203 by 254mm which is about 50,000mm^2
Divide that by 10mp (10,000,000 pixels) and you get 0.005mm^2 per pixel.

Anyway, Canon also made a large (202 x 205mm) CMOS sensor back in 2010 that can do 60fps
http://www.canon.com/news/2010/aug31e.html

The technology for large format digital sensors has been there for years, but there is no real market.

I think you've got your math wrong somewhere. If we convert the sensor size into millimeters, it is as you say 203.2x254mm. We can then figure out how many pixels per row, and how many rows, assuming a 5µm pixel:

203.2/0.005 = 40,640
254/0.005 = 50,800

That is over 40 THOUSAND pixels per row, and over 50 THOUSAND rows. That's a LOT of pixels! Multiply the rows by columns to get the actual megapixel count:

40,640 * 50,800 = 2,064,512,000

That would be TWO GIGAPIXELS. You said it was 10 MEGAPIXELS. There is no way in hell that guy has 5 micron pixels on his sensor. If he did, that would be kick ass. I actually made an error in my math in my last answer, and I wrote the wrong units anyway. I said the pixels were 20 millimeters; that was supposed to be 20 microns. However, correcting my math, it's 72 microns:

203.2/0.072 = ~2823
254/0.072 = ~3528

2823 * 3528 = 9,959,544

That's a little more reasonable. I still think he could have easily gotten away with pixels ~15x smaller (about 20 microns square) and had more than enough signal to noise ratio and dynamic range, and had about 130 megapixels instead of 10. ;) That wouldn't have required any special fabrication techniques or anything either, 20 micron pixels are monsters, and have more than enough room for very large, easy to fabricate wiring. I think the most difficult aspect of building a sensor that large is that you cannot fabricate it on a single wafer. You would have to fabricate a number of pieces of the sensor on multiple wafers, then assemble them together. There would certainly be additional cost there...but it isn't a new technique, it's been done before (Canon did it for that very same 202x205mm sensor you mentioned), however if you cut corners on readout rate (i.e. you went for one frame every 30 seconds, rather than 60 frames every one second), the task would be easier (Canon used a hyperparallel on-die readout and ADC system for that ultra large sensor.)
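To save anyone else the long division, here's the corrected linear math from this exchange wrapped up as a quick sanity check:

```python
def megapixels(width_mm, height_mm, pitch_um):
    """Pixel count from sensor dimensions and LINEAR pixel pitch:
    divide each linear dimension by the pitch, then multiply."""
    cols = width_mm * 1000.0 / pitch_um
    rows = height_mm * 1000.0 / pitch_um
    return cols * rows / 1e6

print(megapixels(203.2, 254.0, 72.0))  # ~10 MP   -- the 8x10 sensor
print(megapixels(203.2, 254.0, 5.0))   # ~2065 MP -- if the pixels were 5 microns
print(megapixels(22.3, 14.9, 4.3))     # ~18 MP   -- sanity check against the 7D
```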

1035
EOS Bodies / Re: Canon's Medium Format
« on: March 24, 2014, 04:09:25 PM »
Well, I guess I just am all wrong, huh?

You really need to grow a thicker skin, dude.

You're not sure why I'm saying you wouldn't need to match the FOV?  I thought I already explained it.  It's not complicated.  If you have a 100 MEGAPIXEL sensor, you don't need that many megapixels with a very long telephoto lens, in my opinion.

A 45mm wide sensor that has 100 Megapixels, would not need a 3000mm f/4 lens, to get adequate pixels on subject of things like distant birds, sports, or anything.

You're completely ignoring the fact that the larger medium format sensor has vastly more, as in 5 times more megapixels...THAT'S WHY you don't need to match field of view.

Like I said, it's a stupid argument, you're nitpicking, and it's lame.  There's no reason to pile on me just because you're bored.  If your point of view is correct, then all of the people who use a 5D3 or 1DX with a 600mm lens, are fools...because they could do better with a 7D or 70D.  But that's just not right...and frankly I'm not going to waste time arguing about it.

No, I fully understood your argument. I think your argument is fallacious. Why spend all the extra money...and we're not talking an extra few hundred bucks, we're talking an extra tens of thousands of dollars...on a BIG sensor, if all you care about using is the center region of pixels? It's a monstrous, utter waste of money.

You've basically made my argument for me...no one needs that big of a sensor if they are doing work that requires a telephoto lens and considerable reach. As for those using a 5D III or 1D X for telephoto work, they aren't fools, however they ARE spending a LOT more money to get the reach they need than someone who might be using a 70D + 100-400 or 150-600. The latter combo won't get you the same IQ, but it is vastly more cost effective, at around maybe $3500. The point is, it might cost you $17,000 to get the necessary lens quality and reach with 35mm format and still be able to take FULL advantage of the full frame sensor and larger pixels. If you can never take advantage of the full sensor, then yes, you wasted your money by buying a bigger camera setup. As much as the IQ on a 1D X trounces that of a 7D/70D when you fill the frame, if all you ever use is the center 1/4 of the frame, then the 7D/70D is always going to resolve more detail. It'll also always be a little noisier, but noise can be dealt with fairly well in post, and sometimes all that matters is detail.

However, that PALES in comparison to someone who spends $40,000 on an MFD body, and another...what, $35,000?...on a lens capable of similar reach that would still allow full use of a 55x44mm sensor. You don't buy a camera like that to use the center 1/4 of the sensor. It would just be an utter waste of money. You buy a camera like that to use the whole sensor; that's the entire point. So it's either spend $17,000 on a 1D X and 600/4 II, or spend $75,000 on an MFD and comparable lens (and a couple thousand more for a tripod and head capable of holding the ginormous rig, because you aren't going to be hand-holding it.)
