Messages - jrista

1501
Landscape / Re: Deep Sky Astrophotography
« on: April 02, 2014, 11:36:07 PM »
I reworked my Rosette Nebula image with a new software package called PixInsight. A VASTLY superior product for astrophotography, it gives me so much more control over everything, and allows me to tweak specific layers and scales of detail independently without messing with my color balance. This version of Rosette is much more color accurate than the nearly monochrome-red version that I posted before...and it has a bit more color contrast, so certain details should be easier to see than before:


1502
Landscape / Re: jrista et al, Why Astrophotography?
« on: April 02, 2014, 11:33:38 PM »
Sorry for the late reply...I did not see this thread till now.

Don pretty much summed up the core of it: I've ALWAYS been fascinated by the night sky, by the cosmos in general, ever since I was a very young kid (I think I got my first telescope for Christmas when I was 6.) Astrophotography lets you, personally, "see" deeper, and with more vibrance and brilliance, than you ever could visually. That's definitely a big part of the draw...an extension of that childhood fascination with the sky that needed an active outlet.

Visual observation, even with some fairly hefty equipment (e.g. a 12-14" Cassegrain-type OTA), is usually largely "gray"...color is very hard to discern until you get into the really gargantuan apertures. A lot of visual-only amateur astronomers build their own "Dobs", or Dobsonian-type truss-design telescopes, with apertures up to several feet. They are usually pretty basic in construction, have wood mounts and secondary mirror supports, use basic metal piping for the truss structure, and the builders grind their own mirrors (there are actually several mirror-grinding parties combined with star parties that occur a couple times a year in the US, where someone will actually teach you how to grind your own mirrors). I've heard of some personal-project Dobsonians being up to 60" of aperture, and there are even a couple here in Colorado that are around 40" of aperture. At those sizes, you can visually observe the universe in colorful glory, although it still isn't as detailed as what you can get with astrophotography.

When it comes to the repetition and "it's been done before" aspect, I think Soulless nailed it. This is nothing new in photography. We're all part of a VERY saturated population of people, and repetition is common with any kind of fixed-subject photography. With landscapes in particular, I've seen the same scenes, imaged pretty much identically, from dozens if not hundreds of photographers. Monument Park? Horseshoe Bend? Zion National Park? Etc. etc. Being "unique" in the realm of landscape photography is extremely difficult...you have to set yourself apart with technique and vision, rather than subject, because every subject has already been photographed countless times over the last century.

I've noticed that architectural photography, and increasingly "stairwell" photography, are also beginning to suffer from this problem. (I love photos of stairwells and the like...especially some of the subway escalators in some of those really cool new European....but I'm beginning to see the same escalators photographed pretty much the same way over and over now...) So, I don't think that is a "detractor" to be assigned only to astrophotography. Other forms of photography have the same issue. We're just part of a very saturated community...you have to move into the realm of action and maybe macro photography to get more unique images, but even then...once you've seen a couple dozen "fly eye" macro photos, you've effectively seen them all. Repetition: there isn't much getting away from it, and the only way to truly set yourself apart is with your technique and vision...don't just photograph a fly eye...photograph it with flair, do something unique with it, make it stand out and do it with the utmost precision and exquisite aesthetic...and you have yourself a wonderfully unique, interesting, likable photo that people will gravitate to. Despite the fact that it really ISN'T unique. ;)

Soulless covered the other reason I gravitate towards astrophotography: it's a great technical challenge! It's a challenge, period. Astrophotography is probably the most difficult form of photography, and certainly one of the most expensive if you really want to do it right. It's not just about pointing a camera, composing, and pressing the shutter. There is an extensive base of knowledge, about imagers, optics, telescopes, mounts, electronics, and a slew of software packages, that is necessary to start creating truly beautiful, detailed night-sky images. This holds true for pretty much anything that requires an equatorial tracking mount.

Creating the base sub frames that are ultimately integrated into a final image is meticulous, detailed, and very interactive work. When you really get into astrophotography, it isn't just about pointing at some DSO and telling your camera to take X number of frames at Y exposure time and Z ISO setting. For the best detail, color, contrast, and depth, you use a monochrome sensor with individual color filters. You image "clear" luminance, red, green, and blue for broad-band color channels. You can also image in narrow-band color, filtering out all but one emission line at a time for nebulae, such as Hydrogen-Alpha, Sulfur-II, and Oxygen-III. If you really want to go all out, you also image in infrared, as IR produces more "translucent" images that allow distant background objects, such as galaxies, that are normally obscured by nebulae or foreground Milky Way dust lanes, to be seen. For EACH of these individual color channels, you have to create multiple "sub frames", so you might expose 20x1200s Lum, 15x600s Red, 15x600s Green, 15x600s Blue, 20x1200s Ha, 12x1200s SII, 12x1200s OIII, and 20x1200s IR. That's a total of over 35 SOLID hours of exposure time. That does not include any of the additional time before you start imaging for setup, polar alignment, drift alignment, inter-exposure dithering and cooldown times, etc. Some subjects might require fewer exposures, some require considerably more...it depends on exactly how dim they are. Some of the world's top astrophotographers have put 60-80 hours of exposure time into ONE single region of the sky. And that is just getting the initial light frames themselves! There is still more work to integrate them into a full-color image.
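Just to make the arithmetic above concrete, here is a quick back-of-the-envelope tally of that example exposure plan (a minimal Python sketch; the sub-frame counts and durations are simply the example numbers from the paragraph above):

Code:
# Rough integration-time tally for the example exposure plan above.
# Each entry is (channel, number of sub-frames, exposure length in seconds).
plan = [
    ("Lum", 20, 1200), ("Red", 15, 600), ("Green", 15, 600), ("Blue", 15, 600),
    ("Ha", 20, 1200), ("SII", 12, 1200), ("OIII", 12, 1200), ("IR", 20, 1200),
]

total_s = sum(count * length for _, count, length in plan)
print(f"Total integration: {total_s} s = {total_s / 3600:.1f} hours")
# -> Total integration: 127800 s = 35.5 hours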

Astrophotography is a highly technical, very meticulous, and very detailed form of art. You aren't just pointing a camera, framing, focusing, and opening the shutter. Astrophotography is more like painting than photography...you have to have your final goal entirely planned out in your head ahead of time, you have to prep, you have to be meticulous about each and every color.

You mention that astrophotography is just "visual impression being more or less filter-effect digital artwork." That is the farthest thing from the truth. Good astrophotography does not apply a bunch of filter effects in post to create some hyper-saturated image full of colors. That's cheap, it's a cop-out, and it isn't astrophotography. A properly done astronomical photograph won't have any effect filters applied at all. Everything you see is real. Most astro images are done in visible light, so most of it is what these deep space objects would look like to the naked eye. In many cases, saturation is a choice left up to the one doing the processing, and a lot of astro images are generally oversaturated, but very rarely is it "effect-filter fakery". Many astro images these days are what we call "narrow-band mapped color", where imaging was done only in Ha, SII, and OIII. Those three narrow bands of light are then mapped to red, green, and blue to produce the kind of images you normally think of as Hubble images, or "false color" images (as while these narrow bands of color do exist in the overall spectrum coming from deep sky objects, they are too narrow to be represented accurately with just R, G, and B channels in an image). There is even a form of NB mapping called "Hubble Mapped Color", but one need not use the exact same blending method as Hubble. Some imagers use Ha for red and SII for green, some use SII for red and Ha for green. Some will perform a more complex blend that uses various mixes of Ha, SII, and OIII for the red, green, and blue channels to create more unique results. In general, narrow-band images produce much higher contrast, especially between dark dusty nebulae and brighter emission and reflection nebulae, whereas visible-light images are less contrasty, but often a bit more vibrant. Finally, the most advanced imagers will often blend all seven of these different color layers together to produce some rather wild results. Some, as I noted before, will even bring in an IR layer to add a whole new measure of depth and transparency to the visible and/or narrow-band base image.
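To make the "mapped color" idea concrete, here is a minimal sketch in Python with NumPy (not any particular astro package) of the SHO-style "Hubble palette" mapping, where the SII, Ha, and OIII stacks are simply assigned to the red, green, and blue channels; the arrays here are hypothetical stand-ins for stacked, stretched narrow-band masters:

Code:
import numpy as np

# Hypothetical stacked narrow-band masters, each a 2D float array in [0, 1].
ha   = np.random.rand(512, 512)   # Hydrogen-alpha
sii  = np.random.rand(512, 512)   # Sulfur-II
oiii = np.random.rand(512, 512)   # Oxygen-III

# Classic "Hubble palette" (SHO) mapping: SII -> R, Ha -> G, OIII -> B.
sho = np.dstack([sii, ha, oiii])

# An example of a more complex blend: mix the narrow-band channels into each
# output channel (the weights are arbitrary illustration values).
blend = np.dstack([
    0.7 * sii + 0.3 * ha,     # red
    0.8 * ha + 0.2 * oiii,    # green
    oiii,                     # blue
])
print(sho.shape, blend.shape)  # (512, 512, 3) for both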

The incredible colors you see in astro images are rarely ever from effect filters. It's all detail and color that's there in the objects themselves, and different techniques to blend various color layers together bring out different colors and aspects of detail. None of that detail is fabricated...it is EXTRACTED. You might be surprised to find out that most astro images, after calibration and stacking, usually appear almost pitch black. For all the dozens of hours you may spend exposing, all that exposure time does is produce images where all the color is packed DEEPLY into the utter depths of the lowest levels of your image. A properly calibrated and integrated stack has at least 20 stops of dynamic range, and when you stack enough, you can end up with more than that. It is becoming pretty common these days, with the more advanced tools at our disposal, to save our integrations as 64-bit IEEE floating point FITS images. A 32-bit IEEE floating point TIFF can store well more than 24 stops of dynamic range...a 64-bit floating point image is, for all intents and purposes, capable of storing an effectively unlimited amount of dynamic range (more than capable of storing enough DR that, if one figured out how, they could represent a dim, distant galaxy about to be occluded near the edge of the sun, while concurrently storing enough information to resolve details on the surface of the sun itself). The vast bulk of astrophotography post processing is geared towards "stretching" those really deep shadows to lift all the detail up into a level range that is visible to the human eye. The rest of astrophotography processing is geared towards reducing noise (because when you lift an image by 20 stops, even if you stack dozens of frames to reduce noise and improve SNR, you STILL have lots of noise), and towards enhancing the detail that exists within the stretched image. You would be surprised at how often very fine structure that you actually capture in your images appears at first to be flat detail...it takes some careful, meticulous, and often highly mathematical processing to separate the various levels of that detail to make it visible...but not separate them so much that the results look over-processed in the end.
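As an illustration of that "stretching" step (a toy sketch only, not PixInsight's actual processes), here is the standard midtone transfer function often used for nonlinear stretches, applied with NumPy; the synthetic "linear stack" and the chosen midtone value are just example assumptions:

Code:
import numpy as np

def mtf(x, m):
    # Midtone transfer function: keeps 0 and 1 fixed, maps the midtone value m
    # to 0.5, and strongly lifts everything packed into the deep shadows.
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# Hypothetical linear stack: nearly all of the signal lives in the bottom ~2%.
linear = np.random.rand(1024, 1024) * 0.02

stretched = mtf(linear, m=0.005)  # example midtone; in practice chosen per image
print(f"mean before: {linear.mean():.4f}, mean after: {stretched.mean():.4f}")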

So, is there a massive imbalanced trade-off in "effort vs. results" when it comes to astrophotography? It depends on how you look at it. Is there a massive imbalanced trade-off in "effort vs. results" when it comes to oil painting? Sculpting? How about architecture? All of these endeavors, which are undoubtedly great forms of art, require a far more considerable investment up front, and throughout the entire process, in order to produce one single artistic creation in the end. Astrophotography is also, without question, a form of art. As much as it is called photography, I think it may be more appropriate to compare it to painting than to photography, as when you get down to the foresight and vision, the preparation, and the very manual process of stretching and detail extraction, astrophotography feels more like painting to me. Just like painting, you often have to spend hours focusing on one small area of your image, figuring out the various algorithms that will enhance that detail in just the right way. And, similarly, the kind of satisfaction you get in the end, after putting all that effort, all that dedicated, meticulous care and attention into your artistic creation...it's wonderfully satisfying.



The only real drawback with astrophotography is the cost. People balk at the $6800 price tag of a 1D X, or the $12,000 price tag of an EF 600mm f/4 L II lens. When you get right down to it, to do astrophotography well, $6800 is downright cheap! For me, my ultimate goal is to be able to produce images that approach the kind of quality you might see from Robert Gendler or Russell Croman. To achieve that level of imagery, you not only need skill, but you need the right equipment. We're talking $20,000-$40,000 mounts, $40,000 telescopes, $30,000 thermoelectrically cooled scientific-grade monochrome CCD image sensors, and robotic equipment like filter wheels, image rotators, and focusers (each of which can cost thousands of dollars). We're talking about $115,000 in equipment, and we're still not done. This kind of equipment isn't portable: the mount weighs a few hundred pounds, the telescope (such as a 20" RCOS or PlaneWave) weighs a good hundred pounds or so, and all the other accessories pile on another couple dozen pounds. You need a permanent observatory, complete with remote operation capabilities, power, internet, etc., built under permanently dark skies, in order to use this kind of equipment. That's probably another $35,000 to $50,000. Throw in another grand or so in software, for good measure.

Without this kind of equipment, some of what you've said, Larry, about astrophotography just being repetition is, in large part, kind of true. With the kind of equipment and setup above, you have the ability to image narrow regions of the sky very deeply, very precisely, and so long as it's all set up out under consistently, persistently dark skies away from light pollution, you can use it every time the sky is clear, from the comfort of your own home. The ability to image very narrow regions of the sky very deeply means you can, if you wish, find regions of the sky that are often only a few pixels across in "most" astrophotography, and image them in extreme detail. A 20" RCOS or PlaneWave telescope is usually going to be around 3500-4000mm in focal length, and you can throw on a 2x barlow to make that 7000-8000mm. You can also use focal reducers to get a wider field (say 2700mm), image at a lesser magnification, and even do mosaic imaging to expose gigantic regions of the sky in exceptional detail.

Most of the astrophotography you'll see on the internet is usually what we call "wide field", where large multi-arcminute or even multi-degree swaths of sky are imaged all at once with a short focal length...200mm, 350mm, 400mm, 600mm. At these levels, the large-scale structures are easily recognizable, their locations in the sky are well known, and you don't need as much total integration time to get decent results. And the brighter the structures are, such as the Orion Nebula or the Andromeda Galaxy, the more frequently they will be imaged by novice and moderately skilled amateur astrophotographers.
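To put those focal lengths in perspective, the usual plate-scale formula (arcseconds per pixel ≈ 206.265 × pixel size in µm ÷ focal length in mm) shows how much sky each pixel covers; the 9 µm pixel size below is just an assumed example value:

Code:
def plate_scale(pixel_um, focal_mm):
    # Approximate image scale in arcseconds per pixel.
    return 206.265 * pixel_um / focal_mm

for fl_mm in (400, 600, 2700, 4000, 8000):
    print(f"{fl_mm:>5} mm: {plate_scale(9.0, fl_mm):.2f} arcsec/pixel")
# Wide-field focal lengths cover several arcseconds per pixel; at 4000-8000mm
# the same object spans many times more pixels.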

So there is a certain amount of "repetition" when it comes to astrophotography, largely due to the barrier to entry created by the excessively high cost of the high-quality, precise equipment that allows astrophotographers to pursue more unique targets. I doubt I'll ever be spending a hundred grand on astrophotography equipment, at least not all at once; however, over the next few years, I don't think it's out of the realm of possibility to spend $35,000 to $50,000 on better equipment. I doubt I'll ever be able to afford a 20" RC Optical Systems (RCOS) Ritchey-Chrétien ion-milled telescope. I also doubt I'd ever be able to afford a 20" PlaneWave CDK, which is a bit cheaper, a different design, and just as high quality...as even that still costs over twenty grand just for the OTA. To get serious at all, though, you have to purchase a mount that is capable of high-precision, absolutely encoded, precision-modeled, permanent tracking. Such mounts are expensive, around $20,000. Once you have a mount like that, however, you're pretty much free to put any kind of OTA you want on it, and you can slowly upgrade to better and better OTAs over the years. By the time I retire, I might finally have a 20" PlaneWave CDK with a nice FLI ProLine 37x37mm 4096x4096 cooled CCD imager sitting on a nice 10Micron 2000HPS mount, and be capable of creating some of those unique images of narrow regions of the sky that most people just think of as "that little group of 50 pixels over there" in their images.

1503
Photography Technique / Re: 1D X - 12 FPS or 14 FPS?
« on: April 02, 2014, 07:41:48 PM »
I'm not sure what you mean about "you need at least 1/1000s for 14fps". Technically speaking, ignoring shutter lag and such, 14fps only requires 1/14th of a second per frame...with the overhead, maybe, what, 1/20th of a second? The 1D X X-sync is 1/250th of a second. I'm sure I'm missing something about the 1D X 14fps mode (I've not used it myself) that you are referring to, but theoretically I don't see why flash at 14fps would be impossible.

Canon states that 1/1000 s or faster shutter speed is a requirement for achieving 14 fps (p.111 of the FWv1 manual, since that's the one on my iPhone).

Hmm, bummer. Wonder why they impose such a limitation. There can't be that much overhead for 14fps (especially since the mirror is locked up).

1504
EOS Bodies / Re: Canon's Medium Format
« on: April 02, 2014, 07:09:11 PM »
...or the blue dye in that girl's hair.

Does it have to be a girl?   :P


EOS 5D Mark II, EF 70-200mm f/2.8L IS II USM @ 70mm, 1/400 s, f/2.8, ISO 100
Taken on Shamian Island in Guangzhou, China.

Clearly not! :P That is some badass blue hair, too! Straight out of an anime into real life kind of blue hair.

1505
Photography Technique / Re: 1D X - 12 FPS or 14 FPS?
« on: April 02, 2014, 07:06:58 PM »
I would also see it being used for automated bird photography setups. Such as hummingbird flash photography...

Flash photography at 14 fps?   ???  How does that work?  Are there flashes that recycle that fast?  Also, you can't get 14 fps at max Xsync, you need at least 1/1000 s for 14 fps.

I'm not sure what you mean about "you need at least 1/1000s for 14fps". Technically speaking, ignoring shutter lag and such, 14fps only requires 1/14th of a second per frame...with the overhead, maybe, what, 1/20th of a second? The 1D X X-sync is 1/250th of a second. I'm sure I'm missing something about the 1D X 14fps mode (I've not used it myself) that you are referring to, but theoretically I don't see why flash at 14fps would be impossible.

Halfrack already covered high-speed flash units. Flash hummingbird photography is nothing new, people have been doing that for years, including with 1D IIIs and 1D IVs at 10fps. A lot of people put a lot of money into it as well...AC power adapters for their flash units and the like so recharge times are practically instantaneous (or they use the real high-end, high-speed Eneloop rechargeable batteries...although I'm not sure if that would actually support 12fps, let alone 14fps...maybe 10fps). The flashes are usually relatively close to the bird and manually set, so they are usually on pretty low power, and low-power flash pulses are generally very short, with durations of 1/10,000th to 1/25,000th of a second. The short, low-power pulses also help recharge rates. I've even managed 8fps for the better part of a couple seconds with my 7D and 430EX at low power and the good Eneloop batteries (although after each burst, you usually have to wait a while for the charge to recover before you can do that again. :P)
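For a rough feel for the timing involved, here is a tiny sketch comparing the frame interval at a given burst rate with an assumed flash recycle time (the recycle times are made-up illustration values, not specs for any particular flash unit):

Code:
def keeps_up(fps, recycle_s):
    # True if an (assumed) flash recycle time fits within one frame interval.
    return recycle_s <= 1.0 / fps

for fps in (10, 12, 14):
    for recycle_ms in (50, 70, 100):   # hypothetical low-power recycle times
        ok = keeps_up(fps, recycle_ms / 1000.0)
        print(f"{fps} fps with {recycle_ms} ms recycle: {'OK' if ok else 'too slow'}")
# At 14 fps the frame interval is only ~71 ms, so the recycle time has to be
# shorter than that for the flash to fire on every frame.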

1506
Photography Technique / Re: 1D X - 12 FPS or 14 FPS?
« on: April 02, 2014, 06:07:25 PM »
I read an article a while back...actually, a good while back now, shortly after the 1D X was released. It was written by a pro sports photographer who had purchased two 1D X's. His approach was to use the 1D X on a fixed tripod at one end of the field, pointed in a certain direction, with mirror lockup and 14fps, on a moderately long lens for certain kinds of action shots. He used his other 1D X at 12fps, with a variety of telephoto lenses like the 300/2.8, 400/2.8, and 200-400 TC.

Both of his cameras were connected to laptops via ethernet, and he had some kind of automated/remote control for the fixed 1DX + long lens setup.

It seems that is generally how one would use the 14fps setting: when it's automated and fixed.

I would also see it being used for automated bird photography setups, such as hummingbird flash photography, where you put together a carefully constructed bait and flower setup, surrounded by your various flashes and the 1D X along with an automatic trigger (which keys off the presence and motion of the hummingbird, for example). Such setups are pretty common; people usually use the 7D or 5D III, but a 1D X with mirror lockup and 14fps would be even better for capturing that perfect wing position (which is really hard to do with hummingbirds.)

You can use similar techniques for capturing larger birds in flight, songbirds in flight, etc. Alan Murphy, a renowned bird photographer from Texas, has published a couple of eBooks that cover a lot of these techniques. He uses Nikon, but the techniques could easily be adapted to a 1D X @ 14fps for even better results.

1507
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: April 02, 2014, 06:01:10 PM »
That said, what software do you recommend for editing movies? I have Pinnacle and it is terrible!

Go with Adobe Premier...

I totally agree here. Adobe Premiere. It's become a VERY high-quality product, very capable. It'll completely change your outlook on personal, small-scale cinematography...it has so many tools to help you solve common problems, such as camera shake from hand-holding, panning unevenness, zoom smoothing, etc. It's an amazing tool. If you are serious about getting into video with your DSLR, I'd even go so far as to say it's worth the $20/mo Creative Cloud fee to get (or, if you use multiple Adobe apps, the $50/mo full CC fee.)

1508
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: April 02, 2014, 05:58:42 PM »
Of course, these are lower end DSLRs...they aren't exactly designed for video, it's more of an afterthought, and they all still have rolling shutters and the like as well.

I'm not trying to hurt anyone's feelings, but..

That's a poor man's excuse for not knowing or not being willing to learn how to use the current equipment he/she has... Even with an Arri... a poor man would say the same thing.

Please look at video examples posted in the video thread, especially with something like the t2i, t3i, 60D...
For example: http://www.canonrumors.com/forum/index.php?topic=19358.0

There are a few 60D and t2i video clips on the Video & Movie Section of the Forum, and they look quite professional. Please use the search option to find those videos. You can also find videos from t3i and t2i on YouTube.

Here is an example of a Music Video shot with a t2i:
Wisam Benhachem - Perfect Girl (Canon T2i Music Video)

And, there are hundreds of examples, just like that...

Again, I'm not trying to make anyone feel bad...
So what exactly are we doing wrong?

Aside from the in camera processing bit... Don, you are obviously recording at 1080/30p with a shutter of 1/30th of a second. What lens are you using?

What do you do your editing on PC/MAC?
How come you can't choose HD on Vimeo? What resolution are you uploading it onto Vimeo?

I'm not sure about you, but I see quite a few short stutters in the video you embedded. From an IQ standpoint, it's great...but there are definitely artifacts that result from the lesser video capabilities of the DSLR used, and those are what I was referring to. It is very professional from a cinematic and photographic standpoint, for sure...but that does not eliminate the issues inherent in current low-end and midrange DSLR video technology.

I've tried recording birds and wildlife. I have a whole bunch of clips in a big fat folder on my hard drive. I've messed with them quite a bit, run them through Premiere, and composited a variety of short little videos. I'm just not satisfied with any of them, though, because of rolling shutter issues in particular, and stutter in a few cases (usually panning). I'm no professional cinematographer by any means, for sure, I just have a critical eye. The fact that someone was able to make a music video with a T2i is pretty amazing...but to truly call it professional, personally, I'd expect the stutter and rolling shutter artifacts, as light as they are, to be gone. Especially if I am paying someone to film the thing for me.

1509
EOS Bodies / Re: Canon's Medium Format
« on: April 02, 2014, 05:51:41 PM »
If you convert a bayer sensor's data to monochrome, you effectively have just the full detail luminance.

If you can just expand on that a little Jon. When you say 'you' are you referring to the manufacturers setting it up this way ( like the Leica monochrome), or the user converting the RAW to B&W ?

I mean you as in the "you" who is reading my words. ;)

You can use astrophotography editors to read RAW images directly. If you use something like LR or ACR, converting to grayscale happens post-demosaicing, so you really wouldn't gain the same benefit. With something like Iris, you can simply read out a RAW image as monochrome data. You might get slight artifacting this way...silicon really is not very sensitive to blue at all, so depending on the exact camera you are using, the blue pixels might end up a bit darker. I recently purchased a tool called PixInsight, an astrophotography processing tool (exceptionally powerful). PixInsight has something called PixelMath, which allows you to run just about any algorithm you can imagine on your images. If you have a blue-darkening problem when converting a RAW image directly to luminance, you could easily apply some pixel math to reweight the luminance information, stealing a little bit from green and adding it to blue. Or you could artificially apply some digital amplification to just the blue pixels, which would make them a little noisier, but normalize the brightness.
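As a rough illustration of that kind of reweighting (a generic NumPy sketch, not actual PixelMath syntax), assuming the CFA data has already been read out as separate per-channel planes:

Code:
import numpy as np

# Hypothetical per-channel planes extracted from a RAW mosaic, floats in [0, 1].
r = np.random.rand(1000, 1500)
g = np.random.rand(1000, 1500)
b = np.random.rand(1000, 1500) * 0.7   # pretend blue came out darker

# Option 1: shift a little weight from green to blue when building luminance
# (the weights are arbitrary example values, not a standard).
lum = 0.30 * r + 0.55 * g + 0.15 * b

# Option 2: digitally amplify just the blue plane to normalize its brightness;
# this also amplifies blue-channel noise.
b_boosted = np.clip(b * (g.mean() / b.mean()), 0.0, 1.0)
print(lum.shape, round(float(b_boosted.mean()), 3))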

Regardless of how you correct any blue deficiencies (which, BTW, would also be present in a Foveon sensor, as silicon is silicon), Bayer sensors generally gather roughly the same average amount of light at every sensor pixel. Absent any color, that is your full-resolution DETAIL...and it really doesn't need any interpolation; all it might need is some massaging to normalize luminance levels in post. Blue is just a noisy channel because of lower natural sensitivity levels...we've all been living with that fact ever since we started using digital cameras. Everyone knows about it from the noise in their blue skies or the blue paint on that car or the blue dye in that girl's hair.

1510
EOS Bodies / Re: Canon's Medium Format
« on: April 02, 2014, 05:38:58 PM »
@jrista

Yepp, you mixed some things up. The first Foveons were 5MP on three layers, which (could) be summed up to 15MP. The next generation was the Merrill, where about 15MP on 3 layers can be counted as 45MP. The new Quattro design is again 3-layered, but just the blue layer gets 19MP, while the other 2 are only about 5MP each. Now happy counting ;)

In the end, the results are crucial.

Actually, the results aren't all that crucial. You don't have a 19mp sensor just because the blues are higher resolution. You get something around the average of the spatial resolutions of all three colors. Red has the lowest weight; green actually has the highest weight because it is where the bulk of light entering a camera usually comes from. Blue has the second highest weight. You can increase luminance detail in blue, but since blue is inherently a lesser component of visible light, and since our eyes are less sensitive to blues, green dominates. The bulk of the luminance detail is going to come from green, and since that is a lower resolution than the blues, you don't have a 20mp sensor. If you just take the average of the three layers, you have roughly 9.8mp. You might have somewhere between 10-15mp, depending on exactly how the Foveon color information is processed and, for lack of a better word, interpolated, to produce a final image. Either way, you still aren't getting any more spatial resolution than the SD1 had years ago, and honestly I'd prefer the SD1 design over the Quattro design (because at least with the SD1, your spatial resolution was exact, not some blend of higher and lower frequency pixel spacing.)

Sigma is still being very misleading by saying that you get 39mp. They are working some quirky imaginary mathematical magic as well, because if you just add up the resolutions of each layer, you get 19.6+4.9+4.9, which is 29.4mp. How they get to 39mp is beyond me, but I suspect they are using some arbitrary means of measuring an upscaled image in relation to bayer images, like they have done in the past. The simple fact of the matter is, upscaling and bayer interpolation (especially with AHDD) are NOT the same thing, and do NOT produce the same results. Sigma is probably comparing images demosaiced with your standard 2x2 intersection-based demosaicing to upscaled Foveon images, which intentionally puts bayer at a significant disadvantage and ignores the most common and effective means of demosaicing.
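Spelling that arithmetic out (layer resolutions as quoted above):

Code:
layers_mp = [19.6, 4.9, 4.9]           # Quattro top, middle, and bottom layers
print(sum(layers_mp))                   # 29.4 million photodiodes in total
print(sum(layers_mp) / len(layers_mp))  # ~9.8 if you simply average the layers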

Quote
It may have 45 million photodiodes, but that is not the same as megapixels, and I really wish Sigma would stop being so misleading.

This is of course confusing, but it's not a lie, because... let's define a pixel. You refer to a pixel as what comes out of the cam in the picture. The pixels on the sensor are something different... you could also count each layer as a single pixel, because it has its own wired output and the information is encapsulated within this *single* light trap. Remember the Nikon D2X (or was it the D1x?), where the pixels were half-sized, so what do you count? ;) It's some kind of definition. The Sigma people have the same "problem" Intel had 10 years ago... recognizing that megahertz has nothing to do with speed, but the people don't know this. So you have to catch them with numbers they understand.

A pixel is a spatial measure, two-dimensional, not three-dimensional. You can define pixels in many ways, but as far as bayer is concerned, it's all the same. You can measure the individual r, g, and b pixels in a sensor. Assuming you ignore the masked pixels, you will usually get one extra row and column at the edges of the RAW image data as compared to the interpolated image. So, if you have a camera with 5184x3456 (i.e. 1D X) pixels, that is the EXACT pixel count as far as exported TIFF or JPEG images go. The actual RAW pixel count, ignoring the masked border pixels, would be 5186x3458, as you need that extra set of rows and columns on the outer edge in order to perform interpolation. The actual true RAW pixel dimensions are greater, around 5212x3466 when you do include the masked border pixels (which are used for sensor black and white point calibration).

Regardless of how you slice it, a "pixel" in bayer is a direct unit of two-dimensional SPATIAL measure. A "pixel" in Foveon, the way Sigma defines it, is a three-dimensional measure of both spatial detail and color depth. If you want to compare Foveon to Bayer, you have to remove that third color-depth dimension, otherwise you are comparing apples to oranges. Spatially, Foveon sensors have, historically, been significantly lower resolution than bayer sensors. This is no myth, no trickery, and there isn't even any anti-Foveon bias here. As I've said, I love the Foveon concept, I just think that Foveon in the hands of Sigma is in the wrong hands, and I think the way Sigma markets Foveon is so misleading that it ramps up prospective buyers' hopes to levels that simply cannot be met. (Either that, or you get gullible saps who buy so fully into Sigma's misleading concept that they are missing the forest for the trees, and therefore missing out on the kind of raw, unmitigated resolving power you can get with some current bayer sensors...which actually includes both the 5D III and D800, probably also the 6D, and for sure all current medium format sensors on the market without question.)

Quote
Furthermore, the D800 and 645D both have more information to start with. They are resolving details that are not even present in the SD1 image at all, despite its sharpness

No, they DON'T, that's what the image should have told you. I could resize the Sigma picture 4 times and have more resolution, but not more information.

You're conflating two separate concepts. Resolution is an overloaded word, and some of its "overloads" are invalid. I try to be very specific when I use words like resolution. When I say resolution in this context, I try to always make it very clear that I am talking about resolving power and spatial resolution. These terms refer to very well-understood concepts in the world of imaging, and describe a very specific process whereby something with a given area is divided into certain discrete elements...such as a real image projected onto a sensor by a lens being "resolved" by each pixel.

What you are referring to is one of the invalid uses of resolution, which refers to image dimensions. Simply upscaling an image does not give you more resolution...it gives you more pixels, but your resolution has not actually increased. By upscaling, you enlarge everything, including the smallest discernible element of detail, such that those smallest elements are also larger. That is not increasing resolution...it is simply increasing the total number of pixels and enlarging your images dimensionally. I rarely ever use the word "resolution" to refer to changes in image dimensions. I usually use the term "image dimensions", or refer to concepts like upscaling or downsampling, to refer to changes in image dimensions.

The resolution I am talking about is not the "resolution" you are talking about. Upscaling an image does not give you more resolution...it simply gives you more pixels, and changes the ratio of pixels to detail. Luminance detail, I might add...when you upscale a Foveon image, you aren't just blurring chrominance information (as is the case with bayer interpolation)...you are ALSO blurring luminance information (which is NOT the case with bayer interpolation...you keep your full luminance information at each pixel.)

So you are correct about not having more information after upscaling. ;)

Quote
A light sharpening filter can deal with the softness in a few seconds, and then the SD1 is at a real disadvantage.

Please try and prove me wrong, the RAW data is available for download @dpreview.com  ;)

By the way, the size of the photodiodes is of course really important, especially in low light, but the technology solves some of the problems. On paper no one could beat my old 5D with its ca. 8.2 micron pixels, but in reality your 1DX would run circles around it  8)

Your argument is a classic fallacy...to claim that technological improvements will only benefit one type of technology. Technological improvements can indeed help Foveon, but at the same time, MASSIVE strides have been and will continue to be made for bayer type sensors as well. Foveon isn't going to be gaining technological advancements in leaps and bounds and suddenly end up well ahead of bayer...it just isn't going to happen.

In this case, the reason the 1D X would run circles around the 5D does not actually have anything to do with pixel size. The 5DC is actually still an excellent performer. I know a few wedding photographers who LOVE their 5DCs, they still produce wonderful images. Technologically, they have high read noise (actually quite high), so the images from a 5DC cannot be pushed around like those from a 1D X or even a 5D III or 6D. The CDS technology used in the 5DC isn't as good as it is today. The individual color filters in the bayer CFA are stronger in the 5DC, which improves native color fidelity, but reduces total sensor Q.E.

So yes, technology does solve some problems. If the Foveon were in the hands of Canon or Sony, I believe it could rapidly become a major contender in the sensor market. I do not believe it would ever offer as much spatial resolution (i.e. true resolving power) as any bayer...as Foveon improves, so too will bayer sensors, and bayer will always have the lead in terms of spatial resolution, assuming your aim is to keep Foveon noise levels as low as bayer levels. Spatially, Foveon could compete directly with bayer if you simply ignored noise levels; however, because the red layer is at the bottom, despite silicon's greater transparency to red, you're still losing a lot of light by the time the red photodiode senses anything. A spatially equivalent Foveon is going to be a very noisy sensor.

I think the only way you're going to get a true "full color fidelity per pixel" sensor that is actually better than bayer would be if something like TriCCD came along again. Three separate sensors with single-color filters on them, which receive light from a special prism where each sensor gets a FULL complement of light of its given color. You then have full sensitivity, full spatial resolution, in three (or, as should be possible, more) full colors. You would then simply need to convert each RAW color layer into R, G, and B pixels in an output image, no interpolation required (like Foveon, but without the sensitivity and noise issues.) Such a system would be rather bulky, but I do think it would be ideal for those who want everything to be the absolute best. Foveon is just another compromise...spatial resolution for color fidelity, just like bayer is a compromise: color fidelity for spatial resolution.

1511
EOS Bodies / Re: Canon's Medium Format
« on: April 02, 2014, 04:55:03 PM »
So arguing that the DP2, which itself is still just a 4.7mp camera (or even the SD1, which is a much higher resolution Foveon), is potentially equivalent to a 39mp camera, is gravely missing the point of having a truly higher resolution sensor (in luminance terms...luminance is where detail comes from; color CAN be of much lower spatial resolution so long as your luminance information is high...as a matter of fact, this is actually a standard practice in astrophotography: image at high resolution in luminance, then when you switch to RGB filters, bin 2x2 or 3x3, which increases your sensitivity and reduces your resolution by 4x or 9x...and you're never the wiser when looking at the final blended result). It buys into the very misleading hype that Sigma spews, which I believe is ultimately, in the long term, going to damage their reputation and hurt Foveon (because as more people try to produce images with a 4.7mp or 15mp Foveon sensor that compare to even the regular old D800, let alone the D800E or the 645D, and realize they simply cannot...they are either going to ditch Foveon and go back to bayer type sensors, or they are going to begin badmouthing Foveon.)

Nobody said the first generation Foveon sensor is equal to 39 MP.  Jrista, again you learn about what you're interested in, but this leaves a lot of facts for you to miss. 

When I mentioned the "new DP2", I was referring to this...it's called the Quattro.

http://www.sigma-global.com/en/cameras/dp-series/

...And it's most definitely more resolution than the SD-1...it's a new sensor with more pixels.  Just exactly how many pixels it is, is kind of unclear.  I think Sigma don't mind that it is unclear...lol.  The actual pixel dimensions of the RAW image might be 19 MP, or might be more.  For some reason it can produce JPEGs that are 7680 x 5120 = 39.3 MP.

To argue about what outresolves what, on such a new product, is a waste of time in any case.

I try to speak about what I have had experience with.  I've owned the original DP2, and it most certainly had more resolution than its native 4.6 MP image.  As I said, it could easily scale to about 25 MP, and still look sharp enough to me for a print at 300 ppi. 

So there's no reason to start bashing Sigma, and talking about what "TROUNCES" what.  Nobody thinks a crop sensor is ever going to be "better" than a full frame sensor...other than you and your 7D :P.  Everybody knows nothing compares to the mighty 7D!

I don't know where you guys are getting your info. On your own site, the DP2 is listed as having approximately 29 million effective (non-masked) "photo detectors", which are the same thing as photodiodes. From the dp-series link:

Color Photo Detectors    Total Pixels: Approx.33MP, Effective Pixels: Approx.29MP

That is 29 million PHOTODIODES. That means, from a SPATIAL standpoint (actual resolving power), you have 29/3 million PIXELS (actual square areas on the sensor that are light sensitive), or about 9.7mp. The DP2 that you are referring to is a TEN MEGAPIXEL sensor. Not only that, it is a 10mp APS-C sized sensor, so we're talking pretty small pixels.

I'm sorry, but it doesn't matter how good those pixels are...there is no way, physically, that they could ever compare to the 36.3mp of a D800 or the 40mp of the 645D. Spatially, from a luminance (detail) perspective, there is no loss of data or resolution in a bayer array. There is only, ONLY, a loss of color data, or color spatial resolution. The loss of spatial color detail is a bit of a detractor for bayer type sensors, as it hurts their color fidelity a little bit, but it is not enough of a detractor to warrant calling a 9.7mp Foveon as good as a 39mp bayer. The full-detail luminance from a bayer is more than enough to offset the loss in color detail.

Neuro has explained how a properly designed OLPF (which is usually the case these days, even leaning towards the slightly weak side more often than not), despite blurring high-frequency data, is not a huge detractor for bayer sensors, as OLPFs blur predictably and consistently across the area of the sensor, meaning a light sharpening filter in post usually reverses the softening impact of an OLPF.

The whole "eqivalent megapixels" deal that Sigma uses is also very misleading. Currently, today, megapixel counts are based on output image widthxheight. A 15mp Sigma Foveon is 15mp, in terms of actual megapixels stored in the output JPED image or a JPEG that you can create from RAW. It may have 45 million photodiodes, but that is not the same as megapixels, and I really wish Sigma would stop being so misleading.

No more misleading than stating a sensor has so many megapixels, when each photodiode samples one color, and the other two are interpolated in the JPEG.

You're misunderstanding. Every bayer pixel may have only one color, but regardless of color, every pixel receives "light". This is why the spatial resolution of a bayer sensor is so high, and why a D800 is capable of resolving so much detail. If you convert a bayer sensor's data to monochrome, you effectively have just the full detail luminance. Advanced demosaicing algorithms like AHDD are explicitly designed to preserve as much luminance detail as possible, while effectively distributing color data to avoid mazing artifacts and other demosaicing quirks. A bayer sensor needs no interpolation from a luminance standpoint; it only needs interpolation from a color standpoint. Bayer sensors have nearly their full resolution in terms of luminance, and since luminance is really what carries your fine detail, they DO have FAR more resolution than any Foveon on the market today, including the SD1.

This isn't misleading, it's how the physics and mathematics of interpolation work. Interpolation algorithms like AHDD are actually capable of producing crisper, smoother, sharper results with a bayer than your standard, basic demosaicing algorithm, and AHDD is pretty ubiquitous these days (LR/ACR use it, Apple Aperture uses it, and it's a demosaicing option in most Linux RAW editors like RawTherapee and Darktable.) AHDD is even used in lower-level tools, often used for astrophotography, like DeepSkyStacker, Iris, and PixInsight.

The only loss with a bayer type sensor is in terms of color spatial resolution and color fidelity. The most obvious of those is really color fidelity, as when chrominance is blended with luminance, our eyes can't really tell the difference, or at least the difference is small enough that it isn't an issue unless you are directly comparing, side-by-side, a Foveon and Bayer image with the same image dimensions (in other words, if you had a 10mp bayer and a 10mp Foveon, then you would be able to tell that the Foveon had slightly better color microcontrast and better color fidelity...but when comparing a 35 or 40mp bayer to a 10mp Foveon, the only visible difference MIGHT be sharpness...and that would depend on the strength or presence of an AA filter.)
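A toy way to see the luminance point (a NumPy sketch using a synthetic, color-neutral target with equal filter transmission assumed; real scenes add per-channel differences, which is exactly where the color interpolation comes in):

Code:
import numpy as np

# Synthetic gray (color-neutral) detail target.
scene = np.random.rand(8, 8)

# RGGB Bayer masks: every photosite samples exactly one color...
r_mask = np.zeros_like(scene); r_mask[0::2, 0::2] = 1
g_mask = np.zeros_like(scene); g_mask[0::2, 1::2] = 1; g_mask[1::2, 0::2] = 1
b_mask = np.zeros_like(scene); b_mask[1::2, 1::2] = 1

# ...but for a neutral target every photosite still records the local luminance,
# so the raw mosaic preserves full-resolution spatial detail; only color is
# subsampled and has to be interpolated.
mosaic = scene * (r_mask + g_mask + b_mask)
print(np.allclose(mosaic, scene))  # True: no luminance detail lost in this toy case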

1512
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: April 01, 2014, 06:30:33 PM »
Regarding Don's problem with a beat frequency in bird wings, if you have software capable of doing it, you could probably record at 60fps, then interpolate that sequence back down to 24fps for inclusion with other 24fps sequences, WITHOUT resulting in the 60fps sequences playing back as "slow motion".

That's EXACTLY what I did....

Still playing and learning.... and having lots of fun doing it.

BTW... GooseCreek2 on Vimeo is what happens when you shoot geese at 30HZ....

Have you tried shooting at 24fps instead of 30fps? Just curious if that would help. BTW, unless you've configured things differently, if you are shooting at 30fps, I would expect the actual shutter speed to default to 1/60th of a second. Given that, it isn't all that surprising that you're getting some stutter...it's that whole timing issue (which will be present regardless of whether you are playing back at 60Hz, or down-interpolating your video to 30fps for playback at 30Hz). If you drop to 24fps, your shutter speed would be 1/48th of a second (by default, at least I would expect), and that offset might help avoid the stutter problem when playing back.

I'd also offer that NOT harassing the geese while in your canoe would probably avoid wing beats entirely. ;P

Also, what do you use to process your videos? Premiere?

The 60D allows a shutter speed of 1/30 second when shooting 30FPS. Obviously, they cheat and it isn't really 1/30th but is close to it...

The editing software I have for video is Pinnacle Studio. I do NOT recommend it.

Oh yes, I've encountered Pinnacle Studio. Bleh.

I am a bit surprised that at a shutter speed of 1/30th you're getting stutter like that. Maybe it's still just a timing issue, but I would have expected 1/30th to produce a bit more motion blur, which should smooth out the issue a bit. Of course, these are lower-end DSLRs...they aren't exactly designed for video, it's more of an afterthought, and they all still have rolling shutters and the like as well. (Even the GoPros have rolling shutter issues...a friend of mine builds and flies model airplanes, and he often sticks his GoPro on the plane itself for some cool videos...but wow, the rolling shutter and stutter effects are BAAAD.)

1513
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: April 01, 2014, 05:49:01 PM »
Regarding Don's problem with a beat frequency in bird wings, if you have software capable of doing it, you could probably record at 60fps, then interpolate that sequence back down to 24fps for inclusion with other 24fps sequences, WITHOUT resulting in the 60fps sequences playing back as "slow motion".

That's EXACTLY what I did....

Still playing and learning.... and having lots of fun doing it.

BTW... GooseCreek2 on Vimeo is what happens when you shoot geese at 30HZ....

Have you tried shooting at 24fps instead of 30fps? Just curious if that would help. BTW, unless you've configured things differently, if you are shooting at 30fps, I would expect the actual shutter speed to default to 1/60th of a second. Given that, it isn't all that surprising that you're getting some stutter...it's that whole timing issue (which will be present regardless of whether you are playing back at 60Hz, or down-interpolating your video to 30fps for playback at 30Hz). If you drop to 24fps, your shutter speed would be 1/48th of a second (by default, at least I would expect), and that offset might help avoid the stutter problem when playing back.

I'd also offer that NOT harassing the geese while in your canoe would probably avoid wing beats entirely. ;P

Also, what do you use to process your videos? Premiere?

1514
EOS Bodies / Re: Will the next xD cameras do 4k?
« on: April 01, 2014, 05:24:03 PM »

Also keep in mind that higher frame rates, without further camera handling or processing technique, NORMALLY result in slow-motion video when played back at normal speed (your output frame rate is always going to be 24 or 30 frames per second. The high frame rates on TVs, like 60fps, 120fps, etc., are artificial, produced within the TV hardware, which does additional inter-frame processing to blend two separate source frames into four or six output frames.) Your video does not play back at 60fps if you recorded it at 60fps...it still plays back at 30fps. So don't think of 60fps as the solution to your problems...ultimately, shooting at 60 or 120 fps just means you're creating a slow-motion sequence. This is especially true if you are interested in shooting at mixed frame rates, and producing a video that contains normal-rate and slow-motion sequences...you have no choice but to use the same output frame rate...anything shot slower than that will speed up, anything shot faster than that will slow down.


Ummm.......that is not true. Most modern TVs will handle at least 60 fps; after all, that is what computers put out, and all of them can be used as computer monitors. My TV can handle frame rates up to 240 fps; if it doesn't get that, THEN it interpolates frames. It automatically matches frame rate to the source, and if you have the feature turned on, it interpolates frames to make up the 240 fps if your source does not go that high.

Typical sources would be BluRay or AVCHD output (which may be either 30p or 60p), or a computer (usually 60p)

Just because your typical movie is 24 fps does not mean that all other video is treated the same way.

There is refresh rate, and there is progressive video output. There are also interpolated motion rates, such as Samsung's CMR (Clear Motion Rate). There ARE some high-end premium HDTV channels (e.g. ESPN) that offer 60p progressive frame rates, but for the most part, the majority of HDTV is 30p. My Samsung non-3D TV has a 120Hz "refresh rate" and supports 60p output, but as I've never paid for premium HDTV channels (they are extremely expensive, as they use up a hell of a lot of bandwidth), I've only ever seen 30p content on it.

The modern 240Hz refresh rates for TVs were originally created to support interpolated motion rates for 3D video up to 120Hz per eye, or 240Hz in total. BluRay video frame rate is still 24fps as far as I know, and the supported BluRay playback rates (according to the current BluRay standards) are either 720p @ 60Hz progressive, or 1080 @ 30Hz progressive or 50/60Hz interlaced. The higher refresh rates on TVs avoid timing issues between the video rate and the playback rate (the refresh rate)...i.e. if you had a 24fps video on a 24Hz screen, you have to get the timing exact to ensure that each frame is ready to play exactly every 1/24th of a second...if your timing is off (there is always noise, so it's likely), then you end up re-rendering the same frame for another 1/24th of a second, causing stutter. With higher refresh rates, timing becomes less and less of an issue. At 60Hz, you play back ~0.4 frames per refresh cycle, at 120Hz you play back ~0.2, and at 240Hz you play back ~0.1. You still can't get timing 100% exact, which is where interpolation like CMR comes into play, smoothing things out if you end up having to re-render the same frame for an extra refresh cycle.
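Those per-refresh numbers are just the ratio of the source frame rate to the refresh rate; a quick sketch:

Code:
source_fps = 24.0
for refresh_hz in (24, 60, 120, 240):
    print(f"{refresh_hz:>3} Hz: {source_fps / refresh_hz:.2f} frames per refresh cycle")
# 60 Hz -> 0.40, 120 Hz -> 0.20, 240 Hz -> 0.10, matching the figures above.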

The key is to note that even though BluRay and HDTV channels support playback (officially) at 30p or 60i (or even 60p, as in the case of some premium HDTV channels), the VIDEO ITSELF is still usually filmed at 24fps for standard motion. I am also quite certain that even the 1080p @ 60Hz channels are still filmed at 24fps (I think if they were filmed at 60fps, people would have started complaining about it like they have with The Hobbit, and I've not read about any such thing.) Filming rate and playback rate are very different things. Filming rates higher than 24fps for standard motion are still VERY new, and not many productions have used those frame rates yet. If you want fast motion, you would have to film at lower than 24fps (or do timelapse), and if you want slow motion you would have to film at higher than 24fps. This is IRRESPECTIVE of the playback rate, whether it is interlaced or not, etc.

Regarding Don's problem with a beat frequency in bird wings: if you have software capable of doing it, you could probably record at 60fps, then interpolate that sequence back down to 24fps for inclusion with other 24fps sequences, WITHOUT resulting in the 60fps sequences playing back as "slow motion". I'm not sure if something like Adobe Premiere can do that, or whether you would need higher-end software. Either way, there isn't any mixing and matching of frame rates in a single final output video. If you record at different frame rates, you string them together in a tool like Premiere, and anything filmed faster than your output rate ends up "slow motion", and anything filmed slower than your output rate ends up "high-speed motion".

1515
EOS Bodies / Re: Canon's Medium Format
« on: April 01, 2014, 04:50:11 PM »
@jrista

Yepp, but one *large* advantage of a Foveon is that you don't need a Zeiss Otus to get your 36MP sensor served. "Normal" sharp lenses, even consumer ones, get 15MP without problems... so the whole Canon lens lineup, or at least the better L lenses, would be able to outperform the D800E.

Of course the layers constrict the light... until someone invents something new and proves the old wrong.

It's not just the layers or well depth that constrict the light. If you look at the Foveon design (and, for that matter, Canon's own layered sensor patents), they have a LOT more activation and readout wiring per pixel. It's really complicated stuff, which further restricts the actual light-sensitive photodiode area.

The whole "eqivalent megapixels" deal that Sigma uses is also very misleading. Currently, today, megapixel counts are based on output image widthxheight. A 15mp Sigma Foveon is 15mp, in terms of actual megapixels stored in the output JPED image or a JPEG that you can create from RAW. It may have 45 million photodiodes, but that is not the same as megapixels, and I really wish Sigma would stop being so misleading.

I like the Foveon sensor design, it has SO much potential. It's just in the wrong hands with Sigma...they can't seem to develop it and bring it to bear on the market in a form that would make it a truly viable competitor with higher MP bayer type sensors. I think there are some innovations that have been developed for video sensor technology that could greatly increase the transparency of the silicon that surrounds the layered photodiodes and improve Q.E., reduce noise, improve dynamic range, etc. I've been hoping that Canon was working with some of those technologies on their own layered sensor design.

I would also dispute the whole "need for high resolution lenses" argument. Output resolution, in spatial terms, is the convolution of both sensor and lens resolution...AND, a most important point here, is LIMITED by the least common denominator. The Sigma DP2, for example, is a 4.7 megapixel camera!!! Spatially, that is VERY low resolution. It is not a 15mp camera. It has richer, more complete color information per pixel, but from a luminance standpoint, its luminance resolution is extremely low. Its pixel pitch is 7.85µm. Those are nice, big pixels, but because of the wiring requirements, the photodiode area is a lot smaller than 7.85µm (I don't know exactly off the top of my head...I would have to find the patents again...but I'd say that at least a third of the area is lost, so maybe around 6.3µm, which is about the same as the 5D III.)

The biggest benefit for the Foveon is the lack of an AA filter. You still experience moire, but because full color information is gathered at each pixel, you only have monochrome moire. Mono moire in most "natural" cases in photography is often not that bad. The lack of an AA filter makes it SHARPER, but it does not really increase the resolution of the sensor. This is very obvious from VCD's comment:

Quote
Have you seen the new DP2?  They claim up to 39 MP...

I have and I'm waiting for the reviews, I think I may get one if the results are good. 39 MP is realistic... of course you just have to get rid of the notion of "pixels" defined purely by x/y resolution. The detail of a Foveon (@ low ISO) is outstanding, above a Nikon or even a Pentax 645D!

For example (picture from dpreview.com):


This was the first sensor which really caught my attention after buying my 5D back then. Everything else is just "evolution" here and there: half a stop more dynamic range, more resolution, ISO 25600. Hooray, you invented the holy grail. Nikon blah, Canon blah... nothing is a real leap. A 5D is still awesome and able to serve my needs. The Sigma is again something worth spending time with.

I think VCD has radically misinterpreted this comparison. The Sigma SD1 does not have anything even remotely close to the same resolution as the D800 or 645D. It isn't even a contest. The Sigma SD1 appears to be sharper...but that is only in a non-normalized comparison like this, and sharpness alone does not translate into more resolution. If one were to downsample the D800 and 645D images to the same dimensions as the SD1 image, they would likely TROUNCE the SD1. They have significantly more information in total, and while they may seem slightly soft at the pixel level, on a normalized basis, all that extra information gets interpolated into fewer, but much more accurate, sharper, richer and less noisy pixels.

Furthermore, the D800 and 645D both have more information to start with. They are resolving details that are not even present in the SD1 image at all, despite its sharpness. Even if those details aren't as crisp as the LESSER details of the SD1, it's still more detail. A light sharpening filter can deal with the softness in a few seconds, and then the SD1 is at a real disadvantage. You can sharpen the SD1 image in post to your heart's content...that will never create information that was never there to begin with, and since it's already sharp, you're probably doing yourself a disservice by sharpening SD1 images.
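For what it's worth, that kind of normalized comparison is easy to do yourself; here is a minimal sketch with Pillow (the file names are placeholders), downsampling the higher-megapixel frame to the SD1 frame's dimensions before comparing at 100%:

Code:
from PIL import Image

# Placeholder file names; substitute your own exported TIFFs or JPEGs.
d800 = Image.open("d800_sample.tif")
sd1 = Image.open("sd1_sample.tif")

# Downsample the higher-megapixel image to the SD1 frame's dimensions so the
# comparison is normalized; Lanczos resampling keeps it crisp without inventing detail.
d800_normalized = d800.resize(sd1.size, Image.LANCZOS)
d800_normalized.save("d800_at_sd1_size.tif")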

So arguing that the DP2, which itself is still just a 4.7mp camera (or even the SD1, which is a much higher resolution Foveon), is potentially equivalent to a 39mp camera, is gravely missing the point of having a truly higher resolution sensor (in luminance terms...luminance is where detail comes from; color CAN be of much lower spatial resolution so long as your luminance information is high...as a matter of fact, this is actually a standard practice in astrophotography: image at high resolution in luminance, then when you switch to RGB filters, bin 2x2 or 3x3, which increases your sensitivity and reduces your resolution by 4x or 9x...and you're never the wiser when looking at the final blended result). It buys into the very misleading hype that Sigma spews, which I believe is ultimately, in the long term, going to damage their reputation and hurt Foveon (because as more people try to produce images with a 4.7mp or 15mp Foveon sensor that compare to even the regular old D800, let alone the D800E or the 645D, and realize they simply cannot...they are either going to ditch Foveon and go back to bayer type sensors, or they are going to begin badmouthing Foveon.)
