Messages - jrista

1051
Lenses / Re: Distance-dependent AF behaviour of Canon 35/2 IS
« on: May 16, 2014, 03:53:56 PM »
I have to agree that Dot Tune is pretty unreliable. There is no direct correlation between when AF is confirmed by the firmware and the AFMA setting. AF confirmation is a bit arbitrary, which is why there is so much variability.

When you use FoCal, do you use the maximum samples option? I forget exactly what it is, but I think 10 samples are taken per AFMA setting. If you only do the "quick" AFMA tuning with FoCal, which uses only 2 or 3 samples per AFMA setting, it isn't really all that much better than Dot Tune. You really need a high number of samples to get accurate results. Small things during the AFMA tuning process, such as a misfocus (which do happen), wind, or something else that might temporarily change the focus distance, can all mess with the focus hits at each AFMA setting. By taking at least 10 samples per setting, FoCal is able to use some basic statistics to discard outliers and produce a more accurate curve, and thus find the most accurate AFMA setting.
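
To illustrate why the sample count matters, here's a toy Python sketch (my own invented numbers and outlier logic, not FoCal's actual algorithm):

```python
# Toy illustration of high-sample AFMA tuning (NOT FoCal's actual algorithm).
# More samples per setting let you reject outliers (a misfocus, wind shake)
# before fitting a curve to find the sharpness peak.
import numpy as np

rng = np.random.default_rng(42)

def measure_sharpness(afma, true_peak=-4, samples=10):
    """Simulated sharpness scores: a parabola around the true best AFMA,
    plus noise and an occasional gross outlier (a misfocused shot)."""
    base = 100 - 0.5 * (afma - true_peak) ** 2
    scores = base + rng.normal(0, 2, samples)
    if rng.random() < 0.2:                      # ~1 run in 5 hits a misfocus
        scores[rng.integers(samples)] -= 30
    return scores

def robust_mean(scores, k=3.0):
    """Discard points more than k median-absolute-deviations from the median."""
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-9
    return scores[np.abs(scores - med) < k * mad].mean()

settings = np.arange(-12, 13)
means = [robust_mean(measure_sharpness(a)) for a in settings]
a2, a1, _ = np.polyfit(settings, means, 2)      # parabola through the means
print(f"Estimated best AFMA: {-a1 / (2 * a2):+.1f}")  # vertex = best setting
```

With only 2 or 3 samples per setting there is nothing for the outlier rejection to work with, which is exactly why the "quick" mode is so much less reliable.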

That said, AF is often distance dependent. This could be due to spherical aberration in a lens, or possibly other aspects of lens construction. Regardless of the why, tuning AFMA for near focus will often result in improper AFMA for far focus. You might want to run FoCal with a high sample count for both near and far focus, then memorize the settings, or write them down and keep them in your camera bag, so you can reset AFMA whenever you switch between far and near focus.

1052
Animal Kingdom / Re: Show your Bird Portraits
« on: May 15, 2014, 12:34:01 AM »
The Naked-faced Spiderhunter (Arachnothera clarae) is a species of bird in the Nectariniidae family. It is endemic to the Philippines.

Its natural habitat is subtropical or tropical moist lowland forests.

Source: http://en.wikipedia.org/wiki/Naked-faced_Spiderhunter

Location: http://en.wikipedia.org/wiki/La_Mesa_Ecopark

Settings: 1/250 f/8 800mm ISO 800

Interesting bird. Rather exotic.

1053
Animal Kingdom / Re: Show your Bird Portraits
« on: May 15, 2014, 12:33:12 AM »
Hello from Sweden!
After seeing all your great photos of bird portraits, may I also add this photo of a mute swan crossing the gate of full-moon light at the shores of the Baltic Sea, in the Sandemar Nature Reserve on the east coast of Sweden, as the full moon rose over the sea on the evening of 16 March 2014. [ Canon EOS 5D Mark II with EF300mm f/2.8 L IS II USM ]

Wonderfully executed! Love the way you put the band of light reflecting off the water right behind the bird's head.

1054
Animal Kingdom / Re: Show your Bird Portraits
« on: May 15, 2014, 12:32:24 AM »


Common Redpoll - Auðnutittlingur
by Rodor54 in Iceland, on Flickr

Great shot! Perfect balance, with those wings head on like that.

1055
EOS Bodies - For Stills / Re: Advice on an upgrade from the Rebel XS
« on: May 14, 2014, 10:01:44 PM »
There is noise reduction circuitry. It's called CDS, or correlated double sampling. There is usually a CDS unit per column, which samples dark current before an exposure is made, and that sampling is subtracted from the pixel charge as each row is read.

Ah.  I assumed that was being done in software rather than hardware.

CDS? CDS has to be done in hardware, since it requires sampling the actual dark current moving through the circuit. The closer the sampling is to the moment the dark current is subtracted, the more accurate it is. This means that for shorter exposures, analog CDS is very accurate.

The first Exmor designs, the ones used in still photography sensors, used only digital CDS. The later Exmor designs actually use a dual-CDS design: one analog CDS stage and one digital CDS stage. The analog stage takes care of most of the dark current noise, and the digital stage takes care of any residual. As far as I know, the dual-CDS Exmors are only used in video camera sensors at the moment, but I suspect that won't remain the case for long. I actually suspect that the A7s sensor uses a dual-CDS approach.

Sony Exmor sensors use column-parallel ADC. They moved the ADCs onto the sensor die and hyperparallelized them. That means each ADC unit in an Exmor is only responsible for handling a few thousand pixels, instead of a few million, every fraction of a second. That allows a lower frequency to be used, so the frequency of the clock is lower than the frequency of noise in the circuit itself.

I knew they'd moved it onto the die.  I didn't know about the parallelization.  That's an interesting approach.  I'd be curious whether the use of lots of ADCs causes banding problems like it does for the 5DmkIII.

From what I've read about Sony Exmor, since the ADCs are per-column, there is the potential to tune each ADC to handle column response differences. The response of each ADC can be normalized to eliminate vertical column banding.

In the case of both the 7D (to a fairly strong degree) and the 5D III (very slightly), there is noticeable vertical banding that correlates with the readout channels. In the 7D, you can clearly tell that each vertical band is 8 pixels wide, which corresponds to the 8 readout channels. In the 5D III the effect is very subtle, so I figure Canon must have found a way of tuning or otherwise correcting for the readout differential of each ADC channel.

Anyway, there is potential for vertical banding with parallel ADCs, but it can always be tuned out or otherwise corrected for. With lower frequency per-column ADCs it's easier to fine-tune each ADC.
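
To make that concrete, here's a rough sketch of the kind of per-column gain/offset normalization I mean (invented numbers; not any manufacturer's actual calibration pipeline):

```python
# Sketch of per-column response normalization -- the sort of correction that
# per-column ADCs make practical. Per-column offset comes from a dark frame,
# per-column gain from a flat frame; both are applied to every raw frame.
import numpy as np

rng = np.random.default_rng(1)
rows, cols = 500, 2000
col_gain = rng.normal(1.0, 0.02, cols)   # each column ADC responds a bit differently
col_offset = 2.0                         # common offset, for simplicity

def expose(level):
    """One simulated frame: per-column gain, common offset, random read noise."""
    return col_offset + level * col_gain + rng.normal(0, 3, (rows, cols))

dark, flat, raw = expose(0), expose(1000), expose(300)

offset = dark.mean(axis=0)               # per-column offset estimate
gain = flat.mean(axis=0) - offset
gain /= gain.mean()                      # normalize to unity mean gain
clean = (raw - offset) / gain            # vertical banding removed

print(raw.mean(axis=0).std(), clean.mean(axis=0).std())  # column spread shrinks
```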

That might improve sampling accuracy, but at first glance, I would think that you could achieve similar benefits with oversampling.  Maybe not.

It sounds like you understand audio signal processing. While I think some aspects of standard signal processing apply, there are a lot of differences with spatial signal processing. I don't know audio signal processing all that well, so I can't say how those sampling techniques might apply, but my gut (based on what I do know about spatial signal processing) tells me there really isn't much in the way of multi- or over-sampling the signal. It generally comes out of the sensor "as is", with the exception of what CDS does.

Now, I do know that Sony, Nikon, and a few other manufacturers do some things differently than Canon. It's often called "processing", but in general it's simple things. For example, Canon uses a bias offset in their design to handle the sensor bias signal, whereas Sony and Nikon clip the bias signal out entirely (cleaner deep-shadow noise, but you lose a good chunk of deep shadow). For normal photography, clipping seems to be better; however, for astrophotography (an arena where Canon cameras are almost synonymous with "modded DSLR"), a bias offset is a far better approach, as it means that with more advanced noise removal techniques you can recover a hell of a lot more signal from DEEP within the read noise. (Since that signal is clipped in Sony and Nikon sensors, it's just gone, discarded, not recoverable.)
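
Here's a quick toy demo of the difference (invented electron counts; a real sensor's numbers will differ):

```python
# Why the bias offset matters for astro: a faint signal buried in read noise
# survives frame averaging when negative noise excursions are preserved, but
# is attenuated if everything below zero is clipped away first.
import numpy as np

rng = np.random.default_rng(7)
frames, faint, read_noise = 2000, 0.5, 3.0    # electrons (invented numbers)

sky = rng.normal(0, read_noise, frames)       # background pixel over many frames
star = faint + rng.normal(0, read_noise, frames)

# Offset preserved: averaging recovers the faint signal from deep in the noise.
print("offset kept:", star.mean() - sky.mean())                     # -> ~0.5
# Clipped at zero: the sub-noise signal is roughly halved, unrecoverably.
print("clipped:    ", np.maximum(star, 0).mean() - np.maximum(sky, 0).mean())
```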

(Sony also moved the clock and power supply themselves off to a remote corner of the Exmor die, which reduces potential thermal sources and, at least according to Sony's paper on the Exmor design, reduces noise from high frequency components within the ADC units themselves.)

Hmm.  I guess that makes sense.  With my audio hat on, when I hear someone talk about moving an ADC clock away from the ADC, my mind screams "Aaaah!  The jitter!  It burns!", but I suppose that jitter doesn't affect this use case very much, because the value isn't changing....

I don't gather, from the patents and papers, that the Exmor design was easy to achieve. When you look at the sensor layout, you can see in the upper left corner there is a clock, a PLL, and a couple of other components. Then you have the pixel array, with the photodiodes, per-pixel amplifiers, and the row/column activate and read wiring. Along the bottom you have the CP-ADC units, each of which contains a ramp ADC, the CDS/pixel register (CDS readout counts negative, pixel readout counts positive, so CDS is effectively "automatic"), and then some more electronics to ship the signal off the die. There are a few other components as well, although it's been long enough that I don't remember all of them.
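
That up/down counter trick is neat enough to sketch: count down while converting the reset level, then up while converting the pixel level, and the CDS subtraction falls out of the counter for free (a toy model, not Sony's actual circuit):

```python
# Toy single-slope ramp ADC with an up/down counter: count DOWN while
# converting the reset level, then UP while converting the exposed pixel
# level, and the CDS subtraction falls out of the counter automatically.
def ramp_convert(voltage, lsb=0.001):
    """Clock ticks until the rising ramp crosses the input voltage."""
    ticks = 0
    while ticks * lsb < voltage:
        ticks += 1
    return ticks

def read_pixel(reset_level, signal_level):
    counter = -ramp_convert(reset_level)      # phase 1: count down (reset)
    counter += ramp_convert(signal_level)     # phase 2: count up (pixel)
    return counter                            # digitized (signal - reset)

print(read_pixel(reset_level=0.050, signal_level=0.850))  # -> ~800 LSBs
```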

Anyway, however Sony did it, they seem to think that moving the high frequency components off to an isolated area of the die reduced noise and jitter in the ADC units, which is part of the reason the Exmor readout is so clean. Plus, since each ADC is only responsible for reading out a few thousand pixels, they can be clocked slower (whatever the image height is, basically: in a 6000x4000 pixel sensor, each ADC unit is only responsible for 4000 pixels per read, vs. Canon's, which are responsible for around 2.5 million pixels per read).

So I wouldn't say that moving the ADC unit closer to the detectors really has anything to do with reducing noise.

Well, the more important thing is for the first gain stage to be as close as possible to the detectors.  Any noise bleeding into the signal at that point is going to be massively amplified, so you would want to have as little distance there as possible.  I'd expect the distance from there to the ADC to matter, albeit not nearly as much.

The gain is applied by the amplifiers, not the ADC. Maybe you have the two mixed up? While I'll admit I haven't read patents for every possible image sensor design, in the case of CMOS sensors every pixel always has an amplifier. They are built into the readout logic for each and every "pixel". Now, some sensor designs use a "shared pixel" layout, where two or more photodiodes share some readout logic. Usually, in shared-pixel designs, there is one amplifier for every two pixels, connected diagonally. This allows for a larger (longer) amplifier, which I guess improves effectiveness or efficiency (this gets into a realm of CMOS transistor design that is a bit beyond my level of understanding...but I believe it falls into the same category as FinFET and Tri-Gate technology...a long, thin "fin" of a transistor with multiple source and drain connections allows for cleaner, lower-noise, lower-heat electron transfer).

Anyway, yes, every pixel is read through an amplifier right at the pixel, although not every pixel has its own dedicated amplifier. Some amplifiers are shared among pixels; sharing allows for more efficient use of die space, meaning larger amplifier transistors and larger photodiodes, so higher efficiency overall.

One caveat: Canon cameras have two amplifiers. There are, of course, the per-pixel amplifiers. These kick in AT read time, so they amplify the signal in the pixel directly, before anything else happens to it and before any additional noise is added to the signal. However, to achieve the highest ISO settings (usually the top two or three), Canon also uses an off-die, downstream secondary amplifier. This secondary amp is also a source of noise in Canon sensors. I don't know why they do this; however, I found a rather old article somewhere a couple of years ago indicating that Canon somehow determined that the downstream amplifier was actually less noisy. I don't know enough about the specifics to say one way or the other myself for sure...but I guess I'm willing to trust that Canon knows what they are doing.

Increasing the parallelism of the ADC units, allowing each one to operate at a lower frequency, has a lot to do with reducing noise. Because the ADC units are on-die with Exmor, it also means that the signal is converted from analog to digital immediately...rather than after transit across a bus and through who knows how many additional electronics.

And I suspect you can probably use less signal amplification, because you don't have to send the analog signal a long distance across a bus.

I'm not sure in this case. I'm sure that sending the signal over the bus introduces noise; however, for the most part, amplification occurs in the pixels before any transfer across a bus. The one exception would be the top two ISO settings in Canon cameras (not the expanded settings, the top two native ISO settings), which use a downstream amp.

Regardless, I think digital readout is the way of the future. Digital signals can be transmitted with error correction, and at very high speeds, without having to be concerned about analog noise interfering with the signal. With transistor sizes on sensors dropping to around 65nm now, that leaves a TON of room on the die for complex logic. I really hope Canon moves to a fully on-die system soon. I know they have already tested some of their patents, like their dual-scale CP-ADC and some other enhancements, on the 120mp APS-H sensor, where they were able to achieve 9.5fps "low noise" readouts. God only knows when they might actually employ the technology in the sensors that go into actual consumer products, though.

Canon also has patents for some other interesting technology, such as a read-time power disconnect, which decouples pixels being read from the power source and which, at least theoretically as I understand it, could potentially eliminate dark current entirely as a contributor to read noise. That would help shadow noise performance a lot when shooting in higher temperatures...such as outdoors, in the sunlight, for birds, wildlife, landscapes, etc. (I know my 7D can get pretty hot when I'm out in the sun trying to photograph birds or wildlife...which can take a lot of time to get close, get the right angle, etc.)

Are we talking about ringing on the power supply rails here, or something else?

No, it was a fairly specific patent: a particular transistor setup around the pixels and some other logic in the sensor to disconnect the active power supply during readout (I think there was still some power from capacitors...I can't remember). I'll see if I can find the patent again. It's interesting, but it was long enough ago that I honestly don't remember the specifics.

At one point in time, I'd found a gold mine of Canon patents. Stuff going back to the early 2000s. I probably still have the bookmark in my old Opera 12 bookmarks file. I'll see if I can dig it up, and hopefully the site is still around. Canon has a lot of cool patents, but they don't seem to employ them. At least, not in their stills cameras (I think they have used some of these patents in their video sensors...but that's nothing unusual; it seems everyone in the CMOS sensor game these days implements all the coolest stuff in video sensors. :P)

1056
Landscape / Re: Waterscapes
« on: May 14, 2014, 02:41:41 PM »
Very nice. Love the way long exposures fog coastlines like that. Nice, rich contrast, too!
And a lovely watermark ;)

LOL. That too. :D

1057
EOS Bodies - For Stills / Re: Advice on an upgrade from the Rebel XS
« on: May 14, 2014, 02:14:18 PM »
One thing that newer camera bodies do have is slightly better noise reduction circuitry on the sensor chip.  That's why you see a slightly better high ISO performance.

Not noise reduction circuitry, so much as less noisy circuitry, AFAIK.  You commonly reduce noise by:

  • Cooling the image sensor to reduce thermal noise (mainly used when doing astrophotography)
  • Moving amplifier circuits closer to the actual detectors—the photo sites, in this case—so you're amplifying less induced noise
  • Using cleaner amplifier circuits that add less noise to the signal
  • Improving the ADC circuitry that converts the analog voltage into a series of bits—adding precision, lowering the noise floor, etc.
  • Increasing the consistency of amplifier circuits and ADCs to avoid banding when multiple ADCs are needed to capture a single frame (for speed reasons)
  • Increasing the effective size of the photo sites on the image sensor by increasing the sensor's dimensions, moving chip features that partially occlude the sensor, etc.

There are probably many other techniques that I'm forgetting.

There is noise reduction circuitry. It's called CDS, or correlated double sampling. There is usually a CDS unit per column, which samples dark current before an exposure is made, and that sampling is subtracted from the pixel charge as each row is read.
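
If it helps, the arithmetic of CDS is just the subtraction of a correlated reference sample; a toy model (invented numbers):

```python
# Toy model of CDS: sample the dark/reset level just before readout and
# subtract it from the exposed value. Offset that is common (correlated)
# between the two samples cancels; only uncorrelated noise remains.
import numpy as np

rng = np.random.default_rng(0)
n, signal = 100_000, 500.0                    # invented electron counts
dark_level = rng.normal(40, 5, n)             # drifting dark/reset offset
reference = dark_level + rng.normal(0, 3, n)  # sampled just before readout
readout = signal + dark_level + rng.normal(0, 3, n)

cds = readout - reference                     # the correlated part cancels
print(f"raw std {readout.std():.2f}, CDS std {cds.std():.2f}, "
      f"mean {cds.mean():.1f}")
```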

Now, as far as I am aware, Canon's CDS technology hasn't really changed much in a long time. It may have been tweaked, but I don't suspect any of those tweaks would result in a significant improvement in their hardware noise reduction.

As for the ADC units, the reason they introduce noise is that they run at high frequency. Canon uses eight readout channels in most of their modern cameras, and sixteen in the 1D X. At approximately 5200 to 5600 columns, that means each of the 8 ADC channels has to process an average of around 675 columns of pixels, or around 2.5 million pixels in total, in a fraction of a second. Canon has low ADC parallelism, and as a result each ADC unit must run at a high frequency, which means the frequency of the clock is closer to the frequency of noise in the circuit itself. Additionally, the clock and power supply for the ADC units sit right next to them in the DIGIC processors.
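
Rough numbers behind that, if you assume a ~20mp frame and ~6fps readout (both assumptions, just to show the scale):

```python
# Back-of-the-envelope ADC workload: fewer channels -> each must convert
# faster. Frame size and readout time here are assumptions for scale only.
cols, rows = 5472, 3648            # ~20mp APS-C frame (assumed)
frame_time = 1 / 6                 # assume the sensor is read ~6 times/sec

for channels in (8, 16, cols):     # 8ch Canon, 16ch 1D X, column-parallel
    px = cols * rows / channels
    print(f"{channels:5d} channels: {px:12,.0f} px each,"
          f" {px / frame_time:14,.0f} conversions/s")
```

The 8-channel case works out to roughly 15 million conversions per second per channel; a column-parallel design needs only tens of thousands.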

Sony Exmor sensors use column-parallel ADC. They moved the ADCs onto the sensor die and hyperparallelized them. That means each ADC unit in an Exmor is only responsible for handling a few thousand pixels, instead of a few million, every fraction of a second. That allows a lower frequency to be used, so the frequency of the clock is lower than the frequency of noise in the circuit itself. (Sony also moved the clock and power supply themselves off to a remote corner of the Exmor die, which reduces potential thermal sources and, at least according to Sony's paper on the Exmor design, reduces noise from high frequency components within the ADC units themselves.)

So I wouldn't say that moving the ADC unit closer to the detectors really has anything to do with reducing noise. Increasing the parallelism of the ADC units, allowing each one to operate at a lower frequency, has a lot to do with reducing noise. Because the ADC units are on-die with Exmor, it also means that the signal is converted from analog to digital immediately...rather than after transit across a bus and through who knows how many additional electronics. In Exmor, pixels are read, amplified, converted to digital via the ADC, and digital CDS is applied. From that point on, the DIGITAL signal can be moved around anywhere, error-corrected transfer can be used, and the signal, since it is now bits rather than analog charge, can be kept pure and accurate.

Canon actually has their own patent for an on-sensor-die column-parallel ADC. Canon's is called a "dual scale" ADC, in that their hyperparallel ADC units can actually operate at two frequencies. When necessary, they can operate at a lower frequency, which again reduces the amount of noise introduced. I think Canon moving from an off-die, high-frequency, low-parallelism ADC system to an on-die, low-frequency, high-parallelism ADC system is the key to them achieving lower noise. I don't think that moving the ADCs closer to the pixels in and of itself reduces noise much...maybe a little, as it avoids the need to move the signal across a bus to external units, but overall, I think the lower operating frequency is really what will reduce noise.

Canon also has patents for some other interesting technology, such as a read-time power disconnect, which decouples pixels being read from the power source and which, at least theoretically as I understand it, could potentially eliminate dark current entirely as a contributor to read noise. That would help shadow noise performance a lot when shooting in higher temperatures...such as outdoors, in the sunlight, for birds, wildlife, landscapes, etc. (I know my 7D can get pretty hot when I'm out in the sun trying to photograph birds or wildlife...which can take a lot of time to get close, get the right angle, etc.)

1058
EOS Bodies - For Stills / Re: Advice on an upgrade from the Rebel XS
« on: May 14, 2014, 01:59:47 PM »
Fundamentally, ISO/noise performance is a factor of two things: real sensitivity (quantum efficiency) and total sensor area. Assuming you frame your subject the same, the only way to really reduce noise is to use a larger sensor. Pixel size does not really play a role unless you're only putting the same number of pixels on the subject (which means you are not necessarily framing the same). With the same number of pixels on subject, pixel size matters, and larger pixels do better.

That's only true for shot noise.  You're forgetting read noise and thermal noise, neither of which is necessarily tied to sensor size in any way.  For more info, read:

http://www.clarkvision.com/articles/digital.sensor.performance.summary/#SNR

Well, partially true. Pixel area is tied to read noise. Larger pixels, as much as they are capable of carrying a larger charge due to photon strikes, are ALSO prone to experiencing more noise from dark current. This is evident in the actual measurements of Canon sensors. Check out sensorgen.info...you'll notice a very high correlation between pixel size and read noise levels.

There are indeed some other components of read noise, primarily caused by high-frequency component oscillations; however, read noise is a TINY contribution to noise overall. At higher ISO settings, read noise is at its minimum (~3e-), whereas photon shot noise is at its maximum. For a very high ISO setting, say ISO 3200, where the saturation point may be around 1000e-, the photon shot noise is ~32e-. Even though there is some read noise, it's trounced by photon shot noise (by a factor of ten or more, usually).
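
The arithmetic, using those same (illustrative) numbers:

```python
# Shot noise vs. read noise near saturation at high ISO (assumed numbers).
import math

signal = 1000          # e- at saturation, ~ISO 3200 (assumed above)
read_noise = 3         # e- (typical high-ISO read noise)

shot = math.sqrt(signal)                      # Poisson statistics
total = math.sqrt(shot**2 + read_noise**2)    # noise sources add in quadrature
print(f"shot {shot:.1f} e-, total {total:.1f} e- "
      f"(read noise adds only {100 * (total / shot - 1):.2f}%)")
# -> shot 31.6 e-, total 31.8 e- (read noise adds only ~0.45%)
```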

So I stand by what I said before. At higher ISO settings, noise performance is by far a factor of pixel size, not of read noise. Realize, a read noise of 3e- is the same as the D800 has at ISO 100. It's extremely low, trivial. ISO/noise performance is a factor of pixel size and quantum efficiency; read noise is such a small factor that it doesn't matter.

1059
Animal Kingdom / Re: Show your Bird Portraits
« on: May 14, 2014, 01:24:26 PM »

hey you, come right here. Don't make me yell at you.

LOL! Now THAT is an awesome shot. I can't say I've ever seen a bird's tongue quite like that before. :P

1060
Landscape / Re: Waterscapes
« on: May 14, 2014, 01:23:04 PM »

Three Kings by Le ARchie, on Flickr

Very nice. Love the way long exposures fog coastlines like that. Nice, rich contrast, too!

1061
Landscape / Re: Milky Way
« on: May 12, 2014, 08:40:26 PM »
Nice Milky Way view there, Student.

If you want to stack, you really need to use full RAW. The Canon mRAW format is actually more like a JPEG than anything: a 4:2:2 encoding of Y (luminance), Cb (blue/yellow channel), and Cr (red/green channel). It is 14-bit, but it is not even remotely "raw". Stacking non-raw images doesn't produce nearly the same kind of results as stacking true RAW images.

For stacking to be most effective, you also want fairly closely temperature-matched "dark frames" (same exposure time, ISO, and temperature...within a few degrees), say 30 of them, as well as about 150 "bias frames" (same ISO, shortest possible exposure time, like 1/4000th or 1/8000th...super easy to create, and the more you stack, the better they calibrate).

Once you have some real RAW lights, and some darks and biases, you can stack with DSS, and the process is largely automated. Export as a TIFF (WITHOUT applying modifications), and you can do some pretty amazing tweaking and stretching in Photoshop.
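
The calibration arithmetic that DSS automates boils down to roughly this (a bare-bones sketch with invented numbers; real stackers also register/align the lights and sigma-clip outliers):

```python
# Bare-bones version of the calibration arithmetic DSS automates. Real
# stackers also register (align) the lights and sigma-clip outliers.
import numpy as np

rng = np.random.default_rng(3)
shape, sky = (100, 100), 50.0
bias_pattern = rng.normal(10, 2, shape)          # fixed per-pixel readout offset

def shoot(signal):
    """One simulated exposure: signal + thermal (dark) + bias + read noise."""
    return (rng.poisson(signal + 5.0, shape)     # +5 e- of dark current
            + bias_pattern + rng.normal(0, 3, shape))

def stack(frames):
    return np.median(np.stack(frames), axis=0)   # median rejects outliers

lights = [shoot(sky) for _ in range(20)]
darks = [shoot(0.0) for _ in range(30)]          # same exposure, lens capped
biases = [bias_pattern + rng.normal(0, 3, shape) for _ in range(150)]

master_bias = stack(biases)
master_dark = stack(darks) - master_bias         # thermal signal only
calibrated = stack([f - master_bias - master_dark for f in lights])
print(calibrated.mean())                         # -> ~50, the true sky signal
```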

1062
Landscape / Re: jrista et al, Why Astrophotography?
« on: May 12, 2014, 08:36:04 PM »
Thank you guys for your detailed responses in this thread. I have also always been fascinated with the universe outside of the Earth, and astrophotography is certainly part of that. Unfortunately, I live pretty much dead smack in the middle of Europe's largest heavy light pollution area, with the nearest clear-ish (note the -ish) skies being at least a 3 or 4 hour drive. One day I'll be in or closer to a better area, in the meantime I just enjoy the images of other astrophotographers :)

You don't have to worry about LP nearly as much these days. You can use a camera lens, DSLR, and a Light Pollution Suppression or Reduction filter, even under the most heavily light polluted "red" and "white" zones. I know quite a few astrophotographers now who live in the middle of or very near to big cities, and they still image.

Look for the Astronomik CLS EOS Clip-In filter. It's super easy to use...it literally just clips right into Canon EOS DSLRs. You can then attach the DSLR to a telescope with a T-adapter and T-ring, or to a Canon EF lens (note: you CANNOT use EF-S lenses, as the short backfocus doesn't leave room for the filter). There are also other brands that offer similar filters, with varying strengths.

Personally, even though I am under a yellow->green transition zone, I use the Astronomik CLS with my 7D. It has allowed me to get quite a few great nebula shots:

http://jonrista.com/category/astrophotography/deep-sky/nebula-deep-sky/

The summer nebula and galactic core season is starting now, and I hope to be getting some more nebula photos with this filter soon.

Anyway, if you really want to do some astrophotography, and already have some Canon EF lenses and an EOS DSLR, then you CAN do astrophotography! You can do ultra-wide-field astrophotography with lenses of 50mm and wider, wide field with lenses between 50mm and 1200mm, and deep field with focal lengths longer than that. The Astronomik CLS EOS Clip-in filter is about $140. You can pick up a small equatorial tracking mount and tripod for around $800, or if you want to start even cheaper than that, with lenses of around 200mm and shorter you can use something like the iOptron SkyTracker, which is about $500.

You could try without tracking, but you're really going to be limited to short exposures at focal lengths below 35mm. So something like the SkyTracker, at the very least, is important. You could take it a step up, and support larger lenses or small telescopes, with something like the Celestron Advanced VX mount or the iOptron ZEQ25 (the latter being slightly more expensive but a fair bit better).
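
For reference, the usual rule of thumb for untracked exposures is the "500 rule": roughly 500 divided by your effective focal length gives the maximum seconds before stars visibly trail. A quick table (crop factor assumed to be 1.6, e.g. a 7D):

```python
# "500 rule" of thumb: max untracked exposure (seconds) before stars trail,
# roughly 500 / (focal length * crop factor). Crop factor assumed 1.6 (7D).
crop = 1.6
for focal_mm in (14, 24, 35, 50, 200):
    print(f"{focal_mm:4d}mm: ~{500 / (focal_mm * crop):5.1f}s untracked max")
```

At 35mm you're already down to around 9 seconds, and at 200mm under 2, which is why a tracker matters so much for anything longer than wide angle.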

1063
Third Party Manufacturers / Re: Landscape Filters
« on: May 12, 2014, 05:00:21 PM »
As for "you can do everything in Photoshop": if that were the case, then why do professional landscape photographers such as Joe Cornish, Jeremy Walker, Mark Denton, David Ward, Charlie Waite, David Noton, John Gravett, Tom Mackie, and David Clapp (he has made videos for Canon on the 6D) carry filters in their kit bags, adding weight and bulk? I would suggest looking at Xposure, the free downloadable magazine on the Lee Filters website; it highlights professional photographers using filters in many situations, and explains why.

The reason you cannot simulate what a solid ND does in post is that it allows you to expose over a duration of time within which motion is occurring. You can take a photo of water, but if you take it with a high shutter speed, you're not going to be able to replicate the effect that flowing water produces over a longer duration with an ND filter. The same goes for clouds, or anything else with motion. ND filters reduce the rate of light entering the lens, and therefore allow longer exposure times. There is no way to simulate a longer exposure time in post.

The reason you cannot simulate what a GND filter does in post is that it reduces the dynamic range OF THE SCENE. If you are actually clipping your highlights without a GND, then those pixels are pure white. There is no recovery, and there is no simulating a GND filter...all you could do is make those pixels gray; you could not actually recover the detail that was lost by not using a GND filter. With GND filters, you pull down highlights in ANALOG space, before the light ever even reaches the sensor, thereby reducing the dynamic range of the world around you AS it enters the lens.

These are real-world physical effects. They cannot be simulated. Hence the reason photographers who know what they are doing invest in a good multi-filter holder (like the Lee filter system) and a bunch of good 4x6" filters: they are quite literally ESSENTIAL to achieving these effects.

I agree with most of what you said, but I think there are situations where a GND can be simulated in post. Just like using a filter, it takes some forethought: taking multiple exposures of the scene and then compositing and masking in post. As long as the scene doesn't include motion, such as clouds or water, it will be a pretty good substitute for a GND filter.

The whole point of GND filters is to AVOID having to take multiple shots, which is where you get into HDR. HDR is really a misnomer...it doesn't matter if you do an HDR blend and convert down to 16-bit, use Enfuse, or manually tonemap; all three approaches achieve the same thing, and all three require more than one shot. HDR is certainly a viable option; however, HDR is different from using a GND, and it's not the same as single-shot photography. The purpose of a GND is to balance contrast and reduce dynamic range so you can take one single shot of your scene and not have to worry about clipped highlights.

1064
Third Party Manufacturers / Re: Landscape Filters
« on: May 12, 2014, 04:33:01 PM »
As for "you can do everything in Photoshop": if that were the case, then why do professional landscape photographers such as Joe Cornish, Jeremy Walker, Mark Denton, David Ward, Charlie Waite, David Noton, John Gravett, Tom Mackie, and David Clapp (he has made videos for Canon on the 6D) carry filters in their kit bags, adding weight and bulk? I would suggest looking at Xposure, the free downloadable magazine on the Lee Filters website; it highlights professional photographers using filters in many situations, and explains why.

The reason you cannot simulate what a solid ND does in post is that it allows you to expose over a duration of time within which motion is occurring. You can take a photo of water, but if you take it with a high shutter speed, you're not going to be able to replicate the effect that flowing water produces over a longer duration with an ND filter. The same goes for clouds, or anything else with motion. ND filters reduce the rate of light entering the lens, and therefore allow longer exposure times. There is no way to simulate a longer exposure time in post.
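
The arithmetic is simple: each stop of ND doubles the required exposure time. For example:

```python
# Each stop of ND doubles the required exposure time: t' = t * 2**stops.
metered = 1 / 60                                  # metered shutter speed, seconds
for stops in (3, 6, 10):                          # common ND strengths
    print(f"{stops:2d}-stop ND: {metered * 2**stops:8.2f} s")
# -> 0.13 s, 1.07 s, 17.07 s
```

A 10-stop ND turns a 1/60s exposure into roughly 17 seconds, and there is nothing in post that can conjure up 17 seconds of real motion from a 1/60s frame.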

The reason you cannot simulate what a GND filter does in post is that it reduces the dynamic range OF THE SCENE. If you are actually clipping your highlights without a GND, then those pixels are pure white. There is no recovery, and there is no simulating a GND filter...all you could do is make those pixels gray; you could not actually recover the detail that was lost by not using a GND filter. With GND filters, you pull down highlights in ANALOG space, before the light ever even reaches the sensor, thereby reducing the dynamic range of the world around you AS it enters the lens.
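
A trivial numeric demo of why clipping is unrecoverable (made-up luminance values):

```python
# Once a highlight clips at the sensor's maximum, the detail is gone for
# good; darkening the sky BEFORE capture (what a GND does optically)
# preserves the gradient. Made-up luminance values, 1.0 = sensor clip point.
import numpy as np

sky = np.linspace(0.8, 2.0, 7)          # bright sky gradient in the scene

no_gnd = np.minimum(sky, 1.0)           # clipped: top of the gradient is flat white
with_gnd = np.minimum(sky * 0.25, 1.0)  # 2-stop GND darkens before the sensor

print(no_gnd)                           # [0.8 1. 1. 1. 1. 1. 1.] -- unrecoverable
print(with_gnd / 0.25)                  # full gradient restored by brightening in post
```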

These are real-world physical effects. They cannot be simulated. Hence the reason photographers who know what they are doing invest in a good multi-filter holder (like the Lee filter system) and a bunch of good 4x6" filters: they are quite literally ESSENTIAL to achieving these effects.

1065
Landscape / Re: jrista et al, Why Astrophotography?
« on: May 12, 2014, 02:09:11 AM »
You know, why don't you get the Planewave 0.7m CDK Telescope System? Seems like a good option for $200,000.

Indeed. I think my HOA would crucify me if I mounted one in my driveway, though. ;P
