
Topics - jrista

Software & Accessories / The Bane of Adobe Creative Cloud
« on: December 10, 2013, 05:17:49 AM »
I've been largely unhappy about Adobe Creative Cloud. Personally, I don't think it is fair to the huge numbers of freelance photographers, graphic designers, web designers, etc. who have effectively built their entire livelihoods on Adobe software. I think that Adobe, with a $50/mo fee for the full CC Master suite and a $20/mo per-app fee, is greatly taking advantage of freelancers' unmitigated and everlasting dependence.

That said, I decided to give the PS CC + LR5 $10/mo deal a try. It was the first deal that Adobe offered that seemed reasonable (we'll see if it stays that way in a year), and I wanted LR5. I still own PS CS6, and I prefer to use it as my primary editor...with SELECTIVE use of PS CC. Well, I've learned a few things, and I thought I'd warn people.

First off...Adobe CC is infectious. By that, I mean, once it is installed, the CC versions of its products take over any automatic integrations and file associations. If you double-click a .psd, it opens in CC, rather than CS6. Worse, if you use LR, whenever you open images in Photoshop, it always opens in CC. The worst part is...there seems to be NO WAY to configure LR (either v4.x or v5.x) or other Adobe apps to use the Photoshop version of your choice...you're STUCK with CC, unless you uninstall it...and then you have the hassle of getting CS6 working again. Frustrating and annoying...Adobe should allow their users to choose which version of Adobe products is used, rather than automatically forcing you to CC.

There is a deeper, more malicious demon lurking within Adobe Creative Cloud, however. I stopped using the .psd format a while ago. I never seemed to need the extra information that .psd stored over and above .tiff, so I switched to .tiff. As such, I NEVER expected that saving .tiff files created with Photoshop CC would not function properly in Photoshop CS6. I thought that since I was using a universal format, they would be compatible with anything that could load .tiff files.

Well, this plainly and simply isn't true. An example is using smart objects. I use smart objects with stacked images, along with tweaking the stacking mode (usually mean & median), to do some pretty amazing noise reduction with still frames (macro, landscape) and astrophotography frames. Thanks to the issue described above, some of my recent astro stacks were done in PS CC, rather than PS CS6. I tried to open these .tiff files in PS CS6, and while they opened, they did not render 100% correctly. The issue? The "renderer" for the smart object stacks could not be found. PS CS6 supports exactly the same stacking modes, but Adobe cleverly changed how they store that information in .tiff files, so it is no longer backwards compatible.

So the warning here is, BEWARE! While Adobe says you can open files saved with Creative Cloud apps, they have apparently "tweaked" a few things here and there to make life difficult for those who try to get around their insane monthly fees and use their "bought and paid for" previous versions. Even if you save in universally supported file formats such as TIFF, your file compatibility is NOT guaranteed. You can work around some of these issues, but just beware...there may be some "tweaks" to how CC apps save data that might permanently bind a perfectly normal TIFF file to that CC app, preventing its use in a prior version.

This is the kind of maliciousness that I was afraid Adobe would employ. To my great dismay, it seems my suspicions were correct. The truly frustrating thing is, I cannot afford the extremely hefty upgrade prices for some of the apps I need to upgrade, such as Illustrator and Premiere. Even worse, in many cases, my versions of some apps like Premiere are too old to upgrade (CS3 era), and I'm required to pay full price. So, my options are to either subscribe to CC, and get locked in forever...or shell out an unholy amount of cash for a product I already own, but for which I simply need an upgrade. Despicable. Adobe is rapidly becoming my most loathed company.


Lenses / New Samyang 10mm f/2.8 ultrawide! Looks impressive...
« on: December 05, 2013, 10:13:04 PM »
Samyang 10mm f/2.8 Manual Focus Wide Angle Prime

This looks pretty impressive. Samyang has made excellent wide angle primes for a while, but this is the first time I've seen one with a nano crystalline coating on an internal lens element. Canon and Nikon have been using nanocrystal coatings on internal elements for a while, and it has a truly amazing impact on reducing flare (total transmission loss is in the range of 0.1%, vs. often more than 1% for basic multicoating).

For rectilinear wide field astrophotography, this lens could be a true dream come true...not to mention the applications for high quality ultrawide landscape photography (especially on full frame!)

Curious to see how corner performance is. If it is anything like the 14mm and 24mm Samyang lenses, it should be phenomenal...but 10mm is pretty darn wide...

Landscape / Deep Sky Astrophotography (Gear Discussion)
« on: December 04, 2013, 11:45:08 PM »
There is already a stars above thread, but that one seems to be about wide field astrophotography. I've been taking a bunch of photos of the comets flying through the sky lately. The only one I was able to get a decent shot of was Lovejoy R1 (see the Comets thread).

I discovered an intriguing new technique for stacking very short deep sky frames in Photoshop, one which nearly eliminates noise without affecting detail. I've been trying to stack short (i.e. 1-2 second) frames of the Orion nebula for a while, never with satisfactory results...always still too much noise. This new technique resulted in my first fairly decent photo:

  • Body: Canon EOS 7D
  • Lens: Canon EF 100mm f/2.8
  • Exposure: 1s f/2.8 ISO 1600
  • Frames: 30

I stacked the frames in the following way:

  1. Import the frames as layers into Photoshop from LR
  2. Align all layers (I did it manually; auto-align freaked out for some reason)
  3. Select the first 5 layers, then Layers->Smart Objects->Create
  4. Set the stacking mode to mean: Layers->Smart Objects->Stack Mode->Mean
  5. Repeat steps 3-4 for each group of 5
  6. Rasterize each smart object
  7. Set opacity (from the bottom-most light frame up): 100%, 83%, 66%, 50%, 33%, 16%
  8. Set the blending mode to Screen for all light frames
  9. Add a Levels adjustment layer and correct the black point, white point, and gray point to bring out the most detail
  10. Tweak color, levels, curves, etc. to taste
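For the curious, the averaging and blending math behind those steps can be sketched with numpy. This is a rough illustration of the arithmetic, not a substitute for the Photoshop workflow; the `stack_frames` function is my own, and it assumes the frames are already aligned and loaded as float arrays with values in [0, 1]:

```python
import numpy as np

def stack_frames(frames, group_size=5,
                 opacities=(1.00, 0.83, 0.66, 0.50, 0.33, 0.16)):
    """Mean-average groups of frames, then screen-blend the group
    averages from the bottom up using the listed opacities."""
    # Mean-stack each group of `group_size` frames (the smart-object
    # "Mean" stack mode).
    groups = [np.mean(frames[i:i + group_size], axis=0)
              for i in range(0, len(frames), group_size)]
    # Composite the group averages with Screen blending, attenuated by
    # each layer's opacity: screen(a, b) = 1 - (1 - a) * (1 - b)
    result = groups[0]  # bottom-most layer at 100%
    for layer, alpha in zip(groups[1:], opacities[1:]):
        screened = 1.0 - (1.0 - result) * (1.0 - layer)
        result = result + alpha * (screened - result)
    return np.clip(result, 0.0, 1.0)
```

Because Screen blending only ever brightens, the faint nebulosity gets lifted while the per-group averaging has already beaten down the random noise.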

Landscape / Comets
« on: November 27, 2013, 09:26:23 AM »
Well, this seems to be the month of comets. In addition to Encke, ISON, and the new Lovejoy, four other comets were discovered this month (C/2013 V1 (BOATTINI), C/2013 V2 (BORISOV), C/2013 V3 (NEVSKI), C/2013 V5 (OUKAIMEDEN)). Nevski and Oukaimeden are moving right along. Nevski is passing by the constellation Leo, and Oukaimeden is approaching Jupiter in the sky. Not sure if/when they might put on a show, but currently, we have Encke (a short-period comet), ISON, and Lovejoy sharing the sky and putting on a show for at least binoculars and telescopes.

Given the plethora of cometary beauties moving through the skies right now, I thought it might be worth it to start a Comet thread. I had originally intended to have a Celestron EdgeHD 11" with their DX equatorial tracking mount...but circumstances have left me with only a 600mm f/4 lens. Not particularly ideal, but it allowed me to get a basic shot of Lovejoy:

If you've been photographing comets (especially if you have a tracking mount and a telescope), post em here! Would love to see them!

Well, I thought I'd start a thread for this. Not sure if anyone will get anything...the moon is full tonight, nice and bright...and it may ruin the show. The Orionids were mooned out this year as well, and here in Colorado we had cloud cover.

The Leonids peak in the early morning hours before sunrise, which means the moon will be lower towards the western horizon. Leo will be up high in the sky, but hopefully any meteors radiating towards the east will be visible and capable of being captured by a camera.

As an extra treat, Comet ISON reached naked-eye visibility today, so it should be visible, a little below Mars near the horizon, around the same time that the Leonids peak. ISON is a fairly fast moving comet, and it hasn't brightened all that much, so with the hunter's moon you might not see anything...but still, worth a try I guess. ;)

Anyway, if you get any pictures, post 'em here!

Landscape / Waterscapes
« on: September 04, 2013, 02:22:44 PM »
The title says it all!

Here is my first. Small creek cascading down a mountainside near Long Lake, in the Indian Peaks Wilderness area of the Colorado Rockies. The entire creek was shrouded in yellow and light purple flowers.

Gear: Canon 7D + EF 16-35mm f/2.8 L II
Exposure: 2s @ f/16 ISO 100

Macro / Denizens of the Forest Floor
« on: August 22, 2013, 10:36:23 AM »
If you live in a forest, or have any photos of forest floor dwellers such as mushrooms, lichens, mosses, etc., this is the place to post them. Macro and close-up work only. It does not matter what lens you use, or whether you use extension tubes or a reversed lens, etc., so long as magnification is 1:2 or larger (1:1, 2:1, ... 5:1).

Name: Puffball Mushroom (Lycoperdon perlatum)
Edible: Yes (when white inside)
Location: Long Lake, Brainard Lake Recreation Area, Indian Peaks Wilderness, Colorado
Equipment: Canon 7D + EF 100mm f/2.8 Macro

Landscape / Perseid Meteor Shower Aug. 11-12 2013
« on: August 10, 2013, 07:32:46 PM »
Just in case anyone likes photographing meteor showers, the Perseids peak on August 11th and 12th. I am not sure I'll be able to get any shots...Colorado has been experiencing pretty powerful thunderstorms every evening and through most of the night for about a week now. *sob!*  :'(

Anyway, if anyone manages to capture any night sky photos of the shower, I'd love to see some posted here! :)

I just read some of the reviews on the Lumia 1020. I have to say, from a photography standpoint, I am REALLY impressed. It finally brings the true PureView 808's 41mp sensor, the 6-element Zeiss lens from the 925, and a full Xenon flash to a phone pretty much built for photography. Their pro photo software looks rather nice, giving you complete control over all the standard aspects of exposure (i.e. if you want to do a long exposure and blur people walking by, you can). I love the fact that it has the extended battery "grip" accessory, too.

So, does this mark the true end of the point and shoot, and the beginning of full blown photography phones with all the features we *photographers* have come to expect from an actual camera? To date, phone cameras have been geared more towards the instagrammer crowd...the Lumia 1020 seems to be positioned more for pro photographers who want something simpler, but still just as capable, for a handy every-moment alternative to a DSLR.

Is it only me who thinks this?

I seem to have an issue with my EF 100-400mm lens. I just took it out to shoot Double-crested Cormorants and some other birds the other day, the first time since early February. The last time I used the lens, it was for pretty close up shots of smaller birds, such as Juncos and Chickadees, in my yard. When it came to the Chickadees, I did not see anything wrong at first with the shots. Yesterday, while photographing the Cormorants, it seemed the 7D AF locked on pretty well, even though the subjects were only filling 1/4 to 1/3 of the frame most of the time. I spent a few hours photographing, and went home with a couple CF cards full of photos. When I got home, it seemed as though 100% of the shots were soft and/or slightly out of focus. I've been using this lens with its current AFMA setting for months, and it seemed to work at that setting before. To my knowledge, the lens has been sitting on a tripod, attached to my camera, in the same place since then...it hasn't been dropped or anything like that.

So far, out of quite a number of photos, I have yet to see one that I would call sharp or properly focused. Most of the shots seem to have the focal plane a few inches behind the subject, even though the 7D in AI Servo appeared to have locked onto the subject and tracked it while I held the AF button down (I use rear-button AF as taught by Art Morris.) The Camera is a 7D, no teleconverters used, and it was mounted on a GT3532LS tripod with the Jobu Pro 2. IS was in Mode 2, and was always active and stabilized before any shots were taken (so I don't think the issue is blurring caused by mid-exposure actuation of the lens' stabilization element group.) In addition to the softness and OOF issue, ALL of the highlights, even in the center of the frame, seem to have a coma-like shape. All of them are "stretched" towards the upper right corner of the frame.

(Not the best photo of the bird...it just has a lot of spot highlights that demonstrate the problem well. Small highlight on the end of the bill, another in its eye, several on its wing feathers. The whole image overall appears soft. The focal plane looks to be slightly behind the bird, despite the fact that I was using center-point-only AF with rear-button AF, and it was right on the bird. LR doesn't show AF points, sadly.)

Since getting home, I've run the lens through FoCal, as well as a manual AFMA procedure, and I keep ending up at the same setting: -5. I've tested a bunch of shots indoors on well-lit objects, including those with very fine details (I have a calendar with extremely fine lines). Things look pretty sharp, however a numeral six right at the center of the frame also appears to have a slight lower left to upper right "blur":

NOTE: Click images to view them full size, to actually see the 100% pixel peeping view.

(I used AF with -5 AFMA. The diagonal blur appears when using live view contrast-detect AF as well...pretty much identical.)

I am beginning to wonder if the Junco and Chickadee photos from a couple months ago might even have some problems. Based on this 100% crop of an unprocessed Chickadee photo, I now see the same lower-left-to-upper-right blur and coma-like highlight shape. There also appears to be some soft fringing around the beak and the top of its head. The feather detail does not look as sharp as I thought it was...certainly not as sharp as I would have expected for how close the subject was, and the fact that AI Servo AF seemed to instantly nail it right on the bird's eye. I rarely look at my photos at 100% like this...I usually process and sharpen at the "Fit" setting in Lightroom. At 100%, it definitely appears as though shots from over a month ago have the same problem as the shots today. I am guessing the issue is just more pronounced with more distant subjects like the cormorants (which had to be at least 8-10 times farther away).

(Shot from a few feet away, almost at the MFD of the lens. IS was off in most of these shots, with the camera mounted on a tripod with a gimbal head.)

It is also possible that I've just become spoiled. I have rented several of Canon's new Mark II telephoto and supertelephoto lenses. The sharpness on those is quite stunning, and certainly puts the piddly little old 100-400mm lens to shame in every way. That said, the funky coma-like highlight halos and stretching just seem wrong to me, and I do not remember seeing them in my photos from a number of months ago. Do I need to send in my lens for repair? If I do need to send it in, is it best to send the camera body with it for proper alignment, or will AFMA cover that? Thanks for any insight!

I just came across some articles written by ctein at The Online Photographer. He brought to light a term that I think would be very useful when it comes to discussing dynamic range of modern cameras. Frequently, the debate arises about what DXOMark's Print DR statistic means, usually in conjunction with the D800's whopping and hard-to-swallow rating of 14.4 stops. Some people have come up with the term "Photographic Dynamic Range" to refer to the thing most photographers think of when they hear "dynamic range", but the meaning of PDR is not super clear all the time. I think ctein's explanation in the two articles below is an excellent one, and I like the differentiation the term "Exposure Range" allows relative to "Dynamic Range". I think Exposure Range (apparently an existing term used in the film days) appropriately and accurately describes what most photographers think of when they hear "dynamic range". Dynamic Range, the way DXO describes it, is quite appropriately called Dynamic Range as it has to do with the "signal", not necessarily the usable range of tones in an "image", nor the characteristics or quality of the noise that may affect the exposure range of the image.

Ctein also puts forward the notion that since many pixels comprise an image, it is theoretically possible for the exposure range to be higher than the dynamic range. He explains it in the second article. Interesting concepts. I am not sure how well it applies with RAW and raw editors these days...the expandability of exposure range via dithering (which is effectively what Part II covers) is theoretically possible, but in my experience noise in the lower tonal range of a RAW image tends to have too high of a standard deviation to be effective as a medium for dithering. I've never used a top-end camera like the 1D X, however...perhaps its superior noise characteristics would fare better.
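As a toy sketch of the dithering idea (my own illustration of the general principle, not ctein's exact argument, with made-up values): noise spreads a tone sitting below one quantization step across adjacent levels, so averaging many quantized samples can recover a tone that clean quantization would discard entirely.

```python
import numpy as np

rng = np.random.default_rng(42)
true_level = 0.3    # a tone 0.3 ADU above black, below one quantization step
n_samples = 10_000  # stand-in for many pixels/frames being averaged

# Without noise, quantization simply discards the tone: round(0.3) == 0.
no_dither = np.round(np.full(n_samples, true_level)).mean()

# With ~1 ADU of Gaussian read noise acting as dither, the mean of the
# quantized samples converges back toward the true sub-ADU level.
dithered = np.round(true_level + rng.normal(0.0, 1.0, n_samples)).mean()

print(no_dither)  # 0.0: the tone is lost to quantization
print(dithered)   # close to 0.3: the tone is recovered by averaging
```

The catch, as noted above, is that this only works when the noise is well behaved; if its standard deviation is too large relative to the signal, the averaged estimate is too uncertain to be useful.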

I'm starting this thread to continue a tangent from another. Rather than derail the other thread, but in order not to lose the discussion, I thought we could continue it in its own thread. I think there is important information to be gleaned from the discussion, which started when I responded to a comment by @rs:

Ps - I really hope Canon resist the temptation to take their 1.6x crop sensor up to 24mp. It'll suffer from softness due to diffraction from f6.0 onwards - mount an f5.6 lens on there and you've got little in the way of options. Even the legendary 300/2.8 II with a 2x TC III will underperform, and leave you with just one aperture option if you want to attempt to utilise all of those megapixels. Leave the MP lower, and let those lower processing overheads allow them to push the hardware of the small mirror and shutter to its limits.

Once again, this rhetoric keeps cropping up and it is completely incorrect! NEVER, in ANY CASE, is more megapixels bad because of diffraction!  :P That is so frequently quoted, and it is so frequently wrong.

You can follow the quote above to read the precursor comments on this topic. So, continuing on from the last reply by @rs:

Once again, this rhetoric keeps cropping up and it is completely incorrect! NEVER, in ANY CASE, is more megapixels bad because of diffraction!  :P That is so frequently quoted, and it is so frequently wrong.
I'm not saying its worse, its just the extra MP don't make any difference to the resolving power once diffraction has set in. Take another example - scan a photo which was a bit blurry - if a 600dpi scan looks blurry on screen at 100%, you wouldn't then think 'let's find out if anyone makes a 10,000dpi scanner so I can make this look sharper?' You'd know it would offer no advantages - at that point you're resolving more detail than is available - weakest link in the chain and all that...

I think you are generally misunderstanding resolution in a multi-component system. It is not the lowest common denominator that determines system resolution; system resolution is the root mean square of the blurs of all the components. To keep things simple for this forum, and in general this is adequate for most discussion, we'll just factor in the lens resolution and sensor resolution, in terms of spatial resolution. The way I approach this is to determine the "system blur". Diffraction itself is what we call "blur" from the lens, assuming the lens is diffraction limited (and, for this discussion, we'll just assume the lens is always diffraction limited, as determining blur from optical aberrations is more complex), and it is caused by the physical nature of light. Blur from the lens changes depending on the aperture used, and as the aperture is stopped down, diffraction limits the maximum spatial resolution of the lens.

The sensor also introduces "blur", however this is a fixed, intrinsic factor determined by the size and spacing of the pixels, whether micro lenses are used, etc. For the purposes of discussion here, let's just assume that 100% of the pixel area is utilized thanks to "perfect" microlensing. That leaves us with a sensor blur equal to the pixel pitch (the scalar size, horizontal or vertical, of each pixel) times two (to get us lp/mm, or line pairs per millimeter, rather than simply l/mm, or lines per millimeter).

[NOTE: I assume MTF50, as that is the standard that historically represents what we perceive as clear, crisp, and sharp, with high microcontrast. MTF10, in contrast, is usually used to determine what might be considered the maximum resolution at the lowest level of contrast the human eye can detect...which might be useful for determining the resolution of barely perceptible features on the surface of the moon, assuming atmospheric conditions are perfect, but otherwise it is not really adequate for the discussion here. Maximum spatial resolution at MTF10 can be considerably higher than at MTF50, but there is no guarantee that the difference between one pixel and the next is detectable by the average person. The Rayleigh Criterion (often described as the limit of human visual acuity for 20/20 vision) is more of the "true mathematical/theoretical" limit of resolution at very low, barely detectable levels of contrast. MTF0 would be spatial resolution where contrast approaches zero, which is largely useless for general photography outside of astronomy, where minute changes in the shape and structure of the Airy disk of a star can be used to determine whether it is a single, binary, or tertiary system...or other scientific endeavors where knowing the shape of an Airy disk at MTF0, or Dawes' Limit (the theoretical absolute maximum resolving power of an optical system at near-zero contrast), is useful.]

For starters, let's assume we have a perfect (diffraction-limited) lens at f/8, on a 7D sensor which has a pixel pitch of 4.3 microns. The lens, at f/8, has a spatial resolution of 86 lp/mm at MTF50. The sensor has a raw spatial resolution of approximately 116 lp/mm (assuming the most ideal circumstances, and ignoring the difference between green and red or blue pixels.) Total system blur is derived by taking the root mean square of all the blurs of each component in the system. The formula for this is:

Code: [Select]
tb = sqrt(lb^2 + sb^2)
Where tb is Total Blur, lb is Lens Blur, and sb is Sensor Blur. We can convert spatial resolution, from lp/mm, into a blur circle in mm, by simply taking the reciprocal of the spatial resolution:

Code: [Select]
blur = 1/sr
Where blur is the diameter of the blur circle, and sr is the spatial resolution. We get  0.01163mm for the blur size of the lens @ f/8, and 0.00863 for the blur size of the sensor. From these, we can compute the total blur of the 7D with an f/8 lens:

Code: [Select]
tb = sqrt((0.01163mm)^2 + (0.00863mm)^2) = sqrt(0.0001353mm^2 + 0.0000745mm^2) = sqrt(0.0002098mm^2) = 0.01448mm
We can convert this back into lp/mm simply by taking the reciprocal again, which gives us a total system spatial resolution for the 7D of ~69lp/mm. Seems surprising, given the spatial resolution of the lens...but then again, that is for f/8. If we move up to f/4, the spatial resolution of the lens jumps from 86lp/mm to 173lp/mm. Refining our equation to stay in lp/mm:

Code: [Select]
tsr = 1/sqrt((1/lsr)^2 + (1/ssr)^2)
Where tsr is total spatial resolution, lsr is lens spatial resolution, and ssr is sensor spatial resolution, plugging in 173lp/mm and 116lp/mm for lens and sensor respectively gets us:

Code: [Select]
tsr = 1/sqrt((1/173)^2 + (1/116)^2) = 1/sqrt(0.0000334 + 0.0000743) = 1/sqrt(0.0001077) = 1/0.01038 = 96.3
With a diffraction limited f/4 lens, the 7D is capable of achieving ~96lp/mm spatial resolution.
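For anyone who wants to play with the numbers, the same arithmetic can be packaged as a small Python sketch (the function name is mine; the lp/mm inputs are the figures from the worked examples above):

```python
import math

def total_spatial_resolution(lens_lpmm, sensor_lpmm):
    """System resolution as the root mean square combination of the
    lens and sensor blur circles (reciprocals of the lp/mm figures)."""
    lens_blur = 1.0 / lens_lpmm      # blur circle diameter in mm
    sensor_blur = 1.0 / sensor_lpmm
    total_blur = math.sqrt(lens_blur ** 2 + sensor_blur ** 2)
    return 1.0 / total_blur          # back to lp/mm

# 7D sensor (~116 lp/mm) with a diffraction-limited lens:
print(total_spatial_resolution(86, 116))   # f/8 lens -> ~69 lp/mm
print(total_spatial_resolution(173, 116))  # f/4 lens -> ~96 lp/mm
```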

The debate at hand is whether a 24.1mp APS-C sensor is "worth it", and whether it will provide any kind of meaningful benefit over something like the 7D's 18mp APS-C sensor. My response is absolutely!! However, we can prove the case by applying the math above. A 24.1mp APS-C sensor (Canon-style, 22.3mmx14.9mm dimensions) would have a pixel pitch of 3.7µm, or ~135lp/mm:

Code: [Select]
(1/(pitch µm / 1000µm/mm)) / 2 l/lp = (1/(3.7µm / 1000µm/mm)) / 2 l/lp = (1/(0.0037mm)) / 2 l/lp = 270l/mm / 2 l/lp = 135 lp/mm
Plugging that, for an f/4 lens, into our formula from above:

Code: [Select]
tsr = 1/sqrt((1/173)^2 + (1/135)^2) = 1/sqrt(0.0000334 + 0.0000549) = 1/sqrt(0.0000883) = 1/0.0094 = 106.4

The 24.1mp sensor, with the same lens, produces a better result...we gained 10lp/mm, up to 106lp/mm from 96lp/mm on the 18mp sensor. That is an improvement of 10%! Certainly nothing to sneeze at! But...the lens is outresolving the sensor...there wouldn't be any difference at f/8, right? Well...not quite. Because "total system blur" is a function of all components in the system, we will still see improved resolution at f/8. Here is the proof:

Code: [Select]
tsr = 1/sqrt((1/86)^2 + (1/135)^2) = 1/sqrt(0.0001352 + 0.0000549) = 1/sqrt(0.00019) = 1/0.0138 = 72.5
Despite the fact that the theoretical 24.1mp sensor from the hypothetical 7D II is DIFFRACTION LIMITED at f/8, it still resolves more! In fact, it resolves about 5% more than the 7D at f/8. So, according to the theory, even if the lens is not outresolving the sensor, even if the lens and sensor are both thoroughly diffraction limited, a higher resolution sensor will always produce better results. The improvements will certainly be smaller and smaller as the lens is stopped down, thus producing diminishing returns. If we run our calculations for both sensors at f/16, the difference between the two is less than at f/8:

18.0mp @ f/16 = 40lp/mm
24.1mp @ f/16 = 41lp/mm

The difference between the 24mp sensor and the 18mp sensor at f/16 has shrunk by half to 2.5%. By f/22, the difference is 29.95lp/mm vs. 30.21lp/mm, or an improvement of only 0.9%. Diminishing returns...however even at f/22, the 24mp is still producing better results...not that anyone would really notice...but it is still producing better results.
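The diminishing-returns trend is easy to reproduce with a short Python sketch. One caveat: the post only states the diffraction-limited lens figures for f/4 (173 lp/mm) and f/8 (86 lp/mm); the f/16 and f/22 values below simply scale the f/8 figure inversely with f-number, which is my assumption:

```python
import math

def tsr(lens_lpmm, sensor_lpmm):
    # Root-mean-square total system resolution, as derived above.
    return 1.0 / math.sqrt((1.0 / lens_lpmm) ** 2 + (1.0 / sensor_lpmm) ** 2)

SENSOR_18MP = 116.0  # 7D, 4.3 micron pixel pitch
SENSOR_24MP = 135.0  # hypothetical 3.7 micron pixel pitch

for f_number, lens in [(4, 173.0), (8, 86.0), (16, 43.0), (22, 86.0 * 8 / 22)]:
    r18, r24 = tsr(lens, SENSOR_18MP), tsr(lens, SENSOR_24MP)
    gain = 100.0 * (r24 - r18) / r18
    print(f"f/{f_number}: 18mp = {r18:.1f} lp/mm, 24mp = {r24:.1f} lp/mm (+{gain:.1f}%)")
```

The percentage gain shrinks as the lens is stopped down, but it never reaches zero, which is the whole point of the argument.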

The aperture used was f/9, so diffraction has definitely "set in" and is visible given the 7D's f/6.9 DLA. The subject, in this case a Juvenile Baird's Sandpiper, comprised only the center 25% of the frame, and the 300 f/2.8 II w/ 2x TC STILL did a superb job resolving a LOT of detail:
You've got some great shots there, very impressive  ;) - and it clearly does show the difference between good glass and great glass. But the f9 300 II + 2x shot isn't 100% pixel sharp like your native 500/4 shot is. I'm not saying there's anything wrong with the shot - it's great, and the detail there is still great. Its just not 18MP of perfection great. A 15MP sensor wouldn't have resolved any less detail behind that lens, but that wouldn't have made a 15MP shot any better. This thread is clearly going off on a tangent here, as pixel peeping is rarely anything to do with what makes a great photo - its just we are debating whether the extra MP are worth it. And just to re-iterate, great shots jrista  :)

No, it certainly isn't 18mp of perfection great, because it is only a quarter of the frame. It is more like 4.5mp "great". :P My 100-400 wouldn't do as well, not because it doesn't resolve as much, as at f/9 it would resolve roughly the same...but because it would produce lower contrast. Microcontrast from the 300mm f/2.8 II lens is beyond excellent...microcontrast from the 100-400 is bordering on piss-poor. There are also the advancements in IS technology to consider. I forgot to mention this before, but Canon has greatly improved the image stabilization of their new generation of lenses. Where we MAYBE got two stops of hand-holdability before, we easily get at least four stops now, and I've managed to get some good shots at five stops. As a matter of fact, the Sandpiper photo was hand held (with me squatting in an awkward manner on soggy, marshy ground that made the whole thing a real pain), at 600mm, on a 7D, and the BARE MINIMUM shutter speed to get a clear shot in that situation is 1/1000s.

So, I still stress...there are very good reasons to have higher resolution sensors, and with the significantly advanced new generation of lenses Canon is releasing, I believe we have the optical resolving power to not only handle a 24mp APS-C sensor, but up to 65-70mp FF sensors, if not more, in the future.

You've got some great shots there, very impressive  ;) -  /* ...clip... */ And just to re-iterate, great shots jrista  :)

Thanks!  ;D

Lenses / Which Gitzo: GT3532LS or GT3542LS?
« on: February 06, 2013, 12:28:16 AM »
I seem to have broken my current tripod. It was a nice, sturdy, and fairly lightweight aluminum 'pod, and it has served me well for nearly four years. I went to set it up today, and apparently one of the legs no longer locks...it just swings freely. I am not sure when or how I damaged the locking mechanism, but it is well and truly dead (no attempts to fix it have worked).

I already own a Gitzo Traveler Series 0 for long hikes. I love it, the CF design is excellent, it is extremely light weight, very compact, and extremely versatile. I've decided to stick with Gitzo for a replacement to the tripod that broke, and I'm looking at the Systematic Series 3 line. I'm having a hard time deciding between the three-leg and four-leg version of the GT35XXLS. The four leg seems to collapse slightly shorter and is slightly lighter. It also seems to be about an inch shorter.

I need a good, sturdy tripod to handle lenses up to the EF 600mm f/4 L IS II, with a TC, along with either a 7D or 5D III, with battery packs. I currently use a Jobu Pro 2 gimbal head, which might add about 10-12 inches of height on top of the tripod. I need something that will put the camera at eye-level when I am standing (I'm 6' tall). I think the GT35XXLS will do that, as the pods seem to be around 58" tall. I need something nice and sturdy, too...which is really where my question comes from. Does the extra (fourth) leg section of the GT3542LS affect stability much?

I'm curious if anyone has used these two tripods with larger lenses, preferably with a gimbal, and a larger camera body. Thanks!

I am trying to decide on my next lens purchases. I am a bird and wildlife photographer who does landscapes & macro on the side. I currently have the EF 16-35mm f/2.8 L II for my landscape work, 100mm f/2.8 for my macro work, and the 100-400mm f/4.5-5.6 L IS for my primary photography. I currently use the 7D, however depending on when the 7D II is announced/released, or how the high MP camera pans out, I may get a 5D III.

I ran into the limitations of the 100-400 some time ago, and am ready to move on to bigger, better things. I have been renting Canon's new line of telephoto lenses. For my primary work, my heart is pretty dead set on the EF 600mm f/4 L IS II along with the EF 1.4x TC III (and probably the 2x TC III with the 5D III + f/8 firmware) for the bird photography. However, before I spend that kind of money, I wanted to figure out if there may be something shorter that I could use for birds in flight and wildlife. I rented the 300mm f/2.8 L II a few months back, and the quality is simply unbelievable. It blew me away. With 1.4x and 2x teleconverters it extends right up to 600mm f/5.6, which is good for a lot of things, including wildlife at a comfortable distance at 300mm and 420mm, and bird photography at 600mm in good light (although that f/5.6 aperture fails to handle morning, evening, or overcast photography very well.)

If I do pick up the 600mm f/4 L II, that will burn up my budget and then some. I'm sure that lens will do well for some wildlife photography, however it won't be all that handholdable (although Canon's weight savings are an amazing achievement), and that focal length could make it difficult to get shots of the less shy wildlife...and we have quite a bit of that here in Colorado...deer will get within feet of you at times. I've wondered whether picking up the EF 70-200mm f/2.8 L IS II would be a worthwhile replacement for the 100-400, and capable of producing quality shots. According to my calculations, despite its faster relative aperture of f/2.8, its entrance pupil at the long end is the same size as the 100-400's (200/2.8 = 71.43mm, 400/5.6 = 71.43mm). The smallish entrance pupil on the 100-400 has not really done much to produce that nice, high quality, creamy bokeh. It never comes close to the quality of the 300mm, 500mm, or 600mm Mark II telephoto lenses. Additionally, the 100-400 is quite soft at 400mm until you stop down to f/7.1, at which point it sharpens up a bit, but is still visibly poor compared to the sharpness of any of those same telephoto lenses.
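For anyone who wants to reproduce the entrance-pupil comparison, the diameter is just focal length divided by f-number (D = f/N); a minimal sketch:

```python
def entrance_pupil_mm(focal_mm, f_number):
    """Entrance pupil diameter D = f/N, in millimetres."""
    return focal_mm / f_number

# Long end of the 70-200 f/2.8 vs. long end of the 100-400:
print(round(entrance_pupil_mm(200, 2.8), 2))  # ~71.43 mm
print(round(entrance_pupil_mm(400, 5.6), 2))  # ~71.43 mm -- identical pupils
```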

Is the bokeh and sharpness of the 70-200 f/2.8 II closer to the caliber of Canon's new Mark II telephoto lenses? How does it fare with the 1.4x TC? Is the 280mm focal length with the 1.4x TC a good enough replacement for a 300mm prime?

Or, to get the kind of quality I'm looking for, do I really just have to knuckle down and get both the 300mm f/2.8 L II and 600mm f/4 L II?

Thanks for any help!

EOS Bodies / Canon to start using 0.18um (180nm) process for FF?
« on: October 28, 2012, 12:55:30 AM »
Chipworks recently released an article analysing the CMOS Image Sensor (CIS) processes from a variety of manufacturers, including Nikon, Sony, and Canon. Historically, Canon has used a 0.5 micron (500nm) process for all of their FF sensors since the original 1Ds. In the Canon analysis, they noted that Canon has a 0.18 micron (180 nanometer) fabrication process (possibly what they used for the 120mp APS-H?) that they may begin using for future FF sensors:

Quote from: Chipworks
Canon does have a 0.18 µm generation CIS wafer fab process, featuring a specialized Cu back end of line (BEOL) including light pipes (shown below). It is possible to speculate that Canon may be preparing to refresh its FF CIS line to supply devices for a new FF camera system.

A move from their 0.5um process to a 0.18um process for FF CIS manufacture would be a fairly significant move for Canon. The accompanying figure also seems to indicate a double microlens, one above the CFA and one below...which could lead to higher Q.E. The article also mentions the use of "light pipes", a term I had not heard before. According to a few papers I've read, light pipes in CMOS sensor design use high refractive index materials and a reflective wall in the optical stack to improve transmission of light from the color filter/microlens to the photodiode, which sits at the bottom of a narrow tunnel through all the readout wiring (in a frontside-illuminated design). It seems a light pipe is an alternative to a backside-illuminated design, aiming to improve Q.E. while avoiding some of the complexities and issues of BSI. Additionally, the use of copper interconnects should improve efficiency, allowing lower power usage, and hopefully leading to a lower level of electronic noise (the great bane of Canon these days).

Seems Canon is most definitely not out of the CMOS Image Sensor design game yet. They seem to have some new tricks up their sleeves, and hopefully they will see the light of day in their next FF camera. Ah, competition is good!
