March 03, 2015, 10:31:24 AM

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - jrista

Pages: 1 ... 75 76 [77] 78 79 ... 334
Third Party Manufacturers / Re: Sony A77II
« on: August 10, 2014, 09:38:13 PM »
Maybe I am misunderstanding the Sony benefit but with one in my hand it works much better and faster than my 1DX.

I use Case 5 mostly and if someone gets in the way long enough, it switches to the person in-between. This was not the case with the a6000. That also means you have to maintain your focus point on the subject, this is not the case with the Sony. It passes it off from one point to another. It literally moves the focus points as you track from one side of the EVF to the other side if you get behind or lost tracking your subject. If my 1DX does that, I am completely out of the loop.

If you shoot sports, Case 5 is probably not the best option. That one uses a medium tracking switch rate (the "Tracking Sensitivity" setting), so it's designed to jump to a new subject shortly after a closer subject moves into the frame. Cases 5 and 6 are what I use for bird photography. Birds can and do erratically change direction on a dime; it isn't quite that way with sports.

You want to be using Case 2 if you don't want the camera to switch subjects once it's tracking, as that uses -1 for the Tracking Sensitivity setting. You can edit it and put it to -2 if you want, then it will really stick to a subject, even if multiple other potential subjects pass in front of or near to your tracked subject.

Animal Kingdom / Re: BIRD IN FLIGHT ONLY -- share your BIF photos here
« on: August 10, 2014, 09:07:04 PM »
I think so - at least, there wasn't a body anywhere nearby!


Ccoo-coo-oo-woo-woo-woo-crwaak! *shakes head, wobbles a bit* "Aww, man, that messed up my dove-love groove!"

Third Party Manufacturers / Re: Sony A77II
« on: August 10, 2014, 09:01:07 PM »
Once you use the Flex point to lock on to a moving subject, you no longer have to keep that focus point on the subject. The flex point passes focus off to other focus points across the EVF. You can let the subject move across the FOV and that subject stays in focus. The Flex point locks focus when depressing the shutter button half way. Whatever you pressed the shutter button halfway down on stays locked on until it exits the screen.

This is not the end all be all feature. No, it is not like a 5Diii or 1DX, I have both and can attest to this.

This is far more sophisticated and it is progress by a camera manufacturer. Real progress. I really hope Sony pushes Canon and Nikon to up their game, soon.

What you're describing is exactly how Canon's 61pt AF system works in all points mode. It does EXACTLY the same thing...it will use whatever focus points, out of the entire grid, are needed to keep the subject it originally locked onto, and is now tracking, in focus.

I also own the 5D III...that's exactly how it works for me. So either you are not using your 1D X and 5D III correctly, or you're just not using the right AF mode on them.

Sony's flex point is no more sophisticated than what Canon introduced several years ago. For that matter, Nikon has been doing this for even longer! And Nikon's AF system, which is also tied into its high res RGB metering sensor, has a large library of reference images that it uses to identify what kind of subject you're tracking (which supposedly gives it cues as to subject behavior...however it still doesn't seem to work as well as Canon's 61pt AF system in practice...both the D800 and D4 these days don't perform quite as well as the 1D X in sports and other high action shooting, based on reviews.)

Third Party Manufacturers / Re: Sony A77II
« on: August 10, 2014, 08:56:34 PM »
I just watched the video, and wow, what a load of crap! This is what Canon's 61pt AF system (and before that, their 45pt and 19pt AF systems) have been doing for years! This is exactly what Canon's AF systems do in all points mode...they LOCK onto a subject, then track that subject. With the 5D III and 1D X, the AF system is also very highly configurable, and comes with several preconfigured AF modes for different kinds of subject motion, as well as full custom configurability. (You can even assign different AF modes to different custom user dial modes for quick and easy access.)

Canon's system allows you to do full subject tracking in all points mode, but also has several zone modes (where instead of using all AF points, it will just use a selected zone of AF points that you can move around the entire grid), as well as expansion modes (where it will let you pick a primary AF point, and then utilize either the four or eight surrounding points to assist). Canon's tracking is also better than what was demonstrated in the video...the reviewer was trying to say that the camera did not lose focus on the guy in red, but it WAS losing focus...because by the end of the sequence, both the red and black guys who were crossing paths were OOF. Canon has a configurable tracking "switch rate"...the camera will try to keep focus on its previously tracked subject (using a hysteresis of the previous AF frames) for as long as you configure it, then switch to a closer subject. You can configure this tracking switch rate from very slow through very fast. Canon wouldn't have lost focus on the red player at all, period.

Another thing about Canon's AF system, especially on the 1D X: it would NEVER lose the subject's face. In the video, the Sony was focused on the guy's chest and knees most of the time, but no one wants a knee in focus and the face slightly out of focus. The 1D X ties the meter and AF system together via a dedicated processor that can do face recognition (which, actually, works with birds and dogs as well, possibly other wildlife. ;)) Once the face is identified, Canon can maintain the AF lock on the same subject, and on his face, the entire time.

So, I'm sorry, but the reviewer in this video is full of crap when he says this kind of AF system has never been done before. MASSIVE LOAD OF BULL SH*T!!

I am using the same source of information that you quoted for number of pixels on target - Clark.

Quote: "The images in Figures 10 and 11 illustrate that combining pixels does not equal a single image. The concept of a camera with many small pixels that are averaged to simulate a camera with larger pixels with the same sensor size simply does not work for very low light/high ISO conditions. This is due to the contribution of read and electronics noise to the image. Again this points to sensors with larger pixels to deliver better image quality in high ISO and low light situations."

I think you're misinterpreting what he is saying. He isn't saying that read noise increases as pixels get smaller. He is saying that read noise represents a larger percentage of the image signal at higher ISO than at lower ISO, and the higher SNR of larger pixels offsets that. That is a true statement.

There are other factors to consider about high ISO, though. I think it was Lee Jay who stated it earlier in the thread, but read noise is lower with smaller pixels. Look at the published empirical read noise measurements (that is ALL read noise...dark current contribution as well as downstream electronics contributions). The 7D has ~8e- RN @ ISO 100, while the 5D III has ~33e- RN @ ISO 100. Since you can fit 2.1 7D pixels into the space of a single 5D III pixel, the "equivalent RN" of binned pixels would be ~16.8e-, still half what the 5D III has. (I really don't understand why Canon's newer sensors have such high read noise...their RN levels are REALLY bad...but maybe it's a tradeoff they make for their high frame rates at that pixel count or something. I can't wait till Canon moves to an on-die CP-ADC design...) I used the word binned there, because it's important. If you average pixels together in post, the random component of read noise drops. Only the non-random component of read noise will strengthen. Canon in general has a handicap there...they have some strong pattern noise at low ISO on the 7D, and even some still on the 5D III. At least it only really shows up at lower ISO settings.
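The 16.8e- figure above scales the 7D's read noise linearly with pixel count, which is the worst case (fully correlated, i.e. pattern, noise); for purely random read noise the right combination is in quadrature, which is even more favorable to the smaller pixels. A quick sketch (Python, using the numbers quoted above):

```python
import math

def equivalent_read_noise(rn_per_pixel, n_pixels, correlated=False):
    """Read noise of n_pixels small pixels summed to cover one large pixel.

    Uncorrelated (random) read noise adds in quadrature; fully correlated
    (pattern) noise adds linearly, which is the worst case.
    """
    if correlated:
        return rn_per_pixel * n_pixels
    return rn_per_pixel * math.sqrt(n_pixels)

# Figures from the post: 7D ~8e- RN at ISO 100, ~2.1 7D pixels per 5D III pixel.
rn_7d, ratio = 8.0, 2.1
print(equivalent_read_noise(rn_7d, ratio))                   # ~11.6e- (random)
print(equivalent_read_noise(rn_7d, ratio, correlated=True))  # 16.8e- (worst case)
```

Either way, the binned equivalent stays well under the 5D III's ~33e-.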

From a read noise standpoint, the 7D is actually very good. Some of the BEST ultra low noise CCD astro sensors on the market, one of which is the Sony ICX694, have ~5e- RN. At 5e- RN, that is one of the lowest read noise levels on the planet. There is a table of read noise levels on an astro site somewhere (I don't have the link handy now), and the lowest standard-gain RN I've ever seen was 4.5e-. Most DSLRs seem to bottom out at around ~3e- at high ISO (at least, according to the published measurements; some of the more linear measurement sets indicate RN levels drop to as little as ~2e- at their lowest). Regardless of whether RN is 3e- or 2e-, it's EXTREMELY LOW, and a minor contributor to overall high ISO noise in general.

Larger sensors perform better at high ISO because they have the potential to gather more light in total. This thread is all about the reach limitation, in which case, framing identically is not an option. When framing identically is not an option, ** assuming all else is equal ** (I'm REALLY trying to emphasize this point, because the 7D and 5D III are not "all else equal"...the 70D and 5D III would be on more equal technological footing), then pixel size does very little to nothing to improve IQ. There is also the fill factor issue: at some point you reach a pixel size where, even with a small transistor/wire size, the sheer number of pixels necessitates dedicating a meaningful amount of sensor space to that wiring unless you use a BSI design. If the pixels are small enough that fill factor reduces total photodiode area by a meaningful amount, then averaging pixels is not going to be completely capable of normalizing noise.

The primary reason full frame cameras do better in low light is that they can gather more light in total. If I frame my subject identically with an APS-C and a FF camera, then the FF camera is gathering more light in total for my subject. Once normalized, the noise will be lower with the full frame sensor, because the subject is sized relative to the frame instead of absolute within the frame. I could use two full frame cameras, one with larger pixels and one with smaller pixels. So long as I frame identically, all else being equal, the normalized results will exhibit the same noise. The only difference would be that one image is crisper and sharper than the other...and that would be the sensor with more, smaller pixels.
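The claim that total light gathered, not pixel size, sets the shot noise floor can be sanity-checked with a quick Poisson simulation (a sketch; the photon counts are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000
signal_total = 10_000        # photons falling on one "large pixel" area
n_small = 4                  # small pixels covering that same area

# One large pixel: Poisson-distributed photon counts.
large = rng.poisson(signal_total, n_trials)

# Four small pixels each see 1/4 of the light; sum them back together.
small_summed = rng.poisson(signal_total / n_small, (n_trials, n_small)).sum(axis=1)

snr_large = large.mean() / large.std()
snr_small = small_summed.mean() / small_summed.std()
print(round(snr_large), round(snr_small))  # both ~100, i.e. sqrt(10,000)
```

Once the small pixels are combined over the same area, shot noise is identical; only read noise and fill factor break the tie.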

I kind of wish I had a 70D at my disposal now, so I could demonstrate with equipment of equivalent technology generation. The 70D has about 6000e- more FWC than the 7D, which is significant, considering the 7D only had about 20ke- to start with. (It's a 30% increase.) Averaging a 70D image to the same size as a 5D III image should have the effect of reducing noise to very similar levels...close enough that you would have to scrutinize to identify any differences.

+1 My biggest mistakes are when my camera is set for point exposure for birds against a normal background and one flies by against the sky and I don't have time to dial in +2 ev to compensate or vice versa. Two more stops of DR would solve those problems.

This is a case where you want more DR to eliminate the need for the photographer to make the necessary exposure change. If you encounter this situation a lot, I highly recommend reading Art Morris' blog, and maybe buy his book "The Art of Bird Photography". He has an amazing technique for setting exposure quickly and accurately, such that making the necessary change quickly to handle this situation properly would not be a major issue.

Personally, I wouldn't consider this a situation where more DR is necessary. It might be a situation where more DR solves a problem presented by a lack of certain skills...but it is not actually a situation where more DR is really necessary.

Autofocus is not necessary, automatic metering is not necessary, IS is not necessary. The fact is that having those features makes it a lot easier, and having an extra couple of stops of DR would also make it easier. It is not a question of lack of skill but having a camera that eliminates one more variable.

Well, I think we're getting into semantics now, so I won't really press the issue. Yes, having more DR can certainly make things easier, but good technique can totally eliminate the need, and can be just as easy in practice. That's what I was trying to say.

Third Party Manufacturers / Re: Sony A77II
« on: August 10, 2014, 07:39:28 PM »
I'm not sure WTF Canon and Nikon are doing but Sony is innovating to the point I am becoming VERY tempted to completely jump ship.

An A77ii and A6000 look really good and I could afford to purchase a new body every year.

Don't forget that Sony uses a lossy "raw" file format. Their technology is good, but currently they are gimping it with a crappy image file format (which, given that it is lossy compressed, cannot legitimately be called "raw").

In crepuscular light, the low light around sunrise and sunset, you are NOT going to be using ISO 100 or 200. As you say, you're going to be up at ISO 12800. You need the high ISO so you can maintain a high shutter speed, so you can freeze enough motion to get an acceptable image. There are times during the day when you can capture wildlife out and about, but the best times are indeed during the crepuscular hours of the day.

Just for reference, here are the dynamic range values for four key cameras at ISO 12800:

D810: 7.3
D800: 7.3
5D III: 7.8
1D X: 8.8

As far as dynamic range for wildlife and bird photography during "activity hours" goes, there is no question the 1D X wins hands down. It's got a 1.5 stop advantage over the D800/D810, the supposed dynamic range kings.

Good points in general, although the wrong specific data.
The actual values should have been listed as:

ISO12,800 DR
D810: 8.13
5D3: 8.25
1DX: 8.99

1DX with a 0.86 stop advantage over D810.

So there is nothing in it between the 5D3 and D810, although the 1DX does give you nearly a stop more, which is nice. In other terms it gets tricky: smaller pixels give finer grain, which bothers the eye less and allows you to apply more advanced NR techniques. I'm not sure how banding and glow and such compare between the Nikon and Canon at 12,800. So perhaps the real feel of the difference for some shots would be more, or perhaps less.

(I know you asked to not have this brought up again, but since you went to DxO, there is no choice. You can't argue that smaller pixels don't hurt and then, when you go to DxO, choose the wrong setting that does penalize smaller pixels.)

At ISO 6400, we actually have:
D810 9.08
5D3 9.07
1Dx 9.88


Here is my reference:

That said, sure more DR for wildlife shooting or sports would be welcomed too of course, no doubt.

Totally agree here. Having more DR is not a problem, and can make some of the rarer but tougher situations, like the ones you described, easier to deal with.

To that end, I think Magic Lantern is a HUGE bonus for Canon shooters, as (at least so far, with the 6D) they have managed to increase high ISO DR to levels that were previously only attainable at ISO 400 and below on most cameras (the notable exceptions being 1DX and D4).

doesn't that lose half the res though? that would be bad for reach limited wildlife in particular I'd think
although perhaps for the parts of the body in shade the detail is not as often critical?

I did not think so...but I could be wrong. If it does, then you are right, it wouldn't be good in a reach-limited situation.

I don't want to bog you down if it's a huge subject, but what is drizzle roughly, and why does a gap between frames make it less effective? Since the atmospheric distortion is random. Once the frames are aligned, how can the software tell whether they were taken milliseconds or minutes apart?

It isn't so much the gap between frames as the total frame count. It's a lot, lot easier to get thousands of frames when using video. When you take them one at a time, there is a fairly significant overhead, an overhead that could last the span of several frames. Using video cuts down that overhead significantly. For superresolution to be effective, you actually don't want all the frames to be perfectly aligned...you want very, very slight variations between each frame, as the algorithm uses those differences to enhance resolution and "see" past things like atmospheric turbulence, diffraction, optical aberrations, etc.

You could still do superresolution with individual still frames. You would just need several times as much time to gather enough frames for it to be effective. (Although, if there is enough movement of the subject between frames, as is the case without tracking...that can actually be too much movement. You want small movements between frames, but otherwise have the subject remain generally stationary...if it's drifting across the frame, then you first have to align, and alignment might result in everything being TOO consistent across frames, thereby reducing the effectiveness of the superres algorithm.) This is particularly true when a superres algorithm is used in conjunction with a stacking algorithm and other algorithms, as is the case with planetary integration software, as those programs will drop a considerable number of frames that do not meet certain quality criteria. Remember, the goal, with the moon...or mars...or any other planet, is to take only the frames from those moments when seeing clears enough that the detail shows through really well. So, if you can take 100 still frames in 5 minutes, or 1000 video frames in 30 seconds, well, you're going to choose video. ;)
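As a rough illustration of the idea, here is a naive shift-and-add sketch (NOT the actual drizzle algorithm used by planetary stacking software, which also shrinks each input pixel's footprint; the sub-pixel frame shifts are assumed to be known rather than estimated from alignment):

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Naive superresolution: place each low-res frame onto a finer grid
    at its estimated sub-pixel shift, then average overlapping samples."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Integer position on the fine grid corresponding to this frame's jitter.
        oy, ox = int(round(dy * scale)), int(round(dx * scale))
        for y in range(h):
            for x in range(w):
                yy, xx = y * scale + oy, x * scale + ox
                if 0 <= yy < h * scale and 0 <= xx < w * scale:
                    acc[yy, xx] += frame[y, x]
                    weight[yy, xx] += 1
    return acc / np.maximum(weight, 1)

# Four frames jittered by half a pixel fill every cell of a 2x finer grid.
frames = [np.full((4, 4), 1.0)] * 4
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
print(shift_and_add(frames, shifts).shape)  # (8, 8)
```

This is why the jitter matters: without sub-pixel shifts between frames, every frame lands on the same fine-grid cells and no new spatial information is recovered.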

I use both the 5D mk III and the 7D. I like having both; it is like having two sets of lenses. That being said, if I am close enough I will always go to the 5D.

Same here. If I am not reach-limited, it will almost always be the 5D/6D. If I am, then the 7D offers more reach (at the cost of more noise).

I would be really interested in seeing the difference between the 7D, 70D, and a 5D III/6D in a reach limited situation. The 7D is old tech, so it is going to be noisier. Theoretically, with sensors that use the same generation of technology, in a reach-limited situation the noise should not be different once the results are normalized. I am willing to bet good money that the 70D performs markedly better than the 7D in such a situation, and when downsampled to the same size as the 5D III, there would not be any significant difference in noise.

Animal Kingdom / Re: BIRD IN FLIGHT ONLY -- share your BIF photos here
« on: August 10, 2014, 01:25:22 PM »
Wonderful photos as usual from everyone. My own small contribution is a different take on the theme...

LOL! Looks kind of like a Mourning Dove silhouette... Did it survive?

Anyway, the sky is cloudy and the bird is under the shade... so I think the details are a bit more difficult to resolve under this flat lighting condition.

For sure, flat lighting can indeed make it difficult to resolve fine detail...there just isn't any shading for it. I'd be interested to see you redo the test with better lighting. It's nice having a cooperative bird as a subject for that...none of the birds around here, except maybe Night Herons, are willing to remain still for long periods of time, and they are also very jittery; the slightest thing sets them to flight.

If you get another chance during better light, I'd love to see you try again. There probably isn't much of a resolving power difference between the two at high ISO...however I would expect the 6D to take the lead in overall IQ.

Is that software Windows-only? I did a bit of searching, and couldn't find system requirements anywhere. For some reason, most of this sort of software doesn't run on Macs, so I had to use the only stacker I could find that did, called 'Keith's Image Stacker'. It's pretty good but I have no understanding of how different modes produce different results. I guess I should read up on it more.

Yeah, Windows is pretty much the operating system of choice for astrophotography software. There are some new apps available for iOS devices, but overall, not much of the software we use runs on Macs. I think most astrophotographers either use a dual-boot setup on their Macs (virtualization tends to be problematic and too slow) so they can run Windows when they need to...or they simply have a Windows-based laptop for their astrophotography stuff.

If you're interested in AP, then I highly recommend you pick up a Windows box of some kind. The vast majority of the software out there, like BackyardEOS, is only available for Windows.

As for video - this is a question I've had for a while. Even HD video is only 2MP. No matter how much resolution you're gaining through stacking, surely you're losing 90% (of the 5DIII's potential) versus stills? Any thoughts? When I stacked my moon (I've included a crop of a much reduced-size below) I had to shoot lots of stills manually and use those instead. You're clearly doing something better though, as you seem to be pulling out a similar level of detail even though I was at a much higher focal length (5600mm).

BackyardEOS has some unique features. It seems to be able to use the 5x and 10x zoom features of live view, then it records a 720p video from those zoomed in views. So, you're actually getting more resolution than if you recorded a 720p video at 1x.

For superresolution algorithms to work, you need your frames to be pretty close together. You want some separation, a minimal amount of time to allow the subject to "jitter" between frames, as it's the jitter that allows an algorithm like drizzle to work in the first place.

At 5600mm, you should be able to pull out some extreme detail of a small area of the moon's surface. I'd love to have 5600mm at my disposal! :D If you pick up a Windows laptop, install BackyardEOS, and play around with the planetary imaging features, you'll start to see how it all works, and you'll start getting amazing results.


Personally, I believe the idea of a lens "outresolving" a sensor, or a sensor "outresolving" a lens, is a misleading concept. Output resolution is the result of a convolution of multiple factors that affect the real image being resolved. Sensor and lens work together to produce the resolution of the image you see in a RAW file on a computer...neither one is outresolving the other. I've gone over that topic many times, so I won't go into detail again here, but ultimately, the resolution of the image created by both the lens and sensor working together in concert is closely approximated by the reciprocal formula: R_total = 1/sqrt(1/R_lens^2 + 1/R_sensor^2).

It is not a misleading concept, just one that has to be used carefully. There are many processes in physics and chemistry where the end result is related to all the components, usually summed as reciprocals; e.g. resistors in parallel in an electric circuit, the overall resolution of an optical system, etc. Where those components all make similar contributions, none of them dominates and all are taken into the reckoning. E.g., a 1 ohm resistor in parallel with a 1 ohm has an overall resistance of 0.5 ohms. However, if one component dominates the system then the others are unimportant - the overall resistance of a 1 ohm resistor in parallel with a 100 ohm is little different from a 1 ohm in parallel with a million ohm; both are very close to 1 ohm. If a lens at a particular aperture produces a point source image much smaller than a pixel, then increasing the number of pixels could be useful to increase resolution, as the lens is outresolving the sensor. If the lens projects a point source to a size much larger than a pixel, the sensor is outresolving the lens and it is a waste of time increasing the number of pixels. When the point size is similar to the pixel size, the situation is indeed more complicated, as neither dominates.
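The reciprocal relationships described above can be sketched numerically (the lp/mm values are hypothetical, chosen only to show how the weaker component dominates):

```python
def parallel(*rs):
    """Resistors in parallel: plain reciprocal sum."""
    return 1.0 / sum(1.0 / r for r in rs)

def combined(*resolutions):
    """System resolution from component resolutions: reciprocal-square
    sum (equivalently, root-sum-square of the component blurs)."""
    return sum(1.0 / c**2 for c in resolutions) ** -0.5

print(parallel(1, 1))        # 0.5 ohm: equal components both matter
print(parallel(1, 100))      # ~0.99 ohm: the 1-ohm resistor dominates
print(combined(100, 100))    # ~70.7 lp/mm: matched lens and sensor
print(combined(100, 1000))   # ~99.5 lp/mm: the weaker component dominates
```

Note the quantitative point behind both views: the dominant term sets the ceiling, but the result asymptotically approaches it rather than hitting a hard wall.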

You're basically talking about asymptotic relationships in systems that resolve. You are indeed correct: just like resistors in a circuit, output resolution is bound by the lowest common denominator. If the sensor is the limiting factor, which would be the case if the lens were resolving a spot smaller than a pixel, then you would want to increase sensor resolution to experience more significant gains.

I think my opinion diverges from yours when the lens is resolving a spot larger than a pixel. That doesn't suddenly mean increasing sensor resolution is useless. I wouldn't say there is a hard wall there. Once the spot of light resolved by a lens starts growing larger than a pixel, that is the point at which you first start experiencing diminishing returns. There is still value in increasing the resolution of the sensor, however. You begin to oversample...however, in the grand scheme of things, oversampling is actually good. If we had sensors that were consistently capable of oversampling the diffraction spot of a diffraction limited lens by about 3x, then we would be able to do away with low pass filters entirely, and NOT suffer the consequences of moire and other aliasing. Oversampling could do away with a whole lot of issues, eliminate the complaints of people who incessantly pixel peep, etc. The grain of photon shot noise would drop to well below the smallest resolvable element of detail.

To me, there is absolutely no reason to not use the highest resolution sensor you can get your hands on. As I stated in my previous reply...noise is relative to unit area. Average together however many smaller pixels equal the area of a larger pixel, and you will have the same noise (all else being equal). I would much rather oversample my lens by a factor of two to three than always be undersampling it. I'd MUCH rather have the frequency of photon shot noise be significantly higher than the frequency of the smallest resolvable detail, as then it would simply be a matter of course to downsample by 2-3x for every single photo. Then the smallest resolvable detail is roughly pixel-sized, and noise is 1.4-1.7x lower.
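The 1.4-1.7x figure follows from averaging: random noise drops by the square root of the number of pixels averaged, so a 2-3x oversampled image (by pixel count) downsized back to 1x gains sqrt(2) to sqrt(3). A quick numeric check with simulated Gaussian per-pixel noise:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 1_200_000)  # per-pixel noise, sigma = 1

for n in (2, 3):
    # Average groups of n pixels: an n-fold oversampled image downsized to 1x.
    binned = noise[: len(noise) // n * n].reshape(-1, n).mean(axis=1)
    print(n, round(1.0 / binned.std(), 2))  # noise is ~sqrt(n)x lower
```

The sqrt(2) ≈ 1.41 and sqrt(3) ≈ 1.73 factors match the 1.4-1.7x range quoted above.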
