Show Posts



Messages - jrista

1456
I understand normalization perfectly.

It seems like you do.

yadda yadda blah blah

You keep missing the point. You're locked into your limited notion of what is "comparable" and what is not. I'm choosing to compare something you have decided is not comparable. Sorry, I disagree. I've always disagreed, and I always will disagree. I suspect you're in the same position, so this is the last I'll say on it in this particular thread.

In the context I'm always referring to, the same context I've been referring to for years, I'm not interested in how the images look in the end. I'm interested in what I can do with the RAW files. I'm interested in the editing potential...the latitude with which I can push and pull exposure and white balance and color around. RAW files are not scaled. You always work with them at their native size. Scaling does not play a factor when it comes to editing RAW files. I don't care what the final outcome looks like. That is ARBITRARY. I can output the same images DOZENS of times at different sizes, for different prints, all with different amounts of dynamic range, all with different SNRs. But when I'm sitting in front of Lightroom, that's the last thing on my mind. We ALL sit in front of Lightroom, pushing exposure around...all the time, day in and day out, year in and year out.

Just because DXO says I get 14.4 stops of DR at an 8x12" 300dpi size specifically doesn't mean that's what you're going to be sizing to in the end. You may downsample it more, you may downsample it less, you may ENLARGE! DXO's Print DR is an arbitrarily chosen standard FOR THE PURPOSES OF comparing ONLY within the limited context of DXO's web site. It doesn't tell you anything about actual, real-world results when you're sitting in front of your computer, using Lightroom to actually WORK with the RAW files those cameras output. It just tells you what you could get IF you downsampled to EXACTLY that size. That's all. And that's fine and dandy...when I'm browsing around DXO's site selecting cameras to compare with their little camera comparer, it gives me a contextually valid result.

It's impossible to edit RAW at any size other than 100%. So 100% size is all that matters when you want to know what you can do, as far as lifting shadows for the purposes of compressing 10, 12, 13.2 or 15.3 stops of dynamic range into the 8 stops of your screen, or the 5-7 stops of print. The output dynamic range is arbitrary...it depends on countless factors that ultimately affect it (yes, total megapixel count is one of them, but noise reduction routines, HDR merge/enfuse, etc. are others). You may end up with 14 stops of DR, or you may end up with 16 stops of DR in a file you were able to perform some epic noise reduction on. The output isn't what matters when you're actually sitting in front of Lightroom editing the RAW itself. The RAW file itself, at 100% size, is what matters.

1457
EOS Bodies / Re: A Few EOS 7D Mark II Specs [CR1]
« on: June 17, 2014, 04:37:15 PM »
...who says you can't use your 7D2 for all shots,

Who says you can?!?  At this point I can safely say that you can use your 7D2 for any shots you take while riding on the back of a unicorn.   :P

You could even take shots of the marshmallows that unicorn is crapping out as it flies around on its magical cloud. :P




1458
Man, you just did it again. You can't fairly compare between cameras using Screen DR, you have to use Print DR. I'm starting to doubt that you do get normalization after all; either that, or you are sneakily tricking people to make Canon look better in this scenario (also don't forget the banding differences, where the 5D3 has tons more than the D800 at low ISO).

I understand normalization perfectly. Normalization only works for certain things, though. It doesn't tell me everything, and quite specifically a normalized image that has 14.4 stops DOES NOT tell me the actual real-world editing latitude (i.e. the shadow lifting capability) of the D800. It EXAGGERATES it, unrealistically, by another two thirds of a stop at least. I am not trying to trick anyone. I believe DXO is tricking people when it comes to how they "sell" DR. They aren't technically incorrect, however they ARE practically incorrect.


Apparently you have fallen back into the trap where you no longer understand normalization perfectly.

DxO is mostly used to compare sensor to sensor and in that case it is practically correct to use normalization and NOT practically correct to look at the Screen ratings that you keep pushing.

Screen DR/SNR is NOT useful whatsoever when comparing camera to camera and trying to determine which one has better SNR or DR comparatively. If you use Screen results, then you are treating different frequencies of noise as the same, which is a totally false thing to do.


I understand normalization perfectly. I do not think it is valid in all contexts. Noise frequencies are one thing...but that does nothing to tell you about editing latitude in a RAW editor like Lightroom. YOU CANNOT EDIT A DOWNSAMPLED RAW. The term itself is a misnomer. Downsampled RAW images do not exist.

You're talking noise frequencies. I'm talking editing latitude.

The problem here is not that I do not understand normalization. It's that you refuse to look at the problem of comparing cameras from a different angle than the one DXO has imposed upon you. ;P


Screen results do tell you what you can get out of it if you choose to try to figure out what you can get out of it using the full resolution and detail of the camera.

Quote
You're still comparing equipment in an isolated, agnostic context.

How can you compare in isolation? The very fact you are comparing means you are no longer looking at things in isolation. The Screen results that you are so fond of are what you use when you are looking at things in isolation.

You're misunderstanding. DXO's results are only meaningful when you are on DXO's site (i.e. isolated), comparing cameras that DXO has tested. Many of DXO's results and measurements have no relevance outside of their site, in the real world...such as, oh, say, lifting shadows in Lightroom. Lifting the shadows of a RAW image, an UNSCALED RAW image, in Lightroom?

What if I want to know how two cameras compare IN THAT SPECIFIC CONTEXT? Well, Print DR is invalid; it doesn't have the capacity to answer my question in that context. Screen DR, on the other hand, DOES. It tells me the dynamic range of the full-sized, unscaled RAW images. I WANT to COMPARE that between cameras. That is NOT an invalid goal. On the contrary, THAT IS WHAT EVERYONE CARES ABOUT WHEN THEY THINK ABOUT DR!! :P

Do you get it now?

Quote
I'm comparing real-world image workability. There is a difference.

You are comparing energy scales at different levels as if they were the same, penalizing cameras more and more, the more MP they have, relative to cameras that have lower MP counts. If you just want to know what you can get out of a camera using its full resolution, then yeah, use Screen, but it's not fair to COMPARE ACROSS CAMERAS ONE TO ANOTHER using Screen results.

When it comes to shadow lifting, the number of megapixels doesn't matter. The dynamic range of each and every pixel is what matters. I don't really care about the photon shot noise levels, which permeate the entire signal. I care about the READ NOISE levels, which only exist in the deep shadows. In that context, it is entirely fair to compare across cameras, because what I want to compare is only valid at full resolution. The frequencies of all noise are immaterial; the RMS level of the READ NOISE is what matters.

Quote
PrintDR is only useful within the context of DXO's web site. It has ZERO meaning outside of it.

Totally false. It gives a reasonably fair comparison between what you'd comparatively get out of cameras having different MP counts. I mean, try this on for size. Say camera A is 100MP and camera B is 12MP, and let us say that camera A compared pixel to pixel has worse SNR and DR than camera B, but camera A binned to 12MP has much better SNR and DR than camera B. It would be ridiculous to knock camera A as having worse SNR and DR than camera B, and yet that is just what you'd trick people into thinking with all your talk about how useless PrintDR is and how ScreenDR is where it's at.

You can only compare cameras using information produced the same way. Print DR, on DXO's site, is only valid when comparing cameras within the context of DXO. It is entirely invalid to take the Print DR value from DXO and compare it to any dynamic range value derived anywhere else, say DPR. Print DR only gives you a numeric value with which to compare cameras in one specific context...DXO. It does not give you any real-world information beyond that context.

I am not trying to trick anyone. I believe DXO IS tricking people with their Print DR numbers...they are regurgitated all over the net, OUT OF CONTEXT, ALL THE TIME...and that is exceptionally misleading. A D800, for example, ONLY gets 14.4 stops of DR when you assume the DXO formula is used to calculate DR, and you assume that the target image size is an 8x12" 300dpi "print". That's why I refer to Screen DR. When people talk about dynamic range, they are pretty much always (99% of the time) talking about the ability to lift shadows. Shadow lifting isn't really dynamic range...but it's what people interpret dynamic range numbers to mean. The only thing that gives you an honest interpretation of the shadow lifting ability of a camera is a DR measurement from the actual RAW image.

Quote
It has ZERO meaning when it comes to actually editing your images. No one downsamples an image, THEN processes it. Everyone processes their images as RAW, in which case you NEVER downsample, because you CANNOT downsample and still BE editing RAW.

Yeah, it doesn't tell you how it acts at full resolution, but it does tell you how things relatively compare on a fair basis, and we are talking about COMPARING various sensors here.

True, for most things. Not true for shadow lifting. :P

Quote
The use of ScreenDR does not change the fact that the D800 has an advantage. Not at all. Screen DR still shows a significant advantage for the D800. This doesn't make the 5D III better; it just doesn't make the D800 look even better than it actually is. The difference is that Screen DR tells you the REAL WORLD editing latitude advantage. A real, tangible thing that, as a photographer, you can actually REALIZE once you are no longer comparing cameras within the limited context of DXO and are actually USING the camera.

Once again you fail to understand how normalization works. Screen does tell you the real world editing latitude.... AT THE MAX RES OF EACH CAMERA, but that is not fair, since you then penalize a camera just for offering more max res/detail, even though, if you decide not to take advantage of the extra detail, it might actually have better SNR/DR than the camera that is stuck and locked into offering less res/max detail.

It is fair! I can't lift shadows with a RAW image that's been downsampled...because I can't downsample a RAW image. I have to convert it to RGB pixels, then downsample it, then save it as, say, a TIFF. The TIFF doesn't have even remotely close to the same editing latitude.

I couldn't care less about the rules DXO enforces on "comparing" cameras. I know what normalization is, and it provides useful details in some contexts. But that is not the context I am usually referring to. There is more to comparing a camera than ONLY comparing JUST the sensor, and JUST a normalized output at that. There are far more things and ways to compare than just the normalized image context. I'm not saying comparing in a normalized context is invalid...it's just incomplete.

Quote
All I care about is being realistic about the ACTUAL capabilities of these cameras.

Yes, ScreenDR tells you realistically what you can get out of the camera using it to its full max res, but again, when people talk about comparing how one camera does against another, it can potentially give a misleading take on real world differences.

It depends on the context. DXO's "Landscape" or "Print DR" numbers are used everywhere as the biblical dynamic range that you can ACTUALLY WORK WITH in real life. There are some huge threads on DPR where people have debated the 14.4 stops DR thing with gobs of actual examples of shadow lifting in Lightroom. I've seen D800 images pushed 6 to 8 stops. By six stops, a lot of red, blue, and green color noise shows up in the shadows, indicating that you have lifted beyond the capabilities of the camera. That is exactly what I would expect based on the Screen DR measure. The Print DR measure, however, tells people that they can lift at least six stops, and more. THAT is what's misleading. THAT is what I have a problem with. DXO's Print DR numbers are only valid WHEN COMPARING WITHIN THE CONTEXT OF DXO's OTHER TESTED CAMERAS, ONLY USING DXO DATA.

Outside of that context, a dynamic range value of 14.4 stops of DR for the D800 is invalid. When people get into lengthy, extended debates about the shadow lifting range of the D800, they should be using 13.2 stops of DR as the reference...which would mean they could lift a bit over 5 stops without seeing noise in the shadows. That is exactly what a lot of the examples on DPR indicate...but the debate still rages on. Why? Because DXO says 14.4 stops.
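
For reference, the gap between those two numbers is just the downsampling credit in the normalization. A minimal sketch of that arithmetic (Python; the ~36.3MP D800 resolution, the 13.2 stop per-pixel figure, and the 8MP reference size are assumptions taken from the discussion above, and measured values won't land on them exactly):
Code: [Select]
from math import log2

def print_dr(screen_dr_stops, sensor_mp, reference_mp=8.0):
    """Normalized ("Print") DR: per-pixel ("Screen") DR plus the noise
    averaging gained by downsampling sensor_mp to the 8MP reference."""
    return screen_dr_stops + 0.5 * log2(sensor_mp / reference_mp)

# Assumed D800 figures: ~13.2 stops per-pixel DR on a ~36.3MP sensor.
print(round(print_dr(13.2, 36.3), 1))  # ~14.3 stops, i.e. the "14.4" headline number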

Anyway, we're talking at perpendicular angles here. I understand normalization. Normalization has its place. Normalization has its use. When it comes to discussions of dynamic range and the shadow lifting ability of cameras, Print DR is invalid. Screen DR is valid. If you want to COMPARE the shadow lifting ability of cameras, then Screen DR is the value you have to use.

I'm done debating with you on this particular subject at this point. We're just talking in circles around each other. I concede the point about normalization for comparing "fairly", as you say. I've never denied it. But that is different from what I'm talking about, and it ignores the constant debate about WHAT DYNAMIC RANGE ALLOWS in cameras that have more of it (or, to be more precise, in cameras that have LESS READ NOISE...because that is primarily what we're talking about here...the difference between the D800 and 5D III isn't sensor (pre-read) dynamic range, it's read noise levels (post-read).)

Well, I think most people know what I'm talking about. I believe most people don't think I'm intentionally trying to mislead them or make any particular camera "look bad"...I've praised the D800 for years around here, and I have never said it is worse than it really is. I think people know that, and that's all I really care about.

Later.

1459
EOS Bodies - For Stills / Re: 7d2 IQ thoughts.
« on: June 17, 2014, 04:07:00 PM »
I recently got the chance to test a 5DIII against my 7D; I tested sharpness, DR and noise in a highly unscientific way: I shot a tree against the sun for DR and found only a slight advantage held by the 5D, and sharpness got me about the same results (everything after an equal amount of processing in LR). What did really impress me was the high ISO performance of my old 7D. I shot RAW and applied moderate NR in Lightroom to each file of a row from 1600 to 12800. What I found out was that, surprisingly, the 7D was only 1 stop worse than the 5DIII, so ISO 1600 on my 7D looks in 100% view like ISO 3200 on the 5DIII!!! The gap grew very marginally from ISO 6400 on the 7D upwards, but not even by a third of a stop!! Of course, with the 7D, there's some crappy background noise at low ISO numbers, but the high ISO performance stunned me. I always assumed a minimum of 2 stops difference between those two...  :)
So I will ask anybody who has the same bodies: is it the same with your cameras, or did I just have insanely infantile dreams about the 5DIII?  ;)

About the specs: It would surprise me if the 7DII featured 'only' a 16 MP sensor, although I would be perfectly fine with that. To fit the majority of potential buyers, they will increase resolution to something like 21-24MP. The noise will most likely be like the 70D as a result of this increase.

That's not my idea about those 2 bodies. The use of my 7D is almost reduced to zero. I really hope they release the specs of the 7D2 soon, otherwise I will get a second 5D3. ISO was the real disaster, and the reason why I bought a 5D3. I even got a much, much better AF with the 5D3. So for me I'm sure the 7D will be exchanged for a 7D2 or a 5D3.

For me too, 16MP would be sufficient if I get 1 to 2 stops less noise, with a much better AF (comparable to the 5D3), a deep buffer and a high fps of at least 8. If that would be the spec, then the 7D2 is welcome, otherwise a 5D3. In the latter case, a 500 or 200-400 is also wished for, and that's the reason I wait on the 7D2.

Francois

Whether pixel size improves noise really depends on what you're doing. Assuming you're maximizing your use of the total sensor area, pixel size is not actually a critical factor in noise levels. Noise is determined by the TOTAL amount of light gathered. Not per pixel, but in total. That means sensor area is really the defining factor in noise levels (all else being equal). That is why full-frame sensors will always have better noise performance than crop sensors when your subject fills the frame equally on both.

The only time pixel size comes into play as a noise factor is when you're cropping. That usually implies a limit of reach. Pixel size is primarily a detail factor, and smaller pixels mean more detail. When you have to crop, more detail is king, but smaller pixels usually mean the trade-off for more detail is more noise. (Really, it still isn't actually the pixel size...noise increases when cropping because the total relevant area of the sensor that is actually used for your subject is smaller...so again, area affects noise, not pixel size...smaller pixels simply resolve a higher frequency of noise than larger pixels.)

The only thing that is really going to affect the noise performance of any Canon APS-C sensor is quantum efficiency. That is the ratio of incident photons to electron charge released in the photodiode. It's already at 41% in the 7D...a one stop improvement would require 82% Q.E. That isn't going to happen...not at room temperature. A boost to 56% would improve things a bit, by a little under half a stop.
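
As a quick sketch of that math (Python; the 41%, 82% and 56% Q.E. figures are the ones used above):
Code: [Select]
from math import log2

def qe_gain_stops(qe_old, qe_new):
    """Sensitivity gain, in stops, from a quantum efficiency improvement."""
    return log2(qe_new / qe_old)

print(round(qe_gain_stops(0.41, 0.82), 2))  # 1.0 stop (doubled Q.E.)
print(round(qe_gain_stops(0.41, 0.56), 2))  # ~0.45 stops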

There are potential alternatives to using a CFA that might reduce or eliminate the filtering effect, and increase the amount of incident light at the photodiode. That would also help improve total sensitivity. If Canon moved from using a CFA to using some kind of micro color splitter technology, and preserved nearly 100% of the light, then that could produce a one-stop improvement in noise...however, that technology is currently patented by Panasonic. Canon hasn't filed any patents over the last few years that indicate they have anything like that in the works. That does not mean they don't...however, if they do, it is unlikely the technology would find its way into the 7D II at this point. Maybe something in the future.


1460
EOS Bodies - For Stills / Re: 7d2 IQ thoughts.
« on: June 17, 2014, 04:11:19 AM »
I can't thank you enough for the thorough reply, Jrista.  My wife is interested in starting astrophotography, and there was much useful information in that post.  :)  Sorry that darker skies aren't available in your area, though I wonder if summertime in the Rockies further west would allow access to high elevations and perhaps less atmospheric interference?  The West Coast is blessed with potential access to dark sky zones from Stone Mountain Provincial Park and Wells Grey Provincial Park (both in British Columbia), all the way down to the Warner Mtns and Siskiyous in northernmost California.  Go west, young man, go west!  ;D

It isn't that dark skies are not available...within about a two hour drive, there are skies that approach some of the darkest on earth. In the northwestern corner of the state, there are skies that should actually be about the darkest on earth. Darkness isn't the problem...seeing (atmospheric turbulence) is the problem. That affects any state, any region, that the main path of the jet stream passes over. During late winter/spring, the jet stream tends to stretch from the northwestern region of the country, down through Colorado, and back up to the northeastern region of the country. Seeing in that whole band tends to be pretty crappy. There are periods of the year, the heart of summer and early fall, and the heart of winter (mainly December), when the jet stream moves off more and seeing improves. You have to get really high, in regions the jet stream doesn't frequent, to get "exceptional" seeing...there aren't many places on earth like that. The mountains of Chile are one such region.

I am hoping that July and August will bring fewer clouds, less weather overall, and better seeing conditions.


A couple follow-up questions:

1) What keeps one from binning?  Is it a physical problem resulting from the Bayer sensor, or just an image processing issue?  It seems to me like there might be solutions to a purely software problem.  If you have a TIFF, you might be able to post-process that with some sort of binning algorithm.  One might also enhance the camera firmware à la Magic Lantern, which wouldn't necessarily destroy the general utility of the camera for other purposes.  Even if the Bayer sensor is the issue, some interpolation might be doable from slightly offset images, though obtaining the correct offset might be difficult.

Binning is very specifically a hardware thing. It occurs at the point the sensor is read. It literally means combining the charges of NxN neighboring pixels into a single output charge. So, if you have a 100x100 pixel sensor and you bin 2x2, then you effectively have a 50x50 pixel sensor. Binning can also only be done (at least properly) with a monochrome sensor...there is no logical way to bin a color sensor, since neighboring pixels are not sampling the same thing.

So there are two reasons you cannot bin a color DSLR camera: it's color, not monochrome...and DSLRs usually do not contain the hardware to perform binning in the first place. Binning is hardware-only. The software counterpart would basically be downsampling. By downsampling 2x, you reduce the width and height by a factor of two, averaging together 2x2 regions of pixels (with a simple algorithm...bicubic is a bit more complex). Downsampling has the benefit of reducing noise by averaging, thus SNR improves. Binning has the benefit of increasing the actual signal strength, thus SNR improves. The latter is the better approach, at least for astrophotography, as good signal strength is key to lifting dim nebula or galaxy detail above the read noise floor. Any attempt to lift anything above the read noise floor MUST occur before readout occurs...otherwise it's moot.
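
As a rough sketch of why the two differ (Python; the signal and read noise numbers are made up purely for illustration): hardware binning sums four charges before a single read, while software downsampling averages four pixels that have each already been read, so read noise is paid once per binned pixel but four times per averaged pixel.
Code: [Select]
from math import sqrt

def snr_binned_2x2(signal_e, read_noise_e):
    """2x2 hardware binning: charges summed on-chip, then read once."""
    total = 4 * signal_e
    return total / sqrt(total + read_noise_e ** 2)

def snr_downsampled_2x2(signal_e, read_noise_e):
    """2x2 software downsampling: four pixels each read, then averaged."""
    return signal_e / sqrt(signal_e / 4 + (read_noise_e ** 2) / 4)

# Hypothetical faint nebula pixel: 25 e- of signal, 8 e- read noise.
print(round(snr_binned_2x2(25, 8), 1))       # ~7.8
print(round(snr_downsampled_2x2(25, 8), 1))  # ~5.3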

2) If I am understanding you correctly, a 5Dii/5Diii might make a better pairing with the AT8RC.  The increased pixel pitch would offset the slightly longer focal length, and assuming that 1200mm is near the maximum that the 7D can support those bodies fall just a tad more comfortably in the useable range.  [I think the AT8RC actually has an eyepiece large enough to allow full frame coverage.]  Am I reasoning correctly here?  Is this combination popular?

You are indeed correct that, at least at the native focal length, the AT8RC and 5DIII/6D are a better combo. Technically speaking, the 6D is a superior astrophotography camera...it's actually the best in Canon's entire lineup. DSLR modders are also offering 6D modifications now as well, and recent tests have indicated its low read noise produces some exceptional results (you usually need really dark skies to get those results, though.)

In addition to barlow lenses, you can also use focal reducers with scopes like the AT8RC. The most popular for that particular scope is the Astro-Physics CCDT67. It is a 0.67x reducer by default, so it makes the scope's focal length 1089mm (assuming you space the imaging train out to actually achieve 0.67x...many people opt for ~0.75x reduction, which again gets you around 1219mm). Focal reducers and barlow lenses can be used to change the focal length of the scope, which for a given sensor changes the image scale. You could make the AT8RC work with a 7D or similar sensor, or you could make it work for sensors with much, much larger pixels (such as the 9µm pixel KAF-11000 series sensors, or the KAF-16803 series sensors, both of which have big pixels).

Changing the focal length obviously changes your field of view. At 1000-1200mm, you're still relatively "wide field". At 3300mm, you're getting into deep field or narrower field territory. Wide fields work better with small pixels, deep fields work better with large (or binned) pixels.

3) The mod I was thinking of was indeed removing the IR and/or UV filters in the stack, but I saw the Astronomik clip-in filters for DSLRs that narrow-pass for hydrogen-alpha, oxygen-iii, and other wavelengths, so one could collect additional stack frames with those to pump up selected color bands, right?  It increases the stack size required, but leaves the camera ready for general purpose photography after the filter is removed.  [Unfortunately, it looks like Astronomik does not support Canon FF bodies.]

It's generally a bad idea to use narrow band filters with a color sensor of any kind. The color filter array, in either DSLRs or OSC (one-shot color) CCD cameras, usually keeps the total Q.E. per channel to 33% (R/B) or 40% (G) at most. It usually requires about 20 minute exposures with a high Q.E. CCD camera (~56% or higher) to image any given narrow-band channel. That is with a mono sensor, where you have a full fill factor.

With color sensors, for each narrow band filter, only one set of pixels is going to get any light. So it isn't just that you get around (usually less than) 33% Q.E. with red pixels...you get 33% Q.E. and only 25% fill factor, on top of the significantly lower total light due to the filtration going on. Assuming your telescope is transmitting 90% of the light...that is 0.9 * 0.33 * 0.25, or only 7.425% of the light reaching the scope actually releasing electrons in the photodiodes. The poor fill factor creates other problems for registration, calibration, and stacking as well.

In contrast, a monochrome sensor with 56% Q.E. is going to gather 0.9 * 0.56 of the light and release electrons in all of its photodiodes, or 50.4% of the light reaching the scope. The 100% fill factor makes registration, calibration, and stacking far more effective. If it takes 20 minutes to properly expose, say, a single Ha band image deeply enough, then it will take the DSLR 6.79x longer to expose to the same level (approximately 136 minutes). There are relatively few mounts that can track well enough to do 20-30 minute exposures, and even fewer that could track well enough to support 136 minute exposures...ASA's mounts come to mind, as they are direct-drive mounts with an inherent <0.1" periodic error.
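
Here is that comparison as a small sketch (Python; the 90% scope transmission, 33% Q.E., 25% fill factor, and 56% mono Q.E. are the figures assumed above):
Code: [Select]
def effective_throughput(scope_transmission, qe, fill_factor=1.0):
    """Fraction of light entering the scope that actually frees electrons."""
    return scope_transmission * qe * fill_factor

osc_ha = effective_throughput(0.90, 0.33, 0.25)   # OSC/DSLR, Ha seen only by red pixels
mono_ha = effective_throughput(0.90, 0.56)        # monochrome sensor, full fill factor

print(round(osc_ha * 100, 3))         # ~7.425 %
print(round(mono_ha * 100, 1))        # ~50.4 %
print(round(mono_ha / osc_ha, 2))     # ~6.79x longer exposure needed by the color sensor
print(round(20 * mono_ha / osc_ha))   # ~136 minutes to match a 20 minute mono exposure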

There is another issue with exposures that long. Noise. Read noise, ironically, becomes a distant background factor for long exposures like this. Dark current noise becomes a vastly greater problem. For short exposures, the sub-second exposures common in normal photography, dark current is practically a non-problem. CDS takes care of it, and we never really have to think about it. But dark current accumulates over time, and it is temperature dependent. An average KAF sensor, like the KAF-8300M, might have about 0.02e-/px/s of dark current at 0°C. That means that, at that temperature, for an exposure of 20 minutes, your accumulated dark current is 24e-. If you are using the 6D at ISO 800, read noise is 5.1e-, or almost 1/5th the amount of dark current. In other words, dark current swamps read noise. And that is for a thermally regulated CCD that was designed to have low dark current (and ironically, it isn't even the lowest; Sony's new ICX 694, for example, has the lowest dark current levels ever heard of, at 0.003e-/px/s...after 20 minutes, total dark current accumulation would only be 3.6e-, still under the read noise floor). DSLRs have SIGNIFICANTLY more dark current, say an order of magnitude more (~0.2e-/px/s @ 0°C), and on top of that, they run hotter (these days, my 7D and 5D III run around 27-32°C). Dark current doubles for every 5.8°C, which means that at 30°C it's ~1.03e-/px/s. After 20 minutes of exposure, dark current would be 1236e-! CDS takes care of some of that, however there is always a residual, as the pixels and CDS units cannot count identically...they reside in different regions of the sensor die, and the discrepancies can be quite large. On a warm night, dark current noise can be as high as several hundred e-, again completely swamping read noise (and possibly even topping photon shot noise.)
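
A minimal sketch of those two rules (Python; the 0.02e-/px/s KAF-8300M rate and the 5.8°C doubling figure are the ones quoted above, and the warm-night example is just an illustration):
Code: [Select]
def dark_current_rate(rate_at_0c, temp_c, doubling_temp_c=5.8):
    """Dark current rate (e-/px/s) at temp_c, doubling every 5.8 deg C."""
    return rate_at_0c * 2 ** (temp_c / doubling_temp_c)

def accumulated_dark_current(rate_at_0c, temp_c, exposure_s):
    """Total dark current signal (e-/px) built up over one exposure."""
    return dark_current_rate(rate_at_0c, temp_c) * exposure_s

# KAF-8300M at 0 deg C for a 20 minute sub: 0.02 e-/px/s * 1200 s = 24 e-
print(accumulated_dark_current(0.02, 0, 20 * 60))          # 24.0
# The same sensor left unregulated on a +15 deg C night: roughly 6x worse
print(round(accumulated_dark_current(0.02, 15, 20 * 60)))  # ~144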

If you want to do narrow band imaging...you should seriously look into getting a proper thermally regulated CCD camera. You can find some of the entry-level Atik CCDs, some of which use the new ultra low dark current Sony sensors, for around $1500. That is for the camera only...a filter wheel would also be required for color or narrow band imaging, and that is usually a few hundred more. But for astrophotography, if you are considering modding a brand new 7D II, it is the better option by far. (Note that narrow band imaging is a great way to get started with AP in the city under light polluted skies...the narrow bands, which are 3nm to 12nm or so wide, block out all the light pollution, and you can usually image during a full moon as well, as long as you're not within about an arc-hour of it.)

4) Are there comparable resolution monochrome CCDs to the ~20Mp general purpose DSLRs but at lower cost, and can one use color filters on these to efficiently recreate a color image?  Would that increase or decrease the stack size needed to create an image relative to the DSLR stack, assuming similar resolution?  The CCDs I've seen seem to be lower resolution, or rather expensive once comparable resolution and the requisite cooling unit is factored in.  Most of the comparably priced CCDs I've seen are under 4 Mp.  Perhaps the cooling unit isn't necessary here, since our mountain nights tend to be chilly.

In terms of resolution, no. In terms of sensor area, yes. There are FF-sized CCDs (36x24mm, i.e. the KAF-11002), and there are also roughly 37x37mm square "large format" CCD sensors (i.e. the KAF-16803). These sensors are huge as far as astro imaging goes. Regular photographers are actually quite spoiled when it comes to sensor size. Amateur astrophotographers have been using 1/3" and 1/2" sized sensors for a very long time, and those are a small fraction of the area of an APS-C sensor. The KAF-8300 sensor is roughly APS-C sized. The KAF-8300 tends to roll into the middle of the cost range, usually about $4000 or so for a full camera package (camera, filter wheel, filters, and maybe an OAG.)

The larger format cameras, like the 16803 and 11002, are much more costly. They are usually at least $8000, and for the higher grade sensors, they can be as much as $45,000. If you want a full frame sensor, and want it to be monochrome, you could do a full mono mod on a 6D. You still wouldn't have binning, and you would have to find a filter wheel that would work with it (there are a few odd products that might.) For best performance, you'll probably want to build a cold box for it as well (a peltier-cooled insulating box) with either a radiator or a water cooling rig. Overall, the 6D, while it does have a large sensor, is never going to perform the same as a dedicated astro CCD cam that is thermally regulated. If you are serious enough about astrophotography to mod a brand new DSLR, then you should really invest your money into a CCD. Even a smaller APS-C sized KAF-8300 (like the SBIG STF-8300M) would be a superior performer for astro in the long run, especially if you want to do narrow band.

5) If the 7Dii turns out to be a high Mp APS-C camera, say ~36Mp, then the pitch might become so small (around 3 micrometers) that it might not be useable for deep sky photography since the useable focal range might be only about 250mm - 750mm?  [Or alternatively would this put enough pixels on each star that one could use firmware to bin, even despite a Bayer sensor?]

It really depends on what you want to image. The focal range from 200mm to 750mm is very wide field. Ultra wide field would be wider than about 180mm, down to your 10mm to 14mm primes and zooms. Wide field is a really good place to be for a LOT of stuff. I use my 600mm lens for a reason...it gives me a very nice, relatively wide field view of the sky, and has EXCELLENT image quality. If you're interested in nebula, 600-800mm is actually sometimes even a little "tight"...it can be tough to frame some of the huge nebula that span hundreds to many thousands of light years at once. For example, even my 5D III cannot fully encompass the North American/Pelican nebula region of Cygnus, and it can't even come close to encompassing the entire molecular cloud within that constellation...I would need to mosaic somewhere around 15x20 panels (300 integrations, each of which would probably need a minimum of 50 subs, so a total of 15,000 individual light frame exposures.)

There are reasons to use pretty much every focal length from 50mm all the way up through 3500mm just for imaging nebula (although the longer the focal length, the more difficult the job gets, as tracking and guiding smoothly at focal lengths over 2000mm can get very difficult...that's where spending the money on high end gear, like high end Astro-Physics, 10Micron, Software Bisque, and ASA mounts, all of which cost somewhere between $10k-$25k or so, becomes REALLY useful.) At long focal lengths, you're zeroing in on very small parts of large nebula, say just a small part of Pelican, or just a small part of Orion, or just a small part of the Heart nebula. You're spreading the light out more, you need longer exposures to get deep exposures, and mono sensors become increasingly important for their fill factor. At 250mm, you could image the entire region around Orion's belt and sword in one go, gathering data for the Horsehead, Flame, Running Man, and Orion nebulas, as well as all the various reflection nebulas scattered about, and even including the greater extent of the molecular cloud around there (which permeates the entire constellation of Orion, and is most visible in the Ha band.) At 600mm, you can zero in on just his belt, or just his sword, and get more detail on the Flame/Horsehead region or the Running Man/Orion nebula region.

The 7D II, even if it had a 30-40mp sensor, could still be used at a good image scale between 0.5" and 1", for imaging large, beautiful nebulous regions of the night sky. It would only be if you wanted to push your focal length and get in real close on much smaller parts of those nebula that you would find the 7D II's pixels wanting...too small, not gathering enough light each...then you'll want a KAF-16803 with its square frame and huge 9 micron pixels, and you may even find that pixels that large are still not quite good enough. But you would have to be pretty advanced, and willing to spend a LOT of money, to even really begin to attempt imaging at that scale.

Start wide. Wide is easy. Wide is forgiving. Wide lets you suck in light from huge regions of the sky that are packed with beautiful detail. And the 7D II would do superbly well, for what it is....a "one shot color" camera. Don't bother using narrow band filtration with it; it's not worth it. Use DeepSkyStacker and Photoshop to start, and look into PixInsight for more advanced processing once you get the hang of things. Oh, and make sure you get at least a moderately decent mount (I HIGHLY recommend starting with the Orion Atlas, an EQDIR cable, and EQMOD...GODSEND!), make sure you get a guiding setup of some kind, and use BackyardEOS (it has a focusing module that allows you to control the AF system of your EOS gear, if you're using a Canon DSLR with a Canon lens...without BYEOS, I'd have been completely lost, and focusing would have been a much more significant and painful chore).

1461
EOS Bodies / Re: New Full Frame Camera in Testing? [CR1]
« on: June 16, 2014, 09:17:56 PM »
And yet many photographers choose Canon because of their inherent colour rendition. Skin tones are far nicer on Canon than Nikon. I believe this is due to hot reds in the Canon gamut. I don't want clinical colour accuracy, that would be boring. I want a colour interpretation which is nice and pleasing on the eye. In a similar way to hi-fi...some components are very neutral and a little bland. I like speakers and amps which inject a little colour into the sound and add some character to the performance. This is why I like Arcam amps and Ruark speakers. Unfortunately, both companies have been pretty much killed by the iPhone market....go figure!

And those "many photographers" would be ill educated.

There is no "inherent" aspect to colour in a RAW file. RAW files don't have a gamut, nor a colourspace, they are rendered into a profile that contains a gamut by software, there is no quality impact or degradation by different rendering algorithms, that is why you can change WB in post to a RAW file with no ill effects, or choose Portrait, Landscape etc Picture Styles after the fact.

While true, the fact remains that in the default rendering the Canon sensors tend to produce warmer tones with more red and less blue than Sony's sensors, which are still drastically warmer than, for example, Panasonic's ultra-cool sensors.  I couldn't tell you how much of that is the choice of colors in the Bayer filters and how much of it is arbitrary white point math differences, but even 20+ years ago, back in the analog CCD days, Canon was always the warmest, Panasonic/JVC the coolest, with the rest at various points in between.  And oddly enough, that hasn't changed much despite radical changes in the underlying processing electronics.  So I'm guessing that at least part of it is the choice of color filters.  Either that or Canon just prefers slightly oversaturated reds.  :)

What is "default rendering", DPP, LR, ACR 2003/2010/2012, DXO, Capture One, which profile? Camera Standard, Adobe Standard, Landscape, Portrait, Neutral, Faithful, or a custom profile made for the illumination of the subject? What colourspace, Prophoto, Melissa, RGB, sRBG, CMYK?

That is the point, there is no "default rendering", you have to choose one and making your own is very easy. If your Canon files are red, it is your choice.

IMO, the default rendering is what you get when you compute the color information using the camera-provided AWB color temperature value from the RAW file's EXIF data.  Any other color temperature value is a user decision, whereas the camera-provided AWB value is what the camera believes to be "truth".

This isn't actually the case. The AWB color temp is just that...a color temp. It is not an actual mathematical algorithm that specifies how to achieve that white balance when rendering the RAW to screen or to another image format. It's just a piece of metadata. It is then up to the implementer of the RAW editor to actually define the algorithm, to specify the tone curves, that go into applying that white balance during rendering.

That's why people comment on how Lightroom's "Canon Faithful" camera style is different from DPP's "Faithful" camera style. It's the same style NAME, but IMPLEMENTED differently. Having a white balance setting of 5250K is largely meaningless...you will get small, often noticeable differences in white balance depending on what RAW editor you use, because they all use slightly different approaches to applying things like white balance, exposure, saturation, and picture or camera styles.

There is no "default rendering"...because RAW is not rendered by default. It is RAW...it's just data, that's it. The rendering ENGINE is what determines how the RAW is rendered, and there are many RAW rendering engines out there.

1462
The improved processor(s) and high speed memory required for 4K video and DPAF opens up an interesting possibility for the action shooter..... 30fps burst mode in live view....

Interesting indeed. Here's what's most interesting:

Assuming a 24mp sensor, which has a "full" pixel count of, say, 27mp (including masked border pixels, inactive calibration pixel rows and columns, etc., all of which DO get read and which ARE included in every RAW image), and also assuming the ADC is 14-bit, then for a 3-second burst:

(3 * 30 * 27,000,000 * 14) / 8 = 4,252,500,000 bytes

In one three second burst at 30fps, you generate 4.2GB worth of data! :P If you tend to take 3-5 second bursts, and shoot at least a few dozen bursts on any given outing... Well, S___...now those two new 3TB hard drives I just purchased aren't going to be going very far...and I'm going to need four times as many long-term backup and storage Blu-ray discs for permanent backups...and my import/review/cull time is going to go through the roof...

;) Be careful what you wish for...   :D
:) I know :)
Storage demands are constantly going up.... I remember buying a hard drive for work, $9995 for 10Mbytes, and my first digital camera shot 640x400 with 8 bit color... Today's camera storage requirements were unthinkable back then.... two days ago I shot a time lapse on a GoPro that sucked back 48GBytes...

The crazy thing is that storage space doesn't seem to be advancing as quickly as it used to. It was quite a number of years ago that we hit 2TB...then a few years ago we hit 3TB, and only recently have 4TB drives begun to become "affordable" (the ones with TERRIBLE access times are still around $150, and the ones with faster access times are still in the $220-$300 range). There are less than a handful of 6TB drives on the market, and only LaCie seems to be selling 5TB hard drives...both of which are at least $300 a pop, if not considerably more expensive.

While larger hard drives, all built with the same semi-reliable technology that has been plaguing computer users for decades, trickle slowly onto the market, our data needs are RAPIDLY growing. As video, especially 4k video, becomes more accessible, I think 48GB worth of video files is only the beginning! :P And as still image sizes skyrocket to 40, 50, 70 megapixels and beyond... Yeesh...I shudder to think about the costs of storing it all. Cloud services aren't even remotely "there" yet when it comes to space/dollar, and then you have to deal with transferring tens or hundreds of gigs across the wire.

That would be because the need for more storage isn't there for most consumers. And in any case, even if the storage per drive has slowed down, prices have come down a lot as well. The number of SATA ports and drive cages in computers has steadily increased, so for power users who for whatever reason need massive storage (I have two 8x3TB arrays on my network, for example) it is relatively simple to create it.

Sorry, but when you start creating GIGS of data PER SECOND, even your RAID arrays are not going to suffice. I have an 8TB NAS on my own network, however with my shooting patterns, at 30fps, I would create 8TB worth of images in a mere 5643 seconds, which amounts to about 94 minutes of shooting. Eight TERABYTES of data in a mere hour and a half. That is just ludicrous. The cost of storage hasn't come down even remotely fast nor significantly enough to justify cameras with frame rates of 30fps or higher. These days, at 8fps/18mp or 6fps/22.3mp, I can fill four 16GB CF cards in a couple/few hours on a burst-heavy day (i.e. lots of flight or other action.) That's 64GB in an outing. That's already a lot of data, and even after culling the guaranteed bad frames (misfocuses, motion blur, clipped highlights, etc.), that alone is still a fairly significant storage footprint.
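
A quick sketch of the arithmetic behind those figures (Python; the 27mp "full" pixel count and 14-bit readout are the assumptions from the earlier burst calculation):
Code: [Select]
PIXELS_PER_FRAME = 27_000_000  # assumed "full" readout, including masked/calibration pixels
BITS_PER_PIXEL = 14            # assumed ADC bit depth

bytes_per_frame = PIXELS_PER_FRAME * BITS_PER_PIXEL / 8     # 47.25 MB per RAW frame
burst_bytes = 3 * 30 * bytes_per_frame                      # a 3 second burst at 30fps
seconds_to_fill_8tb = 8e12 / (30 * bytes_per_frame)         # continuous 30fps shooting

print(round(burst_bytes / 1e9, 2))      # ~4.25 GB per 3 second burst
print(round(seconds_to_fill_8tb))       # ~5644 seconds
print(round(seconds_to_fill_8tb / 60))  # ~94 minutes to fill an 8TB NAS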

Plus, this is all JUST the storage impact. I don't think people realize the overall impact of having ultra high frame rates like that. There is "cost" everywhere...it requires significantly more time to import all your data (even over USB 3.0), significantly more time to organize it, significantly more time to cull it, etc. There is a point where diminishing returns in frame rate become negative returns in your overall efficiency as a photographer. I believe 30fps is well over that point, and anything faster...well, then you're just plain insane! :P

1463
EOS Bodies - For Stills / Re: 7d2 IQ thoughts.
« on: June 16, 2014, 04:24:16 PM »
Hey Jrista, would you consider buying a 7Dii for conversion as a full-time astrograph?  What feature set would be ideal for that application--fewer or more Mp, sensor technology, add-on features... ??

Well, that question is not really as simple as it might sound. ;) Astrophotography is a different beast.

In normal photography, there is pretty much NOTHING wrong with having more resolution...more resolution is pretty much always a good thing. While, in the context of cropping, pixel size can affect noise levels, sensor size and quantum efficiency are generally the primary determining factors of image noise...so the general rule of thumb should pretty much always be: Get as much resolution as you can.

When it comes to other features...like the AF system, metering, frame rate, etc. (all of which I generally consider AT LEAST as important as sensor IQ, if not more important depending on your style and type of photography), you should generally go for the best you can that meets your needs. The 7D II is an action photography camera, and while sensor IQ is important, it's really the frame rate and AF system that are paramount.

When it comes to astrophotography, none of the "add on features" matter. They are pretty much worthless, so long as you actually have AF. (More on why in a moment.) Resolution in astrophotography is also evaluated in an entirely different way, and for the most part, you want to "match" sensor resolution to lens resolving power in a specific way. The term used to describe this matching of resolutions is "image scale", and I'll go into detail in a second here. Let's start with a couple of exceptions to the image scale guidelines.

First, for those who like to image star waveforms (diffraction patterns) for the purposes of analyzing things like double and multiple-star systems, exoplanet investigation, etc., resolution is absolute king. You want as much resolution as you can get. It is not uncommon to use focal lengths of thousands of millimeters, even ten thousand millimeters or so. The smaller your pixels, the better your sensor will be able to resolve the airy pattern. In terms of normal resolution in normal photography, you really aren't gaining "resolution" here. These systems for surveying star patterns are usually fully diffraction limited. We're talking about f-ratios in the range of f/29 to f/40 or beyond. In regular photography, that would cause significant blurring because diffraction is softening the image. In star surveying, however, you're working with individual points of light...there is no blurring, you're just magnifying the actual diffraction effect and analyzing it directly. A LOT can be learned about stars by analyzing heavily magnified diffraction patterns.

Second, planetary imaging tends to be high focal length/high f-ratio. Planets are pretty small in the grand scheme of things, so again it is not uncommon to see thousands of millimeters of focal length and high f-ratios in the f/10-f/20 range. Planetary imaging is quite different from normal astrophotography; it is usually done with video, at high crops and ultra high frame rates (320x240px @ 200fps is not unheard of), and having lots of resolution helps. Planetary imaging is all about superresolution and "seeing through" atmospheric turbulence. Having a lot of sensor resolution in this circumstance is also helpful. In the end, many thousands of frames, some of which may appear quite blurry due to atmospheric turbulence, are processed; the bad ones are thrown away, and the best ones are kept and stacked with a superresolution algorithm to produce crisp, high resolution images of planets.

In both of the above cases, small pixels are a huge benefit. When it comes to imaging larger objects, DSOs or Deep Sky/Space Objects, resolution is a bit different. This is where image scale comes into play. Image scale is an angular measure of arcseconds per pixel (angular, because pretty much everything in astrophotography is done in angular space...pointing, tracking, coordinates, etc.) You determine the arcseconds per pixel (image scale) by using the following formula:

Code: [Select]
imageScale = (206.265 * pixelSize) / focalLength
In the case of the current 7D, with a 600mm lens (what I've been using so far), my image scale is 1.478"/px. In the case of a larger, longer telescope, such as the AT8RC astrograph, which has a focal length of 1625mm, the image scale would be 0.546"/px. If I was using that telescope with a 2x or 3x barlow on it, which multiplies the focal length like a teleconverter, image scale would be 0.273"/px and 0.182"/px, respectively. The image scale becomes critically important once you understand how the resolving power of a telescope affects the distribution of light at the sensor.
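
The same formula in code form (Python; pixel size in µm, focal length in mm, and the 7D's ~4.3µm pixel pitch is my assumption here):
Code: [Select]
def image_scale(pixel_size_um, focal_length_mm):
    """Arcseconds of sky covered by one pixel."""
    return 206.265 * pixel_size_um / focal_length_mm

print(round(image_scale(4.3, 600), 3))       # ~1.478 "/px (7D + 600mm lens)
print(round(image_scale(4.3, 1625), 3))      # ~0.546 "/px (7D + AT8RC)
print(round(image_scale(4.3, 1625 * 2), 3))  # ~0.273 "/px (with a 2x barlow)
print(round(image_scale(4.3, 1625 * 3), 3))  # ~0.182 "/px (with a 3x barlow)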

Before we get into that, a quick sidebar on star sizes. Star size, from earth-bound telescopes, is ultimately a product of a star's native size combined with the impact of seeing. Seeing, the term we give to how well we can see the true form of stars through atmospheric turbulence, can blur stars and make them appear larger than they actually are. On a night of excellent seeing, where atmospheric turbulence is low, the average naked-eye star gets close to its true size, around 1.8". When seeing is worse than excellent, the average star size can increase to 2" or 3", possibly even larger. For the most part, we figure average seeing produces stars around 2.2", or a little over two arcseconds. Ok, now that you understand star size, back to the discussion of image scale.

In astrophotography, we aim to match lens resolution to sensor resolution in such a way that our image scale falls somewhere between 0.75" and 1" per pixel, or 0.75"/px to 1.0"/px. For stars that are 2"-3" in size, this results in each star covering a little more than a 2x2 grid of pixels. This avoids a problem where, when the image scale is too large, stars end up looking like square pixels instead of round dots. It also avoids another problem, the light spread problem, which I'll go into in a bit. In my case, my seeing makes my stars about 2.8-3.2" in size on most nights (I don't have very good seeing most of the time here in Colorado). On the best nights (like two nights ago) I've had my seeing as low as 2.2". For the average case, my image scale of 1.478" is pretty decent, although it does tend to make the smaller/dimmer stars a little square. An image scale of 1-1.2" would be more ideal.

Beyond simply avoiding square stars, keeping your image scale at a reasonable level can be important to achieving the right exposure "depth". This isn't a term we use in normal photography, as we tend to work with relatively gargantuan quantities of light. It only takes a fraction of a second to saturate our pixels in normal photography, and we often have significant problems with dynamic range in the sense that our scenes contain considerably more than we can capture in those extremely small timeslices. In astrophotography, we often have the opposite problem...it can be very difficult to saturate our pixels and achieve a reasonable signal to noise ratio. If our image scale is too small, say 0.5", 0.2", 0.1", then the light from one single star is spread out over a 4x4, 10x10, or 20x20 matrix of pixels. The smaller our image scale, the less saturated each pixel is going to be. This is a problem where light is being spread out over too great an area on the sensor, which greatly impacts our ability to get a saturated exposure with a strong signal, and therefore a high SNR.
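
A tiny sketch of that light spread (Python; the 2" star diameter is the assumed seeing-blurred size from the sidebar above):
Code: [Select]
def star_width_px(star_diameter_arcsec, image_scale_arcsec_per_px):
    """How many pixels across a seeing-blurred star lands on the sensor."""
    return star_diameter_arcsec / image_scale_arcsec_per_px

for scale in (1.0, 0.5, 0.2, 0.1):
    width = star_width_px(2.0, scale)
    print(f'{scale}"/px -> star spans about {width:.0f}x{width:.0f} pixels')
# 1.0"/px -> 2x2, 0.5"/px -> 4x4, 0.2"/px -> 10x10, 0.1"/px -> 20x20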

If you are using a monochrome CCD camera designed for astrophotography, you usually have the option of "binning" pixels during readout. A sensor with 4.5µm pixels can be binned 2x2, 3x3, 4x4, sometimes even NxN. That gives you the option of having 9µm, 13.5µm, or 18µm pixels if you need them. As you increase focal length, binning, usually 2x2, becomes very useful, as it helps you keep your image scale within that "ideal" range. Electronically binned pixels are effectively equivalent to having larger pixels, which is a bit different than averaging pixels in post with downsampling. With downsampling, you reduce noise and increase SNR, but don't actually improve signal strength, whereas with binning, you DO increase signal strength.

When using a DSLR, it can be difficult to achieve an ideal image scale, since you cannot bin. That limits you to using a certain range of focal lengths, or else means you have to expose for a much longer period of time to get the same results. Now...in comes the 7D II. I do not yet know what it holds (I think Don wrote a humorous post on that very subject last night in one thread or another, basically epitomizing how we really don't know JACK about the 7D II, despite all the "informative" rumors! :P) Assuming the 7D II gets the much-needed boost to quantum efficiency it really needs to perform well (I'm really hoping it lands somewhere around 56% Q.E.), then I think, for its pixel size, it could be a very good performer for astrophotography.

It would ultimately depend on the other sensor factors...the most important of which is the IR filter. DSLRs, in the grand scheme of things, are actually really CRAPPY for astrophotography. The IR filters block out most of the red light at the most critical emission band: Hydrogen-alpha, or the 656.28nm wavelength. Most of the emission nebula in our skies are comprised of hydrogen, which, when excited, emits light in a few very narrow bands. Hydrogen has two key emission bands for astrophotography: Hydrogen-alpha (Ha) and Hydrogen-beta (Hb). Ha is a very red band and Hb is a very blue band, which together result in a pinkish-red color. Most DSLRs pass a mere 12% or less at the Ha band, while a monochrome CCD will usually pass anywhere from 45% to 80% at the Ha band.

You did mention a full-time astro mod of the 7D II. There are a few astro conversion mod options available for DSLRs. You can simply replace the IR/UV filters in the filter stack with Baader or Astrodon filters that are better suited to astrophotography, in that they pass 90% or more of the light through the entire visible spectrum, with a "square" falloff into IR. You can also get full spectrum filters that block UV but pass the entire visible spectrum, then gradually fall off into deep IR (useful for infrared imaging as well as astro imaging, so long as you use an additional IR block filter when doing visual work). Finally, you can do a full mono mod, where the CFA (and the microlenses) are actually scraped off the sensor. With a full mono mod, you can greatly increase the sensitivity of the sensor, but it becomes useless for any other kind of photography. It should also be warned that converting any DSLR for astro use can greatly diminish its usefulness for regular photography. Even a basic astro IR/UV mod has a considerable impact on the reds in your photography, and you will forever be bound to using custom white balance modes...none of the defaults will ever work again.

So, if the 7D II comes in with a much-needed Q.E. boost, and so long as you are using moderate focal lengths (400-1200mm I'd say), it would make for a decent astrocam. If you modded it with a Baader or Astrodon IR filter, it would probably be quite excellent, in the grand scheme of DSLRs used for astrophotography. It will never compare to even the cheapest thermally regulated CCD camera, though...you can spend a mere $1500 on a good cooled CCD at the lower end of the range, whereas the 7D II is likely to hit the streets with a price at least $500 higher, if not more. If you REALLY want to get into astrophotography, I highly recommend looking into some of the lower end cooled CCDs, as even the cheapest one is likely to be better for astro than any DSLR, modded or otherwise.

1464
EOS Bodies / Re: New Full Frame Camera in Testing? [CR1]
« on: June 16, 2014, 02:46:14 PM »
And yet many photographers choose Canon because of their inherent colour rendition. Skin tones are far nicer on Canon than Nikon. I believe this is due to hot reds in the Canon gamut. I don't want clinical colour accuracy, that would be boring. I want a colour interpretation which is nice and pleasing on the eye. In a similar way to hi-fi...some components are very neutral and a little bland. I like speakers and amps which inject a little colour into the sound and add some character to the performance. This is why I like Arcam amps and Ruark speakers. Unfortunately, both companies have been pretty much killed by the iPhone market....go figure!

And those "many photographers" would be ill educated.

There is no "inherent" aspect to colour in a RAW file. RAW files don't have a gamut, nor a colourspace, they are rendered into a profile that contains a gamut by software, there is no quality impact or degradation by different rendering algorithms, that is why you can change WB in post to a RAW file with no ill effects, or choose Portrait, Landscape etc Picture Styles after the fact.

While true, the fact remains that in the default rendering the Canon sensors tend to produce warmer tones with more red and less blue than Sony's sensors, which are still drastically warmer than, for example, Panasonic's ultra-cool sensors.  I couldn't tell you how much of that is the choice of colors in the Bayer filters and how much of it is arbitrary white point math differences, but even 20+ years ago, back in the analog CCD days, Canon was always the warmest, Panasonic/JVC the coolest, with the rest at various points in between.  And oddly enough, that hasn't changed much despite radical changes in the underlying processing electronics.  So I'm guessing that at least part of it is the choice of color filters.  Either that or Canon just prefers slightly oversaturated reds.  :)

What is "default rendering", DPP, LR, ACR 2003/2010/2012, DXO, Capture One, which profile? Camera Standard, Adobe Standard, Landscape, Portrait, Neutral, Faithful, or a custom profile made for the illumination of the subject? What colourspace, Prophoto, Melissa, RGB, sRBG, CMYK?

That is the point: there is no "default rendering". You have to choose one, and making your own is very easy. If your Canon files are red, it is your choice.

PBD is exactly right here. RAW data is RAW data...it has no default, nothing inherent (other than the very minor impact of the silicon's native response curve, and even that is generally not a dominant factor these days). Color is the result of processing, and that processing definitely changes depending on the tool we use to process, the camera profiles/tone curves we apply, the color space we process within, etc.
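To make "color is the result of processing" concrete, here is a minimal sketch on made-up demosaiced camera-space values. The white balance gains and the camera-to-sRGB matrix below are placeholders, not any real camera's profile; the point is only that the same raw numbers yield different colours purely as a function of the rendering choices:

Code:
import numpy as np

# Made-up linear camera-space RGB pixels (what you would have after demosaicing).
camera_rgb = np.array([[0.20, 0.30, 0.25],
                       [0.60, 0.45, 0.30],
                       [0.10, 0.12, 0.20]])

# Two different white balance choices (placeholder gains, not real presets).
wb_a = np.array([2.0, 1.0, 1.5])
wb_b = np.array([1.4, 1.0, 2.3])

# Placeholder camera-to-sRGB matrix; a real profile embeds something like this.
cam_to_srgb = np.array([[ 1.6, -0.4, -0.2],
                        [-0.3,  1.5, -0.2],
                        [ 0.0, -0.6,  1.6]])

def render(raw, wb):
    # Apply WB gains, a colour matrix, and a simple gamma -- i.e. choose a rendering.
    linear = np.clip((raw * wb) @ cam_to_srgb.T, 0, 1)
    return np.round(255 * linear ** (1 / 2.2)).astype(int)

print(render(camera_rgb, wb_a))
print(render(camera_rgb, wb_b))
# Same raw data, two entirely different renderings -- neither is "inherent" to the file.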

1465
EOS Bodies / Re: A Few EOS 7D Mark II Specs [CR1]
« on: June 14, 2014, 01:24:13 AM »
As a quick counter point... there are certain images that are really hard to get... a ball being compressed by a bat at the moment of contact, a diver just before he puts his fields in the water, etc.

You may have 120 images to sort through, but knowing exactly which image you want makes it easy enough to find.

But that is about it... no thank you to the remaining 119 images.

I agree, there may be a few rare situations where you want a higher frame rate than 12fps. That said, people get those kinds of shots. They have actually been getting them for years, with equipment older and slower than even what we have today. I'm not sure 120fps is necessary. I'm not even sure 30fps is necessary, although it might be the point where diminishing returns have kicked in enough that anything faster would be pretty useless.

Beyond the point of diminishing returns, you would have many, even dozens, of frames of essentially the same thing. At that point, you're gathering dozens or hundreds of frames per second so you can choose a more appealing squashed baseball shape...which changes only by a few millimeters per frame. :P Is that really worth the extra import time and storage requirements and cost?

1466
Animal Kingdom / Re: A new take on BIF
« on: June 13, 2014, 11:47:42 PM »
It was shot at 1/160s with rear shutter curtain sync.  The rear drop shadow is the movement of the flying fox before the flash kicked in.  But that's the sort of problem I'm trying to work around.  If I shoot at faster speeds, I can avoid this, but then I lose detail in the sky, and if I shoot faster than 1/250s then I'm also stuck with HSS, and the bats are too far away for this to work effectively with the gear that I have.  To counter that, I can increase ISO and decrease flash power, but increasing ISO results in more noise.  It's hard to get the balance right.

Hmm. I guess that's a plausible explanation. Based on the actual offset of the shadow, it was moving at a vector of about 120° then...as the shadow is not directly behind, it's offset at about a -35° angle. That seems a little odd...but, eh.

I think you could crank the ISO up more. The 1Ds II isn't that bad...it's certainly better than the 7D, despite being older, and the 7D is pretty good up to ISO 800. Also, if that is indeed how clear and sharp the bats are after the flash, then that means they are easy to mask. With masking, you could clean the background noise right up! You might even want to look into PixInsight, which has some of the most advanced multi-scale noise reduction tools around, but Photoshop or LR could certainly do it as well; I mean, you could completely obliterate the noise once you have the bat masked. If that's the real problem, and that shadow really is caused by flash lag with rear-curtain sync, then I think you could crank ISO by at least a stop, if not two, and use a two-stop faster shutter speed.

1467
Animal Kingdom / Re: A new take on BIF
« on: June 13, 2014, 08:16:42 PM »
One of my better attempts.  Taken with a 1Ds Mkii, ISO 400, with a 135/2 lens and flash.  My camera isn't ideal for this.  It focuses fine, but struggles with noise at higher ISOs.

Hmm. Forgive me...because my analytical mind just has to make sure here...but...why does the bat have a close, dark, sharply defined "drop" shadow? Isn't the sky behind it at...a very, very, very great distance that wouldn't actually allow a sharply defined shadow so close to the creature...if it allowed a shadow to show up at all...?


...I just hate to say it....but...without a VERY good explanation of the shadow....I'm calling bull!  ::)

1468
EOS Bodies / Re: A Few EOS 7D Mark II Specs [CR1]
« on: June 13, 2014, 08:13:18 PM »
Well, not necessarily.  I shoot about 400 shots per basketball game and pick the best 60.  I can sort them and edit them in one night, maybe 2-3 hours.  It's not fun I agree, but not absolutely terrible.

400 shots is manageable. But at the suggested 100fps, that would be just 4 seconds' worth of shooting. Even at 30fps you'd be into the thousands in a typical action session. I just don't see video frame extraction as a realistic option for anything but the highest-profile work.

Aye. I don't think that everyone who wants a super-high stills frame rate understands the immense volumes of data they would be creating. At 30fps it's bad enough, but I hear mirrorless diehards talking about 60fps or 120fps all too frequently. Could you imagine the data you would have to sift through to find the "best" shot? And before even that...you have to IMPORT it all! A three-second burst at 120fps is 360 images. A three-second burst is a SHORT burst, five is average, and when it comes to longer sequences of action, such as is often the case with BIF, you might have 10-15 seconds of continuous frames. At 120fps, that is 1200-1800 images!

IMO, frame rates that high for stills shooting are just plain and simply not worth it. Having a higher frame rate helps, but there is a point of diminishing returns. I can't imagine I'd ever want more than 20fps at most, and I would be willing to bet that 10-14fps is probably superb. That means you get a frame every 1/10th to 1/14th of a second, which is a pretty darn small timeslice. But 1/60th? Or even 1/120th of a second? The human eye is generally capable of discerning only about 1/30th of a second or so when watching video, and we use 1/60th only to completely and totally GUARANTEE that there is no stutter; we can't actually discern each frame. The thing with stills is, the frames aren't cycled on top of each other the way video frames are, so we can barely tell the difference between frames shot 1/30th, 1/60th, or 1/120th of a second apart when they are lined up next to each other. It would just be a waste of time, space, and effort (and actually money, as you would need MONSTER memory cards and IMMENSE hard drives to store all that data) to use a stills frame rate that high.
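To put rough numbers on the storage side of that (the per-frame size below is an assumption; plug in your own camera's actual RAW size):

Code:
def burst_volume(fps, seconds, mb_per_frame=25):
    # Frames and storage (GB) for a continuous burst; 25MB/frame is an assumed RAW size.
    frames = fps * seconds
    return frames, frames * mb_per_frame / 1024

for fps in (10, 30, 60, 120):
    frames, gb = burst_volume(fps, 15)
    print(f"{fps:3d} fps x 15 s = {frames:5d} frames, ~{gb:4.1f} GB")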

1469
EOS Bodies / Re: A Few EOS 7D Mark II Specs [CR1]
« on: June 13, 2014, 03:37:33 PM »
As a matter of interest, would an expectation of relatively clean image quality at ISO 1600 be unrealistic for a crop body?
One man's clean is another man's filthy. It's best to speak in comparative terms. For me, if the 7D-II's ISO 3200 image looks as good as the 6D's ISO 6400 image then I'd be very happy with it.
I hear you on that.

As a birder, I dial in ISO 400 on my 500D, but just like the 7D, anything over 400 leads to very obvious noise.
Anything over ISO 800, the images become what is probably poorly described as rough.
Usable ISO 3200 would be a very worthwhile reason to buy this camera.

Mind if I ask, is it usually sunny where you shoot? I'm almost never able to do bird shots at ISO 400, and even 800 is low. 1600-3200 is normal for me (this is at f/10, though).

Sure you may ask, no problem.

As I know my 500D isn't the top camera, I work within its limits and I watch my subject and the sun very carefully.

I do not shoot unless my subject has direct light on it. If the bird is bathed in sunlight, I shoot in manual mode at ISO 400 and vary my shutter speed from 1/2000 - 1/4000.

I only do about 5% of my shots above ISO 400. I just do not like the lack of image clarity at the higher ISOs and just do not push my shutter in poor light.

What aperture?

1470
EOS Bodies - For Stills / Re: 7d2 IQ thoughts.
« on: June 12, 2014, 07:30:25 PM »
Here is some interesting research on Quad Pixel tech from a couple of guys at Aptina. Read about it and let me know if you think it might open up the discussion a bit more, in light of the future demands for HDR video and the computational techniques discussed in this work by Gordon Wan, Xiangli Li, Gennadiy Agranov, Marc Levoy and Mark Horowitz.

https://graphics.stanford.edu/papers/gordon-multibucket-jssc12.pdf

That concept seems really interesting!
I've had thoughts about why no one seems to have adopted something similar to a logarithmic amplifier. That was something I saw in certain radar equipment, where one typically sent out a few kW and expected to get signals back that were only some fW (10^-15 W). However, you couldn't be sure of the returning signal's strength, so the receivers had to cope with signals that were several orders of magnitude greater - without frying the entire array of discrete components/transistors/tubes.

In short, that "problem" were solved with stages of amplifiers that, when saturated, automatically opened up for the next stage to take care of the signal handling without ever hitting any ceilings, or frying any components.

In sensors you would have the problem of miniaturising this concept and making some 20 million photon receivers behave identically, but all that counts in the end is counting photons. Every pixel is there for the sole purpose of counting the number of photons that hit it (preferably coming in from the lens). And you don't want to fill your buckets.
Since most of us take our shots at temperatures above 0 K, we always have to deal with thermal noise. A logarithmic approach to handling our combination of signal + noise wouldn't be bad.


Sorry for sidestepping the original idea of this thread.

What you're talking about is a photomultiplier. That is actually a still very different concept, similar to neither the multi-bucket pixels nor DPAF. :P

Photomultipliers do use multi-stage amplifiers to amplify extremely low signals by several orders of magnitude, without requiring ultra-specialized amplifiers that can do so without frying themselves. But that's just a more elaborate means of amplifying a weak signal. It doesn't actually improve the quality of the signal itself, so it can neither reduce noise nor support something like HDR.

The multi-bucket pixel concept in that paper effectively embeds analog memory into the pixel. Global shutter sensors already do this, but they only have a single memory (when the exposure is done, every pixel's charge is immediately pushed to its memory at once, the pixels are reset, then the memory can be read out in the background while the next exposure occurs). Multi-bucket memory allows charge to be pushed to memory more than once, which expands the dynamic range by N times. At the point of read, the charge stored in each bucket is then binned as the pixels are read out.

That is significantly different from a photomultiplier: instead of amplifying the signal (which also amplifies the noise, and does not actually improve the quality of the signal itself), it allows longer exposures combined with multiple "memory pushes" to literally enhance the quality of the signal itself WITHOUT amplification. THAT....that is what is so intriguing about the multi-bucket concept. ;)
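A toy sketch of the idea, under loose assumptions (arbitrary full-well and signal-rate numbers, noise ignored entirely): a conventional single-bucket pixel clips once its well fills, whereas pushing charge out to analog memory N times over the same total exposure and summing at readout lets the accumulated value keep growing without clipping and without any gain applied:

Code:
FULL_WELL = 30_000   # assumed full-well capacity, electrons
RATE = 5_000         # assumed photo-electron rate, e-/s
EXPOSURE = 12        # total exposure time, seconds

def single_bucket(rate, seconds, full_well):
    # Conventional pixel: charge accumulates once and clips at the full well.
    return min(rate * seconds, full_well)

def multi_bucket(rate, seconds, full_well, pushes):
    # Multi-bucket-style pixel: charge is pushed to memory 'pushes' times,
    # then the stored charges are summed (binned) at readout.
    per_segment = rate * (seconds / pushes)
    return sum(min(per_segment, full_well) for _ in range(pushes))

print(single_bucket(RATE, EXPOSURE, FULL_WELL))    # 30000 -- clipped at the well
print(multi_bucket(RATE, EXPOSURE, FULL_WELL, 4))  # 60000 -- no clipping, no amplification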
