
Author Topic: More Sensor Technology Talk [CR1]  (Read 11851 times)

scyrene

  • Canon 70D
  • ****
  • Posts: 297
    • My Flickr feed
Re: More Sensor Technology Talk [CR1]
« Reply #75 on: May 05, 2014, 03:57:08 PM »
(f-ratio doesn't usually matter for planetary, as you image planets by taking videos with thousands of frames for anywhere from a couple minutes to as long as a half hour...then filter, register, and stack the best frames of the video, which is basically performing a superresolution integration...that eliminates blurring from seeing, and effectively allows you to image well beyond the diffraction limit.)

This is very interesting, and news to me. Dare I ask how that is possible? I assumed stacking would take the image to the theoretical best the setup can produce - how does it deal with diffraction? I was using my 500L with extenders to photograph planets using stacking recently, and assumed softness due to diffraction (I was at 4000mm f/40 for Jupiter and 5600mm f/56 for Mars).
5D mark III, 50D, 300D, EOS-M; Samyang 14mm f/2.8, 24-105L, MP-E, 85L II, 100L macro, 500L IS II, EF-M 18-55; 1.4xIII, 2x III + 2xII extenders; 600EX-RT; EF-M--EF adaptor.
Former lenses include: 70-200L f/4 non-IS, 200L 2.8, 400L 5.6


jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4557
  • POTATO
    • Nature Photography
Re: More Sensor Technology Talk [CR1]
« Reply #76 on: May 05, 2014, 04:54:22 PM »
(f-ratio doesn't usually matter for planetary, as you image planets by taking videos with thousands of frames for anywhere from a couple minutes to as long as a half hour...then filter, register, and stack the best frames of the video, which is basically performing a superresolution integration...that eliminates blurring from seeing, and effectively allows you to image well beyond the diffraction limit.)

This is very interesting, and news to me. Dare I ask how that is possible? I assumed stacking would take the image to the theoretical best the setup can produce - how does it deal with diffraction? I was using my 500L with extenders to photograph planets using stacking recently, and assumed softness due to diffraction (I was at 4000mm f/40 for Jupiter and 5600mm f/56 for Mars).

There are different ways to stack. The most common is averaging, either basic averaging, weighted averaging, or kappa-sigma clipped averaging. Those forms of stacking are usually used on star field images (nebulae, galaxies, clusters) to reduce noise: noise is reduced by a factor of SQRT(stackCount), so stacking 100 frames reduces noise by a factor of 10.
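
As a rough illustration of kappa-sigma clipped averaging and the SQRT rule, here's a toy Python sketch (my own, not what DeepSkyStacker or PixInsight actually run; the frame data and thresholds are made up):

Code:
import numpy as np

def kappa_sigma_stack(frames, kappa=2.5, iterations=3):
    """Average a stack of frames, iteratively rejecting outlier pixels
    (satellite trails, cosmic-ray hits) more than kappa sigma from the mean."""
    stack = np.asarray(frames, dtype=np.float64)   # shape (N, H, W)
    mask = np.ones(stack.shape, dtype=bool)
    for _ in range(iterations):
        data = np.where(mask, stack, np.nan)
        mu = np.nanmean(data, axis=0)              # per-pixel mean over frames
        sigma = np.nanstd(data, axis=0)            # per-pixel std over frames
        mask &= np.abs(stack - mu) <= kappa * sigma
    return np.nanmean(np.where(mask, stack, np.nan), axis=0)

# Demonstrate the SQRT(stackCount) rule: 100 noisy frames of a flat
# grey field should cut the noise by about a factor of 10.
rng = np.random.default_rng(42)
frames = 100.0 + rng.normal(0.0, 8.0, size=(100, 64, 64))
print(frames[0].std() / kappa_sigma_stack(frames).std())  # ~10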

You can also use "drizzle" stacking and other forms of superresolution stacking. The purpose of these methods is less to reduce noise (although they do help with that) and more to increase detail. Stacking for superresolution aims to choose the best version or versions of any given pixel out of thousands of frames, and to sample each pixel, within each frame and across frames, multiple times with alternate shift or rotation factors. That allows the algorithm to extract the maximum amount of information for each point of your subject.

While diffraction certainly limits your resolution when doing planetary imaging, seeing limits it to a FAR greater degree. The vast majority of blurriness in planetary imaging is due to atmospheric turbulence and poor transparency, by about an order of magnitude compared to diffraction. Stacking thousands of frames with a superresolution algorithm cuts through both, assuming you get enough high quality frames. Because these algorithms pick the best version of a pixel and multisample each pixel, you can end up with surprisingly detailed images, despite the effects of seeing and diffraction.
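
The frame-selection ("lucky imaging") part is easy to sketch, too. A minimal toy version, assuming a Laplacian-variance sharpness score and a 5% keep rate (real tools like RegiStax or AutoStakkert! also align on surface features and do true drizzle; this is only illustrative):

Code:
import numpy as np

def sharpness(frame):
    # Variance of a simple Laplacian response: higher = less seeing blur.
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
    return lap.var()

def lucky_stack(frames, keep_fraction=0.05, scale=2):
    """Keep only the sharpest frames, then average them on a finer grid."""
    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]        # indices of the sharpest frames
    # Drop each kept frame onto a scale-x finer grid and average; in real
    # drizzle, sub-pixel shifts between frames are what recover fine detail.
    up = [np.kron(frames[i], np.ones((scale, scale))) for i in best]
    return np.mean(up, axis=0)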
My Photography
Current Gear: Canon 5D III | Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: SBIG STT-8300M | Canon EF 300mm f/2.8 L II

Stu_bert

  • EOS M2
  • ****
  • Posts: 243
Re: More Sensor Technology Talk [CR1]
« Reply #77 on: May 05, 2014, 06:52:15 PM »
Glancing at his gear wish list, it looks like he's more into action than astro. An A7R is 2500 less in the budget (camera + EF adapter). Personally I would love one for portrait and landscape work, but I cannot justify the expense. I suspect I'd get more use from that Tamron 150-600 and a new tripod.

So, while I'd like an A7r for my landscape photography, it is actually one of the worst possible choices for astrophotography. I do landscapes sometimes, wildlife and birds most of the time, and astrophotography every time there is a clear night.


I was looking at the A7R with adapter for landscapes, but then I read on Thom Hogan's site that Sony uses lossy compression on their RAWs (unless I misread him), and you can't switch it off!

Why would they do that?   :( :(

On that basis, it may have amazing DR, but then it will surely just smudge out some of the detail for, err, actually I'm not sure what benefit...

Hmm, I hadn't heard of that. If they do, it's foolish, and you really no longer have a RAW image. I am a bit skeptical of that...it doesn't seem logical, but who knows.

Had a look at that astro link - it's a whole new language there  :) If I understood correctly, it's a 2000mm lens? And optically, is it better than your 600mm lens with a 1.4x and 2x attached? Just curious as to the benefits. Thanks.

Reflecting light tends to produce superior spots at the sensor plane compared to refracting light. Reflected light can still warp star diffraction spots due to coma and astigmatism, but that's about it. Refracted light, on the other hand, suffers from all forms of optical aberration, including chromatic aberration, spherical aberration, etc. The RC, or Ritchey-Chretien, telescope design is one of the more superior designs. It's the same design used in many of the largest earth-bound telescopes...the huge ones, up to 10 meters in size. It tends to produce superior results, although it does suffer from some coma and astigmatism in the corners.

There is an even better telescope design than the RC, called a CDK or Corrected Dall-Kirkham. The CDK uses a mirror and a built-in corrector to produce one of the best spot shapes, center to corner, of any telescope design I've ever seen. PlaneWave makes CDK scopes, but they are pretty pricey. From what I've read and seen, a CDK is about the best telescope design in the world today.

As good as my lens is, and it is very good with a very flat field corner to corner, it is no RC and certainly no CDK. If I throw on teleconverters, that gets me more focal length (which is not necessarily the best thing...a LOT of nebulae are even larger than I can fit in my field with the 600mm, let alone a 2000mm scope), but it also increases the optical aberrations. For galaxies, clusters, and getting close up on parts of nebulae, a longer, better scope like the Astro-Tech 10" RC is better. The larger aperture, ten inches vs. six inches, also means I can resolve fainter stars, galaxies, and other details. Most scopes work with focal reducers, so while it is 2000mm natively, I can use a 0.63x reducer to make it an f/5 1260mm telescope. That is relatively fast, with a moderately wide field. For planetary work, I can also throw on a 2x or 3x barlow lens and get a 4000mm f/16 or 6000mm f/24 scope, which is much better for planetary imaging (f-ratio doesn't usually matter for planetary, as you image planets by taking videos with thousands of frames for anywhere from a couple minutes to as long as a half hour...then filter, register, and stack the best frames of the video, which is basically performing a superresolution integration...that eliminates blurring from seeing, and effectively allows you to image well beyond the diffraction limit.)
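
(The reducer/barlow arithmetic is just multiplying focal length and f-ratio by the same factor; a quick illustrative check of my own, using the 2000mm f/8 figures above:)

Code:
def effective(focal_mm, f_ratio, factor):
    """Reducers (<1) and barlows (>1) scale focal length and f-ratio alike."""
    return focal_mm * factor, f_ratio * factor

print(effective(2000, 8, 0.63))  # -> (1260.0, 5.04): ~f/5 at 1260mm
print(effective(2000, 8, 2))     # -> (4000, 16): 4000mm f/16
print(effective(2000, 8, 3))     # -> (6000, 24): 6000mm f/24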

Thanks for the comprehensive reply.

Re A7R - http://www.sansmirror.com/cameras/a-note-about-camera-reviews/sony-nex-camera-reviews/sony-a7-and-a7r-review.html

Scroll down to "How do they Perform?"
If life is all about what you do in the time that you have, then photography is about the pictures you take not the kit that took it. Still it's fun to talk about the kit, present or future :)

scyrene

  • Canon 70D
  • ****
  • Posts: 297
    • My Flickr feed
Re: More Sensor Technology Talk [CR1]
« Reply #78 on: May 05, 2014, 07:20:22 PM »
(f-ratio doesn't usually matter for planetary, as you image planets by taking videos with thousands of frames for anywhere from a couple minutes to as long as a half hour...then filter, register, and stack the best frames of the video, which is basically performing a superresolution integration...that eliminates blurring from seeing, and effectively allows you to image well beyond the diffraction limit.)

This is very interesting, and news to me. Dare I ask how that is possible? I assumed stacking would take the image to the theoretical best the setup can produce - how does it deal with diffraction? I was using my 500L with extenders to photograph planets using stacking recently, and assumed softness due to diffraction (I was at 4000mm f/40 for Jupiter and 5600mm f/56 for Mars).

There are different ways to stack. The most common is averaging, either basic averaging, weighted averaging, or kappa-sigma clipped averaging. Those forms of stacking are usually used on star field images (nebulae, galaxies, clusters) to reduce noise: noise is reduced by a factor of SQRT(stackCount), so stacking 100 frames reduces noise by a factor of 10.

You can also use "drizzle" stacking and other forms of superresolution stacking. The purpose of these methods is less to reduce noise (although they do help with that) and more to increase detail. Stacking for superresolution aims to choose the best version or versions of any given pixel out of thousands of frames, and to sample each pixel, within each frame and across frames, multiple times with alternate shift or rotation factors. That allows the algorithm to extract the maximum amount of information for each point of your subject.

While diffraction certainly limits your resolution when doing planetary imaging, seeing limits it to a FAR greater degree. The vast majority of blurriness in planetary imaging is due to atmospheric turbulence and poor transparency, by about an order of magnitude compared to diffraction. Stacking thousands of frames with a superresolution algorithm cuts through both, assuming you get enough high quality frames. Because these algorithms pick the best version of a pixel and multisample each pixel, you can end up with surprisingly detailed images, despite the effects of seeing and diffraction.

It's certainly powerful, though I don't understand the technicalities and I'm not sure what the program is doing to obtain the results (for nebulae I do it all by hand, which takes a long time but I have a grasp of every step of the process).

Sadly, I can't do thousands of images, due to the limitations of my setup. I know most planetary work nowadays is done with video, stacking lots of extracted frames, but because even at 5600mm (lens focal length) the targets are too small in the frame to use the camera's video function, I take stills manually at full sensor resolution and then crop to a reasonable size for stacking. That way I can do tens to over a hundred frames, but never many more (I'm also aiming by hand, so it's a matter of human fatigue, with no way of automating the process). Still, it's amazing what you can do without dedicated kit - a few key postprocessing techniques are what make the difference.
5D mark III, 50D, 300D, EOS-M; Samyang 14mm f/2.8, 24-105L, MP-E, 85L II, 100L macro, 500L IS II, EF-M 18-55; 1.4xIII, 2x III + 2xII extenders; 600EX-RT; EF-M--EF adaptor.
Former lenses include: 70-200L f/4 non-IS, 200L 2.8, 400L 5.6

jrista

  • Canon EF 400mm f/2.8L IS II
  • *******
  • Posts: 4557
  • POTATO
    • Nature Photography
Re: More Sensor Technology Talk [CR1]
« Reply #79 on: May 05, 2014, 10:13:04 PM »
Welcome.

Re A7R - http://www.sansmirror.com/cameras/a-note-about-camera-reviews/sony-nex-camera-reviews/sony-a7-and-a7r-review.html

Scroll down to "How do they Perform?"

I believe that only applies to their 11-bit "RAW" encoding. That would be something akin to Canon's sRAW and mRAW, not necessarily in encoding, but in lossiness. Neither is actually a RAW file; they encode the data in a specific way. In Canon's case, the m/sRAW formats are Y'Cb'Cr' formats: Luminance + Chrominance Blue-Yellow + Chrominance Red-Green. The Y or Luminance channel is stored at full resolution, while the Cb and Cr channels are stored "sparse". In Canon's case, all of the stored values are still 14-bit precision, but lower-resolution chrominance data is stored. Canon's images would be superior to Sony's, both in that they store more information in total and at a greater bit depth...however, both suffer from the same limitation: the information is not actually RAW, which severely limits your editing latitude.

Generally speaking, the fact that these formats store lower resolution color information doesn't matter all that much. Because of the way our brains process information, if done carefully, lower resolution chrominance goes unnoticed in favor of a higher level of luminance detail. YCbCr formats have been around for a long time, since the dawn of color TV. The luminance channel was extracted and sent in full detail, while the blue/yellow and red/green channels were sent separately, in a more highly compressed form. This allowed color information to be piggybacked on the same signal that "black and white" TV broadcasts used, making it possible for B&W TVs to pick up the same signal as color TVs.

If you have paid any attention to Canon's video features, you've already seen similar compression techniques. You may have heard of 4:1:1, 4:2:2, or 4:4:4. Those numbers refer to the Y, Cb, and Cr channel sampling. A 4:1:1 encoding has full-resolution luminance and 1/4-resolution Cb & Cr channels. A 4:2:2 encoding has full luminance and 1/2 Cb and Cr channels. As you might expect, a 4:4:4 encoding uses the same sampling rate for all three channels, and is effectively "full resolution". A standard RAW image is also technically a 4:4:4 R'G'B' image.
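
To make the subsampling concrete, here's a toy sketch using the standard BT.601 conversion (my own illustration, not Canon's or Sony's actual encoder):

Code:
import numpy as np

def rgb_to_ycbcr(rgb):
    # ITU-R BT.601 full-range conversion matrix.
    m = np.array([[ 0.299,   0.587,   0.114 ],
                  [-0.1687, -0.3313,  0.5   ],
                  [ 0.5,    -0.4187, -0.0813]])
    ycc = rgb @ m.T
    ycc[..., 1:] += 128.0        # center the chroma channels
    return ycc

def encode_422(rgb):
    ycc = rgb_to_ycbcr(rgb.astype(np.float64))
    y  = ycc[..., 0]             # luma: full resolution
    cb = ycc[..., 1][:, ::2]     # chroma: every 2nd column (4:2:2)
    cr = ycc[..., 2][:, ::2]     # for 4:1:1, use ::4 instead
    return y, cb, cr             # half the chroma data, full detail in Y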

My Photography
Current Gear: Canon 5D III | Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: SBIG STT-8300M | Canon EF 300mm f/2.8 L II

Stu_bert

  • EOS M2
  • ****
  • Posts: 243
Re: More Sensor Technology Talk [CR1]
« Reply #80 on: May 06, 2014, 02:32:24 AM »
Welcome.

Re A7R - http://www.sansmirror.com/cameras/a-note-about-camera-reviews/sony-nex-camera-reviews/sony-a7-and-a7r-review.html

Scroll down to "How do they Perform?"

I believe that only applies to their 11-bit "RAW" encoding. That would be something akin to Canon's sRAW and mRAW, not necessarily in encoding, but in lossiness. Neither is actually a RAW file; they encode the data in a specific way. In Canon's case, the m/sRAW formats are Y'Cb'Cr' formats: Luminance + Chrominance Blue-Yellow + Chrominance Red-Green. The Y or Luminance channel is stored at full resolution, while the Cb and Cr channels are stored "sparse". In Canon's case, all of the stored values are still 14-bit precision, but lower-resolution chrominance data is stored. Canon's images would be superior to Sony's, both in that they store more information in total and at a greater bit depth...however, both suffer from the same limitation: the information is not actually RAW, which severely limits your editing latitude.

Generally speaking, the fact that these formats store lower resolution color information doesn't matter all that much. Because of the way our brains process information, if done carefully, lower resolution chrominance goes unnoticed in favor of a higher level of luminance detail. YCbCr formats have been around for a long time, since the dawn of color TV. The luminance channel was extracted and sent in full detail, while the blue/yellow and red/green channels were sent separately, in a more highly compressed form. This allowed color information to be piggybacked on the same signal that "black and white" TV broadcasts used, making it possible for B&W TVs to pick up the same signal as color TVs.

If you have paid any attention to Canon's video features, you've already seen similar compression techniques. You may have heard of 4:1:1, 4:2:2, or 4:4:4. Those numbers refer to the Y, Cb, and Cr channel sampling. A 4:1:1 encoding has full-resolution luminance and 1/4-resolution Cb & Cr channels. A 4:2:2 encoding has full luminance and 1/2 Cb and Cr channels. As you might expect, a 4:4:4 encoding uses the same sampling rate for all three channels, and is effectively "full resolution". A standard RAW image is also technically a 4:4:4 R'G'B' image.

Jrista - I did not appreciate that about mRAW/sRAW, so thank you. My understanding, however, is that Sony's 11-bit RAW is their standard RAW. I'm not sure if you read the whole article, but Thom is talking about deficiencies as a result, and he's comparing it to the D800. There's a further link embedded:

http://www.rawdigger.com/howtouse/sony-craw-arw2-posterization-detection

which I believe confirms that they do the same on all their RAW encoding.

Don't get me wrong, I think many people would not notice it or can work around it - Fred Miranda has been positive in his review, and he's a Canon user - in fact I think there's a whole forum on his site discussing it in detail. To me it just seems somewhat self-defeating - you have a sensor with better DR than your competitors, but you then impair the output with a compression scheme that "fails" precisely when you have a scene with higher DR.

Maybe I misinterpreted the information...
If life is all about what you do in the time that you have, then photography is about the pictures you take not the kit that took it. Still it's fun to talk about the kit, present or future :)

NancyP

  • 7D
  • *****
  • Posts: 342
Re: More Sensor Technology Talk [CR1]
« Reply #81 on: May 06, 2014, 02:34:31 PM »
This is an entertaining, if now grossly off-topic, thread.
I am still identifying the more obvious Messier objects with binoculars. Learning the basics is a good idea. At some point soon, I will buy a second-hand beginner's (Dobsonian) telescope from a club member to learn basic visual observation. A good German equatorial mount and a starter astrophotography optical tube are further away.

I have run some Moon shot stacks (400mm f/5.6L + 1.4x TC II) through Lynkeos freeware, which is designed as a very simple moon/planet video image processor.


whatfind

  • SX50 HS
  • **
  • Posts: 2
Re: More Sensor Technology Talk [CR1]
« Reply #82 on: May 08, 2014, 07:15:56 PM »
Could this be an EF mount without a mirror?
That would explain the "lower cost": without a mirror, the dedicated AF chip, exposure-metering chip, etc. can be discarded, which lowers the cost quite a lot.
Look at the 70D - if Dual Pixel AF can be used for AF in stills (and made to draw less battery power), no mirror is needed.
That would also explain the EVF rumor.
