7D2 IQ thoughts.

jrista said:
Speculation. As much as people like to use DSLRs for video, video is still the secondary purpose of this kind of camera. I don't think Canon is focusing solely on improving the video capabilities of the 7D II...especially because it's an APS-C camera. It is simply incapable of the same kind of thin DOF cinematic look and feel that the 5D II became famous for, due to its cropped sensor. I don't think the 7D II will be a particularly popular video DSLR. It might be somewhat popular, especially if it has some enhanced video features, but it isn't going to be the cinematic DSLR powerhouse that gave so many movies and TV shows reason to use it for professional prime time/big screen productions.

Which is the reason why the GH3 and GH4 were total failures in sales ::)

No one would ever buy a camera like that....oh, wait....they do....how can that be? Very weird, there must be something wrong with those customers.
 
Tugela said:
jrista said:
Speculation. As much as people like to use DSLRs for video, video is still the secondary purpose of this kind of camera. I don't think Canon is focusing solely on improving the video capabilities of the 7D II...especially because it's an APS-C camera. It is simply incapable of the same kind of thin DOF cinematic look and feel that the 5D II became famous for, due to its cropped sensor. I don't think the 7D II will be a particularly popular video DSLR. It might be somewhat popular, especially if it has some enhanced video features, but it isn't going to be the cinematic DSLR powerhouse that gave so many movies and TV shows reason to use it for professional prime time/big screen productions.

Which is the reason why the GH3 and GH4 were total failures in sales ::)

No one would ever buy a camera like that....oh, wait....they do....how can that be? Very weird, there must be something wrong with those customers.

You're missing my point. I'm not saying 7D IIs won't sell for video. I'm saying that won't be the primary reason they sell...not by a very long shot.

The reasons the GH3 and GH4 are successes in sales have not been determined to be solely their video features. They are also good CAMERAS. The notion that DSLRs sell best because they have video features is an ASSUMPTION. It is not backed up by any data. Sure, more will sell because of video, because people who want a DSLR for video purposes will buy them, but that doesn't change the fact that the majority of sales are due to PHOTOGRAPHERS buying them for PHOTOGRAPHY. That's the case for pretty much every DSLR or mirrorless camera with video features...they are still cameras, designed for still photography, and pretty much any camera model you bring up will sell significantly more units for photography purposes.

As far as the 7D II being a better seller for video than the 5D III, I don't think so. The larger sensor in the 5D III is extremely appealing for that cinematic look and feel. I'm not saying no 7D IIs will sell for video purposes, but I don't think that video features will be the primary reason the 7D II sells. I still think the 7D II will sell primarily because action photographers, particularly bird and wildlife photographers, want a camera with high resolution, lots of reach, and a damn fast frame rate (and one that doesn't cost a mint and a half to buy).
 
NancyP said:
Well, I am going to be optimistic. My 60D will last until the 7D2 shows up. And I want the 7D2 as a birding camera - the reach is important. Also important is AF at f/8. I hand-hold a 400mm f/5.6L for all my birding photos, and if I successfully bulk up my scrawny arms, I have plans for a hand-held 500mm or 600mm f/4.

+1 for the desire for AF at f/8. To be honest, being able to set ISO ranges in various modes would make me happy. In my experience the 400 f/5.6 is great if you get the exposure just right, but at higher ISO this is tricky, and a highly customizable 'auto' ISO would be endlessly useful. If I'm walking through a forest, being able to switch quickly (half a second) from ISO 400-800 @ f/8 for BIF, and then to 800-2000 @ f/5.6 for a bird on a perch would be terrific, especially if I was also able to keep shutter speed above 1/1250.
 
It will sell because it is an all-purpose imaging unit. In the case of a new 7D, the properties of a crop sensor that are attractive to still photographers in certain applications are just as attractive to videographers taking video instead of stills. A video-centric 7D will be more attractive to sport and wildlife videographers than a 5D would, for the exact same reasons as for stills.

In the modern era a camera needs to be able to perform both types of imaging well to really succeed as a general purpose imaging device (which is how the average owner would use it).

The concept of consumer/prosumer cameras being dedicated still or video cameras is an outdated idea that properly belongs in the past.
 
Tugela said:
It will sell because it is an all-purpose imaging unit. In the case of a new 7D, the properties of a crop sensor that are attractive to still photographers in certain applications are just as attractive to videographers taking video instead of stills. A video-centric 7D will be more attractive to sport and wildlife videographers than a 5D would, for the exact same reasons as for stills.

In the modern era a camera needs to be able to perform both types of imaging well to really succeed as a general purpose imaging device (which is how the average owner would use it).

The concept of consumer/prosumer cameras being dedicated still or video cameras is an outdated idea that properly belongs in the past.

I've never made any point about consumer/prosumer cameras being dedicated still or video cameras.

None of this changes the point I was making. I was debating the points made by Pallette about the reasons why Canon might add or enhance the video features of the 7D II. His points were based on the notion that Canon made some kind of mistake with the 5D III, and that they would correct that mistake with the 7D II. It's a false notion. Canon will fix problems in the 5D III with the 5D IV. Those who might have bought the 5D III for video and passed it up won't be buying a 7D II as an alternative...they are most likely going to want a full frame sensor for the cinematic quality it offers when using EF lenses, which means the only camera in which Canon can "fix" any presumed problems with the 5D III's video is the 5D IV.

The notion that video will somehow make the 7D II sell like hotcakes is another mistaken one. Video is certainly an endemic feature of DSLRs and mirrorless cameras now, but it isn't the primary reason DSLRs like the 7D II sell. It isn't even the primary reason cameras like the 5D III and 5D II sell or sold. For every person doing cinematography with a 5D II, there were a dozen doing landscapes, and at least a dozen more doing weddings. That doesn't count all the dozens of other photographers using the 5D II for other STILL photography purposes...all for each and every individual who actually bought the 5D II for the purpose of doing video. Those are EXTRA sales of the camera, to buyers who primarily intended to use its secondary feature set. The number of photographers using still-camera DSLRs for photography completely swamps the number of photographers or cinematographers using them for video.

Any failures with the 5D III, at least any failures that have a significant impact on the bottom line for sales numbers, primarily have to do with the core functionality and core technology: the sensor, the AF unit, ergonomics. Canon MIGHT have "lost" a few customers here and there because the 5D III, which DOES have many improvements for video over the 5D II, might not have the specific video feature they want (i.e. RAW HDMI out). Most of the reasons why video people might have skipped the 5D III have also been fixed by Magic Lantern, so most of those points are moot these days anyways. I don't doubt that Canon will be improving the 7D II's video features. I highly doubt those improvements will have a particularly significant impact on the number of units Canon will sell, given its predecessor's primary use cases.
 
Here is some interesting research on Quad Pixel tech from a couple of guys at Aptina. Read about it and let me know if you think it might open up the discussion a bit more. It speaks to the future demands of HDR video and the computational techniques explored in this work by Gordon Wan, Xiangli Li, Gennadiy Agranov, Marc Levoy and Mark Horowitz.

https://graphics.stanford.edu/papers/gordon-multibucket-jssc12.pdf
 
Palettemediaproduktion said:
Here is some interesting research on Quad Pixel tech from a couple of guys at Aptina. Read about it and let me know if you think it might open up the discussion a bit more. It speaks to the future demands of HDR video and the computational techniques explored in this work by Gordon Wan, Xiangli Li, Gennadiy Agranov, Marc Levoy and Mark Horowitz.

https://graphics.stanford.edu/papers/gordon-multibucket-jssc12.pdf

That isn't quad-pixel technology. There is still a single pixel "per pixel", a single photodiode per pixel. It is multi-"bucket" technology. Just reading the abstract (I haven't had time to read the entire paper yet), this is a means of reading out each photodiode (one photodiode per pixel, so no relation to Canon DPAF) multiple times per exposure. The "buckets" allow independent storage of pixel charge each partial read cycle, which can then later be combined (binned) to produce a signal charge MUCH greater than that of the photodiode itself. In the case of a four-bucket design, the total charge of the pixel, and therefore its SNR and dynamic range, can be up to around four times that of a classic single pixel.

This is effectively a means of achieving hardware HDR, performed within the sensor itself, at the time of exposure and readout. I don't know the specifics of how it actually works yet (have to read the paper), but it sounds intriguing.

I would NOT draw any parallels between this and Canon's DPAF technology though...the two are entirely different, and serve different purposes.

(Frankly, I find the multi-bucket pixel concept far more intriguing than DPAF...if we just apply the concept to the 1D X, assuming ~1.3e- intrinsic sensor noise per pixel and a 90ke- FWC, this would extend the 1D X's intrinsic (pre-read) dynamic range from 16.14 stops (log2(90376/1.3)) to 18.15 stops (log2((90376*4)/1.3)). Factoring in read noise, 38e-, that reduces the 1D X DR to 13.3 stops, however that is still over two stops better than the 11.2 stops it gets currently. If Canon can reduce their read noise to the same range as Exmor, ~3e-, then the 1D X with a quad-bucket design would still have 16.93 stops of DR...that's more than is possible with a 16-bit ADC, and I highly doubt we'll see anything like an 18- or 20-bit ADC in a DSLR any time soon.)
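
For anyone who wants to check the arithmetic, here is the same calculation as a few lines of Python (using the same assumed full-well and noise figures as above, which are my estimates rather than Canon specs):

Code:
import math

# DR in stops = log2(full well / noise floor).
# Full-well and noise figures are the assumptions stated above, not Canon specs.

def dr_stops(full_well_e, noise_e):
    return math.log2(full_well_e / noise_e)

fwc = 90_376  # e-, assumed 1D X full-well capacity

print(dr_stops(fwc, 1.3))       # ~16.1 stops, intrinsic (pre-read), single bucket
print(dr_stops(fwc * 4, 1.3))   # ~18.1 stops with a four-bucket design
print(dr_stops(fwc * 4, 38))    # ~13.2 stops after 38 e- read noise
print(dr_stops(fwc * 4, 3))     # ~16.9 stops with Exmor-class ~3 e- read noise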
 
ajfotofilmagem said:
I hope the 7D Mark II will be a great camera for fast action, lighter and cheaper than the 1D X. Therefore, I would like an AF system as good as the 5D Mark III's. And most important, chroma noise at ISO 3200 as low as the 5D Mark III's at ISO 6400. I do not need more than 16 megapixels, and I think with this pixel density it is feasible to achieve the low noise I hope for.

+1

Same hopes for the 7D2. If that's the case, I'll trade my 7D for a 7D2; otherwise, for another 5D3.
 
sanj said:
I am wondering when (and if ever) the latest crop cameras will be able to compare with the 5D2. Is 6 years enough for technology to reach a point where new crop cameras catch up to full frame?
sanj, the 70D hasn't caught up to the original 5D yet, which was released nearly 8 years before the 70D:
http://www.dxomark.com/Cameras/Compare/Side-by-side/Canon-EOS-70D-versus-Canon-EOS-5D___895_176
Ignore the "Score"; the DR measurement is close enough to be considered within the margin of error.
 
Palettemediaproduktion said:
Here is some interesting research on Quad Pixel tech from a couple of guys at Aptina. Read about it and let me know if you think it might open up the discussion a bit more. It speaks to the future demands of HDR video and the computational techniques explored in this work by Gordon Wan, Xiangli Li, Gennadiy Agranov, Marc Levoy and Mark Horowitz.

https://graphics.stanford.edu/papers/gordon-multibucket-jssc12.pdf

That concept seems really interesting!
I've had thoughts about why no one seems to have adopted something similar to a logarithmic amplifier. That was something I saw in certain radar equipment, where one typically sent out a few kW and expected to get signals back that were only some fW (10^-15 W). However, you couldn't be sure of the returning signal's strength, so the receivers had to cope with signals that were several orders of magnitude greater - without frying the entire array of discrete components/transistors/tubes.

In short, that "problem" was solved with stages of amplifiers that, when saturated, automatically opened up for the next stage to take care of the signal handling without ever hitting any ceilings, or frying any components.

In sensors you would have the problem of miniaturising this concept and making some 20 million photon receivers behave identically, but all that counts in the end is counting photons. Every pixel is there for the sole purpose of counting the number of photons that hit it (preferably coming in from the lens). And you don't want to fill your buckets.
Since most of us take our shots at temperatures above 0 K, we always have to deal with thermal noise. A logarithmic approach to handling our combinations of signal + noise wouldn't be bad.


Sorry for sidestepping the original idea of this thread.
 
DominoDude said:
Palettemediaproduktion said:
Here is some interesting research on Quad Pixel tech from a couple of guys at Aptina. Read about it and let me know if you think it might open up the discussion a bit more. It speaks to the future demands of HDR video and the computational techniques explored in this work by Gordon Wan, Xiangli Li, Gennadiy Agranov, Marc Levoy and Mark Horowitz.

https://graphics.stanford.edu/papers/gordon-multibucket-jssc12.pdf

That concept seems really interesting!
I've had thoughts about why no one seems to have adopted something similar to a logarithmic amplifier. That was something I saw in certain radar equipment, where one typically sent out a few kW and expected to get signals back that were only some fW (10^-15 W). However, you couldn't be sure of the returning signal's strength, so the receivers had to cope with signals that were several orders of magnitude greater - without frying the entire array of discrete components/transistors/tubes.

In short, that "problem" was solved with stages of amplifiers that, when saturated, automatically opened up for the next stage to take care of the signal handling without ever hitting any ceilings, or frying any components.

In sensors you would have the problem of miniaturising this concept and making some 20 million photon receivers behave identically, but all that counts in the end is counting photons. Every pixel is there for the sole purpose of counting the number of photons that hit it (preferably coming in from the lens). And you don't want to fill your buckets.
Since most of us take our shots at temperatures above 0 K, we always have to deal with thermal noise. A logarithmic approach to handling our combinations of signal + noise wouldn't be bad.


Sorry for sidestepping the original idea of this thread.

What you're talking about is a photomultiplier. That is actually a very different concept still, and is neither similar to the multi-bucket pixels nor to DPAF. :P

Photomultipliers do use multi-stage amplifiers to amplify extremely low signals by a significant magnitude, without requiring ultra-specialized amplifiers that can do so without frying themselves. But that's just a more complicated means of amplifying a weak signal. It doesn't actually improve the signal strength itself, so it can neither reduce noise, nor support something like HDR.

The multi-bucket pixel concept in that paper effectively embeds analog memory into the pixel. Global shutter sensors already do this, but they only have a single memory (when the exposure is done, every pixel's charge is immediately pushed to its memory at once, the pixels are reset, then the memory can be read out in the background while the next exposure occurs.) Multi-bucket memory allows charge to be pushed to memory more than once, which expands the dynamic range by N times. At the point of read, the charge stored in each bucket is then binned as the pixels are read out.

That is significantly different than a photomultiplier, as instead of amplifying the signal (which also amplifies the noise, and does not actually improve the quality of the signal itself), it allows longer exposures combined with multiple "memory pushes" to literally enhance the quality of the signal itself WITHOUT amplification. THAT....that is what is so intriguing about the multibucket concept. ;)
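
If it helps to picture the idea, here is a toy simulation of the general concept (my own simplification for illustration, not the actual circuit or timing described in the paper; every number in it is made up):

Code:
import numpy as np

# Toy model: during one long exposure, the photodiode's charge is transferred
# to one of N storage "buckets" several times, so the summed charge at readout
# can exceed a single full well. All values are invented for illustration.

FULL_WELL = 90_000   # e-, assumed photodiode full-well capacity
READ_NOISE = 3.0     # e- RMS, one read at the end of the exposure (assumed)

def expose(rate_e_per_s, exposure_s, n_buckets, rng):
    segment = exposure_s / n_buckets
    total = 0.0
    for _ in range(n_buckets):
        charge = rng.poisson(rate_e_per_s * segment)  # photon shot noise
        total += min(charge, FULL_WELL)               # push segment charge to a bucket
    return total + rng.normal(0, READ_NOISE)          # buckets binned, read out once

rng = np.random.default_rng(0)
rate = 3_000  # e-/s, bright enough to overflow a single well in 60 s
print("one bucket  :", expose(rate, 60, 1, rng))  # clips near 90,000 e-
print("four buckets:", expose(rate, 60, 4, rng))  # keeps ~180,000 e- of signal

The point of the toy example is just the clipping behavior: the single-well pixel saturates while the bucketed pixel keeps accumulating usable signal.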
 
mackguyver said:
sanj said:
I am wondering when (and if ever) the latest crop cameras will be able to compare with the 5D2. Is 6 years enough for technology to reach a point where new crop cameras catch up to full frame?
sanj, the 70D hasn't caught up to the original 5D yet, which was released nearly 8 years before the 70D:
http://www.dxomark.com/Cameras/Compare/Side-by-side/Canon-EOS-70D-versus-Canon-EOS-5D___895_176
Ignore the "Score"; the DR measurement is close enough to be considered within the margin of error.

Oh! Thanks for this. I will not expect much then.
 
I am hoping for incremental improvements in many functions, with an overall significantly better user experience. You will notice that Nikon differentiated its high-MP camera from its jack-of-all-trades flagship pro camera, and that new technology is introduced in lower-end products before the flagship model gets it. Pros want rock-solid reliability and excellent ergonomics, in addition to image quality, burst speed, etc.
 
--8< --Snip!--8<--
jrista said:
What you're talking about is a photomultiplier. That is actually a very different concept still, and is neither similar to the multi-bucket pixels nor to DPAF. :P

Photomultipliers do use multi-stage amplifiers to amplify extremely low signals by a significant magnitude, without requiring ultra-specialized amplifiers that can do so without frying themselves. But that's just a more complicated means of amplifying a weak signal. It doesn't actually improve the signal strength itself, so it can neither reduce noise, nor support something like HDR.

The multi-bucket pixel concept in that paper effectively embeds analog memory into the pixel. Global shutter sensors already do this, but they only have a single memory (when the exposure is done, every pixel's charge is immediately pushed to its memory at once, the pixels are reset, then the memory can be read out in the background while the next exposure occurs.) Multi-bucket memory allows charge to be pushed to memory more than once, which expands the dynamic range by N times. At the point of read, the charge stored in each bucket is then binned as the pixels are read out.

That is significantly different than a photomultiplier, as instead of amplifying the signal (which also amplifies the noise, and does not actually improve the quality of the signal itself), it allows longer exposures combined with multiple "memory pushes" to literally enhance the quality of the signal itself WITHOUT amplification. THAT....that is what is so intriguing about the multibucket concept. ;)

*nods* Yeah, I see what you mean. I think I lost myself slightly and had trouble finding a good way to formulate myself in English (not my primary lingo). I certainly do agree that the multi-bucket looks very interesting and promising, and hopefully it will boil down to some useful technology that can be used by many parties.
 
We will all just have to wait and see!
Canon are not going to produce what I want (a 10/12MP, high-ISO, fast APS-C camera), as too many people out there are convinced that the current 18MP+ APS-C sensors give more "reach". I have tried them - they don't really, well, just a little, but with a whole host of compromises; in the real world, larger sensors win hands down.
Let's hope the high MP fans don't win and that they (Canon) actually produce a useful upgrade over the 7D!
Many will disagree with me - that's fine - but I have tried them all (Canons and the better Nikons) and the larger-sensor, lower-MP cameras produce the goods. High-MP (12MP+) small sensors are just too much of a compromise.
 
scottburgess said:
Hey Jrista, would you consider buying a 7DII for conversion as a full-time astrograph? What feature set would be ideal for that application--fewer or more MP, sensor technology, add-on features... ??

Well, that question is not really as simple as it might sound. ;) Astrophotography is a different beast.

In normal photography, there is pretty much NOTHING wrong with having more resolution...more resolution is pretty much always a good thing. While, in the context of cropping, pixel size can affect noise levels, sensor size and quantum efficiency are generally the primary determining factors of image noise...so the general rule of thumb should pretty much always be: Get as much resolution as you can.

When it comes to other features...like the AF system, metering, frame rate, etc. (all of which I generally consider AT LEAST as important as sensor IQ, if not more important depending on your style and type of photography), you should generally go for the best you can that meets your needs. The 7D II is an action photography camera, and while sensor IQ is important, it's really the frame rate and AF system that are paramount.

When it comes to astrophotography, none of the "add-on features" matter. They are pretty much worthless, so long as you actually have AF. (More on why in a moment.) Resolution in astrophotography is also evaluated in an entirely different way, and for the most part, you want to "match" sensor resolution to lens resolving power in a specific way. The term used to describe this matching of resolutions is "image scale", and I'll go into detail in a second here. Let's start with a couple of exceptions to the image scale guidelines.

First, for those who like to image star waveforms (diffraction patterns), for the purposes of analyzing things like double and multiple-star systems, exoplanet investigation, etc., resolution is absolute king. You want as much resolution as you can get. It is not uncommon to use focal lengths of thousands of millimeters, even ten thousand millimeters or so. The smaller your pixels, the better your sensor will be able to resolve the Airy pattern. In terms of resolution as defined in normal photography, you really aren't gaining "resolution" here. These systems for surveying star patterns are usually fully diffraction limited. We're talking about f-ratios in the range of f/29 to f/40 or beyond. In regular photography, that would cause significant blurring because diffraction is softening the image. In star surveying, however, you're working with individual points of light...there is no blurring, you're just magnifying the actual diffraction effect, and you're analyzing it directly. A LOT can be learned about stars by analyzing heavily magnified diffraction patterns.

Second, planetary imaging tends to be high focal length/high f-ratio. Planets are pretty small in the grand scheme of things, so again it is not uncommon to see thousands of millimeters of focal length and high f-ratios in the f/10-f/20 range. Planetary imaging is quite different from normal astrophotography: it is usually done with video, at high crops and ultra-high frame rates (320x240px @ 200fps is not unheard of), and having lots of resolution helps. Planetary imaging is all about superresolution and "seeing through" atmospheric turbulence. Having a lot of sensor resolution in this circumstance is also helpful. In the end, many thousands of frames, some of which may appear quite blurry due to atmospheric turbulence, are processed, the bad ones are thrown away, and the best ones are kept and stacked with a superresolution algorithm to produce crisp, high resolution images of planets.

In both of the above cases, small pixels are a huge benefit. When it comes to imaging larger objects, DSOs or Deep Sky/Space Objects, resolution is a bit different. This is where Image Scale comes into play. Image scale is an angular measure of arcseconds per pixel (angular, because pretty much everything in astrophotography is done in angular space...pointing, tracking, coordinates, etc.) You determine the arcseconds per pixel (image scale) by using the following formula:

Code:
imageScale = (206.265 * pixelSize) / focalLength

In the case of the current 7D, with a 600mm lens (what I've been using so far), my image scale is 1.478"/px. In the case of a larger, longer telescope, such as the AT8RC astrograph, which has a focal length of 1625mm, the image scale would be 0.546"/px. If I was using that telescope with a 2x or 3x barlow on it, which multiplies the focal length like a teleconverter, image scale would be 0.273"/px and 0.182"/px, respectively. The image scale becomes critically important once you understand how the resolving power of a telescope affects the distribution of light at the sensor.
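
If you want to plug your own gear into that formula, here it is as a tiny Python helper (pixel size in microns, focal length in millimeters; 4.3µm is the current 7D's pixel pitch):

Code:
# Image scale in arcseconds per pixel, per the formula above.
def image_scale(pixel_size_um, focal_length_mm):
    return 206.265 * pixel_size_um / focal_length_mm

print(image_scale(4.3, 600))       # ~1.478 "/px  (7D + 600mm lens)
print(image_scale(4.3, 1625))      # ~0.546 "/px  (AT8RC)
print(image_scale(4.3, 1625 * 2))  # ~0.273 "/px  (AT8RC + 2x barlow)
print(image_scale(4.3, 1625 * 3))  # ~0.182 "/px  (AT8RC + 3x barlow)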

Before we get into that, a quick sidebar on star sizes. The apparent size of a star, from earth-bound telescopes, is ultimately a product of its native size combined with the impact of seeing. Seeing, the term we give to how well we can see the true form of stars given atmospheric turbulence, can blur stars and make them larger than they actually are. On a night of excellent seeing, where atmospheric turbulence is low, the average size of a naked-eye star gets close to its true size, around 1.8". When seeing is worse than excellent, the average star size can increase to 2" or 3", possibly even larger. For the most part, we figure average seeing produces stars around 2.2", or a little over two arcseconds. Ok, now that you understand star size, back to the discussion of image scale.

In astrophotography, we aim to match lens resolution to sensor resolution in such a way that our image scale falls somewhere between 0.75" and 1" per pixel, or 0.75"/px to 1.0"/px. For stars that are 2"-3" in size, this results in each star covering a little more than a 2x2 grid of pixels. This avoids a problem where, when image scale is too large, stars end up looking like square pixels instead of round dots. It also avoids another problem, the light spread problem, which I'll go into in a bit. In my case, my seeing makes my stars about 2.8-3.2" in size on most nights (I don't have very good seeing most of the time here in Colorado). On the best nights (like two nights ago) I've had my seeing as low as 2.2". For the average case, my image scale of 1.478" is pretty decent, although it does tend to make the smaller/dimmer stars a little square. An image scale of 1-1.2" would be more ideal.

Beyond simply avoiding square stars, keeping your image scale at a reasonable level can be important to achieving the right exposure "depth". This isn't a term we use in normal photography, as we tend to work with relatively gargantuan quantities of light. It only takes a fraction of a second to saturate our pixels in normal photography, and we often have significant problems with dynamic range in the sense that our scenes contain considerably more than we can capture in those extremely small timeslices. In astrophotography, we often have the opposite problem...it can be very difficult to saturate our pixels and achieve a reasonable signal to noise ratio. If our image scale is too small, say 0.5", 0.2", or 0.1", then the light from one single star is spread out over a 4x4, 10x10, or 20x20 matrix of pixels. The smaller our image scale, the less saturated each pixel is going to be. This is a problem where light is being spread out over too great an area on the sensor, which greatly impacts our ability to get a saturated exposure with a strong signal, and therefore a high SNR.
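
To put rough numbers on that spread (the star size and flux here are purely illustrative, not measurements):

Code:
# Same star flux, finer image scales: the light lands on more pixels,
# so each individual pixel fills more slowly.
star_diameter = 2.0   # arcseconds (seeing-limited star, assumed)
flux = 40_000         # e- collected from the star per exposure (assumed)

for scale in (1.0, 0.5, 0.2, 0.1):    # arcseconds per pixel
    n = star_diameter / scale         # star diameter in pixels
    per_pixel = flux / n ** 2         # rough per-pixel signal if spread evenly
    print(f'{scale} "/px -> ~{n:.0f}x{n:.0f} px, ~{per_pixel:,.0f} e-/px')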

If you are using a monochrome CCD camera designed for astrophotography, you usually have the option of "binning" pixels during readout. A sensor with 4.5µm pixels can be binned 2x2, 3x3, 4x4, sometimes even nxn. That gives you the option of having 9µm, 13.5µm, or 18µm pixels if you need them. As you increase focal length, binning, usually 2x2, becomes very useful as it helps you keep your image scale within that "ideal" range. Electronically binned pixels are effectively equivalent to larger pixels, which is a bit different from averaging pixels in post with downsampling. With downsampling, you reduce noise and increase SNR, but don't actually improve signal strength, whereas with binning, you DO increase signal strength.
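
Here is a toy numerical comparison of the two (signal and read-noise levels are invented for illustration, and it only models the read-noise side of the advantage: on-chip binning sums the charge before readout, so read noise is applied once per superpixel):

Code:
import numpy as np

# Compare on-chip 2x2 binning against averaging already-read pixels in post.
rng = np.random.default_rng(1)
signal_e = 100        # e- per native pixel (faint target, assumed)
read_noise_e = 10     # e- RMS per readout (assumed)
n = 100_000           # trials

# On-chip binning: four pixels' charge summed, then a single read.
binned = rng.poisson(4 * signal_e, n) + rng.normal(0, read_noise_e, n)

# Software downsample: each pixel read (with its own read noise), then averaged;
# scaled by 4 so both represent the same total collected charge.
pixels = rng.poisson(signal_e, (n, 4)) + rng.normal(0, read_noise_e, (n, 4))
downsampled = pixels.mean(axis=1) * 4

print("binned SNR     :", binned.mean() / binned.std())        # ~17.9
print("downsampled SNR:", downsampled.mean() / downsampled.std())  # ~14.1

The shot-noise contribution is the same either way; the gap comes entirely from paying the read-noise penalty once instead of four times, which is why binning matters most for faint, read-noise-limited targets.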

When using a DSLR, it can be difficult to achieve an ideal image scale, since you cannot bin. That limits you to using a certain range of focal lengths, or else means you have to expose for a much longer period of time to get the same results. Now...enter the 7D II. I do not yet know what it holds (I think Don wrote a humorous post on that very subject last night on one thread or another, basically epitomizing how we really don't know JACK about the 7D II, despite all the "informative" rumors! :P) Assuming the 7D II gets the boost to quantum efficiency it really needs to perform well (I'm really hoping it lands somewhere around 56% Q.E.), then I think, for its pixel size, it could be a very good performer for astrophotography.

It would ultimately depend on the other sensor factors...the most important of which is the IR filter. DSLRs are, in the grand scheme of things, actually really CRAPPY for astrophotography. The IR filters block out most of the red light at the most critical emission band: Hydrogen-alpha, at a 656.28nm wavelength. Most of the emission nebulae in our skies are composed of hydrogen, which, when excited, emits light in a few very narrow bands. Hydrogen has two key emission bands for astrophotography: Hydrogen-alpha (Ha) and Hydrogen-beta (Hb). Ha is a very red band, and Hb is a very blue band, which together result in a pinkish-red color. Most DSLRs pass a mere 12% or less at the Ha band, while a monochrome CCD will usually pass anywhere from 45% to 80% at the Ha band.

You did mention a full-time astro mod of the 7D II. There are a few astro conversion mod options available for DSLRs. You can simply replace the IR/UV filters in the filter stack with Baader or Astrodon filters that are better suited to astrophotography, where they pass 90% or more of the light through the entire visible spectrum, with a "square" falloff into IR. You can also get full spectrum filters that will block UV, but pass the entire visible spectrum and then gradually fall off into deep IR (useful for infrared imaging as well as astro imaging, so long as you use an additional IR block filter when doing visual work). Finally, you can do full mono mods, where the CFA (and the microlenses) are actually scraped off the sensor. With a full mono mod, you can greatly increase the sensitivity of the sensor, but it becomes useless for pretty much any other kind of photography. It should also be warned that converting any DSLR for astro use can greatly diminish its usefulness for regular photography. Even a basic astro IR/UV mod has a considerable impact on the reds in your photography, and you will forever be bound to using custom white balance modes...none of the defaults will ever work again.

So, if the 7D II comes in with a much-needed Q.E. boost, and so long as you are using moderate focal lengths (400-1200mm, I'd say), it would make for a decent astrocam. If you modded it with a Baader or Astrodon IR filter, it would probably be quite excellent, in the grand scheme of DSLRs used for astrophotography. It will never compare to even the cheapest thermally regulated CCD camera, and in the case of some of the lower end ones, you can spend a mere $1500 on a good cooled CCD, whereas the 7D II is likely to hit the streets with a price at least $500 higher, if not more. If you REALLY want to get into astrophotography, I highly recommend looking into some of the lower end cooled CCDs, as even the cheapest one is likely to be better for astro than any DSLR, modded or otherwise.
 