7d2 IQ thoughts.

Why would anyone even remotely think that Canon is going to come out with a low-megapixel APS-C camera? You lose the reach advantage of APS-C, yet unless you drop below roughly 7.8 megapixels, you will still have smaller pixels than a 6D and worse low-light performance.

Not going to happen!
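The 7.8-megapixel figure above can be sanity-checked with a bit of arithmetic. A rough Python sketch, assuming nominal sensor dimensions (36 x 24 mm full frame, 22.3 x 14.9 mm Canon APS-C) and the 6D's ~20.2 MP count:

```python
import math

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Approximate pixel pitch in microns for a given sensor size and MP count."""
    pixels = megapixels * 1e6
    area_um2 = (width_mm * 1000) * (height_mm * 1000)  # sensor area in square microns
    return math.sqrt(area_um2 / pixels)

# Canon 6D: ~20.2 MP full frame (36 x 24 mm)
ff_pitch = pixel_pitch_um(36.0, 24.0, 20.2)

# Canon APS-C: ~22.3 x 14.9 mm. How many MP give the same pitch as the 6D?
apsc_area_um2 = 22.3 * 1000 * 14.9 * 1000
same_pitch_mp = apsc_area_um2 / ff_pitch**2 / 1e6

print(f"6D pixel pitch: {ff_pitch:.2f} um")            # ~6.5 um
print(f"APS-C MP at that pitch: {same_pitch_mp:.1f}")  # ~7.8 MP
```

With these nominal numbers, an APS-C sensor matches the 6D's pixel pitch at roughly 7.8 MP, which is where the figure comes from.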
 
Upvote 0
Don Haines said:
Why would anyone even remotely think that Canon is going to come out with a low-megapixel APS-C camera? You lose the reach advantage of APS-C, yet unless you drop below roughly 7.8 megapixels, you will still have smaller pixels than a 6D and worse low-light performance.
Not going to happen!
I do not expect APS-C performance equal to the 1D X, but ISO 3200 as good as the 5D Mark III at ISO 6400. I also do not believe that Canon will launch the replacement for the 7D with just 12 megapixels, although that would be suitable for photojournalism. My point is that APS-C is advantageous when you need a WIDER depth of field.
 
Upvote 0
ajfotofilmagem said:
In my specific case, full frame is not the best choice because I need to shoot in low light and still have a wide depth of field. To get that depth of field with full frame, I would have to close the aperture by more than one stop, losing much of the high-ISO advantage. Since I use two cameras at the same time, I also need lightweight cameras and lenses, as only APS-C can be.

That's a pretty specific set of needs. I'm sure you understand why Canon might not see a market for that.

I'm guessing you are shooting for publication, which is why you don't need more than 12 mp.

I'm not sure that depth of field works quite like you describe though. My understanding has always been that perceived greater depth of field with APS-C is created by the distance from the subject to the camera.

A 200mm lens on an APS-C camera and a 200mm lens on a full frame camera -- both pictures taken from the same spot and the full frame image cropped to match the APS-C crop -- should have the same depth of field, correct? Although the full frame crop is likely to get you below your 12mp target.
 
Upvote 0
unfocused said:
ajfotofilmagem said:
In my specific case, full frame is not the best choice because I need to shoot in low light and still have a wide depth of field. To get that depth of field with full frame, I would have to close the aperture by more than one stop, losing much of the high-ISO advantage. Since I use two cameras at the same time, I also need lightweight cameras and lenses, as only APS-C can be.

That's a pretty specific set of needs. I'm sure you understand why Canon might not see a market for that.

I'm guessing you are shooting for publication, which is why you don't need more than 12 mp.

I'm not sure that depth of field works quite like you describe though. My understanding has always been that perceived greater depth of field with APS-C is created by the distance from the subject to the camera.

A 200mm lens on an APS-C camera and a 200mm lens on a full frame camera -- both pictures taken from the same spot and the full frame image cropped to match the APS-C crop -- should have the same depth of field, correct? Although the full frame crop is likely to get you below your 12mp target.
You are right in that the depth of field of a 200mm lens is the same regardless of what body it is mounted on... however, if we are talking about field of view, a 125mm lens on a crop camera would have the same field of view as a 200mm lens on a FF camera, and all else being equal, the 125mm lens would have greater depth of field.
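The point about the 125mm lens having greater depth of field at the same field of view can be illustrated numerically. A quick sketch using the standard hyperfocal-distance approximation, with the conventional circle-of-confusion values (0.030 mm for full frame, 0.019 mm for APS-C); the exact numbers are illustrative only:

```python
def dof_mm(focal_mm, f_number, subject_mm, coc_mm):
    """Total depth of field (mm) via the standard hyperfocal approximation."""
    hyperfocal = focal_mm**2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if hyperfocal <= subject_mm:
        return float("inf")  # far limit is at infinity
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return far - near

subject = 10_000  # subject at 10 m, in mm

ff_dof = dof_mm(200, 4.0, subject, 0.030)    # 200 mm f/4 on full frame
crop_dof = dof_mm(125, 4.0, subject, 0.019)  # 125 mm f/4 on APS-C, same field of view

print(f"FF 200mm f/4:    {ff_dof:.0f} mm total DOF")
print(f"APS-C 125mm f/4: {crop_dof:.0f} mm total DOF")
```

Under these assumptions, the 125 mm lens on APS-C yields roughly 0.96 m of total DOF versus roughly 0.59 m for the 200 mm lens on full frame, at the same f/4 and the same framing.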
 
Upvote 0
unfocused said:
ajfotofilmagem said:
In my specific case, full frame is not the best choice because I need to shoot in low light and still have a wide depth of field. To get that depth of field with full frame, I would have to close the aperture by more than one stop, losing much of the high-ISO advantage. Since I use two cameras at the same time, I also need lightweight cameras and lenses, as only APS-C can be.
I'm not sure that depth of field works quite like you describe though. My understanding has always been that perceived greater depth of field with APS-C is created by the distance from the subject to the camera.
A 200mm lens on an APS-C camera and a 200mm lens on a full frame camera -- both pictures taken from the same spot and the full frame image cropped to match the APS-C crop -- should have the same depth of field, correct? Although the full frame crop is likely to get you below your 12mp target.
I mean the depth of field with APS-C and full frame, both with the same framing. When I shot film, I always ended up stopping the lens down to at least f/5.6; with APS-C I can use f/3.5 to get the same depth of field.
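The f/5.6-to-f/3.5 figure falls straight out of the 1.6x crop factor: for the same framing, dividing the full-frame f-number by the crop factor gives the APS-C aperture with approximately the same depth of field. A trivial sketch:

```python
CROP_FACTOR = 1.6  # Canon APS-C

def equivalent_aperture_on_crop(ff_f_number, crop=CROP_FACTOR):
    """APS-C f-number giving roughly the same DOF as ff_f_number on full frame."""
    return ff_f_number / crop

print(equivalent_aperture_on_crop(5.6))  # -> 3.5
```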
 
Upvote 0
sagittariansrock said:
So yeah, when everyone has a 4K monitor on their desks, can you imagine the level of pixel peeping that will go on?

Actually, since the pixels in 4k screens are about 1/4 the size of pixels in 1080p screens, and are that much harder to see, pixel peeping will actually be much more difficult to do. Not only that, the increase in density should improve sharpness on-screen, so pixel peepers should be seeing better results...and might finally stop bitching. :P
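The "1/4 the size" claim is easy to verify: at the same panel size, a 4K pixel has half the pitch and therefore a quarter of the area of a 1080p pixel. A small sketch, assuming 24-inch 16:9 panels:

```python
import math

def pixel_pitch_mm(diagonal_in, h_pixels):
    """Pixel pitch (mm) of a 16:9 panel from its diagonal and horizontal resolution."""
    diag_mm = diagonal_in * 25.4
    width_mm = diag_mm * 16 / math.hypot(16, 9)  # horizontal side of a 16:9 panel
    return width_mm / h_pixels

p1080 = pixel_pitch_mm(24, 1920)  # 24" 1080p panel
p4k = pixel_pitch_mm(24, 3840)    # 24" 4K panel

print(f"1080p pitch: {p1080:.3f} mm, 4K pitch: {p4k:.3f} mm")
print(f"4K pixel area relative to 1080p: {(p4k / p1080) ** 2:.2f}")  # -> 0.25
```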
 
Upvote 0
I wouldn't get my hopes up... It will probably be the same sensor as the 70D. But if they did something dramatic, I think it might be possible to do a 20mp APS-H sensor like the 1D line had a generation or two back. That way you could improve low-light performance, though not as much as a 6D. It would be better than the crop bodies in most regards, and better than the full frame in fps, but not as good in low-light performance.
 
Upvote 0
sagittariansrock said:
wsmith96 said:
sanj said:
Most of you have been using Canon for long and have been following its progress.

I realize the 7D2 will be fast, responsive, etc., but do you think the IQ will be noticeably better than Canon's latest 70D, say at ISO 1200?

I am wondering when (and if ever) the latest crop cameras will be able to compare with the 5D2. Is 6 years enough for technology to reach a point where new crop cameras catch up to full frame?

I would be very happy if the new 7D2 quality would be close to 5d2. Wondering if that is too much to hope for considering the frame size difference?

As I peer into my crystal ball I'm seeing that the images will be technically better, but the increase in image quality will be unnoticed by most.

Don't you worry, the marketing department will make sure we not only notice the improvement, but are absolutely hooked to it.
Think television: once I thought standard definition was fine, and Trinitron was as good as it gets. Now even high-def is passé and 4K is the next great thing.
So yeah, when everyone has a 4K monitor on their desks, can you imagine the level of pixel peeping that will go on?

I'm sure they will, and am not worried about it :)
 
Upvote 0
For me, I would want IQ at least roughly as good as 5Dmk3 _cropped_ to same size (using same area of the sensor). Especially with high(ish) ISO values. That is the hard part: if they can do that, everything else I want is almost a given. If not, nothing else matters much.

Otherwise, I'd want more speed, especially bigger buffer (that's the one thing where even old 7D beats 5Dmk3), better AF (at least as good as 5Dmk3) - and that's about it. WiFi, GPS, video features I don't care much about. One contraindicator: if it has fixed vertical handle like 1-series, then I won't buy it unless it is otherwise really miraculous or ridiculously cheap.
 
Upvote 0
tapanit said:
For me, I would want IQ at least roughly as good as 5Dmk3 _cropped_ to same size (using same area of the sensor). Especially with high(ish) ISO values.

That's the catch: with current Canon sensor tech, the advantage of the FF sensors is "larger pixels", which would mean a 12mp (or less?) 7D2 - no way Canon will put something like that on the market in these times of automated spec comparisons and spec religion.

For the kind of money the 7D2 will most likely arrive at, they have to target a large user crowd; outside of the 1D line, Canon is not in the business of creating niche products. This means video features, and then some more video-still combination features. The phase AF might come last, because people who want 1DX-like AF should at least buy the 5D3 and some longer lenses :-o
 
Upvote 0
I would say the reason we are kept waiting for the next generation of sensor tech from Canon is, in one word: timing. Bad timing would be releasing a camera with performance that is surpassed by the rival, in this case most likely Nikon.

The bad timing of the release of 5D III was actually caused by the bad timing of the
release of 1D X. It was delayed after Nikon introduced the new D4. Canon managed
to use the extra time to adjust the sensor tech to match the Nikon performance.

The Nikon D800 forced Canon to accelerate the process of perfecting and releasing
the 5D III before they actually were ready to launch their next sensor.

The big problem was that we all (including Canon) predicted and expected the 5D III to be the best ever video-filming DSLR camera. With the heritage from the 5D II, the demand for better image quality in the filming department kind of forced the developers to go for a sensor with less moiré. Exactly how this is done is something I haven't read or heard about anywhere. But I suggest the internal software had to be designed to deal with much softer images from the sensor and apply radical sharpening. This would explain why the low-ISO performance is worse than expected. Readers here will surely share their opinion on this. Please add comments.

My point is that I feel Canon does not want to make the same mistake again. They will release the next tech when they are certain their 4K video standard is on par with what the other companies will be able to deliver in the next years to come. And they will have to make the sensor output sharp and noise-free for stills as well. Expect the 7D II to be 20 megapixels with 4K video at 60p. That would be a well-balanced step forward at this moment, I think.

The new sensor has to be able to read out a huge amount of data or pre process
it on chip before entering the processor.

I predict the suggested quad-pixel tech will be used in a way no one has talked about here. This tech allows not only fast live AF, but also reduced sensor noise via the well-known multi-exposure technique. Instead of taking four separate images and sandwiching them together for lower visible noise, Canon will be able to make one exposure with four separate channels of the same pixel read. This makes it possible to get much better ISO performance. The potential for reducing and minimizing artifacts is huge, I would say.

The advantage is that as long as you have the computing power needed you will have
data that you can analyze with a broad range of noise reducing algorithms.

And not only can you compare differences between four reads of the same pixel.
You can compare the adjacent pixel reads or all pixels on the sensor and identify
noise introduced by the power supply much easier. Four separate reads of the
single pixel allow you to step into the zero time domain where the processor will
have the optimum working space for computing errors in signal transfer.

It will be a matter of computing power to take full advantage of the quad-pixel tech, and I guess this is why we are waiting for Canon to present the next generation of DSLR sensors. If they get it right, I think we will see images and video with much less noise and improved color fidelity.

Another question is whether Canon would prefer to introduce the next-generation sensor I suggest on the 7D II or not. I suppose the demand for higher frame rates on this model makes things more complicated.

The possibilities are just as overwhelming as the challenges. Canon will most likely
make sure they use the new sensor tech to the full extent before releasing it.

This is my guess. What do you think?
 
Upvote 0
Palettemediaproduktion said:
The bad timing of the release of 5D III was actually caused by the bad timing of the
release of 1D X. It was delayed after Nikon introduced the new D4. Canon managed
to use the extra time to adjust the sensor tech to match the Nikon performance.

Highly unlikely. It takes YEARS to design a sensor. Canon did not even have a year between the initial announcement of the 1D X and its actual release to photographers during the Olympics. The major changes between announcement and release had to do with the AF system, not the sensor.

Canon did not adjust the sensor technology to match Nikon's performance. Canon had designed and finalized the design of the sensor, and was probably well into mass producing them, by the time they announced the product. There is no chance they reengineered it after that point...not in time for release.

That means Canon released a highly competitive sensor out the gate WITHOUT the need to reengineer it to "match" the capabilities of the competition.

Palettemediaproduktion said:
The Nikon D800 forced Canon to accelerate the process of perfecting and releasing
the 5D III before they actually were ready to launch their next sensor.

Again, false. This is a 100% pure fabrication.

The 5D III was in the same boat as the 1D X. It takes a good six years to engineer, debug, and release the kind of technology found in cameras like the 5D III and 1D X. By the time these cameras' releases rolled around, it was WAY past any point at which Canon would have had a chance to make significant changes to their sensor technology.

Palettemediaproduktion said:
The big problem was that we all (including Canon) predicted and expected the 5D III to be the best ever video-filming DSLR camera. With the heritage from the 5D II, the demand for better image quality in the filming department kind of forced the developers to go for a sensor with less moiré. Exactly how this is done is something I haven't read or heard about anywhere. But I suggest the internal software had to be designed to deal with much softer images from the sensor and apply radical sharpening. This would explain why the low-ISO performance is worse than expected. Readers here will surely share their opinion on this. Please add comments.

Again, false. The 5D III is a sharper camera than its predecessor. Its AA filter is slightly weaker than the 5D II's. Canon binned the pixels to produce video, which is where some of the "softening" came from, but binning concurrently reduced noise. Tradeoffs.

Palettemediaproduktion said:
My point is that I feel Canon does not want to make the same mistake again. They will release the next tech when they are certain their 4K video standard is on par with what the other companies will be able to deliver in the next years to come. And they will have to make the sensor output sharp and noise-free for stills as well. Expect the 7D II to be 20 megapixels with 4K video at 60p. That would be a well-balanced step forward at this moment, I think.

Speculation. As much as people like to use DSLRs for video, video is still the secondary purpose of this kind of camera. I don't think Canon is focusing solely on improving the video capabilities of the 7D II...especially because it's an APS-C camera. Due to its cropped sensor, it is simply incapable of the same kind of thin-DOF cinematic look and feel that the 5D II became famous for. I don't think the 7D II will be a particularly popular video DSLR. It might be somewhat popular, especially if it has some enhanced video features, but it isn't going to be the cinematic DSLR powerhouse that gave so many movies and TV shows reason to use it for professional prime-time/big-screen productions.

Palettemediaproduktion said:
The new sensor has to be able to read out a huge amount of data or pre process
it on chip before entering the processor.

Assuming it hits at around 20-24mp, it actually won't need to read out much more than the 5D III. I've already demonstrated mathematically on multiple occasions that the DIGIC5+ chips in the 1D X are more than capable of handling 10fps @ 24mp 14-bit.
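The readout claim is simple to check as a back-of-the-envelope calculation (raw sensor data only, ignoring overhead and compression):

```python
# Raw data rate for a 24 MP sensor at 14 bits per sample and 10 fps.
megapixels = 24
bits_per_sample = 14
fps = 10

bits_per_second = megapixels * 1e6 * bits_per_sample * fps
mb_per_second = bits_per_second / 8 / 1e6  # convert bits/s to MB/s

print(f"{bits_per_second / 1e9:.2f} Gbit/s = {mb_per_second:.0f} MB/s")
```

That works out to about 3.36 Gbit/s, or 420 MB/s of raw sensor data, before any processing overhead.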

Palettemediaproduktion said:
I predict the suggested quad-pixel tech will be used in a way no one has talked about here. This tech allows not only fast live AF, but also reduced sensor noise via the well-known multi-exposure technique. Instead of taking four separate images and sandwiching them together for lower visible noise, Canon will be able to make one exposure with four separate channels of the same pixel read. This makes it possible to get much better ISO performance. The potential for reducing and minimizing artifacts is huge, I would say.

Again, speculation. This is not a proven fact. It is a regurgitated assumption that people all over the net are spewing. There is no magic about the DPAF technology (which, BTW, is DUAL pixels, not quad pixels...all the patents and other evidence about the 70D clearly indicate the photodiode is split once, into two halves. The next refinement changes the sensitivities of each half. There is no quad-pixel AF patent from Canon as of yet). The photodiodes are split UNDER the color filters.

Again, I've demonstrated mathematically on multiple occasions that dual-ISO reads of split photodiodes result in a net-zero outcome...you neither really gain nor lose anything. Dual ISO with half-pixels is not the same as dual ISO with Magic Lantern, which utilizes FULL pixels and takes advantage of Canon's off-sensor, downstream secondary amplifier to do its magic. Dual ISO with half-pixels means you're working with half as much light as what ML is working with now, which effectively nullifies any benefit you might have otherwise gained. Assuming Canon DOES eventually come out with QPAF, then each sub-photodiode is only receiving 1/4 of the light for the whole pixel. Same deal...dual ISO with such a setup results in a net-zero outcome...you cannot use less light to create a better result, no matter what ISO settings you're using.
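The net-zero argument can be illustrated with a toy Monte Carlo: one read of a whole pixel versus separate reads of two half-pixels that are then summed. Each half collects half the photons, and each extra read contributes its own read noise, so the summed halves can only come out slightly worse. The signal and noise levels below are arbitrary illustrative values, not measurements of any real sensor:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 200_000
signal_e = 400.0    # mean photoelectrons for the whole pixel
read_noise_e = 4.0  # read noise per readout, in electrons

# One read of the full pixel: photon shot noise plus one dose of read noise
full = rng.poisson(signal_e, n_trials) + rng.normal(0, read_noise_e, n_trials)

# Two half-pixels: each sees half the photons and gets its own read noise,
# then the two reads are summed digitally
half_a = rng.poisson(signal_e / 2, n_trials) + rng.normal(0, read_noise_e, n_trials)
half_b = rng.poisson(signal_e / 2, n_trials) + rng.normal(0, read_noise_e, n_trials)
summed = half_a + half_b

snr_full = full.mean() / full.std()
snr_summed = summed.mean() / summed.std()
print(f"SNR full pixel:    {snr_full:.2f}")
print(f"SNR summed halves: {snr_summed:.2f}")  # slightly worse: doubled read noise
```

The summed halves recover the same total signal but carry two doses of read noise instead of one, so their SNR is never better than the single full-pixel read.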

Palettemediaproduktion said:
And not only can you compare differences between four reads of the same pixel.
You can compare the adjacent pixel reads or all pixels on the sensor and identify
noise introduced by the power supply much easier. Four separate reads of the
single pixel allow you to step into the zero time domain where the processor will
have the optimum working space for computing errors in signal transfer.

Again, incorrect. It is not four reads of the same pixel. It is four reads of 1/4 of the pixel! It is four reads that result in 1/4 the light each (or, as the actual facts would have it, since it's DUAL pixel technology, two reads at 1/2 the light each). You cannot read a single half or quarter of a split photodiode, and assume it is the same as reading the whole pixel. That's WHY Canon bins the two photodiode halves in DPAF sensors when doing an image read (vs. an AF read)...because otherwise, they are just reading smaller pixels with less light. There is no magic here, no special capabilities. Smaller photodiodes are smaller photodiodes...they have less charge capacity, less total surface area for light to strike.

Four separate reads also mean more time to read out the sensor. It's more information, like going from a 20mp sensor to an 80mp sensor. I don't see how that allows any optimization of any kind...it's exactly the opposite. It's a factor of four increase in "pixels" to read, meaning at least that much more processing power would be required...more, really, if you factor in overhead.

Palettemediaproduktion said:
It will be a matter of computing power to take full advantage of the quad-pixel tech, and I guess this is why we are waiting for Canon to present the next generation of DSLR sensors. If they get it right, I think we will see images and video with much less noise and improved color fidelity.

Assuming Canon ever creates a quad pixel sensor, yes, they will need significantly faster processors. Good thing they only do reads of each separate photodiode for AF purposes, and use hardware binning built into the sensor itself for image reads. That means they are still only reading out 20-24mp worth of "pixels", regardless of how many photodiodes there may be on the sensor.

Palettemediaproduktion said:
Another question is whether Canon would prefer to introduce the next-generation sensor I suggest on the 7D II or not. I suppose the demand for higher frame rates on this model makes things more complicated.

The possibilities are just as overwhelming as the challenges. Canon will most likely
make sure they use the new sensor tech to the full extent before releasing it.

This is my guess. What do you think?

I think you've made a lot of wild guesses, assumptions, and crazy speculative leaps. You make the assumption that Canon has QPAF technology; they do not. (Based on current patent filings, no one does...some competitors are finally developing their own DPAF-like patents. Canon's own subsequent patents to DPAF, some only a few months old, still indicate DUAL photodiodes, not quad. The changes have to do with sensitivity alone, and those sensitivity changes relate purely to AF technology; the image readout technology is still exactly the same...binned.)

I'm really not sure why everyone thinks that Canon's DPAF tech is actually QPAF tech, or why everyone thinks that this dual-photodiode/pixel technology is somehow going to mean better dynamic range. I keep debating these mistaken points...they just don't seem to die. Every time you split a photodiode, each resulting smaller photodiode is less sensitive to light...it has a smaller area. Concurrently, splitting increases the number of photodiodes that need to be read. There is no way to construe less light and more photodiodes as some kind of magical optimization that suddenly gives Canon a performance edge, a dynamic range edge, or a noise-management edge.

There are only two things that affect REAL sensitivity as far as sensor design goes (three if you factor in downstream readout logic): total sensor area and quantum efficiency. If you do throw in downstream read logic, then read noise also plays a role, but in Canon sensors the readout logic is primarily off-die, so it is not actually a function of the sensor. Increase sensor area, increase sensitivity. Increase quantum efficiency, increase sensitivity. You can split photodiodes to your heart's content...so long as they are contained within the same total sensor area, splitting them really doesn't do jack to improve anything. A given amount of light is a given amount of light. Nothing done after you've gathered that given amount of light is going to change the original amount. Pixel size is largely irrelevant until you are reach-limited. Only in reach-limited situations does pixel size matter; however, have no illusions...smaller pixels mean more noise and less dynamic range. Always. The benefit of smaller pixels in reach-limited scenarios is resolution, not better overall IQ.
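The "total sensor area x quantum efficiency" argument can be sketched numerically: hold the sensor area and exposure fixed, vary the pixel count, and the total collected light stays constant while per-pixel SNR falls. The exposure and QE numbers below are arbitrary illustrative values:

```python
import math

sensor_area_mm2 = 864.0     # full frame, 36 x 24 mm
photons_per_mm2 = 1.6e9     # arbitrary exposure level
quantum_efficiency = 0.5    # fraction of photons converted to electrons

# Total collected light depends only on sensor area, exposure, and QE --
# not on how the area is divided into pixels.
total_electrons = sensor_area_mm2 * photons_per_mm2 * quantum_efficiency

per_pixel_snr = {}
for megapixels in (12, 24, 48):
    electrons_per_pixel = total_electrons / (megapixels * 1e6)
    # Shot-noise-limited SNR: signal / sqrt(signal) = sqrt(signal)
    per_pixel_snr[megapixels] = math.sqrt(electrons_per_pixel)
    print(f"{megapixels} MP: {electrons_per_pixel:,.0f} e-/pixel, "
          f"per-pixel SNR {per_pixel_snr[megapixels]:.1f}")

print(f"total electrons (constant): {total_electrons:.3e}")
```

Per-pixel SNR drops as pixels shrink, but the total light for the whole image is identical in every case, which is the sense in which total area and QE are what actually matter.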
 
Upvote 0
jrista said:
I think you've made a lot of wild guesses, assumptions, and crazy speculative leaps. You make the assumption that Canon has QPAF technology; they do not. (Based on current patent filings, no one does...some competitors are finally developing their own DPAF-like patents. Canon's own subsequent patents to DPAF, some only a few months old, still indicate DUAL photodiodes, not quad. The changes have to do with sensitivity alone, and those sensitivity changes relate purely to AF technology; the image readout technology is still exactly the same...binned.)

Although I think QPAF is possible, I don't think we will see it anytime soon. Since the pixels are binned, you could achieve the same results by having alternating pixels split vertically and horizontally, and it would make for far simpler circuitry that could detect both vertical and horizontal phase shift.
 
Upvote 0
jrista said:
Every time you split a photodiode, each resulting smaller photodiode is less sensitive to light...it has a smaller area.

When you split a photodiode in two, you get LESS than half the light in each half, because there is an amount of wasted real estate around the edges of each cell. To illustrate with a simple example, let's say the manufacturing process has a resolution of 1 unit and a pixel is 10x10 units square. With a waste area of 1 unit around the outside of the photodiode, you end up with an 8x8 photodiode and 64% of the surface area used to gather light. By splitting the photodiode, you end up with two 3x8 photodiodes, or 48% of the surface area used to collect light.

Yes, you can use microlenses to counter this, but perfection (which can never be achieved) would get you back to even with the single photodiode.
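The 64%/48% arithmetic above can be reproduced in a few lines, assuming each sub-photodiode loses a 1-unit border on every side, including the cut edge:

```python
def fill_factor(cell_w, cell_h, border, splits=1):
    """Fraction of a cell's area gathering light when divided into `splits` columns."""
    sub_w = cell_w / splits
    active_w = sub_w - 2 * border   # border lost on both sides of each sub-diode
    active_h = cell_h - 2 * border  # border lost top and bottom
    return splits * active_w * active_h / (cell_w * cell_h)

print(f"whole photodiode: {fill_factor(10, 10, 1):.0%}")            # 8x8 -> 64%
print(f"split photodiode: {fill_factor(10, 10, 1, splits=2):.0%}")  # two 3x8 -> 48%
```

The same function with a larger cell and proportionally smaller border (e.g. 100x100 units, 5-unit border) shows the loss shrinking, which is the point jrista makes in the reply below about finer manufacturing resolution.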
 
Upvote 0
Well, I am going to be optimistic. My 60D will last until the 7D2 shows up. And I want the 7D2 for a birding camera - the reach is important. Also important is AF at f/8. I hand-hold a 400mm f/5.6L for all my birding photos, and if I successfully bulk up my scrawny arms, I have plans for a hand-held 500mm or 600mm f/4.
 
Upvote 0
Don Haines said:
jrista said:
Every time you split a photodiode, each resulting smaller photodiode is less sensitive to light...it has a smaller area.

When you split a photodiode in two, you get LESS than half the light in each half, because there is an amount of wasted real estate around the edges of each cell. To illustrate with a simple example, let's say the manufacturing process has a resolution of 1 unit and a pixel is 10x10 units square. With a waste area of 1 unit around the outside of the photodiode, you end up with an 8x8 photodiode and 64% of the surface area used to gather light. By splitting the photodiode, you end up with two 3x8 photodiodes, or 48% of the surface area used to collect light.

Yes, you can use microlenses to counter this, but perfection (which can never be achieved) would get you back to even with the single photodiode.

Aye, this is very true. There are spatial losses. Based on Canon's patents, I don't think the waste is quite as significant as your basic example with the units. It's probably more on the order of each pixel being 100x100 units square, losing maybe 5 units around due to wiring and other amp/readout transistors. Then there may be a 1-unit gap between the two halves of the split photodiode. So there are losses, but maybe not quite as extreme as your 10x10 example.

Regardless, there is still no way to construe DPAF or some hypothetical future QPAF as a magic bullet for increasing either the readout performance or the dynamic range of Canon sensors. ;P That's a myth that just won't die, it seems. It's like the horse that was beaten, and is now undead. It just keeps coming back for more brains... O_o
 
Upvote 0