Palettemediaproduktion said:
The bad timing of the release of 5D III was actually caused by the bad timing of the
release of 1D X. It was delayed after Nikon introduced the new D4. Canon managed
to use the extra time to adjust the sensor tech to match the Nikon performance.
Highly unlikely. It takes YEARS to design a sensor. Canon did not even have a year between the initial announcement of the 1D X and its actual release to photographers during the Olympics. The major changes between announcement and release had to do with the AF system, not the sensor.
Canon did not adjust the sensor technology to match Nikon's performance. Canon had designed and finalized the design of the sensor, and was probably well into mass producing them, by the time they announced the product. There is no chance they reengineered it after that point...not in time for release.
That means Canon released a highly competitive sensor out the gate WITHOUT the need to reengineer it to "match" the capabilities of the competition.
Palettemediaproduktion said:
The Nikon D800 forced Canon to accelerate the process of perfecting and releasing
the 5D III before they actually were ready to launch their next sensor.
Again, false. This is a 100% pure fabrication.
The 5D III was in the same boat as the 1D X. It takes a good six years to engineer, debug, and release the kind of technology found in cameras like the 5D III and 1D X. By the time these cameras' releases rolled around, it was WAY past any point when Canon would have had a chance to make any significant changes to their sensor technology.
Palettemediaproduktion said:
The big problem was that we all (including Canon) predicted and expected the 5D III to
be the best ever video filming DSLR camera. With the heritage from 5D II the demand
for better inner quality in the filming department kind of forced the developers to go for
a sensor with less moire. Exactly how this is done is something I haven't read or heard
about anywhere. But I suggest the inside software had to be designed to deal with much
softer images from the sensor and apply a radical up sharpening. This would explain why
the low ISO performance is worse than expected. Readers here will surely share their opinion
on this. Please add comments.
Again, false. The 5D III is a sharper camera than its predecessor. Its AA filter is slightly weaker than the 5D II's. Canon binned the pixels to produce video, which is where some of the "softening" came from, but binning concurrently reduced noise. Tradeoffs.
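For what it's worth, here's a quick numpy sketch of that tradeoff. The signal and read noise numbers are made up purely for illustration; the point is that averaging n samples cuts random noise by sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical numbers: a flat patch of constant signal plus random
# per-pixel read noise. Units are arbitrary.
signal, read_noise = 100.0, 8.0
pixels = signal + rng.normal(0.0, read_noise, size=1_000_000)

# Bin groups of 4 neighboring pixels by averaging their values.
binned = pixels.reshape(-1, 4).mean(axis=1)

print(f"per-pixel noise: {pixels.std():.2f}")  # ~8.0
print(f"binned noise:    {binned.std():.2f}")  # ~4.0, i.e. 8 / sqrt(4)
```

You give up resolution, you gain noise performance. That's the whole tradeoff in two lines of arithmetic.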
Palettemediaproduktion said:
My point is that I feel Canon does not want to make the same mistake again. They will
release the next tech when they are certain the 4K video standard is on par with what
the other companies will be able to deliver in the next years to come. And they will have
to make the sensor output sharp and noise free for stills as well. Expect the 7D II to
be 20 megapixel with 4K video at 60p. That would be a well balanced step forward at
this moment I think.
Speculation. As much as people like to use DSLRs for video, video is still the secondary purpose of this kind of camera. I don't think Canon is focusing solely on improving the video capabilities of the 7D II...especially because it's an APS-C camera. Due to its cropped sensor, it is simply incapable of the same kind of thin-DOF cinematic look and feel that the 5D II became famous for. I don't think the 7D II will be a particularly popular video DSLR. It might be somewhat popular, especially if it has some enhanced video features, but it isn't going to be the cinematic DSLR powerhouse that gave so many movies and TV shows reason to use the 5D II for professional prime time/big screen productions.
Palettemediaproduktion said:
The new sensor has to be able to read out a huge amount of data or pre process
it on chip before entering the processor.
Assuming it hits at around 20-24mp, it actually won't need to read out much more than the 5D III. I've already demonstrated mathematically on multiple occasions that the DIGIC5+ chips in the 1D X are more than capable of handling 10fps @ 24mp 14-bit.
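Here's the back-of-envelope version of that math. It's a sketch only: it ignores demosaic/JPEG overhead, the 18.1mp @ 12fps figure is the published 1D X RAW spec, and the 24mp @ 10fps body is hypothetical.

```python
# Back-of-envelope raw readout bandwidth comparison.
def raw_rate_mb_s(megapixels: float, fps: float, bits: int = 14) -> float:
    """Raw sensor data rate in MB/s, ignoring demosaic/JPEG overhead."""
    return megapixels * 1e6 * fps * bits / 8 / 1e6

one_d_x = raw_rate_mb_s(18.1, 12)  # ~380 MB/s, known to work on dual DIGIC5+
hypo = raw_rate_mb_s(24.0, 10)     # ~420 MB/s

print(f"1D X (18.1mp @ 12fps): {one_d_x:.0f} MB/s")
print(f"24mp @ 10fps:          {hypo:.0f} MB/s ({hypo / one_d_x - 1:+.0%})")
```

That's only about a 10% increase over what the 1D X already sustains today. Hardly a stretch.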
Palettemediaproduktion said:
I predict the suggested quad pixel tech to be used in a way no one has talked about here.
This tech allows not only for fast live AF, but also for reducing the sensor noise by using
the well known multi exposure technique. Instead of taking four separate images and sandwiching together for lower visible noise, Canon will be able to make one exposure with four separate channels of the same pixel read. This makes it possible to get a much better ISO performance. The potential for reducing and minimizing artifacts is huge, I would say.
Again, speculation. This is not a proven fact; it is a regurgitated assumption that people all over the net are spewing. There is no magic about the DPAF technology (which, BTW, is DUAL pixels, not quad pixels...all the patents and other evidence about the 70D clearly indicate the photodiode is split once, into two halves. The next refinement changes the sensitivities of each half. There is no quad pixel AF patent from Canon as of yet.) The photodiodes are split UNDER the color filters.

Again, I've demonstrated mathematically on multiple occasions that dual-ISO reads of split photodiodes produce a net-zero result...you neither gain nor lose anything. Dual-ISO with half-pixels is not the same as dual-ISO with Magic Lantern, which utilizes FULL pixels and takes advantage of Canon's off-sensor, downstream secondary amplifier to do its magic. Dual ISO with half pixels means you're working with half as much light as what ML is working with now, which effectively nullifies any benefit you might have otherwise gained. Assuming Canon DOES eventually come out with QPAF, each sub-photodiode would only receive 1/4 of the light for the whole pixel. Same deal...dual ISO with such a setup results in a net-zero outcome...you cannot use less light to create a better result, no matter what ISO settings you're using.
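If you want to see the net-zero point in numbers, here's a shot-noise-only simulation. It deliberately ignores read noise and the downstream amplifier, and the photon count is made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n_photons = 10_000  # hypothetical photon count for one full pixel
trials = 1_000_000

# Full pixel: one Poisson draw over the whole photodiode area.
full = rng.poisson(n_photons, trials)

# Split photodiode: each half only sees half the light (half the area).
half_a = rng.poisson(n_photons / 2, trials)
half_b = rng.poisson(n_photons / 2, trials)
recombined = half_a + half_b  # binning the halves back together

def snr(x):
    return x.mean() / x.std()

print(f"full pixel SNR:        {snr(full):.1f}")        # ~100 (sqrt(N))
print(f"single half SNR:       {snr(half_a):.1f}")      # ~71  (sqrt(N/2))
print(f"recombined halves SNR: {snr(recombined):.1f}")  # ~100, net zero
```

Whatever ISO you apply to each half happens AFTER those photons have been collected. Amplification cannot put back light the smaller photodiode never received.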
Palettemediaproduktion said:
And not only can you compare differences between four reads of the same pixel.
You can compare the adjacent pixel reads or all pixels on the sensor and identify
noise introduced by the power supply much easier. Four separate reads of the
single pixel allow you to step into the zero time domain where the processor will
have the optimum working space for computing errors in signal transfer.
Again, incorrect. It is not four reads of the same pixel. It is four reads of 1/4 of the pixel! It is four reads that result in 1/4 the light each (or, as the actual facts would have it, since it's DUAL pixel technology, two reads at 1/2 the light each). You cannot read a single half or quarter of a split photodiode and assume it is the same as reading the whole pixel. That's WHY Canon bins the two photodiode halves in DPAF sensors when doing an image read (vs. an AF read)...because otherwise, they are just reading smaller pixels with less light. There is no magic here, no special capabilities. Smaller photodiodes are smaller photodiodes...they have less charge capacity and less total surface area for light to strike.
Four separate reads also mean more time to read out the sensor. It's more information, like going from a 20mp sensor to an 80mp sensor. I don't see how that allows any optimization of any kind...it's exactly the opposite. It's a factor of four increase in "pixels" to read, meaning at least that much more processing power would be required...more, really, if you factor in overhead.
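Rough numbers on that readout cost, for a hypothetical 20mp quad-photodiode sensor at 10fps, 14-bit (real cameras add overhead on top of this):

```python
# Readout cost of per-photodiode reads vs a hardware-binned image read,
# for a hypothetical 20mp sensor with 4-way split photodiodes, 10fps, 14-bit.
def rate_mb_s(reads_per_frame: float, fps: float = 10, bits: int = 14) -> float:
    return reads_per_frame * fps * bits / 8 / 1e6

unbinned = rate_mb_s(20e6 * 4)  # every sub-photodiode read individually
binned = rate_mb_s(20e6)        # quarters summed on-sensor before readout

print(f"per-photodiode readout: {unbinned:.0f} MB/s")  # ~1400 MB/s
print(f"binned readout:         {binned:.0f} MB/s")    # ~350 MB/s
```

Four times the data for the same picture. That's why on-sensor binning for image reads is the sane design.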
Palettemediaproduktion said:
It will be a matter of computing power to take the full advantage of the quad pixel
tech and I guess this is why we are waiting for Canon to present the next generation
of DSLR sensors. If they get it right I think we will see images and video with much
less noise and improved color fidelity.
Assuming Canon ever creates a quad pixel sensor, yes, they will need significantly faster processors. Good thing they only do reads of each separate photodiode for AF purposes, and use hardware binning built into the sensor itself for image reads. That means they are still only reading out 20-24mp worth of "pixels", regardless of how many photodiodes there may be on the sensor.
Palettemediaproduktion said:
Another question is whether Canon would prefer to introduce the next generation of sensor
I suggest on the 7DII or not. I suppose a demand for higher frame rates on this model
makes things more complicated.
The possibilities are just as overwhelming as the challenges. Canon will most likely
make sure they use the new sensor tech to the full extent before releasing it.
This is my guess. What do you think?
I think you've made a lot of wild guesses, assumptions and crazy speculative leaps. You make the assumption that Canon has QPAF technology; they do not. (Based on current patent filings, no one does...some competitors are finally developing their own DPAF-like patents. Canon's own subsequent patents to DPAF, some only a few months old, still indicate DUAL photodiodes, not quad. The changes have to do with sensitivity alone, and those sensitivity changes have to do purely with AF technology; the image readout technology is still exactly the same...binned.)
I'm really not sure why everyone thinks that Canon's DPAF tech is actually QPAF tech, or why everyone thinks that this dual PHOTODIODE/pixel technology is somehow going to mean better dynamic range. I keep debating these mistaken points...they just don't seem to die. Every time you split a photodiode, each resulting smaller photodiode is less sensitive to light...it has a smaller area. Concurrently, the split increases the number of photodiodes that need to be read. There is no way to construe less light and more photodiodes as some kind of magical optimization that suddenly gives Canon a performance edge, a dynamic range edge, or a noise management edge.
There are only two things that affect REAL sensitivity as far as sensor design goes (three if you factor in downstream readout logic): total sensor area and quantum efficiency. If you do throw in downstream read logic, then read noise also plays a role, but in Canon sensors readout logic is primarily off-die, so it is not actually a function of the sensor. Increase sensor area, increase sensitivity. Increase quantum efficiency, increase sensitivity.

You can split photodiodes to your heart's content...so long as they are contained within the same total sensor area, splitting them doesn't do jack to improve anything. A given amount of light is a given amount of light. Nothing done after you've gathered that given amount of light is going to change the original amount. Pixel size is largely irrelevant until you are reach-limited. Only in reach-limited situations does pixel size matter; however, have no illusions...smaller pixels mean more noise and less dynamic range. Always. The benefit of smaller pixels in reach-limited scenarios is resolution, not better overall IQ.
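A trivial sketch of that last point. The flux, pixel size, and QE below are all made-up illustrative values, but the arithmetic IS the argument: total collected charge depends only on total area and QE, no matter how you subdivide that area:

```python
# Total collected charge depends only on total area and QE, not on how
# that area is subdivided. All values below are made-up illustrative numbers.
photon_flux = 2_000  # photons per um^2 reaching the sensor this exposure
area_um2 = 38.0      # roughly a 6.2 um full-frame pixel
qe = 0.5             # quantum efficiency

def electrons(area: float, n_splits: int) -> float:
    # n_splits sub-photodiodes, each covering area / n_splits of the pixel
    return sum(photon_flux * (area / n_splits) * qe for _ in range(n_splits))

for n in (1, 2, 4):
    print(f"{n} photodiode(s): {electrons(area_um2, n):,.0f} e-")  # identical
```

One photodiode, two, or four: the totals come out identical. Splitting moves the light around; it never creates more of it.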