

Messages - jrista

Pages: 1 ... 83 84 [85] 86 87 ... 322
1261
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 20, 2014, 08:37:50 PM »
As for the double layer of microlenses...sure, you could read a full RGBG 2x2 pixel "quad" and have "full color resolution". Problem is, that LITERALLY halves your luminance spatial resolution...

Thus you start with an 80MP sensor to get a nice 20MP image.

No, that is fundamentally incorrect. You start with a 20mp sensor, which has 40 million PHOTODIODES. The two are not the same. Pixels have photodiodes, but photodiodes are not pixels. Pixels are far more complex than photodiodes. DPAF simply splits the single photodiode of each pixel in two, and adds readout wiring for both halves. That's it. It is not the same as increasing the megapixel count of the sensor.

And, once again...I have to point out. There is no such thing as QPAF. The notion that Canon has QPAF is the result of someone seeing something they did not understand. Canon does not have QPAF. Their additional post-DPAF patents do not indicate they have QPAF technology yet...however there have been improvements to DPAF.

BTW, what you're describing is called super-pixel debayering. That, too, is a common option in astrophotography image stacking...instead of basic or AHD debayering, you usually have the option to either super-pixel debayer, or "drizzle" (which, if you have enough subs...such as a couple hundred...is a means of achieving superresolution, and can increase your output image resolution by two to three fold). You don't even need another microlens layer to do super-pixel debayering...you could use a tool like Iris, or maybe even Darktable/RawTherapee, to do it on any image you want.
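
The operation itself is trivial to sketch in code. Here's a rough numpy version (assuming an RGGB mosaic layout, which varies by camera, and made-up array sizes purely for illustration):

Code: [Select]
import numpy as np

def superpixel_debayer(raw):
    """Collapse each 2x2 RGGB quad into one RGB pixel.

    raw: 2D array of Bayer mosaic data (RGGB layout assumed).
    Returns an RGB image at half the linear resolution.
    """
    r  = raw[0::2, 0::2].astype(float)   # top-left of each quad
    g1 = raw[0::2, 1::2].astype(float)   # top-right
    g2 = raw[1::2, 0::2].astype(float)   # bottom-left
    b  = raw[1::2, 1::2].astype(float)   # bottom-right
    return np.dstack([r, (g1 + g2) / 2.0, b])

# A 4000x6000 mosaic becomes a 2000x3000 RGB image: the same 4:1
# resolution trade being discussed here.
raw = np.random.rand(4000, 6000)
print(superpixel_debayer(raw).shape)   # (2000, 3000, 3)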

Finally, even if you do super-pixel debayering, you're never going to have "hard edges". Statistically speaking, the chance of a white/black line pattern you wish to photograph lining up perfectly with your pixels, regardless of how large or small they are, is so excessively remote as to be practically impossible. Not in any real-world situation. You might be able to build some kind of contraption and AI software to eventually achieve it, but that is well beyond the realm of practicality. If you remove the AA filter and use super-pixel debayering, you might have larger pixels with full color fidelity...but you're going to have a massive amount of aliasing. Those white and black lines would have some nasty stair-stepped edges; they would just look atrocious.

Wow, it looks like superpixel debayering (http://pixinsight.com/doc/tools/Debayer/Debayer.html) is exactly what I'm after. Make a 128MP sensor and use superpixel debayering and you'll have a nice compact, super accurate 32MP image.
Again, really, I'm fine with just shooting on a 128MP sensor and dealing with 100MB+ RAW files, the trick is to get a similar result in a format that's going to be acceptable to the majority of photographers who refuse to deal with large file sizes.

As long as your final image is around 32MP I don't think people are going to notice the stair stepping, unless you're standing right next to something like a 40" high quality print.

Well, someday we may have 128mp sensors...but that is REALLY a LONG way off. DPAF technology, or any derivation thereof, isn't going to make that happen any sooner.

1262
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 20, 2014, 06:18:44 AM »
Right.
What would be really cool to see is some sort of hardware level binning process that maintains the integrity of the RAW file.

Half the reason I'm so anxious for super high resolution cameras is that I haven't been terribly impressed with the image quality off my 5D2. That nasty AA filter (which I'm pretty sure is especially bad on the 5D2) effectively cuts resolution in half. When I first saw my pictures on a decent 4MP monitor I was amazed at how little detail loss there was vs. looking at the image zoomed to 100%. My bet is that a good 4K (8MP) monitor is going to display your images with just as much detail as a high quality print... Because the detail actually isn't there in the first place.

One option is just quadrupling resolution and getting rid of the AA filter (which I'm actually fine with), but if they could bin the full per-pixel RGB signal on the sensor it should effectively deal with moire, and we get to keep our current file size, and it should produce an actual 20MP image instead of the blurred out fake we currently end up with.

The last thing I really want to see is the integration of clear microlenses. Even the heavily faded green pixels that we have right now still block a lot of light. Given how advanced interpolation is I doubt that eliminating the colour value for one of the pixels would have a significant impact on image quality.

Sorry, but that (bolded) is such a ludicrous, laughable comment, I'm just flabbergasted. An AA filter DOES NOT cut resolution "in half". That is blowing things SO FAR out of proportion it may be one of the most ludicrous things I've read on these forums. OLPFs, optical low pass filters, are designed to affect high frequencies only, and only around the Nyquist limit at that. You lose a TINY amount of resolution...but it doesn't matter, because the "resolution" you're losing just contains nonsense anyway. OLPFs blur very high frequency data that nearly or exactly matches the spatial frequency of the sensor's pixels just enough that the information doesn't alias. That's it. Aliased information is a REAL loss of information. Technically speaking, OLPFs PRESERVE information...they save information that can be saved, and discard information that cannot be correctly interpreted by the sensor anyway. On top of that, a very light application of unsharp masking can effectively reverse the blurring, and improve the resolution of that high frequency data, without actually bringing back all the nonsense.
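
To show what I mean by a very light unsharp mask, here's a bare-bones sketch (using scipy; the radius and amount values are illustrative guesses, not tuned to any particular OLPF):

Code: [Select]
import numpy as np
from scipy.ndimage import gaussian_filter

def light_unsharp_mask(img, radius=0.8, amount=0.4):
    """Add back a fraction of the high-frequency detail
    (original minus blurred) that the OLPF attenuated."""
    blurred = gaussian_filter(img, sigma=radius)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# Keep radius and amount small; heavy settings just re-amplify
# the noise and aliasing the OLPF was there to suppress.
sharpened = light_unsharp_mask(np.random.rand(512, 512))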

Quadrupling resolution and removing the AA filter is only an option if your lenses cannot resolve that much detail. With the resolving power of Canon's current lens lineup at faster apertures, I'm not so sure that cutting pixels into quarters is actually enough to avoid any kind of aliasing. At narrower apertures, like f/8, diffraction already blurs information enough that it can't alias, but that's a really narrow aperture for a lot of work, not everyone uses it. There are very few applications where removal of an AA filter will not cause aliasing of some kind, and pretty much anything artificial is going to have repeating patterns that, depending on distance to camera, can create interference patterns (moire).
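
For a rough sense of where diffraction starts doing the OLPF's job for you, compare the Airy disk diameter (2.44 x wavelength x f-number) against the pixel pitch. The numbers below are assumptions for illustration only: green light, and a pitch in the ballpark of a 20mp APS-C sensor:

Code: [Select]
# Back-of-envelope Airy disk diameter vs. pixel pitch.
wavelength_um = 0.55     # green light, in microns (assumed)
pixel_pitch_um = 4.1     # roughly a 20mp APS-C sensor (assumed)

for n in (2.8, 4.0, 5.6, 8.0, 11.0):
    airy_um = 2.44 * wavelength_um * n
    print(f"f/{n}: Airy disk {airy_um:.1f} um, "
          f"{airy_um / pixel_pitch_um:.1f}x the pixel pitch")

# At f/8 the spot is ~10.7 um, well over twice the pitch, which
# is why diffraction already acts like a low-pass filter there.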

This whole "Remove the AA filter" craze is just that...a craze. It's a "thing" Nikon started doing to be different, to get some "wows", and maybe bring in some more customers. Ironically, given that removal of an AA filter is really NOT a good thing...it's worked. Nikon's marketing tactics have sucked in a whole lot of gullibles who don't really know what an AA filter does or how it works, or how to work WITH it, and now we have a whole army of "photographers" who want AA filters removed from all cameras. Personally, I REALLY, TRULY, HONESTLY DO NOT want Canon to remove the AA filter. It is NECESSARY, it PRESERVES preservable data and eliminates useless data, and I LIKE THAT.

And anything that is lost? It's MINIMAL. In the grand scheme of how much resolution you have...you maybe lose a percent or two of really high frequency information...but you really don't have that information anyway because it is similar in frequency to noise...so again, moot.

Given that the filter makes it physically impossible to record a repeating pattern of stripes at the same frequency as the pixel grid, so that you can never get a perfect transition from black pixels to white, I'd say that is cutting resolution in half. That is, compared to some magical thing that accurately reads the full RGB spectrum on each pixel.

You are right about the necessity of the AA filter though.
I was thinking that if the interpolation algorithm only sampled each pixel within a specific cluster of four pixels, and not every pixel around it, it would solve the moire problem. Really that would just give you different colour banding instead.
Now, if we added a second layer of microlenses on top of the first to direct light only at individual groups of pixels, that would guarantee the full RGB read on each cluster, and allow hard transitions...

On second thought I guess that sounds a little excessive just to gain the ability to have large pixels with a hard transition instead of twice as many pixels with a row of grey pixels that's half as big. You can bin the smaller pixels with a normal AA filter just the same, we just need a way of doing that without destroying the flexibility of RAW (otherwise I assume people would have been using compressed formats a long time ago).

I think you're conflating the CFA with the AA filter. The CFA, color filter array, is what produces the RGBG pixel pattern. That is ENTIRELY different from the AA filter, which does optical blurring only at high spatial frequencies near the spatial frequency of the sensor pixels.

The CFA doesn't cut resolution in half either. It has a minor impact on luminance resolution; it's mostly color resolution that is affected by the CFA. But since we pick up detail primarily from luminance, a Bayer sensor doesn't lose anywhere remotely close to as much resolution as Sigma would have you believe with their Foveon marketing, for example. The luminance resolution, the detail resolution, of a Bayer still trounces anything else. It's your color fidelity and color resolution that suffer. We're not as sensitive to color spatial resolution as we are to luminance, though, especially when the luminance is combined. (It's actually a pretty standard practice in astrophotography to generate an artificial luminance channel, blur the RGB a bit (which practically eliminates noise and actually improves color fidelity a bit by reducing color noise), process the luminance channel for detail, then combine the L with the blurred RGB. The end result is a highly detailed image that has great color fidelity.)
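
In code, that L+RGB trick is roughly the following (a loose sketch using Rec.709 luma weights; real tools like PixInsight do the recombination in a proper color space, this just shows the shape of the workflow):

Code: [Select]
import numpy as np
from scipy.ndimage import gaussian_filter

def luma(rgb):
    # Rec.709 luma weights; astro tools often weight differently.
    return (0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1]
            + 0.0722 * rgb[..., 2])

def synthetic_lrgb(rgb, chroma_blur=2.0):
    """Keep a sharp synthetic L, blur the RGB to crush chroma
    noise, then rescale the soft color to follow the sharp L."""
    sharp_l = luma(rgb)                                # detail layer
    soft = gaussian_filter(rgb, sigma=(chroma_blur, chroma_blur, 0))
    return soft * (sharp_l / (luma(soft) + 1e-6))[..., None]

out = synthetic_lrgb(np.random.rand(256, 256, 3))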

As for the double layer of microlenses...sure, you could read a full RGBG 2x2 pixel "quad" and have "full color resolution". Problem is, that LITERALLY halves your luminance spatial resolution...so you actually don't gain squat from a resolution standpoint by doing that. You would lose significantly more resolution than either the CFA or the AA filter cost you...both of which are trivial in comparison to doing what you're asking for. BTW, what you're describing is called super-pixel debayering. That, too, is a common option in astrophotography image stacking...instead of basic or AHD debayering, you usually have the option to either super-pixel debayer, or "drizzle" (which, if you have enough subs...such as a couple hundred...is a means of achieving superresolution, and can increase your output image resolution by two to three fold). You don't even need another microlens layer to do super-pixel debayering...you could use a tool like Iris, or maybe even Darktable/RawTherapee, to do it on any image you want.

Finally, even if you do super-pixel debayering, you're never going to have "hard edges". Statistically speaking, the chance of a white/black line pattern you wish to photograph lining up perfectly with your pixels, regardless of how large or small they are, is so excessively remote as to be practically impossible. Not in any real-world situation. You might be able to build some kind of contraption and AI software to eventually achieve it, but that is well beyond the realm of practicality. If you remove the AA filter and use super-pixel debayering, you might have larger pixels with full color fidelity...but you're going to have a massive amount of aliasing. Those white and black lines would have some nasty stair-stepped edges; they would just look atrocious.

1263
Landscape / Re: jrista et al, Why Astrophotography?
« on: June 20, 2014, 06:07:39 AM »
Because it's really hard to do well, I like a challenge.

Aye! Astrophotography is the most challenging photography I do. It takes so much time, with careful planning, careful management of gear and tracking, and hours of processing, to create one image. In comparison, my bird and wildlife photography is a cakewalk.

1264
Landscape / Re: Deep Sky Astrophotography
« on: June 20, 2014, 06:05:51 AM »
Bradbury and emag, great images! I love the Veil, wonderful detail there.

1265
Landscape / Re: Deep Sky Astrophotography
« on: June 20, 2014, 06:04:08 AM »
More Cygnus. I really love this region of sky, it's amazing. Tonight I've been getting image time on IC1318 and IC1318B, which are large nebulous regions, and NGC6910, which is a nice little open cluster nearby. The full frame of the 5D III is JUST AMAZING. It's more than twice as big as the 7D frame, and the images, once processed, are pretty stunning.

This is my first pass at processing a single-frame image of North America and Pelican nebulas in Cygnus, near the top star. Not entirely satisfied with it...I'd like to stretch it more, bring out some more detail, but I need to get a better handle on noise and color correction (a lot of the color correction routines end up making things noisier as they end up nuking most of the green color channel.)



Usually, getting this entire region requires a 4-panel mosaic with the smallish CCD sensors you can usually find for a reasonable price. Only those with big money can get comparable full frame CCD cameras...which usually cost about $10,000 or more. I've got a cold box in the works for the 5D III, which should help get my dark current levels under control, and help me get better, deeper, less noisy subs (although still not as good as a cooled CCD...my cold box will probably only get me down to around -10°C, whereas a good CCD can get you down to -25°C). With dark current doubling/halving every 5.8°C, a CCD is going to be about 2.6x less noisy (and even better than that, really, as a mono CCD has a higher fill factor, no sparse color spacing, and CCDs designed for astro tend to have lower dark current to start with...)
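
The arithmetic behind that estimate, for anyone who wants to check it (the two temperatures are my rough assumptions; 15°C of difference is about 2.6 halvings of dark current, and the shot-noise ratio lands in the same ballpark):

Code: [Select]
# Dark current halves for every ~5.8 C of cooling, so the ratio
# between two temperatures is 2 ** (delta_T / 5.8).
doubling_c = 5.8
cold_box_c = -10.0   # my estimated cold box temperature (assumed)
ccd_c = -25.0        # a typical cooled astro CCD (assumed)

dark_ratio = 2 ** ((cold_box_c - ccd_c) / doubling_c)
noise_ratio = dark_ratio ** 0.5   # shot noise goes as sqrt(signal)
print(f"dark current ratio: {dark_ratio:.1f}x")      # ~6.0x
print(f"dark shot noise ratio: {noise_ratio:.1f}x")  # ~2.4x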

1266
EOS Bodies / Re: DSLR vs Mirrorless :: Evolution of cameras
« on: June 20, 2014, 03:57:48 AM »
On the first day I took my new 5D III out to photograph birds, another bird photographer came up (discourteously, I might add...I'd just spent 35 minutes getting VERY close (like, less than 15 feet to the closest one...in Colorado, where birds are jittery, that is REALLY close) to some 6 or 7 Night Herons...and he stomped right up and scared the whole lot off, along with a couple egrets and a great blue...and I think a couple ducks). Well, he persisted, stomped right up and sat down next to me. Turned out there was one younger BCNH left, and he hopped a couple trees and ended up right in front of us, about 45 feet out.

This other photographer had two cameras, both mirrorless, one a small Panasonic Lumix and one a Fuji. He chitchatted about how much he loved 'em, how great they were, etc. We both had 600mm focal lengths, me with my EF 600mm f/4, and the other guy with a small zoom lens that had an FF-effective 600mm focal length, or thereabouts.

In the end, the smaller sensors of his mirrorless cameras didn't stand a chance against the 5D III. The slower frame rates, which were between 4 and 5 fps, did not do as well. The AF system did not lock remotely as fast (it's almost instantaneous with the 600/4 II and 5D III), and in both cases with both cameras, he seemed to be using some kind of contrast-detection AF, or perhaps hybrid contrast/phase detection? Either way...it was quite slow, and while decently accurate, not as accurate as the 5D III seemed to be (although I guess that could boil down to technique.)

The only advantage I could really see in the mirrorless was their near-microscopic size...they were both TINY, and in comparison they almost looked like toys to the system I was using. The guy got antsy pretty quick, and was unwilling to stick around...within about 5 minutes he got up and left, but before he did, he mentioned the dozen or so other bird spots he'd tromped through in the park on the way to me. I suspect he tromped through a dozen more, and scared off another couple dozen beautiful subjects, before he finally called it quits. (The guy missed out, too...while on his way out he finally did scare off that one last BCNH, but within about 10 minutes after he left, a snowy and a couple more of the night herons came back, and within another 15 minutes proceeded to fish. Mirrorless vs. DSLR...Mirrorless: 0, DSLR: 1)

The moral of the story? If you're a discourteous, tromping wannabe who has to keep on the move because you're too impatient to set up, sit, and wait for nature's beauty to come to you in comfort...then a tiny, lightweight mirrorless with a tiny, lightweight lens is probably for you. You won't get the same action-grabbing performance, you won't have the same ergonomics (those mirrorless cams and lenses are TI-NY...like, toy tiny, like, barely fits in your hands tiny...like, WTF am I doing with a TOY with that BEAUTIFUL BIG BIRD in front of me?!?!? OMG!), and your IQ won't be as good (or maybe it will if you drop some dough on the FF A7r, but then you'll really be suffering on the AF and ergonomics front).

Anyway...mirrorless has its place. They have their uses and their benefits. But, every time I encounter a die-hard mirrorless user, my experiences tend to be similar to the above. Mirrorless users are ALWAYS on the move. Moving moving moving moving. No patience, no time to wait and let things just happen around you. MOVING. I totally understand why they are fanatics about mirrorless...but wow...slow down and enjoy something, enjoy life happening around you every once in a while! :P

1267
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 20, 2014, 03:33:16 AM »
Further, everyone else who continues to perpetuate the myth that somehow the two halves of the pixels, which sit under not only one microlens but also one color filter block, could somehow magically be used to expand dynamic range "for free" is fooling themselves, and anyone who listens to them. Magic Lantern either uses two FULL sensor reads (vs. half sensor reads), or it does line interpolation at half the resolution, to achieve its dynamic range. There is no free increase to dynamic range, and DPAF isn't going to somehow allow more dynamic range for free. The problem with the idea of using one half of the AF photodiodes for an ISO 100 read, and the other half for an ISO 800 read, is that each is HALF the light! That is not the same as what ML does, which involves the FULL quantity of light, or else half the light AND half the resolution.

Huh?  Please explain how reading both halves at the same gain gets you all the light but reading them at different gains gets you only half the light?  How different do the gains have to be to cut the light in half?  Is 1% enough?

What you said makes no sense to me.

The photodiodes are SPLIT. Each half gets half the light coming through the lens. It doesn't matter what ISO you read them at...if you read "half"...it's half the light. So you're reading half the light at ISO 100, and half the light at ISO 800...well, you really aren't gaining anything. The only way to increase dynamic range by any meaningful amount is to either gather MORE light IN TOTAL...or reduce read noise by a significant degree (i.e. drop it from ~35e- to ~3e-). Assuming it ever even becomes possible to read the photodiodes for image purposes like that, you might gain an extremely marginal improvement...but overall, there really isn't any point. It isn't the same as what ML is doing. They are either reading alternate lines of the sensor at two different ISOs, then combining them at HALF THE RESOLUTION, or they are doing two full reads of the sensor. Either way, for the given output size, they double the quantity of light. Reading two HALVES of a SPLIT photodiode gets you...ONE full quantity of light.
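
If the shot-noise math helps, here's a toy model (read noise ignored, photon count made up):

Code: [Select]
import math

n_photons = 10000.0   # light hitting the whole pixel (made up)

# ISO is gain applied AFTER capture; it can't change photon stats.
snr_full = n_photons / math.sqrt(n_photons)    # one full-pixel read
half = n_photons / 2.0
snr_half = half / math.sqrt(half)              # one half, at ANY ISO

print(f"full pixel SNR: {snr_full:.1f}")   # 100.0
print(f"half diode SNR: {snr_half:.1f}")   # ~70.7

# Summing the two halves just recovers the same 10000 photons:
# ONE full quantity of light, no free dynamic range.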

1268
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 19, 2014, 09:37:38 PM »
Assuming it's a Bayer sensor with multiple pixels under each microlens (like the 70D), there's not a lot they can do to improve sensor performance that's outside the realm of read noise.  There are several ways to attack that one, and some of them involve doing clever things with the multiple pixels per microlens, such as reading out each one at a different ISO and then combining them, sort of like what Magic Lantern has done to increase DR.

Either that - or, it might not be a Bayer sensor in the first place.

By the look of things, the so-called dual-pixel tech is actually quad-pixel already.
See my previous post on the topic here.

With a quad-pixel design, rather than having a single color filter per pixel, it's theoretically possible to have individual color filters for each of the four sub-pixels.
These color filters don't need to be monochromatic R/G/B filters anymore.
Instead, these could be a combination of di/poly-chromatic filters, from which the full color of a pixel can be derived.
That's better than a Bayer sensor, where two of the pixel colors need to be interpolated from neighboring pixels. 

So, you never know. The 7DII could have the first non-Bayer sensor in a DSLR.
If they use a combination of dichromatic filters for each sub-pixel, they could achieve maybe 1 stop of ISO improvement vs a Bayer sensor.
I think Canon will inevitably implement this sooner or later, given that they have gone the quad-pixel route already.
The question is, will the 7DII be the first camera to have it, or will we have to wait longer for that?

does it make sense to even call them sub-pixels at that point
nah

Right.
What would be really cool to see is some sort of hardware level binning process that maintains the integrity of the RAW file.

Half the reason I'm so anxious for super high resolution cameras is that I haven't been terribly impressed with the image quality off my 5D2. That nasty AA filter (which I'm pretty sure is especially bad on the 5D2) effectively cuts resolution in half. When I first saw my pictures on a decent 4MP monitor I was amazed at how little detail loss there was vs. looking at the image zoomed to 100%. My bet is that a good 4K (8MP) monitor is going to display your images with just as much detail as a high quality print... Because the detail actually isn't there in the first place.

One option is just quadrupling resolution and getting rid of the AA filter (which I'm actually fine with), but if they could bin the full per-pixel RGB signal on the sensor it should effectively deal with moire, and we get to keep our current file size, and it should produce an actual 20MP image instead of the blurred out fake we currently end up with.

The last thing I really want to see is the integration of clear microlenses. Even the heavily faded green pixels that we have right now still block a lot of light. Given how advanced interpolation is I doubt that eliminating the colour value for one of the pixels would have a significant impact on image quality.

Sorry, but that (bolded) is such a ludicrous, laughable comment, I'm just flabbergasted. An AA filter DOES NOT cut resolution "in half". That is blowing things SO FAR out of proportion it may be one of the most ludicrous things I've read on these forums. OLPFs, optical low pass filters, are designed to affect high frequencies only, and only around the Nyquist limit at that. You lose a TINY amount of resolution...but it doesn't matter, because the "resolution" you're losing just contains nonsense anyway. OLPFs blur very high frequency data that nearly or exactly matches the spatial frequency of the sensor's pixels just enough that the information doesn't alias. That's it. Aliased information is a REAL loss of information. Technically speaking, OLPFs PRESERVE information...they save information that can be saved, and discard information that cannot be correctly interpreted by the sensor anyway. On top of that, a very light application of unsharp masking can effectively reverse the blurring, and improve the resolution of that high frequency data, without actually bringing back all the nonsense.

Quadrupling resolution and removing the AA filter is only an option if your lenses cannot resolve that much detail. With the resolving power of Canon's current lens lineup at faster apertures, I'm not so sure that cutting pixels into quarters is actually enough to avoid any kind of aliasing. At narrower apertures, like f/8, diffraction already blurs information enough that it can't alias, but that's a really narrow aperture for a lot of work, not everyone uses it. There are very few applications where removal of an AA filter will not cause aliasing of some kind, and pretty much anything artificial is going to have repeating patterns that, depending on distance to camera, can create interference patterns (moire).

This whole "Remove the AA filter" craze is just that...a craze. It's a "thing" Nikon started doing to be different, to get some "wows", and maybe bring in some more customers. Ironically, given that removal of an AA filter is really NOT a good thing...it's worked. Nikon's marketing tactics have sucked in a whole lot of gullibles who don't really know what an AA filter does or how it works, or how to work WITH it, and now we have a whole army of "photographers" who want AA filters removed from all cameras. Personally, I REALLY, TRULY, HONESTLY DO NOT want Canon to remove the AA filter. It is NECESSARY, it PRESERVES preservable data and eliminates useless data, and I LIKE THAT.

And anything that is lost? It's MINIMAL. In the grand scheme of how much resolution you have...you maybe lose a percent or two of really high frequency information...but you really don't have that information anyway because it is similar in frequency to noise...so again, moot.

1269
I was surprised by how often Canon's data changes from version to version of a CR2 raw file. Once I understood that, it made total sense why Adobe has such a hard time keeping ACR/LR up to date and working with every new model camera that Canon releases...they have to figure out what changed (which is NOT well documented, sometimes not documented at all, in Canon's stuff).

Interesting to know - and it is about the same on the hardware side, I know from ML: the cameras seem to be the same from the outside (Digic 4, 18mp sensor) but there are lots of small differences and changes that break simple copy+paste compatibility across all models.

Btw, I also recently had a deeper look at the CR2 metadata as I wrote a watermarking and metadata filter script (what to keep, what to delete for what purpose)... exiftool does an amazing job at decoding even the more obscure Canon tags, but I wouldn't want to imagine how long it took to get all this exif/iptc/xmp/makernotes stuff in order.

Yeah, it's kind of a mess. It really isn't any better for other manufacturers. Some of them have even more radical generational changes in their metadata than Canon does.

1270
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 19, 2014, 07:12:03 PM »
Quote
This may be one of Canon’s best kept secrets as it’s apparently going to be more than an “evolutionary” technology.

Does anyone have an example of what Canon calls revolutionary?

Perhaps you do not remember when Canon introduced a FF CMOS sensor, but Nikon held on to their APS-C sensor!  That was pretty revolutionary at the time.

Diffractive optics. Everyone else thought it was literally impossible to make a lens with diffractive optics. Canon persisted, and they have the most compact 400mm FF DSLR lens in the world. Canon has had PLENTY of revolutionary technological advances and improvements to their technology.

Apparently, a mere two years since the release of the D800 is enough to forget the technological LEADERSHIP that Canon has demonstrated for decades. It's only been TWO YEARS, Dilbert...a camera generation is usually closer to FOUR years...so it's no surprise that Canon hasn't leapfrogged the competition yet.

1271
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 19, 2014, 05:53:23 PM »
Assuming it's a Bayer sensor with multiple pixels under each microlens (like the 70D), there's not a lot they can do to improve sensor performance that's outside the realm of read noise.  There are several ways to attack that one, and some of them involve doing clever things with the multiple pixels per microlens, such as reading out each one at a different ISO and then combining them, sort of like what Magic Lantern has done to increase DR.

Either that - or, it might not be a Bayer sensor in the first place.

By the look of things, the so-called dual-pixel tech is actually quad-pixel already.
See my previous post on the topic here.

With a quad-pixel design, rather than having a single color filter per pixel, it's theoretically possible to have individual color filters for each of the four sub-pixels.
These color filters don't need to be monochromatic R/G/B filters anymore.
Instead, these could be a combination of di/poly-chromatic filters, from which the full color of a pixel can be derived.
That's better than a Bayer sensor, where two of the pixel colors need to be interpolated from neighboring pixels. 

So, you never know. The 7DII could have the first non-Bayer sensor in a DSLR.
If they use a combination of dichromatic filters for each sub-pixel, they could achieve maybe 1 stop of ISO improvement vs a Bayer sensor.
I think Canon will inevitably implement this sooner or later, given that they have gone the quad-pixel route already.
The question is, will the 7DII be the first camera to have it, or will we have to wait longer for that?

I debunked your theory on this before. You are looking at the BACK side of the sensor, near the PERIPHERY, where readout connections and the like go. What you are looking at in that ULTRA TINY Chipworks image is NOT the sensor. It is a stamp on the back side edge of the sensor...that's all! Canon does not have QPAF technology. You are wildly misinterpreting something you do not understand, and perpetrating a falsehood.

Canon has multiple patents for DPAF...they have ZERO patents for QPAF. As it stands, no one actually has a patent for any kind of quad pixel focal-plane AF system.

Further, everyone else who continues to perpetuate the myth that somehow the two halves of the pixels, which sit under not only one microlens but also one color filter block, could somehow magically be used to expand dynamic range "for free" is fooling themselves, and anyone who listens to them. Magic Lantern either uses two FULL sensor reads (vs. half sensor reads), or it does line interpolation at half the resolution, to achieve its dynamic range. There is no free increase to dynamic range, and DPAF isn't going to somehow allow more dynamic range for free. The problem with the idea of using one half of the AF photodiodes for an ISO 100 read, and the other half for an ISO 800 read, is that each is HALF the light! That is not the same as what ML does, which involves the FULL quantity of light, or else half the light AND half the resolution.

There is no magical dynamic range enhancement with Canon's DPAF. The name is even misleading, as it isn't dual "pixels"...it's dual photodiodes per pixel. That should tell you something about the true nature of DPAF right there.

1272
EOS Bodies / Re: New Sensor Tech in EOS 7D Mark II [CR2]
« on: June 19, 2014, 03:47:14 PM »
This may be one of Canon’s best kept secrets as it’s apparently going to be more than an “evolutionary” technology.

Reminds me of the newly developed sensor for the 6d (inc. the newly-developed 11-point af system :-))... which is certainly nice, but it's all the same general sensor generation we've had for a long time.

My bet: They won't release a "revolutionary" iq technology in a crop camera, but would target the high-end ff market first. Much more likely it's in the direction of on-sensor af, evf/ovf hybrid and video-stills combination for ultra-high fps.

None of those other technologies were ever rumored to be more than evolutionary, though. And "new" is what every brand tacks onto all their "evolved" technologies...that's par for the course. Given how old the original 7D is, Canon has to know they can't spit out some mediocre evolutionary improvements, especially with everyone in Canon's camp scrambling for more DR, after such a long wait.

Canon has a lot of good technology, and a lot of patents for really good technology that I haven't seen implemented in any of their sensors (not even their video sensors, which is what many of the patents are for.) I hope that the 7D II will be the camera that they finally actually EMPLOY some of their cool sensor technology with.

1273
I understand normalization perfectly.

It seems like you do.

yadda yadda blah blah

You keep missing the point. You're locked into your limited notion of what is "comparable" and what is not. I'm choosing to compare something you have decided is not comparable. Sorry, I disagree. I've always disagreed, and I always will disagree. I suspect you're in the same position, so this is the last I'll say on it in this particular thread.

In the context I'm always referring to, the same context I've been referring to for years, I'm not interested in how the images look in the end. I'm interested in what I can do with the RAW files. I'm interested in the editing potential...the latitude with which I can push and pull exposure and white balance and color around. RAW files are not scaled. You always work with them at their native size. Scaling does not play a factor when it comes to editing RAW files. I don't care what the final outcome looks like. That is ARBITRARY. I can output the same image DOZENS of times at different sizes, for different prints, all with different amounts of dynamic range, all with different SNRs. But when I'm sitting in front of Lightroom, all of that is the last thing on my mind. We ALL sit in front of Lightroom, pushing exposure around...all the time, day in and day out, year in and year out.

Just because DXO says I get 14.4 stops of DR at an 8x12" 300dpi size specifically doesn't mean that's what you're going to be sizing to in the end. You may downsample it more, you may downsample it less, you may ENLARGE! DXO's Print DR is an arbitrarily chosen standard FOR THE PURPOSES OF comparing ONLY within the limited context of DXO's web site. It doesn't tell you anything about actual, real-world results as if you're sitting in front of your computer, using Lightroom to actually WORK with the RAW files those cameras output. It just tells you what you could get IF you downsampled to EXACTLY that size. That's all. And that's fine and dandy...when I'm browsing around DXO's site selecting cameras to compare with their little camera comparer, it gives me a contextually valid result.

It's impossible to edit RAW at any size other than 100%. So 100% size is all that matters when you want to know what you can do, as far as lifting shadows for the purposes of compressing 10, 12, 13.2 or 15.3 stops of dynamic range into the 8 stops of your screen, or the 5-7 stops of print. The output dynamic range is arbitrary...it depends on countless factors that ultimately affect it (yes, total megapixel count is one of them, but noise reduction routines, HDR merge/enfuse, etc. are others). You may end up with 14 stops of DR, you may end up with 16 stops of DR in a file you were able to perform some epic noise reduction on. The output isn't what matters when you're sitting in front of Lightroom actually editing the RAW itself. The RAW file itself, at 100% size, is what matters.
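
To be concrete about what "lifting shadows" means numerically on linear RAW data, here's a bare-bones sketch (real raw converters layer tone curves on top of this; the sample values are made up):

Code: [Select]
import numpy as np

def lift_shadows(linear, stops):
    """Push exposure on linear RAW-like values: x2 per stop,
    clipped at white. This is the editing-latitude operation."""
    return np.clip(linear * 2.0 ** stops, 0.0, 1.0)

# A deep shadow tone at 0.002 of full scale, sitting just above a
# noise floor at 0.001, lifted 4 stops: the noise comes up with it.
print(lift_shadows(np.array([0.002, 0.001]), 4))   # [0.032 0.016]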

blah blah blah

you can't compare cameras that way and say that one is better than another; as I've pointed out, that can lead to very misleading results.

You are basically saying that all you ever refer to is comparing cameras at 100% view, but then you talk in generalities, which I bet badly confuses many. It seems very misleading. Again, say some 100MP camera has worse SNR per photosite than some 6MP camera, but viewing images at the same scale from each, the 100MP camera has MUCH better SNR; it would not be fair to say that the 100MP has worse SNR than the 6MP camera, but with your 100%-view, RAW-editing-latitude-only stance, that is the impression you give.

Finding out what 100% view RAW editing latitude you have is fine and good, as a standalone, but you shouldn't be using that in the context of comparing one camera to another and saying that it's overall better or worse for DR/SNR. That is simply wrong and misleading. There is a reason that you are one of the few people left who try to talk in such a manner.

Not comparing "cameras". Just comparing their editing latitude. For all the rest, existing results from DXO, DPR, etc. are sufficient. I JUST mean comparing editing latitude. Which is what all the DR debates are always about...how much can you lift the shadows. That's it. Please don't conflate that with your assumption that I'm "comparing cameras", I am not. You keep ignoring the fact that there is a CONTEXT within which the debate occurs. There is always a context. The context, here on CR, is that the DR debate always ends up referring to how much editing latitude...how much shadow lifting...you have with Camera A vs. Camera B. That's what I'm always referring to, because that's what the debate is always about.

You are personally concerned about total noise levels, and specifically total noise levels in a normalized context. That is COMPLETELY VALID! I'm not debating that. I don't think ANYONE has ever debated that. It's just a different context. Evaluating the total amount of noise in a downsampled image is different than evaluating the editing latitude of a RAW file.

1274
No one really, truly thinks Canon is dumb enough to alienate their customers like that. I completely expect Canon to fully support their entire current camera body lineup with DPP 4.x eventually.

As in the "current" camera lineup, or including the 10d, which is in dpp3? You've got me there, I believe they might make a cut somewhere sometime. Imho it depends on the camera generation; once they add one 18mp crop camera, it's likely they've covered them all.

I disagree about the 18mp crop cameras. Almost every new generation of Canon cameras has companion changes to the CR2 RAW file format. Most of the time, it's in the metadata area, specifically in the manufacturer's custom data block, but the changes are quite often breaking. Canon does not seem concerned about maintaining backwards compatibility in their custom metadata section in EXIF, and data block sizes change, some values are completely removed, etc.

Sometimes the border masking/calibration pixels change, even though the sensor itself is still the same (i.e. the 18mp APS-C).

There is ZERO guarantee, even if Canon uses the same sensor, that the format of the CR2 RAW file will remain the same.

BTW, I speak from first hand experience here. I wrote a metadata extractor for a personal web site I've been trying to build for myself (still pending; other jobs keep consuming all my time, and the little bits of free time I have, I spend on my photography). I spent a month digging into the specs for Canon's RAW files, for their custom EXIF metadata, etc. I spent another month writing the extractor in C#, and augmenting existing javascript extractors (still an ongoing project). I was surprised by how often Canon's data changes from version to version of a CR2 raw file. Once I understood that, it made total sense why Adobe has such a hard time keeping ACR/LR up to date and working with every new model camera that Canon releases...they have to figure out what changed (which is NOT well documented, sometimes not documented at all, in Canon's stuff...you basically have to download Canon's example source code and run a diff on it with prior versions). Once you know what changed, then you have to implement the necessary changes in your own code to detect the CR2 version and route it through the right path in your code to handle and render it properly.
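
If I were starting that extractor over today, I'd lean on exiftool (mentioned above) for the heavy lifting rather than hand-parsing the maker notes. A minimal Python sketch, with a hypothetical file name:

Code: [Select]
import json
import subprocess

def cr2_metadata(path):
    """Dump a CR2's tags by shelling out to exiftool: -j asks for
    JSON output, -G prefixes each tag with its group, so Canon's
    MakerNotes fields are easy to pick out."""
    result = subprocess.run(["exiftool", "-j", "-G", path],
                            capture_output=True, text=True, check=True)
    return json.loads(result.stdout)[0]

# Hypothetical usage:
# meta = cr2_metadata("IMG_0001.CR2")
# maker = {k: v for k, v in meta.items() if k.startswith("MakerNotes")}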

1275
Nice of Canon to hobble the actual utility of their downloads. Why did I bother paying for a 7D?

Maybe some hacking is in order?

Are you seriously complaining because a camera that was released YEARS ago, Canon's oldest current body in fact, doesn't work with a JUST NOW released version of DPP? Seriously?

This is just the first release of DPP 4.0. The way Canon's message is worded, the limitation on compatible bodies is just "at launch", indicating that support for additional bodies will be rolled out as they have time to implement support for each one. No one really, truly thinks Canon is dumb enough to alienate their customers like that. I completely expect Canon to fully support their entire current camera body lineup with DPP 4.x eventually.

Every time a new camera body is released (from most manufacturers), the metadata, and sometimes the RAW pixel data storage structure, have to change. Some of those changes are dictated by choices in hardware design (i.e. how many masked-off and/or disabled border pixels the sensor has, which is not the same from sensor to sensor.) It would be a LOT of work to implement the necessary handlers for each and every variant of the CR2 raw file format. It is therefore no surprise they STARTED somewhere (i.e. their full frame cameras, of which there are few, and for which there are likely to be more similarities than differences.)
