July 26, 2014, 01:54:30 AM

Show Posts



Messages - 3kramd5

Pages: 1 ... 5 6 [7] 8 9 ... 21
91
Uh, about those "World’s most AF points"[1]

The footnote reads: "1. Among interchangeable-lens digital cameras equipped with a dedicated phase-detection AF sensor as of May 1, 2014."

So it has more AF points than any dedicated SIR AF system.

Its own detailed specifications read: "Focus Points : 19 points (11 points cross type)"

So it has a 19-point dedicated phase detection AF sensor. The others, I presume, are similar to the 70D (sensor plane AF). As such, I find that highlight misleading on its face.


92
Landscape / Re: Sunset landscape
« on: April 30, 2014, 12:10:30 AM »
5D Mark II, 24-70/2.8 @ f/22, ISO 100, 3.2 sec

Neglected a tripod so played the ol' balance the camera on the lens hood and hope it doesn't fall game.

93
But then you aren't working raw once in LR, you're working a rastered image.

The general nature of software development is not to go back and add support for current products to years-old versions. Subscription-based solutions are indeed a nice way to keep up, or they sell LR for 99 bucks quite frequently.

94
Note that if you convert to DNG, you won't be able to use Canon's Digital Lens Optimizer. If you want to maintain that ability, you'd need to archive both DNGs and CR2s.

LR5 is a significant improvement over LR3, and it comes at a nominal cost (especially relative to your photo equipment). If you want my $.02, don't hesitate: get it. DNG conversion will work, but it's not a perfect solution.

95
EOS Bodies / Re: dual pixel tech going forward
« on: April 26, 2014, 04:12:44 PM »
Right. I'm more curious about what else they can do with dual pixels in general than about DPAF itself. I am running on the assumption that using the pixel pairs for an IQ gain (be it resolution, color depth, better demosaicing, etc.) would come at the cost of losing sensor-level phase AF.

Maybe I'm grasping at straws, it just struck me as potentially a HUGE leap in pixel density for Canon SLRs.

I'm not really sure what kind of sensor design you're proposing.

Not proposing, just wondering.

If Canon decided to put different color filters and shrink the microlens size over the two photodiode halves, they would no longer be able to do sensor plane phase detection AF.

Agreed. Giving up sensor-level phase detect is part and parcel of what I'm asking. Specifically, COULD they do what you wrote: put different color filters and shrink the microlenses, and read out each photodiode individually rather than binning them?

which would be a little odd to demosaic and might not produce the best quality results.

Interesting. Why could it not produce results as good, if not better? I'm assuming an array like this:

Code:
RG-BG-RG-BG-RG-BG
BG-RG-BG-RG-BG-RG
RG-BG-RG-BG-RG-BG
BG-RG-BG-RG-BG-RG

Each pixel would read two colors: green plus either red or blue. That maintains the same color ratio as a Bayer-type CFA (2G/1R/1B); it's just collocating each green with another color.
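For what it's worth, a quick sketch (a hypothetical layout for illustration, not any real Canon sensor) confirms the ratio claim: counting the filter colors over the photodiode halves in that array gives the same 2:1:1 G:R:B split as a Bayer CFA.

```python
# Toy model of the proposed dual-color CFA from the post: each pixel
# is a pair of photodiode halves, one green plus either red or blue.
from collections import Counter

# Each two-letter entry is one pixel: (half A color, half B color),
# following the RG-BG / BG-RG pattern quoted above.
pattern = [
    ["RG", "BG", "RG", "BG", "RG", "BG"],
    ["BG", "RG", "BG", "RG", "BG", "RG"],
    ["RG", "BG", "RG", "BG", "RG", "BG"],
    ["BG", "RG", "BG", "RG", "BG", "RG"],
]

# Count filter colors across all photodiode halves.
counts = Counter(half for row in pattern for pixel in row for half in pixel)
total = sum(counts.values())
print({c: counts[c] / total for c in "RGB"})  # same 2G/1R/1B ratio as Bayer
```

On this 6x4 toy array, green covers half of all photodiode halves and red and blue a quarter each, matching the Bayer ratio while putting a green sample at every pixel site.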


Canon might as well just shrink the entire pixel size by a factor of two, drop four times as many pixels on the sensor, and just call it a day if they are going to do that.

Sure, but if they could build one sensor and then have either high resolution OR sensor level phase detect, depending on which CFA is packaged, I figure they could cut their development and production costs in a two-camera offering.

shrug.

96
EOS Bodies / Re: dual pixel tech going forward
« on: April 26, 2014, 11:17:44 AM »
On the 70D, Canon has two photodiodes per pixel across almost the entire sensor. The fact that they can get usable phase information from them suggests that they can read them independently.

So, could they change the Bayer filter out and double resolution rather than get sensor-level phase detection? Perhaps being co-located they couldn't use a traditional Bayer design, but could they, for example, have green AND either red or blue at every pixel?

If so, that could be a cost-effective way forward to producing 1DmkV and 1DmkVs cameras once DPAF is perfected to the point that it equals or betters SIR AF. The former could have a traditional Bayer filter with the second processor dedicated to amazing autofocus; the latter could have double the resolution and use a simpler last-gen SIR AF unit.

I am probably fundamentally misunderstanding the implications of having two photo-diodes per pixel, though. More likely DPAF is their way into high end mirrorless.

Having two photodiodes per pixel means the photodiode pair exists underneath the CFA filter and the microlens(es). That is actually the only way DPAF really works...to be able to detect a phase differential, you need to check the HALVES of each PIXEL. If you just shrink the pixel size and put different color filters over those smaller pixels...well, now you have smaller pixels (and an odd image ratio), and you no longer have DPAF. It's a tradeoff...resolution or a focus feature, which do you want/need? (Or, as the case may be, you get a cross between both, slightly smaller pixels (i.e. 20mp 70D vs. the 18mp that came before) AND DPAF.)
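A toy model of the two readout modes described above (halves read separately for phase AF, binned for the image) might look like this; purely illustrative, with made-up data, not Canon's actual readout path:

```python
# Illustrative only: each pixel has two photodiode halves. In AF mode
# the halves are read separately; in image mode they are binned (summed)
# into a single value per pixel.
import numpy as np

rng = np.random.default_rng(0)
halves = rng.uniform(0, 1, size=(4, 6, 2))  # rows x cols x (left, right)

# AF readout: two half-images whose relative shift encodes defocus phase.
left, right = halves[..., 0], halves[..., 1]

# Image readout: halves binned into one sample per pixel.
image = halves.sum(axis=-1)

assert image.shape == (4, 6)           # full pixel grid, not split
assert np.allclose(image, left + right)  # binning loses the phase info
```

The point the code makes is the one in the post: the same photodiode pair serves either purpose, but binning for the image discards exactly the half-vs-half separation that phase detection needs.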

I know everyone likes to speculate about all the wonderful things that DPAF might potentially bring to the table...but so long as it is Dual-Pixel Autofocus, that's all you're really going to get. There really isn't any magic bullet here, no trickery that you can pull off by somehow using one half of the pixels at ISO 100 and the other half at ISO 800 for more dynamic range, etc. Pixel area is pixel area, and phase detect is phase detect. DPAF pixels serve one purpose when read out for AF, and another purpose when the halves are binned and read out for an image. Those are really the only two functions DPAF will ever serve, and while I'm sure the Magic Lantern guys will figure out something cool about the specific mechanism of DPAF's implementation...they will still only be able to work within the bounds of the sensor's design. The ML DR increase was ultimately thanks to an OFF-die downstream amplifier that allowed them to control the readout process, not really due to any specific nuance of Canon's actual sensor design.

Assuming Canon does not remove that downstream amp in favor of some kind of on-die parallel ADC and readout system, I honestly don't expect them to be able to do anything more radical with DPAF. They may find a way of doing creative focus things with AF, maybe add the ability to remember AF positions for video purposes, things like that...but the design of DPAF doesn't really mean Canon suddenly has some amazing wildcard on their hands that can give them a significant edge in the stills photography department.

Right. I'm more curious about what else they can do with dual pixels in general than about DPAF itself. I am running on the assumption that using the pixel pairs for an IQ gain (be it resolution, color depth, better demosaicing, etc.) would come at the cost of losing sensor-level phase AF.

Maybe I'm grasping at straws, it just struck me as potentially a HUGE leap in pixel density for Canon SLRs.

97
EOS Bodies / Re: dual pixel tech going forward
« on: April 25, 2014, 06:41:34 PM »

But don't you double your chances of getting the correct color?

Maybe my understanding is wrong, but doesn't the CFA filter out all but one color (frequency range) per pixel? So a red pixel in blue light won't receive any charge?

In the case of red light on red pixels, yah you'll get half as much, but in the case of green light on red or blue pixels, you'll get a reading. So yah, on a per pixel level maybe you'd affect available signal, but it seems like it would average out across the entire array. But I'm not a scientist either :P

And yah, it would likely prevent sensor level phase detection, which is why I suggested it could be for a "1DmkVs" model while the Bayer + dual pixel AF could go to a sports 1DmkV. Two lines, identical hardware except the CFA; different firmware.

98
EOS Bodies / Re: dual pixel tech going forward
« on: April 24, 2014, 01:53:35 PM »
The 'dual pixels' are all split vertically, so if they altered the microlenses and CFA to increase the actual resolution of the sensor, you'd end up with images having a 3:1 aspect ratio.

I think that's not the right way to look at this. It'd be more like having two color channels per pixel in the raw file rather than only one as input to the demosaic.

To be fair, in my initial post I did indeed mean spatially, so neuro's comment was pertinent.

That being said, correct: You wouldn't have say 10,368 × 3,456 (1Dx with twice as many horizontal), you'd have 5,184 × 3,456, but each pair of pixels would be sufficient to yield RGB values rather than every four, meaning the color accuracy could be doubled, which could have a meaningful impact on a subsequent raster.

99
EOS Bodies / Re: dual pixel tech going forward
« on: April 24, 2014, 01:45:06 PM »
Wouldn't you also end up having to deal with a significant drop-off in number of photons hitting the photo-diodes? After all, you're essentially turning one 'pixel' site into 3 sub-pixels, none of which covers the entire area of the 'pixel'. Not that I don't want them to try innovative new things like that, but I don't think it's practical except for maybe some specialized applications.

I don't know if you'd lose any additional light. Right now, there is a color filter immediately covering two diodes. If you had two smaller color filters adjacent to one another, you aren't going to halve the light, though you may move it around. Rather than "all light hitting here is red", it would be "some of the light hitting here is red and some of it is green," and they would have varying intensities. I think. :P

100
EOS Bodies / Re: dual pixel tech going forward
« on: April 24, 2014, 09:56:55 AM »
Good point.

I was thinking along the lines of how Bayer filters have twice as much green as either red or blue. So perhaps it's not so much a read resolution increase (like 20MP becoming 40MP) as an information increase at the same dimensions.

This way (again assuming they can read/record them individually), they could have, as I mentioned, red or blue at each pixel, rather than one red and one blue per every four pixels. They still get the 2:1 ratio of green to red and green to blue, but do so without dedicating individual pixels to green - they get green everywhere. It's like 2/3 of a Foveon.
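As a back-of-the-envelope check of those ratios (a toy count assuming the hypothetical layout, not any real sensor):

```python
# Compare sample counts on a toy n x n sensor: standard Bayer (one
# color per pixel) vs. the proposed layout (green at every pixel,
# plus red or blue alternating).
n = 4  # pixels per side of the toy sensor

# Bayer RGGB: per 2x2 block there are 1R, 2G, 1B samples.
bayer = {"R": n * n // 4, "G": n * n // 2, "B": n * n // 4}

# Proposed dual-color layout: every pixel samples green, and the
# second half alternates red/blue.
proposed = {"R": n * n // 2, "G": n * n, "B": n * n // 2}

# Both keep the 2:1:1 G:R:B ratio...
assert bayer["G"] / bayer["R"] == proposed["G"] / proposed["R"] == 2
# ...but the proposed layout samples green at every pixel site,
# with twice the total number of color samples overall.
assert proposed["G"] == n * n
assert sum(proposed.values()) == 2 * sum(bayer.values())
```

Same color balance, but twice the samples at the same pixel grid, which is the "information increase at the same dimensions" idea above.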

101
EOS Bodies / dual pixel tech going forward
« on: April 24, 2014, 09:41:13 AM »
On the 70D, canon has two photo-diodes per pixel almost across the entire sensor. The fact that they can get useable phase information from them suggests that they can read them independently.

So, could they change the bayer filter out and double resolution rather than get sensor level phase detection? Perhaps being co-located they couldn't use a traditional bayer design, but could they for example have green AND either red or blue at every pixel?

If so, that could be a cost-effective way forward to producing 1DmkV and 1DmkVs cameras once DPAF is perfected to the point that it equals or betters SIR AF. The former could have a traditional bayer filter with the second processor dedicated to amazing autofocus; the latter could have double the resolution and use a simpler last-gen SIR AF unit.

I am probably fundamentally misunderstanding the implications of having two photo-diodes per pixel, though. More likely DPAF is their way into high end mirrorless.

102
EOS Bodies / Re: Petition to Canon regarding the EOS 5D Mark III
« on: April 22, 2014, 10:57:18 PM »
Regarding AF point-linked spot metering for the 5DIII, while the AF systems are nearly the same as you state, the metering systems are vastly different.  Here are the 61 AF points superimposed on the 5DIII's 63 zone iFCL metering grid:



The resolution of the 5DIII's metering sensor simply may not be high enough to support spot metering with the AF points, whereas the 100,000 pixel metering sensor of the 1D X can do so.  Even when the 1D X's metering sensor reverts to zone metering (in very dim light or for flash exposure metering), it's divided into 252 zones - 4 times the density of the 5DIII's metering sensor.

I can't say for sure that those technical limitations are absolute, but you might consider the possibility that there are technical reasons for those features being available on the 1D X but not on the 5DIII.  After all, they did add f/8 AF to the 5DIII.


How dare you suggest there is a fundamental reason that the 5D3 and 1Dx don't operate the same way? :P

Personally, I imagine that if the spot meter works at the center, one could use whichever of the 63 segments corresponds to the AF point in question. Given how large the AF sensors are relative to the total frame, in all likelihood each AF point falls on multiple segments with the 5D; and it's almost certainly the case with the 1Dx. I would guess that the 1Dx uses an average reading of all the appropriate segments rather than a single segment. Given the size of the spot meter indicator in the VF, I further guess that the center spot metering uses multiple readings.

The 5D3 could probably do the same thing, but it may negatively affect performance given the substantially lower processing power relative to the 1Dx.
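Purely as a sketch of that averaging idea (the 9x7 zone grid and normalized coordinates are illustrative assumptions, not Canon's actual metering geometry):

```python
# Hypothetical: map an AF point's area onto a coarse metering-zone grid
# (9 x 7 = 63 zones here) and average the readings of every zone the
# point overlaps. Geometry is made up for illustration.
def zones_under_point(x0, y0, x1, y1, cols=9, rows=7):
    """Return (col, row) of each zone a normalized rectangle overlaps."""
    hits = []
    for r in range(rows):
        for c in range(cols):
            zx0, zy0 = c / cols, r / rows
            zx1, zy1 = (c + 1) / cols, (r + 1) / rows
            if x0 < zx1 and x1 > zx0 and y0 < zy1 and y1 > zy0:
                hits.append((c, r))
    return hits

def spot_reading(readings, point_rect):
    """Average the metering readings of the zones under the AF point."""
    zones = zones_under_point(*point_rect)
    return sum(readings[(c, r)] for c, r in zones) / len(zones)

# Uniform scene: every zone reads 10.0, so any AF point meters 10.0.
uniform = {(c, r): 10.0 for r in range(7) for c in range(9)}
print(spot_reading(uniform, (0.45, 0.45, 0.55, 0.55)))  # -> 10.0
```

A large AF point spanning several zones would simply average more entries, which is the multiple-readings guess above; the open question is whether the 5D3's coarse 63-zone grid gives a tight enough "spot" for that to be useful.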

103
"Point"less ;D unscientific data

5D3, EF50mm f/1.4, ISO100, AF point shown in screenshot. Always focused on the 20. Shown are 100% crops of the top right cross, bottom left cross, and center AF point, as well as a manual focus in each setup.


104
"Point"less ;D unscientific data

5D3, EF50mm f/1.4, ISO100, AF point shown in screenshot. Always focused on the 20. Shown are 100% crops of the top right cross, bottom left cross, and center AF point, as well as a manual focus in each setup.

105
To further add, my 1Ds3 and 1Dx never miss at f/1.4 on the outer points.  Never.  Always exact.  No variability.  Nails it every time, and that's even without AFMA.

That is a physical impossibility, because the AF measurement angle accuracy in Canon's AF is f/2.8 (e.g. 3.4).


That's not correct.

The specified precision is within the depth of focus at the max aperture of the lens for a standard precision AF point, and within 0.33 depth of focus (0.5 for some models) at the max aperture of the lens for a high precision AF point. 

I discussed this issue with Chuck Westfall (Canon USA's technical mouthpiece), and this is part of his response:

"The fact that the AF points are functional with apertures as small as f/5.6, f/4 or f/2.8 respectively depending on the camera model and AF point under discussion does not imply that their measuring precision is limited to the depth of focus at those apertures. The AF detection system has the capability of calculating depth of focus based on the maximum aperture of the lens, whatever it happens to be."


I've been doing some reading and came across a pertinent point. Your post above relates specifically to high-precision AF points. The outer points are not high-precision, unless I misunderstand him here: "...one of the consequences of the TTL-SIR AF system is that except for the center point, AF precision is not proportional to the maximum aperture of the lens in use."

Of course, that's a 70D-centric dialog. Maybe some of the outer 1Dx/5D3 cross points are included.
