5D Mark IV: Dual Pixel Raw allows focus in post. Wow!

Mancubus said:
scyrene said:
Mancubus said:
I say this because focusing is the ONLY factor that is beyond our total control when taking a photo (using the viewfinder). Exposure, ISO, aperture, shutter speed, composition... all these factors can be controlled, and a good photographer will know how to do it.

Weeelll.... Not quite. We bump into limitations in all these things sometimes. A photographer is not a god - it is not always possible (or even desirable) to control the light.

What I mean is: the other factors are within your control. If the light is coming from one direction, it's up to you how to make the best of it for your shot. In most cases you can move/rotate the source or the subject around in order to get what you want.

With focusing, there is always a possibility of error (less with more expensive gear), and there's currently no guarantee that your shot will be in perfect focus unless you are on a tripod, manually focusing with live view.

The AF misses (and misses a lot!) on every DSLR body, and a miss makes your photo unrecoverable in post-processing. You can change exposure, reduce noise, sharpen, crop, remove unwanted distractions... but you cannot save that slightly out-of-focus photo; there is no tool that will move the (mis)focus from the ear to the eye.

Oh I agree :) But we like advances in all areas ;)
 
Upvote 0
The 5D4 Mark II is rumored to have triple focus pixels that will be able to capture the light waves from the past. So you can, say, go out to the lake at a nice time like 2 PM and capture the sunrise that took place at a hideous hour earlier in the day. Sleep in and still get magic morning light. Or say you hit one place, can only stay there a day, and it's pouring rain: set the triple focus back 24 hours and capture it on a clear day, and you can even dial it in for the previous evening's golden hour lighting to boot!
 
Upvote 0
neuroanatomist said:
So, you're anticipating the triumph of firmware over physics? Tell ya what, you let us know when that happens, mmmmmkay? ;)

Not at all.
Where you have camera shake you can sharpen aggressively to get something better, but that only works with small amounts of shake and there is a limit.
Maybe (and it is a maybe) you can manipulate in post-processing within limits. When I said that the software may to some extent account for the effect midluk described, it is not beyond the realms of possibility that the re-sharpening will mainly be applied at the centre of the image (maybe the area of the focus point) and fall off further away from that point.
In a similar way (this is the best analogy that springs to mind immediately, even if the technology may be different) to sharpening, where the detail slider applies more sharpening to areas of detail and less to areas of uniform tone such as a blue sky.
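To make that falloff idea concrete, here is a minimal Python sketch (purely my own illustration, not anything Canon or DPP has published) of an unsharp mask whose strength decreases with distance from an assumed focus-point position; the function name, the Gaussian falloff and the parameter values are all assumptions.

```python
# A minimal sketch (an assumption of mine, not Canon's algorithm) of spatially
# varying re-sharpening: an unsharp mask whose strength falls off with distance
# from an assumed focus-point location.
import numpy as np
from scipy.ndimage import gaussian_filter

def radial_unsharp(image, focus_xy, max_amount=1.5, falloff_px=800, radius=2.0):
    """Sharpen a 2-D float image most strongly near focus_xy = (x, y)."""
    blurred = gaussian_filter(image, sigma=radius)            # low-pass version
    detail = image - blurred                                  # high-frequency detail
    yy, xx = np.indices(image.shape)
    dist = np.hypot(xx - focus_xy[0], yy - focus_xy[1])       # distance from the focus point
    weight = max_amount * np.exp(-(dist / falloff_px) ** 2)   # strength decays away from it
    return image + weight * detail                            # spatially varying unsharp mask
```

In the spirit of the detail-slider analogy, the same weight could just as easily be driven by a local-contrast map instead of distance from the focus point.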
 
Upvote 0
Mikehit said:
neuroanatomist said:
So, you're anticipating the triumph of firmware over physics? Tell ya what, you let us know when that happens, mmmmmkay? ;)

Not at all.
Where you have camera shake you can sharpen aggressively to get something better, but that only works with small amounts of shake and there is a limit.
Maybe (and it is a maybe) you can manipulate in post-processing within limits. When I said that the software may to some extent account for the effect midluk described, it is not beyond the realms of possibility that the re-sharpening will mainly be applied at the centre of the image (maybe the area of the focus point) and fall off further away from that point.
In a similar way (this is the best analogy that springs to mind immediately, even if the technology may be different) to sharpening, where the detail slider applies more sharpening to areas of detail and less to areas of uniform tone such as a blue sky.

The key point in midluk's argument was that if all the pixels are split in the same orientation, the software has phase information in only one orientation. I think that's a big factor that precludes much of the potential. Dual pixel isn't enough.
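A tiny toy example (my own, not midluk's) of the one-orientation problem: a pattern that varies only vertically is completely unchanged by a horizontal shift, so left/right split pixels produce no usable phase signal for it.

```python
# Tiny toy example of the one-orientation problem: a pattern that varies only
# vertically (horizontal stripes) is unchanged by a horizontal shift, so
# left/right split pixels get no usable phase signal from it.
import numpy as np

stripes = np.tile(np.arange(64) % 8 < 4, (64, 1)).T.astype(float)  # horizontal stripes
shifted = np.roll(stripes, 3, axis=1)                               # shift along x, as defocus would
print(np.abs(stripes - shifted).max())                              # 0.0 -> the shift is undetectable
```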
 
Upvote 0
Maybe some clarity is needed.

What we know (I think; please correct me if I'm wrong):
When a point in an image is in focus, the various rays arriving at that point (having taken various paths through space and the lens) are in phase. By splitting an autofocus sensor in two and measuring the phase relationship, the degree of focus at that point can be measured. In addition, phase-detect sensors can sense the direction and magnitude of the correction needed to get from an out-of-focus state to correct focus. This additional capability allows fast, open-loop autofocus control. (Contrast-detect autofocus has to keep sampling as correct focus is gradually approached, so hunting is likely.)
Even if the phase-detect sensor knows that correct focus is 1 mm of positive lens translation away, there is no information on what the focused image will reveal. Hence the need to actually go to correct focus.
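For anyone who likes code better than words, here's a toy Python illustration (assumptions throughout, nothing to do with Canon's actual AF firmware) of why that directional information enables open-loop focusing: the signed shift between the two half-aperture signals says both which way and roughly how far to move the lens, whereas contrast detection only learns "sharper or not" after each move.

```python
# Toy illustration (assumptions throughout, not Canon's AF firmware): the signed
# shift between the two half-aperture line signals gives both the direction and
# (roughly) the magnitude of the focus correction, enabling open-loop focusing.
import numpy as np

def signed_shift(left, right, max_shift=20):
    """Estimate the signed offset, in samples, between two 1-D AF line signals."""
    left = left - left.mean()
    right = right - right.mean()
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):                 # brute-force correlation search
        score = np.sum(left[max_shift:-max_shift] *
                       np.roll(right, s)[max_shift:-max_shift])
        if score > best_score:
            best, best_score = s, score
    return best  # sign -> which way to move the lens, magnitude -> how far

# A (made-up) calibration constant would convert the pixel shift to lens travel:
# lens_move_mm = K_CALIBRATION * signed_shift(left_signal, right_signal)
```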

Now the speculation.
If every pixel is a dual pixel, and thus a phase-detect pixel, and if the per-pixel phase relationship is recorded with each image, what can be done with this new data?
Map the in-focus portion of the image (in focus within some range), and thereby create a more accurate focus peaking display? Select the in-focus regions for processing differently than the out-of-focus regions? (Neuro's speculation.)
Map the magnitude and direction of focus error, and combine this map with lens characteristics to create a 3D map of the scene?
Note: dual pixel phase relationship data need not double the file size; a few bits per pixel should cover several generations of applications of this data. And BTW, Canon would have INFINITELY more phase relationship dynamic range than Sony or Nikon.
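Purely to illustrate the speculation above, here is a rough Python sketch of those two uses, assuming a per-pixel signed defocus value were actually recorded (which, to be clear, is not a documented part of Dual Pixel Raw); every name and constant is a placeholder.

```python
# Speculative sketch of the two uses above, assuming a per-pixel signed defocus
# value were actually stored (NOT a documented part of Dual Pixel Raw).
# `defocus` is a 2-D array of signed shifts in pixels; all constants are placeholders.
import numpy as np

def focus_peaking_mask(defocus, tolerance=0.5):
    """Mark pixels whose recorded defocus is within +/- tolerance of zero."""
    return np.abs(defocus) < tolerance

def coarse_depth_map(defocus, focused_distance_m, gain_m_per_px):
    """Very rough 3-D map: depth as the focused distance plus a lens-dependent
    scaling of the signed defocus. The real mapping depends on focal length,
    aperture and focus distance, all folded into gain_m_per_px here."""
    return focused_distance_m + gain_m_per_px * defocus
```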

Can we also agree that software-based sharpening is not the same as focusing better or holding steadier?
 
Upvote 0
retroreflection said:
When a point in an image is in focus, the various rays arriving at that point (having taken various paths through space and the lens) are in phase. By splitting an autofocus sensor in two and measuring the phase relationship, the degree of focus at that point can be measured. In addition, phase-detect sensors can sense the direction and magnitude of the correction needed to get from an out-of-focus state to correct focus.

[...]

Now the speculation.
If every pixel is a dual pixel, and thus a phase-detect pixel, and if the per-pixel phase relationship is recorded with each image, what can be done with this new data?

We don't have coherent light here, so I don't think it is correct to talk about "phase" for single light rays and image points. You can of course assign some phase-shift value to a single pixel, but only after evaluating all the pixels around it, not with just the values from that pixel.

The phase relates to the (light + dark) structures in the image. Let's assume we are taking an image of a flat, light pattern on a dark background. When the image is in focus, we get a perfectly sharp image, and the partial images from the different parts of the lens have no shift and match perfectly. If the image is out of focus, the patterns in the partial images coming through the different parts of the lens are shifted relative to each other, and this amount of shift can be called "phase shift". Because of the finite lens area contributing to each sub-pixel (actually half the lens each), the shifted partial images will still be blurred (although only half as strongly as the complete image).
With a regular pattern it will not even be possible to properly determine the phase shift if the shift is exactly a multiple of the grid spacing in the pattern.

Due to all of this, the final resolution of the "in-focusness" or the distance information generated from the dpRAW will likely be much less than the actual image resolution.
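To put that in code (a toy sketch of my own, not anything from Canon): the shift can only be estimated by comparing whole neighbourhoods of the two half-aperture images, so the resulting defocus/depth map comes out at block resolution, far below the pixel resolution of the image.

```python
# Toy sketch (mine, not anything from Canon): estimating the shift requires
# comparing whole neighbourhoods of the two half-aperture images, so the
# defocus/depth map comes out at block resolution, far below pixel resolution.
# left_img / right_img stand in for the two Dual Pixel sub-images.
import numpy as np

def block_shift_map(left_img, right_img, block=32, max_shift=8):
    """One horizontal shift estimate per block x block tile, by brute-force matching."""
    h, w = left_img.shape
    rows, cols = h // block, w // block
    shifts = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            y, x = r * block, c * block
            ref = left_img[y:y + block, x:x + block]
            best, best_err = 0, np.inf
            for s in range(-max_shift, max_shift + 1):
                xs = x + s
                if xs < 0 or xs + block > w:
                    continue
                err = np.sum((ref - right_img[y:y + block, xs:xs + block]) ** 2)
                if err < best_err:
                    best, best_err = s, err
            shifts[r, c] = best        # one value per tile, not per pixel
    return shifts                       # e.g. 6720 x 4480 pixels -> a 140 x 210 map
```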
 
Upvote 0