Just Touching the Surface of Dual Pixel Technology? [CR1]

Wow, all I've been reading for months is that camera technology is mature and stale, and now it sounds like we're going to be seeing all sorts of big changes that affect fundamental aspects of using a camera.
Per-pixel exposure and ISO? A camera that never blows highlights? Count me in!
 
Upvote 0
I actually don't understand why this aspect hasn't been discussed yet - not here, nor on Facebook - but maybe I am just wrong:

If you have phase detection capability on *every* pixel of your sensor, which means for *every* pixel in the final picture, it should be easy to get a 3D image from it.

As I understand phase detection AF, you can actually get the *distance* from just one measurement.
Buffer the readout of *ALL* dual pixels, render the image from the light, save a "depth map" file alongside the image and let software on a PC render the scene in 3D. Or let the camera do it. There are even 3D-capable displays that could be used in camera.
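
As an illustration only - a minimal sketch in Python/NumPy of one way a depth map could in principle be pulled from the two half-images (simple block matching), assuming the camera could hand you the left and right half-pixel images separately (which current raw files don't do); all function and parameter names here are made up:

```python
import numpy as np

def dual_pixel_depth_proxy(left, right, max_shift=4, block=16):
    """Rough per-block disparity between the two dual-pixel half-images.

    left, right : 2D float arrays, the left/right sub-pixel readouts
                  (hypothetical inputs - not something you can export today).
    Larger |disparity| means further from the focus plane; turning it
    into an absolute distance would also need focal length, aperture
    and focus distance.
    """
    h, w = left.shape
    disp = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = left[y:y + block, x:x + block]
            best, best_err = 0, np.inf
            # try small horizontal shifts of the right half-image
            for s in range(-max_shift, max_shift + 1):
                x0 = x + s
                if x0 < 0 or x0 + block > w:
                    continue
                err = np.mean((patch - right[y:y + block, x0:x0 + block]) ** 2)
                if err < best_err:
                    best, best_err = s, err
            disp[by, bx] = best
    return disp
```

(The effective baseline here is only the lens aperture, so the disparities would be far smaller than from a two-lens stereo rig.)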

What did I overlook on the technical side?

I think THIS would be a HUGE step in photography. I'm completely happy with 2D, but 3D movies and TVs showed us where things could lead. 3D images for everyone would just be the logical next step.

Any thoughts on this?
 
Upvote 0
neuroanatomist said:
"Every single pixel can have a different shutter time! This means the sensor allows a dramatic increase of the dynamic range. What sources didn’t tell me is how exactly this works and if the sensor is going to be first used by Hasselblads new medium format camera or by a new generation of FF sensors. Anyhow, its great news to see that Hasselblad is working on some exciting new tech with Sony!"

Sounds interesting…at least for static subjects.

Would this be similar to the old CCD approach - track the time needed to reach a defined signal level and use that to derive luminosity vs. signal level for a given time?

Self edit - wouldn't it be cool if, instead of clipped highlights regardless of shutter speed, the pixel "shuttered" itself at some percentage of the set shutter speed? I.e. the shutter speed is set at 1/1000, and at 1/2000 the pixel reaches a set value short of blown-out white, so it stops recording and provides a time value. That time value is then used to predict a clipped highlight, if that is what is desired, or software applies some sort of sliding scale to recapture all the detail that would have been lost.

This could be really cool at longer shutter speeds...
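
As a hedged sketch of how such a time-to-threshold readout could be turned back into a linear luminance value (the photocurrent is roughly constant during the exposure, so brightness is proportional to the threshold divided by the time needed to reach it) - Python, with every name and parameter hypothetical:

```python
import numpy as np

def reconstruct_luminance(value, t_sat, t_exp, threshold):
    """Toy reconstruction for a hypothetical time-to-threshold sensor.

    value     : raw values read out at the end of the exposure
    t_sat     : per-pixel time (s) at which the pixel hit `threshold`,
                np.inf if it never did
    t_exp     : the set exposure time (s), e.g. 1/1000
    threshold : signal level at which a pixel stops integrating

    A pixel that hits the threshold in half the exposure time is
    treated as twice as bright as one that hits it right at the end;
    pixels that never hit it keep their ordinary value.
    """
    hit = np.isfinite(t_sat) & (t_sat < t_exp)
    lum = value.astype(float)
    lum[hit] = threshold * (t_exp / t_sat[hit])
    return lum
```

In the 1/1000 s example above, a pixel that "shutters" itself at 1/2000 s would simply be extrapolated one stop upward instead of clipping - the recoverable headroom is limited only by how finely the timing can be measured (plus the static-subject caveat).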

Got to go, the nurse is here with my meds. :)
 
Upvote 0
neuroanatomist said:
Lawliet said:
neuroanatomist said:
Since you ascribe the same benefit to the 70D, I assume you're not referring to something like using a higher ISO. Can you explain?
It's about the sync speed; the 5D3 is noticeably behind there.
I wouldn't have thought 1/3 of a stop would make that much of a difference...

It can, at least when shooting macro with the 100L on the 60D - 1/250s is *just* enough to motion-stop something that moves a bit; unfortunately, with some ambient light you end up in ISO regions too high to get good results.

With the 6D, the 1/180s max. x-sync is too slow, so I mostly end up shooting with HSS when doing macro. You would think 1/250s for 100mm*1.6x (crop) would be about the same as 1/180s for 100mm*1.0x (FF), but my recent experience is that it isn't.
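
(For reference: going from 1/180 s to 1/250 s is log2(250/180) ≈ 0.47 stops, so the 6D's x-sync gap is closer to half a stop than a third.)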
 
Upvote 0
Marsu42 said:
neuroanatomist said:
Lawliet said:
neuroanatomist said:
Since you ascribe the same benefit to the 70D, I assume you're not referring to something like using a higher ISO. Can you explain?
It's about the sync speed; the 5D3 is noticeably behind there.
I wouldn't have thought 1/3 of a stop would make that much of a difference...

It can, at least when shooting macro with the 100L on the 60D - 1/250s is *just* enough to motion-stop something that moves a bit; unfortunately, with some ambient light you end up in ISO regions too high to get good results.

With the 6D, the 1/180s max. x-sync is too slow, so I mostly end up shooting with HSS when doing macro. You would think 1/250s for 100mm*1.6x (crop) would be about the same as 1/180s for 100mm*1.0x (FF), but my recent experience is that it isn't.

Makes sense.

But...the benefit of the 1D X and 70D over the 5DIII being discussed was 50% more flash battery life, faster flash recycle times, etc.
 
Upvote 0
thome said:
I actually don't understand why this aspect hasn't been discussed yet - not here, nor on Facebook - but maybe I am just wrong:

If you have phase detection capability on *every* pixel of your sensor, which means for *every* pixel in the final picture, it should be easy to get a 3D image from it.

As I understand phase detection AF, you can actually get the *distance* from just one measurement.
Buffer the readout of *ALL* dual pixels, render the image from the light, save a "depth map" file alongside the image and let software on a PC render the scene in 3D. Or let the camera do it. There are even 3D-capable displays that could be used in camera.

What did I overlook on the technical side?

I think THIS would be a HUGE step in photography. I'm completely happy with 2D, but 3D movies and TVs showed us where things could lead. 3D images for everyone would just be the logical next step.

Any thoughts on this?
http://www.samsung.com/uk/consumer/smart-camera-camcorder/lenses/special-purpose-lenses/EX-S45ADW

A different way of accomplishing what you're after. Both will suffer from nasty-looking, half-cut bokeh in each of the two images used to make up the stereoscopic pair, but they reach the same end result differently - one blocks off half the lens, the other blocks off half of each pixel.
 
Upvote 0
I don't know what you mean by "nasty looking half cut bokeh", but there is a biiig difference between capturing stereoscopic pictures with two "lenses" set apart to get depth information (two different angles, two pictures) and calculating it from a depth map derived from just one picture. Actually, I don't know if the latter would be better, as you would have to split that one picture again into different angles for the human eyes - ideally from a source that captured both pictures at a "human eye" distance. For a PC, the depth map alone would be sufficient. But how do you get it done for human eyes? Mh. I know far too little about 3D. ;-)
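
For what it's worth, the usual way software turns "one picture plus a depth map" into something for human eyes is depth-image-based rendering: shift each pixel horizontally by a disparity proportional to its inverse depth to synthesize a left-eye and a right-eye view. A toy sketch in Python (not from anyone in this thread; the shift scale is a made-up parameter, and real implementations also have to fill the holes that open up behind foreground objects):

```python
import numpy as np

def synthesize_stereo_pair(image, depth, shift_scale=10.0):
    """Naive depth-image-based rendering from one image + depth map.

    image       : H x W x 3 array, the captured picture
    depth       : H x W array of relative distances (larger = further)
    shift_scale : disparity scale in pixels (made-up constant)
    """
    h, w, _ = image.shape
    disparity = shift_scale / np.maximum(depth, 1e-6)  # near -> big shift
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        xl = np.clip((xs + disparity[y] / 2).astype(int), 0, w - 1)
        xr = np.clip((xs - disparity[y] / 2).astype(int), 0, w - 1)
        left[y, xl] = image[y]   # near pixels slide further apart between the views
        right[y, xr] = image[y]
    return left, right
```

Whether that looks better or worse than two physically separated captures is exactly the open question here.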
 
Upvote 0
jebrady03 said:
I'm ready for QPAF (Quad Pixel).
HDR plus AF.
It seems like a natural evolution to me.

I'm not sure DPAF or a hypothetical evolution to QPAF is really a means of achieving HDR. Remember, ML had to cut resolution in half to achieve its makeshift approach, not because they did not have dual pixels...but because they had to use both the per-pixel amps as well as a secondary downstream amp. It doesn't matter how many times you dice up a pixel...if you have to use the downstream amplifier to achieve ML's style of "HDR", then diced pixels won't help.

Additionally, HDR implies 32-bit float data storage. Current camera ADCs are still limited to 14-bit integers. Canon already has 12 stops of DR...it seems a bit extreme to use such a convoluted approach to improve that by a mere two stops, when their problem actually lies in the ADCs themselves. Canon could take a far simpler approach...increase the parallelism of the ADCs and move them closer to the pixels, to reduce the amount of noise they introduce into the signal. That's what everyone else is doing, and it is quite effective.

Assuming Canon were able to use QPAF to do some form of HDR...unless they increase the bit depth of the ADC, it isn't really going to be HDR. You would still be limited to 14 stops of DR, albeit achieved via a rather convoluted approach that could be more costly and less effective than simply modernizing their read pipeline architecture. To get true HDR, Canon would need to use 32-bit ADCs and floats rather than ints. At the very least, to improve DR by a meaningful degree, they would need to move to 16-bit integer ADCs; however, that wouldn't necessarily be "HDR".
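
As a hedged illustration of the container point - a generic two-gain merge, not ML's actual implementation (which, as noted above, trades resolution to get both amplifications out of one frame), with all names and the gain ratio made up:

```python
import numpy as np

def merge_two_gains(low_iso, high_iso, gain_ratio=8.0, white=2**14 - 1):
    """Sketch of a generic dual-gain ("dual ISO" style) merge.

    low_iso, high_iso : two reads of the same scene, the second
                        amplified by `gain_ratio` (e.g. ISO 100 / 800)
    white             : clipping level of a 14-bit ADC

    The high-gain read has cleaner shadows but clips log2(gain_ratio)
    stops sooner; the low-gain read keeps the highlights. Bringing
    both onto one scale produces a result that no longer fits a
    14-bit integer container - hence the float output.
    """
    lo = low_iso.astype(np.float32)
    hi = high_iso.astype(np.float32) / gain_ratio   # back to the low-ISO scale
    use_hi = high_iso < 0.9 * white                 # where the high-gain read isn't clipped
    return np.where(use_hi, hi, lo)
```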
 
Upvote 0