I actually don't understand why this aspect has not been discussed yet - not here, nor on Facebook - but maybe I am just wrong:
If you have phase detection capability on *every* pixel of your sensor, which means for *every* pixel in the final picture, it should be easy to get a 3D image from it.
As I understand phase-detection AF, you can actually get the *distance* from a single measurement.
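To make that concrete, here is a minimal sketch of the geometry I have in mind, treating the two halves of a dual pixel as a tiny stereo pair looking through opposite sides of the lens aperture. All the numbers below are invented for illustration:

```python
# Hypothetical numbers only - illustrating the standard stereo relation
# Z = f * b / d, which (as I understand it) underlies phase detection:
#   f = focal length, b = effective baseline, d = measured disparity.
focal_length_mm = 50.0   # assumed lens focal length
baseline_mm = 12.5       # assumed effective baseline (~aperture radius at f/2)
disparity_mm = 0.005     # assumed phase shift measured on the sensor

depth_mm = focal_length_mm * baseline_mm / disparity_mm
print(f"Estimated distance: {depth_mm / 1000:.1f} m")  # prints 125.0 m
```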
Buffer the readout of *ALL* dual pixels, render the image from the light data, save a "depth map" file alongside the image, and let software on a PC render the scene in 3D. Or let the camera do it - there are even 3D-capable displays that could be used in a camera.
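As a rough sketch of what that software step might look like, assuming the two sub-pixel images can be read out as separate arrays (the function name, window size, and pixel pitch are all my own inventions, and real dual-pixel disparities are sub-pixel, so a production version would need interpolation and far more robust matching):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def depth_map(left: np.ndarray, right: np.ndarray,
              focal_mm: float, baseline_mm: float,
              max_shift: int = 8, window: int = 9) -> np.ndarray:
    """Toy block-matching sketch: estimate a per-pixel disparity between
    the left and right dual-pixel sub-images, then convert it to depth."""
    best_cost = np.full(left.shape, np.inf)
    disparity = np.zeros(left.shape)
    for shift in range(1, max_shift + 1):
        shifted = np.roll(right, shift, axis=1)                # try this shift
        cost = uniform_filter(np.abs(left - shifted), window)  # windowed SAD
        better = cost < best_cost                              # keep best match
        disparity[better] = shift
        best_cost[better] = cost[better]
    pixel_pitch_mm = 0.004                       # assumed 4 µm pixel pitch
    d_mm = np.maximum(disparity, 1) * pixel_pitch_mm  # avoid division by zero
    return focal_mm * baseline_mm / d_mm         # same Z = f*b/d as above
```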
What did I overlook on the technical side?
I think THIS would be a HUGE step in photography. I am completely happy with 2D, but 3D movies and TVs have shown us where things could lead. 3D images for everyone would just be the logical next step.
Any thoughts on this?