A camera with 3D capability would be so processor-intensive that it would be limited to very few megapixels; it might take a supercomputer to handle 20 MP.
But this isn't a 3D camera.
The z-buffer would only need to store an 8-bit, or better a 16-bit, number for each pixel.
That's what the camera would have to store in addition to the normal RAW or RGB pixel information.
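As a rough back-of-the-envelope sketch of that overhead (the 5472 x 3648 resolution and the 16-bit packing are just assumptions for illustration):

```python
import numpy as np

# Assumed 20 MP sensor resolution, purely for illustration.
width, height = 5472, 3648

# One 16-bit depth value per pixel, stored next to the RAW data.
depth_map = np.zeros((height, width), dtype=np.uint16)

# Compare against RAW data if each sample is packed into 16 bits.
raw_bytes = width * height * 2
print(f"RAW:   {raw_bytes / 1e6:.1f} MB")
print(f"Depth: {depth_map.nbytes / 1e6:.1f} MB "
      f"(+{100 * depth_map.nbytes / raw_bytes:.0f}%)")
```

So the depth map would add roughly the size of one extra channel, not multiply the data the way a second full image would.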
I don't want to get sharp focus after the fact, like a light-field camera does.
It may slow down the camera, but in-camera HDR does that too.
You could make it an extra shooting mode.
This would be used to add DOF, i.e. to blur the image in postprocessing.
Imagine photographing with a 17-40mm f/4 and later giving it the DOF characteristics of a 17-40mm f/1.2.
Or, an even better example: shoot with an 18-55mm and later give it the DOF characteristics of an 85mm at f/1.2.
I do that all the time with 3D-rendered images and videos.
It's no problem to create an additional depth map with 3D programs,
and later I can choose whatever DOF I like for a video scene or image.
The postprocessing software is there already;
it would only need an additional depth map.
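As a rough illustration of what that post step could look like (just a sketch with numpy/scipy, not how the dedicated software actually does it; fake_dof, the layer count, and the gaussian blur are stand-ins for a proper lens-blur model):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_dof(image, depth, focus_dist, max_sigma=8.0, n_layers=6):
    """Crude synthetic depth of field: pixels are blurred more the
    further their depth is from the chosen focal distance.
    image: HxWx3 float array, depth: HxW array (same units as focus_dist)."""
    # Normalised "circle of confusion": 0 at the focal plane, 1 at the
    # pixel furthest from it.
    coc = np.abs(depth - focus_dist)
    coc = coc / (coc.max() + 1e-6)

    # Pre-blur the whole frame at a few strengths (cheap approximation
    # of per-pixel lens blur).
    sigmas = np.linspace(0.0, max_sigma, n_layers)
    layers = [gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas]

    # Pick the nearest blur layer for each pixel.
    idx = np.round(coc * (n_layers - 1)).astype(int)
    out = np.empty_like(image)
    for i in range(n_layers):
        mask = idx == i
        out[mask] = layers[i][mask]
    return out
```

With a real depth map from the camera you'd pass it in as depth and pick focus_dist to taste, exactly like choosing the DOF after the fact on a 3D render.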
So could this technology tell you that pixel X is 3 m away and pixel Y 5 m?
Or that pixel X is x times further away than pixel Y?
Or does it work completely differently, so that such information can't be extracted from on-sensor phase detection?
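For what it's worth, under a simplified two-view model the phase shift would behave like stereo disparity, so depth would be inversely proportional to it; whether on-sensor phase detection actually yields a usable per-pixel disparity is exactly the open question. A sketch (focal_length_px and baseline_m are made-up values):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic stereo relation: depth = f * B / d.
    For on-sensor phase detection the 'baseline' would be the tiny
    separation between the left- and right-looking photodiodes, so these
    absolute numbers are assumptions, not datasheet values."""
    return focal_length_px * baseline_m / disparity_px

# If pixel X shows twice the disparity of pixel Y, it is half as far away,
# whatever the exact baseline is:
print(depth_from_disparity(2.0, 5000, 0.01))  # pixel X -> 25 m
print(depth_from_disparity(1.0, 5000, 0.01))  # pixel Y -> 50 m
```

So even if the absolute distances were unreliable, the relative "x times further away" information might be enough for the blur trick above.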