That is true, and the whole point is that Canon developed software that gives you a similar effect with a regular lens that is not a dual fisheye.
No, gotta call this one out. The article and the one it links to don't do a good enough job of distinguishing the two approaches.
The dual fisheye gives you a stereo image: a pair you can use with VR/AR to present the entire scene to an audience in 3D. The distance between the two lens apertures is similar to the average distance between human pupils.
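To make the baseline point concrete, here's a rough back-of-the-envelope sketch (the focal length and baseline are made-up example numbers, not Canon specs) of how the disparity between the two views scales with the spacing between the lenses:

```python
# Rough illustration only: perceived stereo disparity scales with baseline B,
# so a roughly pupil-distance spacing between the lenses gives human-like parallax.
def disparity_px(depth_m, baseline_m=0.063, focal_px=1400):
    """Stereo disparity in pixels for a point at depth_m metres.
    baseline_m and focal_px are arbitrary example values."""
    return focal_px * baseline_m / depth_m

for z in (0.5, 1.0, 3.0, 10.0):
    print(f"{z:5.1f} m -> {disparity_px(z):6.1f} px")
```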
Dual pixel RAW cannot capture depth for a full scene (it's not a light field camera). It can only give you a depth map within a working range that varies with the lens used, the subject, the distance to the subject, the settings, etc. And it requires more processing: that processing lets you build a model, with a mesh and texture of the usable area, that is then viewed.
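For illustration only, here's a toy sketch of the kind of disparity-map step involved, assuming you've already extracted the two dual-pixel sub-images to files. The filenames and matcher settings are placeholders, and real dual-pixel disparities are sub-pixel, so actual pipelines do far more careful (and lens/aperture-aware) matching before any mesh gets built:

```python
import cv2
import numpy as np

# Toy sketch: treat the two dual-pixel sub-images as a tiny-baseline stereo pair
# and block-match them. Filenames and parameters are placeholders.
left = cv2.imread("dp_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("dp_right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disp = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# The usable "working range" is roughly wherever a valid disparity was found.
valid = disp > 0
print(f"valid depth estimates on {100 * valid.mean():.1f}% of pixels")

preview = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("depth_map_preview.png", preview)
```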
With the dual fisheye you can just display the 2 images or video streams through a headset (or get creative and generate a cross-eye view).
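E.g., a minimal sketch of composing a cross-eye pair from two already-extracted frames (filenames are placeholders):

```python
from PIL import Image

# Stitch the two fisheye frames side by side. For a cross-eye view you swap
# the halves: the right-lens frame goes on the left side and vice versa.
left = Image.open("fisheye_left.jpg")
right = Image.open("fisheye_right.jpg")

canvas = Image.new("RGB", (left.width + right.width, max(left.height, right.height)))
canvas.paste(right, (0, 0))           # cross-eye: right frame on the left
canvas.paste(left, (right.width, 0))  # left frame on the right
canvas.save("crosseye_pair.jpg")
```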
As for the overlap capture process necessary for photogrammetry, the dual fisheye lens is useful because you get 2 images with an exactly known projection. That makes processing more accurate in many cases.
But it's a wide-angle fisheye too... so that's a whole other can of worms to deal with in photogrammetry.
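To show what "known projection" buys you: under an ideal equidistant fisheye model (r = f·θ), every pixel maps to a well-defined ray, which photogrammetry software can use directly instead of having to estimate a distortion model. A small sketch, with arbitrary example intrinsics (not the Canon lens's actual calibration):

```python
import numpy as np

def fisheye_pixel_to_ray(u, v, cx=2000.0, cy=2000.0, focal_px=1200.0):
    """Unit 3D ray for pixel (u, v) under an equidistant fisheye projection.
    cx, cy, focal_px are example values, not real calibration data."""
    x, y = u - cx, v - cy
    r = np.hypot(x, y)
    if r == 0:
        return np.array([0.0, 0.0, 1.0])
    theta = r / focal_px                 # equidistant: radius proportional to angle
    s = np.sin(theta)
    return np.array([s * x / r, s * y / r, np.cos(theta)])

print(fisheye_pixel_to_ray(2600, 2000))  # a point to the right of the optical axis
```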
As it stands, you can already capture dual pixel RAW and use those images in photogrammetry workflows. But that's without the special knowledge that they're dual pixel, and without the correlated depth map; they're treated as very highly overlapping images instead - still useful!