Dual pixel autofocus in fact works much like an optical split-image rangefinder. In a split-image rangefinder that lets you focus on vertical structures, a horizontal line splits the focus area into a top and a bottom image. Your image is in focus when the two halves match up, i.e. the vertical structure you are focusing on has no horizontal offset between top and bottom but runs in one continuous line. If the image is out of focus, the top image is shifted either to the left or to the right relative to the bottom image, corresponding to focus too close or too far away (the actual direction depends on the particular implementation in the camera). This is how, on a manual focus camera, you can tell which way the focus needs to change: by the direction of misalignment between the two halves.

Such rangefinders are realized by including a prism in the focusing screen so that the top half shows only rays from the left side of the lens and the bottom half only rays from the right side (or vice versa, depending on the implementation). Dual pixel autofocus works exactly the same way: one subset of pixels receives light from the left side of the lens and the other subset from the right side, by placing appropriate microlenses on top of each pixel. So if you take a horizontal row of pixels (say 50 pixels) in the area where you want to achieve focus, you compare the image you get from the 50 left-sensitive pixels (in our example this 'image' is 1 pixel high and 50 pixels wide) to the image from the 50 right-sensitive pixels. If the two images are shifted to the left or to the right with respect to each other, your image is out of focus, and the direction of the shift determines the direction of the necessary focus change.
The left- and right-sensitive pixels serve essentially the same function as the top and bottom halves of the optical split-image focusing screen; the focusing screen merely rearranges the light from left and right into top and bottom so that the focusing information is human-interpretable while a complete image remains visible on the screen. For DPAF this is of course not necessary: the outputs from the left- and right-sensitive lines of pixels are compared directly, and the relative shift between the two images is determined for autofocus. Note (an apparently common misconception) that a single pixel in DPAF is NOT enough to perform autofocus. You always need a number of (in current Canon implementations horizontally) adjacent pixels in order to compute the shift between the left- and right-sensitive images.
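To make the shift comparison concrete, here is a toy sketch in Python. This is not Canon's actual algorithm, and the pixel values, row length, and sign convention are all illustrative assumptions: it simply cross-correlates the two 1-pixel-high images from a simulated row of 50 dual pixels and returns the offset that aligns them best.

```python
import numpy as np

def estimate_shift(left, right, max_shift=10):
    """Return the integer offset that best aligns `right` to `left`.

    The sign of the result indicates the direction of defocus; which
    sign means 'too close' vs. 'too far' is implementation-specific.
    """
    best_shift, best_score = 0, -np.inf
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(n, n + s)
        a = left[lo:hi]            # overlapping part of the left image
        b = right[lo - s:hi - s]   # right image shifted by s
        # zero-mean correlation over the overlap; peaks at alignment
        score = np.dot(a - a.mean(), b - b.mean())
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

# Simulated scene: a vertical edge, seen 3 pixels apart by the two
# halves of the lens because the image is out of focus.
scene = np.zeros(60)
scene[25:] = 1.0
left = scene[3:53]    # 50 left-sensitive pixel values
right = scene[0:50]   # 50 right-sensitive pixel values
print(estimate_shift(left, right))  # prints -3
```

With a single pixel there would be only two intensity values to compare, which carry no shift information at all; only a row of adjacent pixels lets you detect the displacement of a structure between the two images.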
In fact (at least in the 5D4) it appears to be the other way round: the manual says that the non-cross-type autofocus points are sensitive to horizontal structures. But this could also be due to technical constraints on where a particular type of autofocus sensor can be placed given everything else that has to fit in a DSLR (mirror box, viewfinder prism, etc.), and not so much because it is more advantageous from a real-world focusing perspective, where I believe accurate focusing on vertical structures might be more important.