I can't see why 25+ stops of EV would even need ND.
Good point! But say you only wanted the brightest half of that band, and it was all values 128-255. You could manually expand it in Photoshop with the Levels dialog to 0-255 and throw away the shadow detail you didn't want, but in effect you'd be using only half the brightness values you could be. It wouldn't look quite as good, might show visible banding, AND it's an extra processing step for you.
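To put numbers on the banding point, here's a quick sketch. The linear stretch is my approximation of what a Levels expansion does to 8-bit values; the thresholds are the 128-255 example above.

```python
# Illustrative sketch: stretching the 128-255 band of an 8-bit image to
# 0-255 still yields only 128 distinct output codes, so smooth gradients
# can show visible banding.

def expand_levels(value, black=128, white=255):
    """Map [black, white] to [0, 255], clipping values below black."""
    value = max(value, black)
    return round((value - black) * 255 / (white - black))

# Every input in 128..255 maps to a distinct output, but there are only
# 128 of them, so roughly every other output code is skipped.
outputs = sorted({expand_levels(v) for v in range(128, 256)})
print(len(outputs))   # 128 distinct levels out of 256 possible
print(outputs[:3])    # [0, 2, 4] -- gaps of ~2 between adjacent codes
```

Capturing the band natively at full precision avoids those skipped codes entirely.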
The sensor, from my recollection, has two charge buckets per pixel, and can switch every pixel on the sensor simultaneously, at a high rate, between charge bucket 1 and charge bucket 2.
To get global shutter: have it start accumulating in charge bucket 1. Switch to bucket 2, and that's the "global" (synchronous) start of exposure in bucket 2. At the end of the shot, switch globally back to 1 and read out 2, which is your exposure that started and stopped simultaneously for every pixel: so plane propellers don't look like boomerangs, etc. The minus is that you lose half the dynamic-range potential of the shot, since you're not using bucket 1 at all except during the first and last couple of microseconds of the exposure.
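Here's a toy simulation of that bucket-switching sequence, just to make the timing concrete. The tick length, pixel count, and scene function are all made up for illustration.

```python
# Toy model of the two-bucket global shutter: light accumulates in bucket 2
# only between t_start and t_end, and in (discarded) bucket 1 otherwise.

def simulate_global_shutter(scene, n_pixels, t_start, t_end, t_total):
    """Integrate 'scene' light per pixel per tick; return bucket 2 only."""
    bucket1 = [0.0] * n_pixels
    bucket2 = [0.0] * n_pixels
    for t in range(t_total):                       # 1 tick = 1 microsecond, say
        active = bucket2 if t_start <= t < t_end else bucket1
        for p in range(n_pixels):
            active[p] += scene(p, t)               # light hitting pixel p at time t
    return bucket2                                 # simultaneous start AND stop

# Every pixel's exposure covers exactly the same 500-tick window, so a
# moving subject isn't skewed the way a rolling shutter would skew it.
frame = simulate_global_shutter(lambda p, t: 1.0, n_pixels=4,
                                t_start=100, t_end=600, t_total=1000)
print(frame)   # [500.0, 500.0, 500.0, 500.0]
```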
To get ND of, say, 10 stops: you want 1/1024 of the exposure (2^10 = 1024). Expose into bucket 1 for 1023 microseconds, then bucket 2 for 1 microsecond, then back to 1, and so on. Bucket 2 gets exposed every millisecond or so that way, so for a 1-second shot during daytime when you want to capture the movement of cars or something, it will get about 1000 sub-exposures, which should make the movement appear continuous. (A one-pixel object moving 1000 pixels would truly look continuous even when pixel-peeping. An object moving all the way across a 45MP frame, about 8000 pixels, would look pretty continuous if it were 8 pixels wide, and so on.) The patent doesn't spell out the maximum switching speed, but if 1 microsecond is possible, this kind of continuous shot would mostly work, though certain very fast-moving objects wouldn't look quite continuous at full resolution. Still, it'd be a great tool to have. At the end you just throw away bucket 1's exposure, which will probably be massively blown out. The minus is that you'd lose one stop of DR due to throwing away bucket 1.
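The arithmetic above, as a quick sanity check. Note the 1-microsecond switching time is my assumption, not something the patent confirms.

```python
# Duty-cycle arithmetic for the electronic-ND trick, assuming the sensor
# can switch buckets once per microsecond (an unconfirmed assumption).

def nd_duty_cycle(stops):
    """Fraction of each cycle spent in bucket 2 for the given ND strength."""
    return 1 / 2 ** stops

stops = 10
fraction = nd_duty_cycle(stops)     # 1/1024 of the light reaches bucket 2
cycle_us = 2 ** stops               # 1024 us per cycle: 1023 in bucket 1, 1 in bucket 2
shot_s = 1.0
slices = int(shot_s * 1_000_000 / cycle_us)
print(fraction)   # 0.0009765625
print(slices)     # 976 sub-exposures in a 1-second shot, i.e. "about 1000"
```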
To get 25 stops of dynamic range (DR; did I say EV earlier? Sorry!) you do basically the same as in my ND example, but instead you do an "HDR" combination of bucket 1 and bucket 2. For instance, for a real estate photographer, bucket 1 could capture an indoor room scene, blown out where the windows are. Bucket 2 would have the view out the windows, but the room would be Zone II-III (very dark, almost no detail). Combined, you can see the wood grain in dark wooden furniture inside while the details of white clouds outside are also captured. Displaying the resulting image is always a problem, but at least you've captured it. And as you suggested, it's very similar to the ND example, except you're also compressing the dynamic range by a factor of two and putting in a huge amount of dark shadow detail that the photographer may not care about.
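A minimal sketch of that merge step. The 1024x exposure ratio and the 12-bit full-well figure are my illustrative assumptions, not from the patent.

```python
# Hedged sketch of combining the two buckets into one high-DR image.
RATIO = 1024          # assume bucket 1 got 1024x the exposure of bucket 2
FULL_WELL = 4095      # assume a 12-bit readout; illustrative figure

def merge_hdr(b1, b2):
    """Per-pixel merge: trust bucket 1 unless it's clipped, then fall back
    to bucket 2 scaled up to the same linear radiance units."""
    merged = []
    for v1, v2 in zip(b1, b2):
        if v1 < FULL_WELL:            # bucket 1 still has detail
            merged.append(v1)
        else:                         # blown out: use the short exposure
            merged.append(v2 * RATIO)
    return merged

# Dark furniture (detail in bucket 1) next to a bright window (detail in
# bucket 2): both extremes survive in the merged linear image.
room = merge_hdr([120, 4095], [0, 30])
print(room)   # [120, 30720]
```

Real HDR merges blend near the clip point rather than hard-switching, but the hard switch keeps the idea visible.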
To use for a normal exposure: just expose in bucket 1, then switch to bucket 2 halfway through. Read both out and add them together. One quirk: if something's momentarily bright in (say) the first half of the exposure, maybe a glint on a moving car's chrome, it won't totally burn out that pixel the way a normal sensor would, because only one of the two buckets saturates. You could call that a plus or a minus, depending; I'm sure it'd be bad in some cases.
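A tiny sketch of that glint scenario, again assuming an illustrative per-bucket full well:

```python
# Summed "normal exposure" mode: each bucket clips at its own full well,
# so a brief glint in one half of the exposure is partially contained.
FULL_WELL = 4095   # per-bucket saturation; illustrative figure

def normal_exposure(b1, b2):
    """Sum the two half-exposures, each clipped at its own full well."""
    return [min(v1, FULL_WELL) + min(v2, FULL_WELL) for v1, v2 in zip(b1, b2)]

# Pixel 0: ordinary light in both halves. Pixel 1: a glint saturates
# bucket 1, but bucket 2 (second half of the exposure) is unaffected, so
# the summed pixel isn't fully burned out.
pixels = normal_exposure([500, 9999], [500, 600])
print(pixels)   # [1000, 4695]
```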
To use for a high frame rate: expose in bucket 1 only, so there's half as much sensor data to read out per frame. You lose one stop of dynamic range.