Can anyone explain why 24,000 FPS 1-bit video allows capturing previously impossible phenomena, when commercially available high-speed cameras with much higher frame rates already exist?
For example, the popular Phantom v2512 can capture 1 megapixel (1280×800) at a comparable 25,700 FPS but with a much higher 12-bit depth, and Phantom's fastest camera, the TMX 7510, can capture the same 1-megapixel (1280×800) resolution at 76,000 FPS, also with 12-bit output, or a reduced resolution of 1280×32 or 640×64 at 1.75 million FPS.
I can see numerous reasons.
With signal processing, a variable integration time could be used: stop sampling as soon as any pixel reaches the wanted bit depth, and at that moment freeze the accumulated count for all the other pixels. Then you get the maximum possible dynamic range for whatever bit depth your output format can handle, and over-exposure becomes impossible.
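That adaptive-exposure idea can be sketched in a few lines. This is a toy simulation, not any camera's actual pipeline: the sensor is modeled as per-pixel photon probabilities (`photon_rates` and all other names are made up for illustration), 1-bit frames are accumulated until the brightest pixel hits full scale, and then sampling stops.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_exposure(photon_rates, bit_depth=8, max_frames=100_000):
    """Accumulate simulated 1-bit frames until the brightest pixel
    saturates the target bit depth, then stop. Returns the per-pixel
    photon counts and the number of 1-bit frames actually used."""
    full_scale = 2**bit_depth - 1
    counts = np.zeros_like(photon_rates, dtype=np.int64)
    for frame in range(1, max_frames + 1):
        # One 1-bit frame: a pixel reads 1 if a photon arrived this slot.
        hits = rng.random(photon_rates.shape) < photon_rates
        counts += hits
        if counts.max() >= full_scale:  # brightest pixel at full scale
            break                       # -> over-exposure is impossible
    return counts, frame

# Hypothetical scene: one bright pixel, the rest dim.
rates = np.full((4, 4), 0.01)
rates[0, 0] = 0.9
counts, frames_used = adaptive_exposure(rates, bit_depth=8)
print(frames_used, counts.max())
```

Note the trade-off this makes explicit: exposure length is decided by the data itself, per scene, rather than fixed up front.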
Or store the arrival time of each and every photon; then, in post-processing, dynamic pictures could be produced with a choice between speed and clarity, or even both, from one "video" stream. But none of the existing data storage formats could be used, as they are built around completely different physics.
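The speed-versus-clarity choice falls out naturally if you keep the raw photon events: the same stream can be re-binned into many short, noisy frames or a few long, clean ones. A minimal sketch, with an entirely hypothetical event stream (all names invented here):

```python
import numpy as np

def bin_photons(timestamps, pixels, shape, frame_period):
    """Re-bin a stream of per-photon (timestamp, pixel) events into
    frames of a chosen period. A shorter period gives higher temporal
    resolution; a longer one gives cleaner frames -- both derived in
    post-processing from the same recorded stream."""
    n_frames = int(timestamps.max() // frame_period) + 1
    frames = np.zeros((n_frames, *shape), dtype=np.int32)
    frame_idx = (timestamps // frame_period).astype(int)
    # Scatter-add each photon into its frame and pixel bin.
    np.add.at(frames, (frame_idx, pixels[:, 0], pixels[:, 1]), 1)
    return frames

# Hypothetical stream: 10,000 photons over ~1 second on an 8x8 sensor.
rng = np.random.default_rng(1)
t = np.sort(rng.random(10_000))            # arrival times in seconds
px = rng.integers(0, 8, size=(10_000, 2))  # (row, col) per photon

fast = bin_photons(t, px, (8, 8), frame_period=1e-3)   # ~1000 FPS, noisy
clean = bin_photons(t, px, (8, 8), frame_period=1e-1)  # ~10 FPS, smooth
print(fast.shape, clean.shape)
```

Every photon lands in exactly one bin, so no light is lost either way; the frame rate becomes a post-processing parameter instead of a capture-time commitment.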
With a synchronized light source, the distance to the reflecting surface could be determined for each pixel, and with some signal processing, building a 3D model of the surroundings becomes "easy". Or just take the easy route and put a hard limit on specific pixels that should NOT register any photons within a specific time window.
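The distance calculation itself is just time-of-flight arithmetic: the photon travels to the surface and back, so the one-way distance is half the round trip times the speed of light. A minimal sketch:

```python
# Time-of-flight depth from a synchronized light pulse.
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_m(round_trip_time_s):
    """One-way distance to the reflecting surface: the photon goes
    out and back, so halve the measured round-trip time."""
    return C * round_trip_time_s / 2.0

# A photon arriving 10 ns after the synchronized pulse was emitted
# corresponds to a surface roughly 1.5 m away.
print(distance_m(10e-9))
```

This is why picosecond-scale timing matters: at the speed of light, 1 ns of round-trip time is only about 15 cm of depth resolution.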
The first CMOS sensor was not very impressive either. This is a gigantic leap forward, and I think it is a bit unfair to compare a 20-year mature technology with something that is brand new (or a year old...).
I do expect this technology to make a huge impact in machine vision to start with, especially for autonomous cars.
But it will move into cameras too within 10 to 15 years, and it will change everything. More of the work will move from the moment we take the "picture" into post-processing, since the very concept of pictures/video is thrown out, which is a huge change of concept. So as a photographer you can have video, high-speed photos, or high-resolution photos, and at the same time a dynamic range way above everything we have seen so far. And it can all be done in post-processing.
The only downside I can see is the storage requirements. But on the other hand, 15 years from now 2-petabyte memory devices will most likely exist as high-end components the size of a CF card... Fully utilized, this technology will consume huge amounts of storage, and totally new concepts for "lossy" data compression will have to be invented as compressed "RAW" formats.
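To put a rough number on the storage problem, here is a back-of-the-envelope calculation for the 1-bit sensor figures from the question (frame-count mode only; per-photon timestamping would need far more):

```python
# Raw data rate for 1280x800 pixels, 1 bit per pixel, 24,000 FPS.
pixels = 1280 * 800
bits_per_second = pixels * 1 * 24_000
gigabytes_per_second = bits_per_second / 8 / 1e9
print(gigabytes_per_second)  # ~3.07 GB/s before any compression
```

So even the plain 1-bit stream is on the order of 3 GB/s sustained, which makes it clear why new compression concepts would be needed long before per-photon timestamps enter the picture.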