Canon Successfully Develops the World’s First 1-megapixel SPAD Sensor

Yes, of course, but it was based on flawed assumptions about reality, so I just played along.

The biggest problem with AF is not the focusing itself; it is understanding what to focus on, and SPAD will not help much in solving that problem. I think that twenty or thirty years from now we will have fixed-focus cameras/lenses that do all the focusing in post-processing, and that we will not care much about having perfect lenses. Any imperfections in the lens system will be compensated for in post-processing. The best part of doing it this way is that it could lead to fantastic lenses, with low-light performance that today comes only at a price point very few can afford.

One-bit B/W dynamic range is the best thing that could happen (color separation can and will be done with filters, unless the next stage is to also determine the energy level, i.e. the wavelength, of each photon, which could lead to a fully linear color range). Back to signal processing: to understand this, you need at least some basic signal-processing knowledge.
Dynamic range can be specified in a number of different ways. The way we have done it so far is by taking a defined time (the shutter time) and "chipping electrons out of a charged piece of silicon". This is how both CMOS and CCD sensors work. The main problem with this is all the noise that comes along with it. After that defined time we read out (with an A/D converter that is far from perfect) what is left of the charge and say that it corresponds to the amount of photon energy that reached the sensor.
If we could instead count every photon that reaches the surface, the noise would be far lower and would never reach the "noise floor", in other words the threshold at which noise starts to look like a valid signal. So dynamic range could be effectively unlimited, depending on your total or chosen sampling time, and without the noise issue. What would you prefer: 24 bits of true signal range with zero noise, or 24 bits of signal range where you only use 17 bits and have 7 bits of noise (at least)? I prefer the first, every day of the week and twice on Sundays.
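To make the shot-noise-versus-read-noise point concrete, here is a rough simulation. All numbers are made up for illustration (50 photons per exposure on average, 5 electrons of read noise for the integrating sensor); a real SPAD also has dead-time and dark-count effects that are ignored here:

```python
import math
import random

random.seed(42)

PHOTONS_PER_FRAME = 50   # true mean signal per exposure (hypothetical)
READ_NOISE_E = 5         # read-noise sigma in electrons (assumed, CMOS-like)
FRAMES = 10_000

def poisson(lam):
    # Knuth's algorithm: sample a Poisson-distributed photon count
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Integrating sensor: photon shot noise PLUS Gaussian read noise every frame
cmos = [poisson(PHOTONS_PER_FRAME) + random.gauss(0, READ_NOISE_E)
        for _ in range(FRAMES)]
# Photon-counting sensor: shot noise only, no read noise added
spad = [poisson(PHOTONS_PER_FRAME) for _ in range(FRAMES)]

def stats(xs):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var ** 0.5

m1, s1 = stats(cmos)
m2, s2 = stats(spad)
print(f"integrating sensor: mean={m1:.1f}  sigma={s1:.2f}")
print(f"photon counting:    mean={m2:.1f}  sigma={s2:.2f}")
```

Both sensors see the same mean signal, but the photon counter's spread stays at the shot-noise limit (about sqrt(50) ≈ 7.1 electrons), while the integrating sensor's read noise adds in quadrature on top of it.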

A lot can be done with signal processing, but it always comes down to one of two things: removing the noise with clever methods so it is never part of your data, or hiding the noise, in which case some of it will always remain in your data (as artifacts).
 

COBRASoft

EOS R5
CR Pro
Mar 21, 2014
Oudenburg, Belgium
The only downside I can see is the storage requirements. On the other hand, fifteen years from now 2-petabyte memory devices will most likely exist as high-end components the size of a CF card... Fully utilized, this technology will consume huge amounts of storage, and totally new concepts of "lossy" data compression will have to be invented for compressed "RAW" formats.
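For a feel of the numbers, here is a back-of-envelope data rate, assuming a hypothetical 1-megapixel sensor read out at 1 bit per pixel and 24,000 fps, uncompressed and with no container overhead:

```python
# Back-of-envelope raw data rate for a 1 MP, 1-bit, 24,000 fps stream (assumed figures)
pixels = 1_000_000
bits_per_pixel = 1
fps = 24_000

bits_per_second = pixels * bits_per_pixel * fps
gbytes_per_second = bits_per_second / 8 / 1e9      # decimal gigabytes
tbytes_per_hour = gbytes_per_second * 3600 / 1000  # decimal terabytes

print(f"{gbytes_per_second:.1f} GB/s")   # 3.0 GB/s raw
print(f"{tbytes_per_hour:.1f} TB/hour")  # 10.8 TB/hour raw
```

Roughly 3 GB/s, or about 11 TB per hour before any compression, which is why new compression schemes would matter.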
IBM already has a memory solution based on crystals. The problem is CPU speed: CPUs are not fast enough even to reset the 'memory' at startup. Those crystals are 3D memory, ideal for scanning video for a specific face or for database 'filtering'.
Since a laser system reads through the crystal, a lot can be read in parallel instead of sequentially.

Completely new applications will have to be built around these concepts, but it is coming.
 

TAF

CR Pro
Feb 26, 2012
Can anyone explain why 24,000 fps 1-bit video allows capturing previously impossible phenomena, when commercially available high-speed cameras with much faster speeds already exist?

For example, the popular Phantom v2512 can capture 1 megapixel (1280x800) at a comparable 25,700 fps but with much higher 12-bit depth, and Phantom's fastest camera, the TMX 7510, can capture the same 1-megapixel (1280x800) resolution at 76,000 fps, also with 12-bit output, or at reduced resolutions of 1280x32 or 640x64 at 1.75 million fps.

Single-photon avalanche. That suggests it is as sensitive to light as technically possible (you can't do better than one photon, right?): very low-light capability with a very fast 'reset' time, in an array. Single-pixel devices have had this sort of capability for years, but not an image sensor (to the best of my knowledge).

The Phantoms require a lot of light to work. I use them in the lab, and we use a multi-watt (not milliwatt) laser for illumination.
 
Feb 7, 2019
UK
Single-photon avalanche. That suggests it is as sensitive to light as technically possible (you can't do better than one photon, right?): very low-light capability with a very fast 'reset' time, in an array. Single-pixel devices have had this sort of capability for years, but not an image sensor (to the best of my knowledge).

The Phantoms require a lot of light to work. I use them in the lab, and we use a multi-watt (not milliwatt) laser for illumination.
I have no idea what you do for work, but I wish my work sounded like that!
 