Canon Successfully Develops the World’s First 1-megapixel SPAD Sensor

slclick

PINHOLE
Dec 17, 2013
4,567
2,909
I do hope that you realize that the Terminator movies were not documentaries.

They cannot exist until a new type of energy source has been invented; that is the key question, not vision or computers.
Wait, what?
 

SnowMiku

EOS M6 Mark II
Oct 4, 2020
95
64
In 10-20 years' time a higher-megapixel version of this sensor could be in cameras and smartphones: just point it at the Milky Way in the dark, handheld, and get no noise. But I don't understand how sensor technology works, so I could be completely wrong.
 
Jun 1, 2021
3
1
Yes, of course, but it was based on bad assumptions about reality, so I played along.

The biggest problem with AF is not the focusing itself; it is understanding what to focus on, and SPAD will not help much in solving that problem. I think that twenty or thirty years from now we will have fixed-focus cameras/lenses that do all the focusing in post-processing, and that we will not care much about having perfect lenses. Any imperfections in the lens system will be compensated for in post-processing. The best part of doing it this way is that it could lead to fantastic lenses, with low-light performance that today comes only at a price point very few can afford.

One-bit B/W with wide dynamic range is the best thing that could happen (color separation can and will be done with filters, unless the next stage is to also determine the energy level and wavelength of each photon, as that could lead to a fully linear color range). Back to signal processing: to understand this you must understand at least some basic signal processing.
Dynamic range can be defined in a number of different ways. The way we have had it so far is to take a defined time (the shutter time) and "chip electrons out of a charged piece of silicon". This is how both CMOS and CCD work. The main problem with this is all the noise that comes along with it. After that defined time we read out (with an A/D converter that is far from perfect) what is left of the charge and call that the integrated photon energy that has reached the sensor.
If we could instead count every photon that reaches the surface, the noise would be far lower and would never reach the "noise floor", in other words the threshold at which noise starts to look like a valid signal. So dynamic range could be effectively unlimited, depending on your total or chosen sampling time, and without the noise issue. What would you prefer: 24 bits of true signal range with zero noise, or a 24-bit signal range where you only use 17 bits and have at least 7 bits of noise? I prefer the first, every day of the week and twice on Sundays.

A lot can be done with signal processing, but it is almost always one of two things: removing the noise with clever methods so it is never part of your data, or hiding the noise, in which case some of it will always remain in your data (artifacts).
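The shot-noise-versus-read-noise argument above can be sketched numerically. A minimal simulation (illustrative only; the photon count and read-noise figure are assumptions, not measured sensor specs) comparing an ideal photon-counting pixel with a conventional pixel that adds Gaussian read noise on top of the same Poisson shot noise:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed scene: a mean of 100 photons per pixel per exposure.
true_signal = 100.0
n_pixels = 100_000
read_noise_e = 5.0      # assumed conventional-sensor read noise (e- RMS)

# Photon counting: only unavoidable Poisson shot noise remains.
photons = rng.poisson(true_signal, n_pixels)
# Conventional readout: same shot noise plus Gaussian read noise.
conventional = photons + rng.normal(0.0, read_noise_e, n_pixels)

def snr(x):
    """Mean signal divided by its standard deviation."""
    return x.mean() / x.std()

print(f"photon-counting SNR: {snr(photons):.1f}")      # roughly sqrt(100) = 10
print(f"conventional SNR:    {snr(conventional):.1f}") # lower: read noise added
```

With ~100 photons per exposure the photon counter's SNR is set purely by shot noise (about sqrt(100) = 10), while the added read noise pulls the conventional pixel noticeably below that, which is the "17 usable bits out of 24" situation described above.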
 

landon

EOS 90D
Jul 26, 2020
167
225
OT: Gordon Laing from Camera Labs has got a new video out on the R3. Oh, and a bunch of other videos as well.
 

COBRASoft

EOS R5
CR Pro
Mar 21, 2014
64
27
43
Oudenburg, Belgium
The only downside I can see is the storage requirement. But on the other hand, 15 years from now 2-petabyte memory devices will most likely exist as high-end components the size of a CF card... Fully utilized, this technology will consume huge amounts of storage, and totally new concepts for "lossy" data compression will have to be invented as compressed "RAW" formats.
IBM already has a memory solution based on crystals. The problem is CPU speed: CPUs are not fast enough even to reset the 'memory' at startup. Those crystals are 3D memory, ideal for scanning video for a specific face or for database 'filtering'.
Since a laser system is used to read through the crystal, a lot can be read in parallel instead of sequentially.

Completely new applications will have to be built around these concepts, but it is coming.
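The storage concern is easy to put numbers on. A quick back-of-envelope for the sensor discussed in this thread, assuming uncompressed 1-bit readout of all 1 million pixels at the 24,000 fps mentioned later in the thread:

```python
# Uncompressed data rate: 1 megapixel x 1 bit/pixel x 24,000 frames/s.
pixels = 1_000_000
bits_per_pixel = 1
fps = 24_000

bits_per_second = pixels * bits_per_pixel * fps        # 2.4e10 bits/s
gigabytes_per_second = bits_per_second / 8 / 1e9
terabytes_per_hour = gigabytes_per_second * 3600 / 1000

print(f"{gigabytes_per_second:.1f} GB/s")   # 3.0 GB/s
print(f"{terabytes_per_hour:.1f} TB/hour")  # 10.8 TB/hour
```

At 3 GB/s sustained, even today's fastest CFexpress cards fill up in minutes, which is exactly why new compression concepts would be needed.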
 

TAF

EOS RP
CR Pro
Feb 26, 2012
463
147
Can anyone explain why 24,000 FPS 1-bit video allows capturing previously impossible phenomena, when commercially available high-speed cameras with much faster speeds already exist?

For example, the popular Phantom v2512 can capture 1 megapixel (1280x800) at a comparable 25,700 FPS but with much higher 12-bit depth, and Phantom's fastest camera, the TMX 7510, can capture the same 1-megapixel (1280x800) resolution at 76,000 FPS, also with 12-bit output, or at a reduced resolution of 1280x32 or 640x64 at 1.75 million FPS.

Single-photon avalanche. That suggests it is as sensitive to light as technically possible (you can't do better than one photon, right?): very low-light capability, with a very fast 'reset' time, in an array. Single-pixel devices have had this sort of capability for years, but not an image sensor (to the best of my knowledge).

The Phantoms require a lot of light to work. I use them in the lab, and we use a multi-watt (not milliwatt) laser for illumination.
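The light-budget point can be made concrete. A rough sketch (the photon flux and read-noise numbers below are illustrative assumptions, not measured specs) of how little light each frame collects at Phantom-class frame rates, and why single-photon sensitivity matters:

```python
import math

# At high frame rates the per-frame exposure window shrinks to microseconds,
# so only a handful of photons land on each pixel in a dim scene.
fps = 25_700                      # e.g. Phantom v2512 at full resolution
exposure_s = 1.0 / fps            # at best ~39 microseconds per frame

dim_flux = 1e6                    # assumed photons/s landing on one pixel (dim scene)
photons = dim_flux * exposure_s   # ~39 photons collected per frame

read_noise_e = 20.0               # assumed read noise of a fast conventional sensor
spad_snr = photons / math.sqrt(photons)                    # shot noise only
conv_snr = photons / math.sqrt(photons + read_noise_e**2)  # shot + read noise

print(f"{photons:.0f} photons/frame: "
      f"SPAD SNR {spad_snr:.1f}, conventional SNR {conv_snr:.1f}")
```

With only tens of photons per frame, the read noise of a conventional high-speed sensor swamps the signal, while a photon counter still gets a usable SNR; hence the multi-watt lasers in the lab.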
 

Jasonmc89

EOS 80D
Feb 7, 2019
309
345
UK
Single-photon avalanche. That suggests it is as sensitive to light as technically possible (you can't do better than one photon, right?): very low-light capability, with a very fast 'reset' time, in an array. Single-pixel devices have had this sort of capability for years, but not an image sensor (to the best of my knowledge).

The Phantoms require a lot of light to work. I use them in the lab, and we use a multi-watt (not milliwatt) laser for illumination.
I have no idea what you do for work, but I wish my work sounded like that!