Canon has once again announced the development of the world's first 1-megapixel SPAD sensor; it made a similar announcement back in June 2020.
From Canon Global
AR, VR, driverless vehicles, ultra-high frames-per-second shooting speeds, automated robots…the IT revolution has greatly expanded the limits of what’s possible. One of the key components that will change society as we know it is the “sensor,” a device that changes light into electronic signals. In June 2020, Canon announced that it had successfully developed the world’s first 1-megapixel single-photon avalanche diode (SPAD) image sensor, drawing attention from industry watchers all over the world.
SPAD sensors are a type of image sensor. The term “image sensor” probably brings to mind the CMOS sensors found in digital cameras, but SPAD sensors operate on different principles.
Both SPAD and CMOS sensors make use of the fact that light is made up of particles. However, with CMOS sensors, each pixel measures the amount of light that reaches the pixel within a given time, whereas SPAD sensors measure each individual light particle (i.e., photon) that reaches the pixel. Each photon that enters the pixel immediately gets converted into an electric charge, and the electrons that result are eventually multiplied like an avalanche until they form a large signal charge that can be extracted.
CMOS sensors read light as electric signals by measuring the volume of light that accumulates in a pixel within a certain time frame, which makes it possible for noise to enter the pixel along with the light particles (photons), hence contaminating the information received. Meanwhile, SPAD sensors digitally count individual photon particles, making it hard for electronic noise to enter. This makes it possible to obtain a clear image.
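The contrast between analog integration and digital photon counting can be sketched in code. The following Python simulation is illustrative only (the photon rates and read-noise figure are assumptions, not Canon's specifications): the CMOS-like pixel adds a read-noise floor during analog readout, while the SPAD-like pixel returns a discrete, noise-free photon count.

```python
import math
import random

def cmos_pixel(photon_rate_hz, exposure_s, read_noise_e=2.0):
    """Analog integration (CMOS-like): shot noise plus Gaussian read noise
    added during readout.  Numbers are illustrative, not Canon's specs."""
    mean_e = photon_rate_hz * exposure_s
    shot = random.gauss(0.0, math.sqrt(mean_e))   # photon shot noise (Gaussian approx.)
    read = random.gauss(0.0, read_noise_e)        # electronic read-noise floor
    return mean_e + shot + read

def spad_pixel(photon_rate_hz, exposure_s):
    """Digital photon counting (SPAD-like): each photon is one discrete count,
    so there is no analog read-noise floor (Knuth's Poisson sampler)."""
    mean_e = photon_rate_hz * exposure_s
    limit, k, p = math.exp(-mean_e), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1
```

In low light the difference matters most: the CMOS read noise can swamp a signal of a few electrons, while the SPAD count is limited only by photon statistics.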
Until recently, it was considered difficult to create a high-pixel-count SPAD sensor. On each pixel, the sensing site (the surface area available for detecting incoming light as signals) was already small. Making the pixels smaller so that more could fit on the image sensor would shrink the sensing sites even further, so that very little light would enter the sensor — another major obstacle.
Specifically, on conventional SPAD sensors, structural demands made it necessary to leave some space in between the different sensing sites on neighboring pixels. The aperture ratio, which indicates the proportion of light that enters each pixel, would therefore shrink along with the pixel size, making it difficult to detect the signal charge.
However, Canon incorporated a proprietary structural design that used technologies cultivated through production of commercial-use CMOS sensors. This design successfully kept the aperture ratio at 100% regardless of the pixel size, making it possible to capture all light that entered without any leakage, even as the number of pixels increased. The result was an unprecedented 1,000,000-pixel SPAD sensor.
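The aperture-ratio problem described above is simple geometry: with a fixed dead border around each sensing site, shrinking the pixel pitch makes the fill factor collapse. A short sketch with made-up dimensions (the border width and pitches are assumptions for illustration, not Canon's actual geometry):

```python
def aperture_ratio(pixel_pitch_um, dead_border_um):
    """Fraction of a square pixel's area that senses light, given a fixed
    dead border on every side (illustrative model, not Canon's geometry)."""
    active = max(pixel_pitch_um - 2 * dead_border_um, 0.0)
    return (active / pixel_pitch_um) ** 2

# Shrinking the pixel with a fixed 1 um dead border collapses the fill factor:
for pitch in (20, 10, 5):
    print(f"{pitch} um pitch -> aperture ratio {aperture_ratio(pitch, 1.0):.2f}")
```

This is why a structure with no dead border (a 100% aperture ratio) decouples light collection from pixel size, letting pixel counts grow without starving each pixel of light.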
The SPAD sensor that Canon developed has a time resolution as precise as 100 picoseconds, which enables extremely fast information processing. This makes it possible to capture the movement of extremely fast-moving subjects, even light itself. The sensor can also use this high-speed response to conduct high-precision distance measurements, including three-dimensional distance measurements.
The time-of-flight (ToF) method, which involves directing light at a subject and measuring the time taken for it to be reflected back onto the sensor, enables precise distance measurement. Previously, however, it was impractical because the extreme speed of light demands a light sensor with extremely fast response. Canon’s SPAD sensor can detect returning light in nanoseconds or less, achieving what previous light sensors could not and making ToF measurement a reality.
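The ToF arithmetic itself is one line: distance is the speed of light times the round-trip time, divided by two. A minimal sketch, which also shows why a 100-picosecond timing resolution matters — it corresponds to roughly 1.5 cm of depth resolution:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s):
    """Distance to a target from the measured round-trip time of a light pulse."""
    return C * round_trip_s / 2.0

# 100 ps of timing resolution ~= 1.5 cm of depth resolution:
print(f"{tof_distance_m(100e-12) * 100:.2f} cm")
```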
The SPAD sensor that Canon has developed is also equipped with a global shutter that can capture videos of fast-moving subjects while keeping their shapes accurate and distortion-free. Unlike the rolling-shutter method, which exposes a sensor’s consecutive rows of pixels one after another, the SPAD sensor controls exposure on all pixels at the same time, reducing exposure time to as short as 3.8 nanoseconds and achieving an ultra-high frame rate of up to 24,000 frames per second (FPS) in 1-bit output. This enables the sensor to capture slow-motion videos of phenomena that occur in extremely short time frames and were previously impossible to capture.
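It is worth noting how far apart the two headline numbers are: a back-of-the-envelope check (using only the figures quoted above) shows that the 3.8 ns exposure occupies only a tiny fraction of each ~42 µs frame period.

```python
FPS = 24_000                 # maximum frame rate quoted above (1-bit output)
EXPOSURE_S = 3.8e-9          # shortest global-shutter exposure quoted above

frame_period_s = 1.0 / FPS                   # ~41.7 microseconds per frame
duty_cycle = EXPOSURE_S / frame_period_s     # fraction of each frame the shutter is open

print(f"{frame_period_s * 1e6:.1f} us per frame, duty cycle {duty_cycle:.2e}")
```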
Such phenomena include instantaneous natural events or chemical reactions that previously could not be captured accurately, or the damage that occurs when objects fall or collide with something else. There are many potential applications for an image sensor that enables the detailed capture of such events, including deepening our understanding of natural phenomena and assessing product safety and durability.
By making distance measurement via the ToF method possible, Canon’s SPAD sensor enables ultra-high-speed image capture at a high resolution of 1 megapixel. This facilitates accurate three-dimensional distance measurement, even in complex scenarios where multiple subjects overlap.
In the fields of AR (augmented reality) and VR (virtual reality), which involve superimposing virtual images on top of real ones, being able to use the SPAD sensor to speedily obtain accurate three-dimensional spatial information enables more precise alignment of positions in real time. There are also high expectations for the application of SPAD sensors in solving one of the greatest challenges in designing driverless vehicles: the measurement of distances between a vehicle and the people and objects in its vicinity.
The successful development of Canon’s 1-megapixel SPAD image sensor also means that 3D cameras capable of recognizing depth information can now do so at a resolution of up to 1 megapixel. One highly anticipated application of this capability is in the high-performance “eyes” of robots and devices that society will rely on in the future.
Before this, it was considered unrealistic that a 3D camera could achieve a 1-megapixel resolution.
Canon’s research and development efforts increase the possibility that yet-unknown services and products that many people would have never dreamt of, yet hold the potential for great impact, may someday become reality.
Canon Successfully Develops Key Devices for Future Society | Canon Global
But the announcement between EPFL and Canon was a year ago.
For example, the popular Phantom v2512 can capture 1 megapixel (1280×800) at a comparable 25,700 FPS but with a much higher 12-bit depth, and Phantom’s fastest camera, the TMX 7510, can capture the same 1-megapixel (1280×800) resolution at 76,000 FPS, also with 12-bit output, or at a reduced resolution of 1280×32 or 640×64 at 1.75 million FPS.
BTW, a SPAD sensor cannot discern color on its own, so for a mirrorless camera the sensor would also need a color Bayer filter added, just like a traditional CMOS sensor.
With signal processing, a variable exposure time could be used: stop sampling as soon as any pixel reaches the wanted bit depth, freezing the counts of all the other pixels at that moment. You then get the maximum possible resolution for the bit depth your output format can handle, and over-exposure becomes impossible.
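The idea above can be sketched directly: accumulate 1-bit SPAD frames until any pixel hits full scale, then stop. The `sample_frame` callback and the frame format (a flat list of 0/1 values) are hypothetical, chosen only to illustrate the stopping rule.

```python
def adaptive_exposure(sample_frame, bit_depth=8, max_frames=100_000):
    """Accumulate binary SPAD frames until some pixel reaches full scale,
    then stop: no pixel can ever over-expose.  `sample_frame` is a
    hypothetical callback returning one 1-bit frame as a list of 0/1."""
    full_scale = (1 << bit_depth) - 1
    counts = None
    for _ in range(max_frames):
        frame = sample_frame()
        if counts is None:
            counts = [0] * len(frame)
        counts = [c + b for c, b in zip(counts, frame)]
        if max(counts) >= full_scale:
            break               # brightest pixel saturated -> freeze everything
    return counts
```

The brightest pixel in the scene sets the effective exposure time, so every other pixel lands somewhere below full scale by construction.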
Or store the timestamp of every arriving photon; then, in post-processing, dynamic pictures could be produced with a selectable trade-off between speed and clarity, or even both, from one "video" stream. But none of the existing data storage formats could be used, as they are based on completely different physics.
With a synchronized light source, the distance to the reflection could be determined for each pixel, and building a 3D model of the surroundings then becomes "easy" with some signal processing. Or just take the easy route and set a hard limit so that specific pixels should NOT register any photons within a specific time frame.
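Both halves of this comment — per-pixel distance from a synchronized pulse, and time-gating that rejects photons outside a window — fit in a few lines. The input format (one first-photon arrival time per pixel, `None` where no photon returned) is a hypothetical simplification for the sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def depth_map(arrival_times_s, gate_min_s=0.0, gate_max_s=float("inf")):
    """Per-pixel depth from first-photon arrival times measured relative to a
    synchronized light pulse; photons outside the gate window are rejected.
    Input format (one time per pixel, None = no return) is hypothetical."""
    depths = []
    for t in arrival_times_s:
        if t is None or not (gate_min_s <= t <= gate_max_s):
            depths.append(None)          # no photon, or gated out
        else:
            depths.append(C * t / 2.0)   # round-trip time -> distance
    return depths
```

Tightening `gate_min_s`/`gate_max_s` is exactly the "hard limit" the comment describes: photons arriving outside the expected window (scattered light, distant background) simply never contribute.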
The first CMOS sensor was not very impressive either. This is a gigantic leap forward, and I think it is a bit unfair to compare a 20-year-old, mature technology with something that is brand new (or a year old…).
I do expect this technology to make a huge impact in machine vision to start with, especially for autonomous cars.
But it will move into cameras as well within 10 to 15 years, and it will change everything. More work will shift from the moment we take the "picture" into post-processing, as the very distinction between pictures and video gets thrown out: a huge change of concept. So as a photographer you can have video, high-speed photos, or high-resolution photos, and at the same time a dynamic range far beyond anything we have seen so far. And it can all be done in post-processing.
The only downside I can see is the storage requirement. But on the other hand, 15 years from now, 2-petabyte memory devices will most likely exist as high-end components the size of a CF card... Fully utilized, this technology will consume huge amounts of storage, and totally new concepts of "lossy" data compression will have to be invented as compressed "RAW" formats.
They cannot exist before a new type of energy source has been invented; that is the key question, not vision or computers.
LOL, a one-bit B&W dynamic range is probably best for the robot style of photography ;)