Canon Successfully Develops the World’s First 1-megapixel SPAD Sensor

Canon Rumors Guy

    Canon has once again announced the development of the world’s first 1-megapixel SPAD sensor. Back in June of 2020, Canon made a similar announcement.
    From Canon Global
    AR, VR, driverless vehicles, ultra-high frames-per-second shooting speeds, automated robots…the IT revolution has greatly expanded the limits of what’s possible. One of the key components that will change society as we know it is the “sensor,” a device that changes light into electronic signals. In June 2020, Canon announced that it had successfully developed the world’s first 1-megapixel single-photon avalanche diode (SPAD) image sensor, drawing attention from industry watchers all over the world.

    SPAD sensors are a type of image sensor. The term “image sensor” probably brings to mind the CMOS sensors found in digital cameras, but SPAD sensors operate...


    Mr Majestyk

    Hmmm, an R1(megapixel) camera doesn't really get my juices flowing - just sayin'
    Well indeed, but maybe, just maybe, they could include a ToF SPAD sensor on the R1. I expect the R1 to still only be around 20MP due to it being global shutter. This and a price tag probably over $7K makes the R3 the real deal for me, unless they gimp that and also make it only 20MP or so.
     
    I thought I remembered this, and yes... it's a year old. I'm comparing the articles to see if there is new information.
    Canon released a full article on it via their global site just recently, but the announcement between EPFL and Canon was a year ago.
     
    achieving an ultra-high frame rate of up to 24,000 frames-per-second (FPS) in 1-bit output. This enables the sensor to capture slow motion videos of phenomena that occur in extremely short time frames and were previously impossible to capture.
    Can anyone explain why 24,000 FPS 1-bit video allows capturing of previously impossible phenomena, when commercially available high-speed cameras with much faster speeds already exist?

    For example, the popular Phantom v2512 can capture 1 megapixel (1280×800) at a comparable 25,700 FPS but with much higher 12-bit depth, and Phantom’s fastest camera, the TMX 7510, can capture the same 1-megapixel (1280×800) resolution at 76,000 FPS, also with 12-bit output, or at reduced resolutions of 1280×32 or 640×64 at 1.75 million FPS.
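    The bandwidth gap between these cameras can be sanity-checked with quick arithmetic. A rough sketch, using the resolutions and frame rates quoted above and assuming uncompressed readout (the helper name is my own):

```python
# Rough uncompressed readout rates for the sensors discussed above.

def data_rate_gb_per_s(pixels: int, fps: int, bits_per_pixel: int) -> float:
    """Raw readout rate in gigabytes per second (no compression)."""
    return pixels * fps * bits_per_pixel / 8 / 1e9

spad = data_rate_gb_per_s(1_000_000, 24_000, 1)     # Canon SPAD, 1-bit
v2512 = data_rate_gb_per_s(1280 * 800, 25_700, 12)  # Phantom v2512, 12-bit
tmx = data_rate_gb_per_s(1280 * 800, 76_000, 12)    # Phantom TMX 7510, 12-bit

print(f"Canon SPAD:    {spad:6.1f} GB/s")
print(f"Phantom v2512: {v2512:6.1f} GB/s")
print(f"Phantom TMX:   {tmx:6.1f} GB/s")
```

    The 1-bit SPAD stream works out to about 3 GB/s, roughly an order of magnitude lighter than the Phantoms’ 12-bit output, which helps explain how such a frame rate is feasible on a new sensor type.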
     
    A 1 MP SPAD sensor will have much higher resolution than a 1 MP CMOS sensor, I suspect, so perhaps equivalent to 20-30 MP?
    That's not how resolution or pixels work. It may have higher dynamic range or better low-light performance, though.

    BTW, a SPAD sensor cannot discern color on its own, so for a mirrorless camera the sensor will also need a color Bayer filter added, just like a traditional CMOS sensor.
     
    Can anyone explain why 24,000 FPS 1-bit video allows capturing of previously impossible phenomena, when commercially available high-speed cameras with much faster speeds already exist?

    For example, the popular Phantom v2512 can capture 1 megapixel (1280×800) at a comparable 25,700 FPS but with much higher 12-bit depth, and Phantom’s fastest camera, the TMX 7510, can capture the same 1-megapixel (1280×800) resolution at 76,000 FPS, also with 12-bit output, or at reduced resolutions of 1280×32 or 640×64 at 1.75 million FPS.
    I can see numerous reasons.
    With signal processing, a variable exposure time could be used: stop sampling as soon as any pixel reaches the wanted bit depth, and freeze the counts for all the other pixels. You then get the maximum possible dynamic range for the bit depth your output format can handle, and over-exposure becomes impossible.
    Or store the arrival time of each and every photon; through post-processing you could then produce pictures with a selectable trade-off between speed and clarity, or even both, from one "video" stream. But none of the existing data storage formats could be used, as they are based on completely different physics.
    With a synchronized light source, the distance to the reflection can be determined for each pixel, so building a 3D model of the surroundings is "easy" with some signal processing. Or take the easy route and put a hard limit on specific pixels that should NOT register any photons within a specific time frame.
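    The time-of-flight idea in that last point is just d = c·t/2 per pixel. A minimal sketch, assuming each pixel reports a single round-trip timestamp (a simplification of any real SPAD readout):

```python
# Time-of-flight distance from a photon's round-trip time:
# the pulse travels out and back, so distance = c * t / 2.

C = 299_792_458.0  # speed of light in m/s

def distance_m(round_trip_s: float) -> float:
    """Distance to the reflecting surface for one pixel."""
    return C * round_trip_s / 2

# A 10 ns round trip corresponds to roughly 1.5 m.
print(distance_m(10e-9))
```

    Repeat that per pixel and you have the depth map that makes the 3D model "easy".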

    The first CMOS sensor was not very impressive either. This is a gigantic leap forward, and I think it is a bit unfair to compare a 20-year-mature technology with something that is brand new (or a year old..).

    I do expect this technology to make a huge impact in machine vision to start with, especially for autonomous cars.
    But it will move into cameras too within 10 to 15 years, and it will change everything. More will move from the moment we take the "picture" into post-processing, since the very concept of pictures versus video gets thrown out: a huge change of concept. So you as a photographer can have video, high-speed photos, or high-resolution photos, and at the same time a dynamic range way above everything we have seen so far. And it can all be done in post-processing.
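    That speed-versus-quality trade really can be deferred to post-processing: summing N consecutive 1-bit frames yields one frame with up to N photon-count levels, at 1/N the frame rate. A toy sketch of the idea (my own framing, not Canon's actual pipeline):

```python
# Trading frame rate for bit depth: binning 1-bit SPAD frames.

SPAD_FPS = 24_000  # 1-bit frames per second, per the article

def effective_rate(target_bits: int) -> float:
    """Frame rate left after binning enough 1-bit frames
    to represent counts 0 .. 2**target_bits - 1."""
    frames_needed = 2 ** target_bits - 1
    return SPAD_FPS / frames_needed

for bits in (1, 8, 12):
    print(f"{bits:2d}-bit output -> {effective_rate(bits):8.1f} fps")
```

    So the same capture could be read out as 24,000 fps binary video or as roughly 94 fps 8-bit video, chosen after the fact.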

    The only downside I can see is the storage requirements. But on the other hand, 15 years from now 2-petabyte memory devices will most likely exist as high-end components the size of a CF card... Fully utilized, this technology will consume huge amounts of storage, and totally new concepts of "lossy" data compression will have to be invented as compressed "RAW" formats.
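    The storage concern is easy to quantify against that hypothetical 2-petabyte card, assuming the raw 1-bit stream from the article's figures:

```python
# How long would a 2 PB device last recording the raw 1-bit stream?

PIXELS = 1_000_000
FPS = 24_000
BYTES_PER_S = PIXELS * FPS * 1 / 8  # 1-bit frames, uncompressed

seconds = 2e15 / BYTES_PER_S  # hypothetical 2 PB card
print(f"{BYTES_PER_S / 1e9:.1f} GB/s, card full in {seconds / 3600:.1f} hours")
```

    About 3 GB/s, so even a 2 PB card fills in under eight days of continuous binary capture; per-photon timestamping would be far hungrier still, hence the need for new compression concepts.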
     