Canon publishes a paper discussing a new 3.4 μm pixel pitch global shutter CMOS image sensor with dual in-pixel charge domain memory

canonnews

EOS RP
Dec 27, 2017
235
132
Canada
www.canonnews.com
Unless you want HDR with motion blur and without any additional logic trying to stretch the highlight exposure to match the blur of the shadows exposure. WHICH IS THE ENTIRE POINT OF THE PAPER.


Sure. And thanks to the interleaved sandwich partial-exposure method outlined here, you'll have the same blur on your highlight exposure that you do on your shadows exposure.


Read the journal article. It explains EXACTLY how to do it on the sensor, WITHOUT doing motion detection or anything else.
Quote from after Fig. 12: "the moving object is free from both jerkiness degradation and double image degradation without complicated signal processing"


You've completely missed the entire point of the global shutter mechanism: they're controlling the shutter speed down to 23 microseconds: 1/43,478th of a second. How much more finite do you think you need? They show an example of a 1/65th second exposure, and the entire walkthrough of its operation is applicable to a still image as well as a video.
it's an entirely different shutter process for stills than it is for video. but regardless, see what you want out of it. this technology will not make it into a stills camera for Canon.
what you are not considering is the TIME for each exposure - which is from 30s to 1/8000th of a second for stills. if you have a 1s exposure, you're probably effectively taking a .5s exposure and then a 2s exposure to make up the HDR image for a total of 2.5s.
while doing this in video, your shutter speed can be finitely more controlled, because, your shutter speed, well is more controlled and far more predictable.
Then you have the complexity of DPAF which aggravates this by a factor of 2.
but really see into it what you want, but there's a reason canon already has global shutter implemented with the single cell memory version of this technology and it's not available on any ILC.
 

Mt Spokane Photography

I post too Much on Here!!
Mar 25, 2011
15,307
569
A company does not expose its best and latest technology to the public. The information revealed will have been thoroughly vetted to make sure that nothing confidential is revealed. So I would place zero confidence that this implementation will happen; they may be using some of the ideas revealed in their latest projects, but this is just a snapshot of the past, and likely intended to throw off the competition.

We had global shutters 20 years ago using CCD technology, and still have them. The goal is to achieve it with large CMOS sensors. Tiny CMOS sensors are available with a GS, but there are lots of issues with a large sensor. The dual memory is a trick that can overcome the need for a supercomputer to read out 50 million photosites instantly.
 

flip314

EOS 80D
Sep 26, 2018
146
166
A company does not expose its best and latest technology to the public. The information revealed will have been thoroughly vetted to make sure that nothing confidential is revealed. So I would place zero confidence that this implementation will happen; they may be using some of the ideas revealed in their latest projects, but this is just a snapshot of the past, and likely intended to throw off the competition.
They absolutely do expose their cutting edge ideas, when they patent them. Disclosure is part of the price for patent protection. I guarantee anything novel in this paper has already been patented. Companies' trade secrets are usually only things that are non-patentable, because there is no legal protection for a trade secret if it is disclosed in any way. Companies may defer patenting certain things until 1) they have an actual product in development, or 2) they want to publish some aspect of the design. But generally they patent quickly unless they worry the patent length wouldn't be long enough to cover the bulk of the product's market availability.

It's unlikely that Canon is doing anything to purposely confuse their competition. The competitors are more interested in what Canon has already announced or brought to market. Their competitors will reverse-engineer many aspects of Canon's designs 1) to see what Canon is doing, and 2) to see if Canon is infringing their patents. Canon does the same to their competitors. Companies may extrapolate certain performance numbers to have some expectation of certain future products, but I doubt they draw much from patents and journal papers. So much gets patented/published that's never brought to market.

Believe me, I work for a cell-phone manufacturer, and it is crazy what you can learn about other chip-makers' designs just by poking at them from the outside. Just one example: by running certain targeted benchmarks and putting the chip under thermal analysis, you can figure out what most of the chip die is dedicated to, and compare how much area they're spending on certain functions to what you are. Then you know where you're leading or trailing in the design. (Actually, you can guess a lot of the design just by seeing the die photo. Certain things like caches stick out like a sore thumb.)
 
Reactions: SwissFrank

peterzuehlke

EOS 80D
Oct 1, 2015
106
19
Canon already has a global shutter for video. It's in the C700.
This is a video-only sensor; it's not going into a stills camera.
Canon may want to compete with the Sony A9, whose shutter isn't global but is on the way there, and which does frame rates close to video. If the Canon sports / wildlife camera application and a true merging of video and stills are what they want (for a future mirrorless one series), this might be the direction.
 

SwissFrank

EOS RP
Dec 9, 2018
277
105
it's an entirely different shutter process for stills than it is for video.
Different in what way that is pertinent to this journal article?

this technology will not make it into a stills camera for Canon
Why not? It'd be great to get another 40dB+ dynamic range for no visible downside except perhaps additional cost.

what you are not considering is the TIME for each exposure - which is from 30s to 1/8000th of a second for stills
How am I not considering it? The article itself walks you through a single 1/65th-sec exposure, but nothing in their walkthrough says the technique is limited to exposures above or below a certain length. For instance, if you need a longer total exposure, why would you stop at 5 slices for the highlight exposure and 4 for the shadow exposure? Why not 8001 and 8000 respectively, to get a 30-second exposure? On the short end, you don't see motion blur on 1/8000 exposures on a day-to-day basis anyway, but why wouldn't the same sandwiching principle still work?


if you have a 1s exposure, you're probably effectively taking a .5s exposure and then a 2s exposure to make up the HDR image for a total of 2.5s.
Why say "probably?" You're acting like you're guessing, when we've got the journal article right here. If you want to understand how it works, just read the damn article. Why are you pontificating about it without reading it?

Using the times they state, a 1-sec exposure would be 270 3.68 ms exposures going to the shadow-exposure memory, sandwiched between 271 23 µs exposures going to the highlight-exposure memory, for a total of 0.9936 s to the shadow-exposure memory and 6.233 ms to the highlight-exposure memory.

So no, it wouldn't be a .5s exposure OR a 2s exposure, OR a total of 2.5s. It'd be the 1 second requested, albeit with an extra 40dB+ of highlight headroom. Of course if you WANTED more shadow detail, you could now safely take a longer exposure without blowing out highlights. 2s, 4s, 8s, 16s... Even 120-sec exposure wouldn't lose highlights you'd have on a standard-type sensor.
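If you want to sanity-check that arithmetic yourself, here's a rough Python sketch. The 3.68 ms and 23 µs slice times are the ones quoted from the paper; the rule for how many slices fit into a requested exposure is just my guess at how you'd schedule it, not anything Canon spells out.

```python
# Back-of-envelope check of the slice counts above. The slice durations are
# from the paper; fitting n main slices between n+1 highlight slices inside
# the requested wall-clock time is my own assumption about the scheduling.
MAIN_SLICE_S = 3.68e-3   # one "shadow" (main) sub-exposure
HL_SLICE_S = 23e-6       # one "highlight" sub-exposure

def sandwich(total_s):
    # largest n such that n main slices plus (n + 1) highlight slices fit in total_s
    n = int((total_s - HL_SLICE_S) / (MAIN_SLICE_S + HL_SLICE_S))
    return n, n * MAIN_SLICE_S, n + 1, (n + 1) * HL_SLICE_S

n_main, main_s, n_hl, hl_s = sandwich(1.0)
print(f"{n_main} x 3.68 ms = {main_s:.4f} s main, {n_hl} x 23 us = {hl_s*1e3:.3f} ms highlight")
# -> 270 x 3.68 ms = 0.9936 s main, 271 x 23 us = 6.233 ms highlight
```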

I also have no idea why you're supposing the highlight exposure would be 1/4 the shadow exposure, when the article is so clear that the ratio is more like 128:1. HAVE YOU ACTUALLY EVEN READ the article? I ask this because you don't seem to know what it says. It's like you're trying to guess based on the headline or something.


your shutter speed can be finitely more controlled
You keep saying finite. Do you know what finite means?

your shutter speed can be finitely more controlled, because, your shutter speed, well is more controlled and far more predictable
You punctuate like a second-grader, but you're still completely wrong. Read the journal article again. They're controlling the global shutter with a precision of 23 microseconds, and being electronic, it's going to be utterly precise and repeatable whether it's a single image or a video.

Then you have the complexity of DPAF which aggravates this by a factor of 2.
First true thing you've said. But does this make it soooooooooo complicated they can't figure it out? Or is it simply the diagram they've already shown in detail, duplicated? Omigod! Twice the complexity! It's soooo complex! I can't even imagine what having two of something would be like! Let's declare bankruptcy and see if Sony will buy us out!

there's a reason canon already has global shutter implemented with the single cell memory version of this technology and it's not available on any ILC
Sure. It takes up space on the sensor surface, which would drastically cut the number of photons received and thus light sensitivity. Unless you have the light guides explained in this article, which seem to be a new and possibly expensive technology. And finally, a SINGLE memory merely prevents rolling shutter, which isn't typically a huge issue in most still photography.

So you're right again: there IS a reason. But not because it would be useless or unwanted. Rather, because it'd cut light sensitivity, or require a new technology to make work, or cost more, and a SINGLE memory doesn't give still photographers a huge win.

But this new approach using TWO memories DOES give still photographers a huge win: HDR without multi-exposure artifacts.

Really, man, just READ THE ARTICLE before commenting any further. If there's something you don't understand, just ask me and I'll happily explain it to you. You've already spread way too much disinformation and confusion about what it says.
 

SwissFrank

EOS RP
Dec 9, 2018
277
105
A company does not expose its best and latest technology to the public. The information revealed will have been thoroughly vetted to make sure that nothing confidential is revealed. So I would place zero confidence that this implementation will happen; they may be using some of the ideas revealed in their latest projects, but this is just a snapshot of the past, and likely intended to throw off the competition.

We had global shutters 20 years ago using CCD technology, and still have them. The goal is to achieve it with large CMOS sensors. Tiny CMOS sensors are available with a GS, but there are lots of issues with a large sensor. The dual memory is a trick that can overcome the need for a supercomputer to read out 50 million photosites instantly.
Everything you're saying is nonsense.

All camera inventions are rapidly patented and the patent process makes them public. What possible reason would you have not to discuss something you've got patent protection for?

Sure, we had GS a long time ago. But that was single-memory. The goal is NOT to "achieve it with large CMOS sensors." Instead, the goal is 1) to get HDR of moving subjects without showing multiple-exposure artifacts, or 2) (not the main point of the article, but it can be inferred from the final section) to increase video FPS by reading one memory while exposing the other.

The dual memory has nothing to do with reading out a large number of pixels instantly. NOTHING in this paper is about reading out a large number of pixels instantly. Supercomputers wouldn't help you read out a large number of pixels either. Why do you write this stuff when it's so clearly untrue? Do you have no idea what you're talking about, or are you willfully spreading disinformation as a joke?
 

justaCanonuser

Grab your camera, go out and shoot!
Feb 12, 2014
384
208
Frankfurt, Germany
You clearly didn't understand the article. While they're talking about taking an exposure in the context of video, nothing about the method requires it to be a frame of video.
I agree. Such a paper in a physics journal introduces the principle. The authors use video as main application example because the very fast frame rates are overall a much more demanding task. I am pretty sure that Canon will use such sensors as a platform for different camera models (even more than they already do today), since industrial platform technologies are in general much less expensive than making many special parts for special applications. This is even more probable since any differentiation between stills and video cameras makes less and less sense, at least on the sensor (or electronics) side of life ;). It may make sense in the future to stick with different classic form factors of camera bodies for stills or for video, but the core electronics, i.e. the sensor-electronic shutter unit, stuffed into those bodies will be about the same, simply because of the pressure to cut costs.
 
Reactions: SwissFrank

SwissFrank

EOS RP
Dec 9, 2018
277
105
The authors use video as main application example because the very fast frame rates are overall a much more demanding task.
Also, the dual on-sensor memories allow you to flip the switch and start recording to the second set instantly, and take your time reading the image from the first set.
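In software terms that's just double-buffering. Here's a toy sketch of the pattern; the function names and structure are my own invention purely to illustrate the ping-pong idea, not anything from the paper:

```python
# Toy illustration of ping-pong readout: expose into one in-pixel memory bank
# while the previous frame drains out of the other. Shown sequentially here
# for clarity; on the sensor the two operations overlap in time.
def capture_stream(n_frames, expose, read_out):
    banks = [None, None]   # two per-pixel charge memories, A and B
    active = 0
    for frame in range(n_frames):
        banks[active] = expose(frame)        # new frame goes into the active bank
        if frame > 0:
            read_out(banks[1 - active])      # meanwhile, read the previous frame
        active = 1 - active                  # swap roles for the next frame
    read_out(banks[1 - active])              # flush the final frame

# e.g. capture_stream(3, lambda f: f"frame {f}", print) reads out frames 0, 1, 2
```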

Just my guess, but it's possible they started off with this goal, and someone just happened to figure out that you could use this for HDR as well.

This is even more probable since any differentiation between stills and video cameras makes less and less sense, at least on the sensor (or electronics) side of life
This is the other reason the Canon News guy's full of it: even if he were right that it was purely for video (and he's not), that hardly means it wouldn't show up in a still camera, all of which going forward have to have video capability. (One of the strongest critiques of the R, other than the lack of IBIS, is that the 4K is only partial-sensor. That's really remarkable considering 4K wasn't even used for Hollywood blockbusters until a few years ago. So video capability is absolutely being treated as a critical function of a nominally still-style body.)

the core electronics, i.e. the sensor-electronic shutter unit, stuffed into those bodies will be about the same, simply because of the pressure to cut costs
Right. Up until now, on-sensor memory may have had only one possible function, avoiding rolling shutter, and thus been something you'd only support on a mainly-video camera. Thanks to this paper, they're now dangling 42 dB or more of extra headroom (basically SEVEN STOPS more highlight detail), and that's something you'd totally pay an extra couple grand for if you really understood it. It's simply staggering.
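For anyone wondering where the SEVEN STOPS figure comes from: each stop doubles the exposure ratio, and 20·log10 of a 128:1 ratio lands right around 42 dB. Quick check, just my own arithmetic restating the numbers above:

```python
import math

# Each extra stop doubles the main/highlight exposure ratio;
# dB for a linear signal ratio is 20 * log10(ratio).
stops = 7
ratio = 2 ** stops                 # 128:1, roughly the ratio discussed in the thread
print(20 * math.log10(ratio))      # -> ~42.1 dB of extra highlight headroom
```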
 
Reactions: justaCanonuser

zonoskar

I'm New Here
Aug 29, 2018
20
19
For HDR you don't need a double exposure if you have 2 memory buffers per cell. Just store the value halfway through the exposure to the first memory buffer, then at the end, store the value in the second memory buffer. If there's motion blur, it would be present in the exposure anyway.
 

Timedog

EOS R
Aug 31, 2018
38
26
God, I so don't care about rolling shutter. I've never noticed it in any videos I've taken on Canon. Just work on backside illumination pls, Canon.

Also I don't get how this does HDR without changing shutter speed for the 2 exposures. Does it lower ISO on one of the exposures?
 

SwissFrank

EOS RP
Dec 9, 2018
277
105
For HDR you don't need a double exposure if you have 2 memory buffers per cell. Just store the value halfway through the exposure to the first memory buffer, then at the end, store the value in the second memory buffer. If there's motion blur, it would be present in the exposure anyway.
Right, that's almost the plan.

Except: the second exposure (let's call it the highlight exposure, the short one) isn't half, it's 1/128 or so. Half would only give you one extra stop of dynamic range. 1/128 gives you seven extra stops.

And while you're right that the whole exposure would have motion blur, in cases where you have a bright highlight and need to use the highlight exposure for it, that exposure would only cover the first half of the shot. So you might see highlight detail in the first half of a motion trail but not the second half. The way Canon proposes to fix that is that instead of making either exposure all at once, the two are alternated in very thin slices. A few microseconds go to the highlight exposure, then a few milliseconds to the main exposure, then a few more microseconds to the highlight, then more to the main, and so on. The example in the paper is a "sandwich" of 5 thin slices for the highlight exposure and 4 much thicker ones for the regular exposure. That way, BOTH exposures cover the ENTIRE motion trail.
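If it helps, here's a little Python sketch of that slicing pattern, using the 5-and-4 sandwich and the 3.68 ms / 23 µs slice times from the example. The scheduling function is my own toy version, but it shows the key property: the slices feeding each memory are spread from the very start to the very end of the exposure, so both memories see the whole motion trail.

```python
# Toy version of the "sandwich" schedule: n+1 short highlight slices
# interleaved with n long main slices (5 and 4 in the paper's example).
def slice_schedule(n_main=4, main_s=3.68e-3, hl_s=23e-6):
    t, sched = 0.0, []
    for _ in range(n_main):
        sched.append(("highlight", t, t + hl_s)); t += hl_s
        sched.append(("main", t, t + main_s)); t += main_s
    sched.append(("highlight", t, t + hl_s)); t += hl_s   # closing thin slice
    return sched, t

sched, total = slice_schedule()
for memory in ("highlight", "main"):
    spans = [(start, end) for kind, start, end in sched if kind == memory]
    first, last = spans[0][0], spans[-1][1]
    print(f"{memory}: {len(spans)} slices covering {first*1e3:.3f}..{last*1e3:.3f} ms "
          f"of a {total*1e3:.3f} ms exposure")
# highlight: 5 slices covering 0.000..14.835 ms of a 14.835 ms exposure
# main: 4 slices covering 0.023..14.812 ms of a 14.835 ms exposure
```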
 
Reactions: Normalnorm

SwissFrank

EOS RP
Dec 9, 2018
277
105
Also I don't get how this does HDR without changing shutter speed for the 2 exposures. Does it lower ISO on one of the exposures?
Basically, one exposure is 1/128th (in the example they give) of the total time of the other. From a one-second exposure, it's making something like a 993 ms main exposure and a 6 ms highlights exposure. Then the output file normally contains the main exposure's value, but where that is maxed out, it looks to the highlights exposure for more detail.
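A minimal sketch of that merge step, assuming a 128:1 exposure ratio and a made-up 14-bit clip point; the real pipeline (black levels, blending near saturation, and so on) is obviously Canon's business, not mine:

```python
import numpy as np

RATIO = 128            # main exposure ~128x longer than the highlight exposure
CLIP = 16383           # assumed 14-bit saturation level, for illustration only

def merge_hdr(main_raw, highlight_raw):
    main = main_raw.astype(np.float64)
    hl = highlight_raw.astype(np.float64) * RATIO      # bring to the main exposure's scale
    return np.where(main_raw >= CLIP, hl, main)        # fall back to highlight where clipped

# One pixel that survived the main exposure, one that clipped:
print(merge_hdr(np.array([1000, 16383]), np.array([8, 300])))
# -> [ 1000. 38400.]  (detail recovered well past the normal clip point)
```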
 

Normalnorm

EOS 7D MK II
Dec 25, 2012
489
96
One more component to break. Good for Canon, kinda bad for us, I think ... not too sure.
Actually, one less component to break. The sensor thus has a solid-state shutter as opposed to a mechanical shutter, the part that is the subject of shutter-count queries when it's time to sell.
 
Reactions: SwissFrank

justaCanonuser

Grab your camera, go out and shoot!
Feb 12, 2014
384
208
Frankfurt, Germany
Just my guess, but it's possible they started off with this goal, and someone just happened to figure out that you could use this for HDR as well.
I agree. That's the way science normally works (I know that because I edit a German physics magazine, and from my own experience back when I studied physics).

Right. Up until now, on-sensor memory may have only had one possible function--avoiding rolling shutter--and thus be something you'd only support on a mainly-video camera. Thx to this paper, they're now dangling 42dB or more extra headroom (basically SEVEN STOPS more highlight detail) and that's something you'd totally pay an extra couple grand for if you really understood it. It's simply staggering.
Yes, I'd really love to see that technology hit the market soon. But the publication in a primary research journal, without any patent claims so far, indicates that we'll have to wait a while...