Canon to announce at least 6 new RF lenses next week

neuroanatomist

I post too Much on Here!!
Jul 21, 2010
23,865
1,034
But aside from that, what are you talking about? Image Stabilization for Stills is a technique that aims to reduce motion blur. For video, that is not the point of IS - in Video it is supposed to reduce camera motion, but motion blur is often actually desirable.
Assuming you mean subject motion blur, then absolutely not. IS (any type) is all about reducing camera shake and does nothing for subject motion blur (well, nothing good assuming you want to reduce it). To reduce subject motion blur, you need a faster shutter speed (or a slower subject). Optical image stabilization for stills enables using a slower shutter speed than would otherwise be possible when handholding the camera (which will accentuate subject motion – although sometimes that’s good, e.g. waterfalls without a tripod).

Digital image stabilization that crops the sensor or frame to keep the framing consistent with previously captured frames has no effect on motion blur. So in what context does it even help to talk about implementing it for stills?
Same benefit – you move the camera in any direction, digital IS helps compensate. One obvious extension of that is automatic horizon leveling (same application as compensating for roll). But as I pointed out above, with digital IS that results in a loss of optical resolution, whereas optical IS (sensor shift) does not.
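To make the crop-based approach concrete, here's a minimal sketch of that kind of digital IS (Python/NumPy, with made-up readout sizes and gyro numbers): the sensor readout is larger than the delivered frame, and each frame's crop window is shifted against the measured camera motion. The margin reserved for that shift is exactly the resolution you give up.

```python
import numpy as np

def stabilize_frame(sensor_frame, shift_xy, out_h, out_w):
    """Deliver an out_h x out_w crop from a larger sensor readout,
    moving the crop window opposite to the measured camera motion."""
    h, w = sensor_frame.shape
    # Start from a centered crop, then shift it against the camera motion.
    y0 = (h - out_h) // 2 - int(round(shift_xy[1]))
    x0 = (w - out_w) // 2 - int(round(shift_xy[0]))
    # Once the reserved margin is used up, the correction simply saturates.
    y0 = int(np.clip(y0, 0, h - out_h))
    x0 = int(np.clip(x0, 0, w - out_w))
    return sensor_frame[y0:y0 + out_h, x0:x0 + out_w]

# A 4000x6000 readout delivering 3800x5700 frames reserves a 100-pixel
# margin top/bottom and 150 pixels left/right for the correction.
frame = np.random.rand(4000, 6000)
stabilized = stabilize_frame(frame, shift_xy=(12.3, -7.8), out_h=3800, out_w=5700)
```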
 

Joules

EOS 80D
Jul 16, 2017
132
53
Hamburg, Germany
Assuming you mean subject motion blur, then absolutely not. [...] Optical image stabilization for stills enables using a slower shutter speed than would otherwise be possible when handholding the camera
Well, sorry that I didn't mention which kind of motion blur I was referencing. I didn't mean subject motion. But what you're saying is what I was referring to. While handholding, the camera is moving by small amounts in some direction relative to the subject (assuming that the subject is static, or moving on its own in a direction that is different from the shake direction). And the longer the shutter speed, the more prominent the motion blur resulting from this camera shake is going to be. Unless it is compensated for - mechanically. I don't see how that could be done otherwise.

I may be missing something, but in the Wikipedia article you mentioned, digital image stabilization is also only described for video applications.

I mean, in the most basic example we have a sensor with 2 pixels and a static subject that fills just one of those pixels. Let's say over the duration of an exposure the camera moves slightly, so that half the light from the subject falls on each pixel. For digital removal of the resulting blur, the camera would have to know during which intervals of time each pixel received light from the subject. And that is not possible with circuitry that reads each pixel only once, after the exposure is over.
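To put numbers on that two-pixel example (a tiny Python sketch with arbitrary photon counts): two completely different motion histories produce exactly the same readout, so the timing information needed to undo the blur is simply gone.

```python
import numpy as np

flux = 100.0  # photons per time slice from the subject (arbitrary)

# Case A: subject sits on pixel 0 for the first half of the exposure,
#         then on pixel 1 for the second half.
case_a = np.array([[flux, 0.0]] * 5 + [[0.0, flux]] * 5)
# Case B: subject jumps back and forth between the pixels every slice.
case_b = np.array([[flux, 0.0], [0.0, flux]] * 5)

# The sensor integrates over the whole exposure and is read out once:
print(case_a.sum(axis=0))  # [500. 500.]
print(case_b.sum(axis=0))  # [500. 500.]
# Identical readouts, so no after-the-fact "de-blurring" can tell them apart.
```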

So you're probably talking about something different, right? Something along the lines of taking multiple stills and using Photoshop's align layers option to make them overlap by cropping and shifting?
 

SwissFrank

EOS RP
Dec 9, 2018
204
69
Image Stabilization for Stills is a technique that aims to reduce motion blur.
I don't think so. Given that it allows far longer shutter times, it usually INCREASES subject motion blur. I'd say the main point is to reduce blur from camera movement.
 

SwissFrank

EOS RP
Dec 9, 2018
204
69
Or are you saying motion blur can be reduced during image capture without having to mechanically move anything?
I wasn't saying it but it seems easy enough. Say I have a steadyish camera and a steady background, but my subject is moving substantially.

Now say the camera doesn't take one full 1/30 sec exposure, but instead takes 33 1/1000 sec sub-exposures and stacks them to make the final image.

If you line them up as shot, the background is still while the subject has motion blur. But if the camera can detect the subject (e.g., the area in focus, with the most detail, etc.) and align the subject in each of the sub-exposures, the subject blur falls 97% while the background blurs.

This would be good for cancelling blur in say a shot of the rising moon (it has motion blur at 1200mm even at 1/180 sec exposure!) or a passing car: subjects that move, but whose individual parts aren't moving with relation to each other. It'd be less successful on say a sports player, whose face might line up but whose arms and legs are at notably different positions in the sub-exposures.
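Roughly, the align-and-stack idea looks like this (a Python/NumPy sketch only; the translation-only alignment and the per-frame subject offsets are assumed inputs, since detecting the subject is the hard part):

```python
import numpy as np

def align_and_stack(subs, subject_offsets):
    """Stack short sub-exposures after shifting each one so the subject
    lines up. subs is a list of 2-D frames; subject_offsets holds the
    subject's (dy, dx) position in each frame relative to the first.
    A moving subject sharpens while the static background smears."""
    stacked = np.zeros_like(subs[0], dtype=float)
    for frame, (dy, dx) in zip(subs, subject_offsets):
        # Shift the frame so the subject returns to its first-frame position.
        aligned = np.roll(frame, shift=(-dy, -dx), axis=(0, 1))
        stacked += aligned
    return stacked / len(subs)

# 33 sub-exposures of 1/1000 s ~ one 1/30 s exposure; if the subject drifts
# 2 pixels per sub-frame, aligning on it removes most of its blur.
subs = [np.random.poisson(50, size=(480, 640)).astype(float) for _ in range(33)]
offsets = [(0, 2 * i) for i in range(33)]
result = align_and_stack(subs, offsets)
```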
 

SwissFrank

EOS RP
Dec 9, 2018
204
69
Obviously they have reasons for choosing not to implement digital IS for still images, and just as obviously those reasons do not include lack of technical capability to implement it. I speculated on some of those reasons above.
You didn't speculate. You stated as a matter of fact a bizarre scenario that has no understanding of photographers, or of camera firms' understanding of photographers. You said that still photographers were so attached to the exact pixel width of a still shot that they wouldn't accept digital image stabilization, and would dis Canon if it were even offered. That's absolutely ludicrous. The same users that are happy to exchange a few pixels for digital stabilization in movie mode, or for distortion correction in still mode, would joyously give up the same for digital IBIS.
 

neuroanatomist

I post too Much on Here!!
Jul 21, 2010
23,865
1,034
You didn't speculate. You stated as a matter of fact a bizarre scenario that has no understanding of photographers, or of camera firms' understanding of photographers. You said that still photographers were so attached to the exact pixel width of a still shot that they wouldn't accept digital image stabilization, and would dis Canon if it were even offered. That's absolutely ludicrous. The same users that are happy to exchange a few pixels for digital stabilization in movie mode, or for distortion correction in still mode, would joyously give up the same for digital IBIS.
Really? Is that what I said?

Canon quotes the resolution as 6720x4480. They probably feel they should deliver that, which makes perfect sense. Would you be happy with HD video output at 1880x1058?
So to you, statements including ‘they probably’ and ‘would you’ are not speculation but statements of fact?

I’m a big fan of intelligent discussion, but this is like a battle of wits with an unarmed opponent. Trying to engage in intelligent debate with someone who has manifest difficulties with basic reading comprehension and flagrantly misrepresents my statements is pointless. Hopefully the fact that you chose not to reply to my points about IBIS vs. digital IS means you actually learned something here, which would mean my prior posts weren’t a complete waste of time. But any further replies from me would be.
 

Joules

EOS 80D
Jul 16, 2017
132
53
Hamburg, Germany
I'd say the main point is to reduce blur from camera movement.
I was trying to say the same thing.

As to taking multiple short sub-exposures and combining them in camera to approximate a single long exposure shot, I think there are three reasons why we don't see that in Canon cameras.

The first one is that full sensor readout seems to be too slow on most Canon sensors to even handle double-digit frame rates - taking tens or hundreds of shots in a fraction of a second just doesn't seem likely at this point.

The second one is that with this kind of stacking, motion artifacts often appear. And I feel like Canon would not implement a feature that would output results where a user who doesn't understand the technique could react with "Ugh, obviously the camera is bad because it produces weird results".

And with examples like cars or the moon, where relative motion between subject and camera is indeed constant enough for alignment, but the camera itself isn't moving, aligning is hard. I'm using Photoshop CC, Autostakkert, Sequator and Deep Sky Stacker for this kind of stuff. All of them take quite some time and memory to produce results on a Xeon E3-1230v3 and 16 GB RAM PC. It would likely take extremely long to do such computations with the minimalistic hardware inside a Canon ILC.

Apart from that, when imaging under low light, where longer handheld shutter speeds would be desirable, you actually lose some quality by stacking multiple short exposures instead of taking one long one.

So, if "digital image stabilization for stills" is just stacking, it a) can be done with a number of freeware tools and b) is not an alternative to mechanical stabilization, I think.
 

flip314

EOS 80D
Sep 26, 2018
103
104


On the left is the EF mount and on the right is the RF mount 70-200/2.8 IS.
Interesting choice for the location of the control ring vs. other RF lenses, but I'm glad they didn't shrink the zoom ring to fit it at the other end.

I'm also curious to see how the RF lens compares in size once it's extended. I think it's still shorter, but nobody's shown it side by side yet.
 

dolina

millennial
Dec 27, 2011
1,978
119
29
34109
www.facebook.com
Interesting choice for the location of the control ring vs. other RF lenses, but I'm glad they didn't shrink the zoom ring to fit it at the other end.

I'm also curious to see how the RF lens compares in size once it's extended. I think it's still shorter, but nobody's shown it side by side yet.
I’m guessing they’ll protect the extended element with a long lens hood.
 

SwissFrank

EOS RP
Dec 9, 2018
204
69
As to taking multiple short sub-exposures and combining them in camera to approximate a single long exposure shot, I think there are three reasons why we don't see that in Canon cameras.

The first one is that full sensor readout seems to be too slow on most Canon sensors to even handle double-digit frame rates - taking tens or hundreds of shots in a fraction of a second just doesn't seem likely at this point.
Joules thanks for the interesting response.

Well, there are several bottlenecks: 1) sensor-to-DIGIC, 2) inside DIGIC, 3) DIGIC to buffer, 4) buffer to storage. And Canon has a patent on a 1000-frame-per-second sensor and sensor read-out. Steps 3 and 4 also wouldn't be needed. Stack two of those frames for a 1/500 sec equivalent exposure, three for 1/333 sec, four for 1/250 sec, etc. If you're saying it'd need more processing power than it currently has, I'm happy to agree, but that doesn't make it impossible.

The second one is that with this kind of stacking, motion artifacts often appear. And I feel like Canon would not implement a feature that would output results where a user who doesn't understand the technique could react with "Ugh, obviously the camera is bad because it produces weird results".
I'm hardly demanding it be always-on. EVERY special setting of the camera has cases where it provides poor results. Even optical IS has an off switch. Fixing distortion has an off switch. Ditto every aberration, ditto the digital IBIS the R does for video, log shooting, and so on. But: the question of what artifacts appear is a function of the software. I can think of at least some common types of shooting where there wouldn't be any artifacts, but others where there would be. I gave the example of a moonrise, but also take, say, a nighttime view of the Alps: the Milky Way over the Matterhorn, a 10 second hand-held exposure at 15mm. You're going to swing the camera 5 degrees or more left, right, up, down, and roll. And yet, with nearly no motion in the photo, the images (with enough CPU power) could be stacked exactly. I grant the outside 10% margin may be unusable, with too few photos in the stack to give a low-noise approximation, but the main 80% of the image could be both utterly rock solid and essentially noise-free. Then say it's a city view of Paris from the Eiffel Tower. Now you have the lights of moving traffic, and if any car is moving more than a fraction of a pixel between shots, the headlights would look like Morse code. But in a wide view from many stories up, they'd surely be smooth even with a 10 second exposure. And finally, if you don't like the results, turn off the mode!

And with examples like cars or the moon, where relative motion between subject and camera is indeed constant enough for alignment, but the camera itself isn't moving, aligning is hard. I'm using Photoshop CC, Autostakkert, Sequator and Deep Sky Stacker for this kind of stuff. All of them take quite some time and memory to produce results on a Xeon E3-1230v3 and 16 GB RAM PC. It would likely take extremely long to do such computations with the minimalistic hardware inside a Canon ILC.
It's a good example, but a custom chip can often do calculations FAR faster than a general-purpose PC.


Apart from that, when imaging under low light, where longer handheld shutter speeds would be desirable, you actually lose some quality by stacking multiple short exposures instead of taking one long one.
That may be true in many or most cases today, but I don't see a rule of physics that would make it so. Happy to learn I'm wrong though if you can think of something specific.

So, if "digital image stabilization for stills" is just stacking, it a) can be done with a number of freeware tools and b) is not an alternative to mechanical stabilization, I think.
Maybe not for 2019 Q1 at least!
 

SwissFrank

EOS RP
Dec 9, 2018
204
69
Will there be any lenses in the Super telephoto range like 40mm and beyond. It would be very attractive for Wildlife and Bird photographers.
I assume you mean 400mm.

I'm pretty sure there's no need for them soon.

Basically, for WIDE-angle lenses, the closer you can put the back of the lens to the film, the simpler the optics get. And simpler means they can be made sharper, as well as cheaper, smaller, sturdier, and so on. I had the Canon 35/1.4 and the Leica M 35/1.4, and the Leica seemed about 1/5 the size. (Of course autofocus was part of that.) Anyway: this means that at 85mm and wider, RF lenses are going to be utterly better than EF. They can either be utterly sharper (50/1.2) or have much crazier specs (28-70/2).

But if you look at the rear element of a lot of EF lenses, you'll see they have glass right in the rear mount up to about 85mm. As you get longer, though, even the simplest lens formulas don't need glass within 45mm of the film/sensor, so when you switch to mirrorless there's literally no improvement you can make to the optics other than ordinary optical progress. Now: it'd of course be convenient not to need an adapter for my 600/4 IS. And it's possible that an RF lens would be redesigned to have better IS due to better communication via the RF mount than was possible with the EF. But the lenses won't be markedly smaller, or sharper, etc. (In fact lenses from 135mm and up are already insanely sharp corner to corner. These are easy lenses to design well.)
 

CanonFanBoy

EOS 5D MK IV
Jan 28, 2015
3,027
593
Irving, Texas
Equally clearly, using the whole width. You claim: "It’s only for video because video doesn’t use the whole sensor. Still images do, so there aren’t extra pixels to allow shifting the image." Just how do you think this works?!!? You think Digital stabilization never needs to move picture side to side?



You're saying Canon thinks users wouldn't understand 6720 pixels being cut a bit in still photos, in exchange for in-camera stabilization? Canon already cuts some pixels off stills if you use Digital Lens Optimization. Why would users shrug that off yet not even want the OPTION of digital image stabilization that likewise reduced some pixels? You're apparently aware digital image stabilization likewise costs a few more pixels, yet Canon's clearly offering that as an option to video users: trade off some width for stabilization.



So for the fifth or sixth time, now, why not give the user the option? They can turn on digital stabilization in movies IF THEY CHOOSE. They can turn on distortion correction and the rest of the Digital Lens Optimization suite, IF THEY CHOOSE. There must be some reason Canon doesn't offer digital in-camera stabilization for stills. What could that reason be? Patent? Built-in limitation to protect IS lens sales? Or what? It's absolutely not because they wouldn't gleefully settle for 6600 pixels width in return for in-body stabilization, digital or not.



Given that your explanations about why digital image stabilization isn't offered are so mistaken, I can't really take your word for this. You could be right but I don't trust you right now.
OMG!
 

Joules

EOS 80D
Jul 16, 2017
132
53
Hamburg, Germany
Okay, so a quick warning upfront: there's some Friday morning maths coming up, so I'll gladly admit to being wrong if you can point out a mistake in my thinking.

I'm hardly demanding it be always-on. EVERY special setting of the camera has cases where it provides poor results.
With Canon, features don't become available just because they're easy to implement and useful for some people though. So if there are a lot of possible issues with a feature, or not many people would use it often, it's just not present in Canon cameras. It would just clutter up the menu for little benefit to the larger market. Or why else are so many Magic Lantern features not present by default? For example, a RAW histogram, focus trap release, focus peaking, AF focus stacking, and so on, have been available there for a long time. But most people get along without them, so some of these features have been added only recently to some mirrorless models, or are still missing (I'd love a RAW histogram).

It's a good example, but a custom chip can often do calculations FAR faster than a general-purpose PC.
Yeah, you're right. It's what allows smartphones to handle 4K video - and it still took Canon quite a while to adopt that, despite it being widely available technology. I imagine a hardware solution for content-based image alignment is going to be a good deal trickier than that. But that's again confusing the topic. I thought we were talking about blur caused by camera motion - which the camera can detect through sensors without having to look too deep into the image content. As they can obviously do that already with video frames, it surely could be done for stacks of stills too.

I gave the example of a moonrise, but also take, say, a nighttime view of the Alps: the Milky Way over the Matterhorn, a 10 second hand-held exposure at 15mm. [...] I grant the outside 10% margin may be unusable, with too few photos in the stack to give a low-noise approximation, but the main 80% of the image could be both utterly rock solid and essentially noise-free [...] That may be true in many or most cases today, but I don't see a rule of physics that would make it so. Happy to learn I'm wrong though if you can think of something specific.
Okay, so I'm mainly drawing info from this resource here: https://jonrista.com/the-astrophotographers-guide/astrophotography-basics/snr/

Based on that, I'm under the impression that an image is composed of signal and noise. Noise comes from different sources: the subject (shot noise), the sensor (dark current noise) and the camera circuitry (read noise). Apart from the read noise, these all increase in proportion to the exposure time. The ratio between the signal and the combined noise sources is called the signal-to-noise ratio (SNR) and expresses how visible the signal is compared to the noise. So you want your SNR to be as high as possible. For weak sources of signal (low light), a single long exposure is likely to yield a better SNR than many short exposures.

Let's define some variables:
r = stops of image stabilization
n = number of subexposures = 2^r
t = total exposure time [seconds]
t/n = exposure time per subexposure [seconds]
s = signal per time [electrons/second]
dc = dark current per time [electrons/second]
rn = read noise [electrons]

Ignoring the difference between sky and object signal that the linked site makes, I get this formula for the SNR:

SNRstack = (n * t/n * s) / sqrt( n * (t/n * s + t/n * dc + rn^2) )
=> SNRstack = t * s / sqrt( t * s + t * dc + n * (rn^2) )

For a regular exposure without stacking, n is 1 so the SNR becomes:

SNRsingle = t * s / sqrt( t * s + t * dc + rn^2 )

To find out how much higher the SNR of a single-exposure image is compared to a stack of multiple ones, we can divide the second expression by the first:

SNRrel(t, n, dc, rn) = SNRsingle / SNRstack = sqrt( t * s + t * dc + n * (rn^2) ) / sqrt( t * s + t * dc + rn^2 )

According to the linked page, rn = 3 e- and dc = 0.02 e-/s are decent values to assume for an average modern ILC.

SNRrel(t, n, 0.02, 3) = sqrt( t * s + t * 0.02 + n * 9 ) / sqrt( t * s + t * 0.02 + 9 )

That leaves exposure time and the number of desired stops of stabilization. Looking at 2, 3 and 5 stops and 0.1, 1, 10 and 100 second exposure times, I get these four sets of formulas, which now only depend on the signal (now called x), i.e. how bright your subject is:

0.1 second 2 stops = ( 0.1*x + 0.1*0.02 + 2^2 * 9)^(1/2) / ( 0.1*x + 0.1*0.02 + 9)^(1/2)
0.1 second 3 stops = ( 0.1*x + 0.1*0.02 + 2^3 * 9)^(1/2) / ( 0.1*x + 0.1*0.02 + 9)^(1/2)
0.1 second 5 stops = ( 0.1*x + 0.1*0.02 + 2^5 * 9)^(1/2) / ( 0.1*x + 0.1*0.02 + 9)^(1/2)

1 second 2 stops = ( x + 0.02 + 2^2 * 9)^(1/2) / ( x + 0.02 + 9)^(1/2)
1 second 3 stops = ( x + 0.02 + 2^3 * 9)^(1/2) / ( x + 0.02 + 9)^(1/2)
1 second 5 stops = ( x + 0.02 + 2^5 * 9)^(1/2) / ( x + 0.02 + 9)^(1/2)

10 second 2 stops = ( 10*x + 10*0.02 + 2^2 * 9)^(1/2) / ( 10*x + 10*0.02 + 9)^(1/2)
10 second 3 stops = ( 10*x + 10*0.02 + 2^3 * 9)^(1/2) / ( 10*x + 10*0.02 + 9)^(1/2)
10 second 5 stops = ( 10*x + 10*0.02 + 2^5 * 9)^(1/2) / ( 10*x + 10*0.02 + 9)^(1/2)

100 second 2 stops = ( 100*x + 100*0.02 + 2^2 * 9)^(1/2) / ( 100*x + 100*0.02 + 9)^(1/2)
100 second 3 stops = ( 100*x + 100*0.02 + 2^3 * 9)^(1/2) / ( 100*x + 100*0.02 + 9)^(1/2)
100 second 5 stops = ( 100*x + 100*0.02 + 2^5 * 9)^(1/2) / ( 100*x + 100*0.02 + 9)^(1/2)
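In case anyone wants to play with the numbers themselves, here's the same ratio as a short Python function (just a sketch, using the same assumed rn = 3 e- and dc = 0.02 e-/s as above; the example signal value is arbitrary):

```python
import numpy as np

def snr_rel(t, stops, s, rn=3.0, dc=0.02):
    """SNRsingle / SNRstack for a total exposure of t seconds split into
    n = 2**stops sub-exposures, at subject signal s (e-/s).
    Values above 1 mean the single long exposure is cleaner than the stack."""
    n = 2 ** stops
    return np.sqrt(t * s + t * dc + n * rn ** 2) / np.sqrt(t * s + t * dc + rn ** 2)

# Example: a 1-second total exposure of a faint subject (s = 5 e-/s),
# split into 4, 8 or 32 sub-exposures (2, 3 or 5 "stops" of stabilization).
for stops in (2, 3, 5):
    print(stops, "stops:", round(snr_rel(t=1.0, stops=stops, s=5.0), 2))
# Prints roughly 1.71, 2.34 and 4.57 - the stack is noticeably noisier.
```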

[Attached graphs: 0.1 second, 1 second, 10 second and 100 second exposure times]

Graphs created with https://rechneronline.de/funktionsgraphen/

The y axis and x axis have the same scale on all four. Each image shows the graphs for one exposure time, varying only in the number of stops of image stabilization (or the duration of the sub-exposures, if you prefer). The y axis is the ratio between the SNR of a single-exposure image and that of a stack of 32 (green), 8 (red) or 4 (blue) images. If this is high, a single exposure will look much cleaner than a stack of multiple shorter ones. The x axis is the subject's signal strength (brightness).

From the first two graphs, I conclude that for very low light subjects such as your Milky Way example, stacking multiple short exposures will always result in a visibly noisier image than just taking one longer one. So this "digital image stabilization" would be a tradeoff between noise and blur. For bright subjects or long exposure times, the difference probably becomes small enough to call the results equivalent in terms of noise, meaning the stabilized version will look better as it is less blurry. Unfortunately I have no idea how subject brightness in electrons per second translates to brightness as we know it. For example, if a subject emits 200 e-/s, what exposure time would result in a good exposure?

So take my analysis with a mountain of salt. And keep in mind that I may have screwed up the calculation and am just talking fancy BS here. But it was fun, and on occasion I'll try to experiment with some actual images. After all, the technique here doesn't have to be applied in camera. As mentioned, there are many software solutions for aligning and stacking out there.
 

SwissFrank

EOS RP
Dec 9, 2018
204
69
take my analysis
Hey Joules, thanks for the detailed response. To summarize in a sentence, it seems to me that read noise would increase as you increased the number of sub-exposures, while other types of noise wouldn't.

So, having the camera stack a bunch of short exposures (say, 33 x 1/1000 sec) would indeed be far noisier than just one longer exposure (say, 1/30 sec). However:

1) the stack would be able to preserve highlights, extending dynamic range (say, reading the brand name printed on a lightbulb that a 1/30 exposure would see as an area full of 255/255/255).

2) by re-aligning the images subtly, subject motion blur could be eliminated or minimized for at least some subjects.

So, there are clearly tradeoffs, whereby you might be happy to take a bit of extra noise for these benefits. How many photons/sec do the sensors pick up from, say, a daylight scene though? Would even 1000x read noise be visible?
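A rough back-of-envelope answer to my own question (the full-well and read-noise figures below are just assumptions, not measured Canon numbers):

```python
import math

# Assumed numbers: a bright daylight pixel collecting ~10,000 e- over the
# whole exposure, read noise ~3 e- per readout.
signal = 10_000.0            # total e-, regardless of how it is split up
shot_noise = math.sqrt(signal)

for n_frames in (1, 33, 1000):
    read_noise = 3.0 * math.sqrt(n_frames)   # read noise adds in quadrature
    total_noise = math.sqrt(shot_noise**2 + read_noise**2)
    print(f"{n_frames:>5} frame(s): SNR ~ {signal / total_noise:.0f}")

# Prints roughly 100, 99 and 73: a 33-frame stack is visually
# indistinguishable from a single exposure in bright light, and even
# 1000 readouts only dents the SNR; the penalty really bites in low light.
```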

Also, again to be fair, there'd be downsides besides read noise. Resolution drops a bit, the picture narrows a bit, and some cases would give you surprising results (e.g., car headlights moving more than a pixel per sub-exposure could turn passing car lights into Morse code).

All this said, it sounds like a direction digital cameras could and would go, unless the read noise outgrows the scene's brightness.