What’s next from Canon?

There's a whole Cine line of cameras from Canon; why can't filmmakers just buy those and leave our photographic cameras alone?

I don't understand this obsession with making everything into a video camera.
Because the R5 and, to some extent, the R6 are out of reach for enthusiasts on price, the pros who buy them rather than a Hasselblad are content creators, wedding photographers and the like, and those segments are all hybrid photo/video. They often can't afford the Cine range, which is more expensive, and they still need to take photos. Also, phones take both photos and videos, and to some extent they're the competing device for any camera these days, photo or cine.
 
Upvote 0
Because the R5 and, to some extent, the R6 are out of reach for enthusiasts on price, the pros who buy them rather than a Hasselblad are content creators, wedding photographers and the like, and those segments are all hybrid photo/video. They often can't afford the Cine range, which is more expensive, and they still need to take photos. Also, phones take both photos and videos, and to some extent they're the competing device for any camera these days, photo or cine.
Careful, you could be interpreted as calling the R5 an enthusiast camera aimed at low-budget cinematographers (jk).
 
  • Like
Reactions: 1 user
Upvote 0

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,722
2,655
Fair enough. I didn't have sport in mind; I've never shot it. There's no ideal replacement for the 7D, but wishful thinking on the part of those who'd like an upgrade doesn't make it any more likely.

Agreed. No matter how much we wish for one, apparently we're not getting one. Ditto for the Nikon D500. But there's nothing currently on the market that makes sense as a replacement in terms of "reach", speed, durability, and price. Just the 7D Mark II with a 90D sensor in it would make the vast majority of us very happy, though eye/animal AF would also be nice. IBIS isn't much of a factor for sports/action due to the need for short exposure times.
 
  • Like
Reactions: 1 users
Upvote 0

davidhfe

CR Pro
Sep 9, 2015
346
518
With the mirror locked up in Live View (DSLR) or with mirrorless mechanical shutters, the sound is much less intrusive than when a mirror is cycling between each frame. Not that I made any comment regarding comparative noise between mechanical and electronic shutters, only about their relative transit times across the sensor.

Point taken—was just rounding out the benefits a bit.

How many bits the sensor readout is coded into doesn't come into play until the ADC, and it doesn't affect the sensor's analog readout speed. As for binning, every photosite must be read individually before binning, which is generally done in the main image processor after demosaicing and which, again, has nothing to do with sensor readout speed. Line skipping does allow some lines to not be read, but when cropping the center of the frame, all lines that contribute any photosites to the image must be fully read from one side to the other, including the parts on either end not used in the cropped image.
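
(As an aside on that crop point, here is a rough sketch of the consequence: if every contributing row has to be read in full, readout time scales with the number of rows read, not with how much of each row survives the crop. The per-row time below is an invented illustrative figure, not anything Canon has published.)

[CODE]
# Hedged sketch: readout time when every contributing row is read end to end.
# The row time is a made-up number for illustration only.

ROW_READ_TIME_US = 3.5   # hypothetical time to read one full row, in microseconds

def readout_time_ms(rows_read: int, row_time_us: float = ROW_READ_TIME_US) -> float:
    """Total readout time if each of `rows_read` rows is read in full."""
    return rows_read * row_time_us / 1000.0

full_frame_rows = 5464   # e.g. an 8192 x 5464 (45 MP) full-frame sensor
crop_rows = 2160         # hypothetical vertical crop (a 4K-height window)

print(f"full frame : {readout_time_ms(full_frame_rows):.1f} ms")
print(f"crop       : {readout_time_ms(crop_rows):.1f} ms")
# Horizontal cropping buys nothing in this model: a row that contributes any
# pixels still has to be read from one side to the other.
[/CODE]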

I actually might be totally wrong in my understanding of how this works. I thought both methods (binning and read depth) affected performance (thermal and read speed).

With depth, the way I've always thought about it was in terms of math: it's easier to add up a lot of low-precision numbers than a few high-precision numbers:

300+200+100+200 is faster to add than
683+539

Likewise, you can read 45 megapixels at 10, 12 or 14 bits. It takes the sensor less time to amplify and do the ADC at a lower precision, increasing read speed.
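
For what it's worth, a minimal sketch of that intuition, assuming a single-slope (ramp) column ADC where a conversion takes on the order of 2^bits counter clocks; the clock rate is invented for illustration and none of this is an R5 spec.

[CODE]
# Hedged sketch: why lower bit depth can speed readout for one common column-ADC
# design (single-slope/ramp), where a conversion takes roughly 2**bits clocks.

ADC_CLOCK_HZ = 1.0e9   # hypothetical 1 GHz ramp counter clock

def conversion_time_us(bits: int, clock_hz: float = ADC_CLOCK_HZ) -> float:
    """Worst-case single-slope conversion time for one sample, in microseconds."""
    return (2 ** bits) / clock_hz * 1e6

for bits in (10, 12, 14):
    print(f"{bits}-bit: {conversion_time_us(bits):.2f} us per conversion")
# 14-bit takes ~4x longer than 12-bit and ~16x longer than 10-bit in this model,
# which is the intuition behind faster readout at reduced bit depth.
[/CODE]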

With binning, even though you've got to deal with photons hitting every pixel, the way a CMOS sensor reads means you can do a computationally cheap summing of values before the ADC. If you're summing 4 values, that's 1/4 the ADC work that needs to be done. As for demosaicing, my understanding is that binned photosites necessarily need to be *adjacent*, so you can bin the 4 red pixels together and then demosaic.
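
And a toy version of the "1/4 the ADC" arithmetic, combined with the bit-depth idea above. Every number here is an illustrative assumption, not an R5 internal figure.

[CODE]
# Hedged sketch: per-frame ADC workload modelled as
# (number of conversions) x (relative time per conversion),
# where binning cuts the conversion count and lower bit depth cuts the
# per-conversion time. All figures are assumptions for illustration.

SENSOR_PIXELS = 45_000_000
REL_TIME_PER_CONVERSION = {14: 16.4, 12: 4.1, 10: 1.0}  # ~2**bits scaling, arbitrary units

def relative_adc_load(bits: int, bin_factor: int = 1) -> float:
    """Relative per-frame ADC workload for a given bit depth and bin factor."""
    return (SENSOR_PIXELS / bin_factor) * REL_TIME_PER_CONVERSION[bits]

baseline = relative_adc_load(14)
print("14-bit unbinned : 1.000x")
print(f"12-bit unbinned : {relative_adc_load(12) / baseline:.3f}x")
print(f"10-bit 2x2 bin  : {relative_adc_load(10, bin_factor=4) / baseline:.3f}x")
[/CODE]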

The R5 sensor appears to be able to do:

- Standard/Mechanical - 9 fps @ 14-bit
- H+ Mode (Mechanical) - 12 fps @ 13-bit
- Electronic - 20 fps @ 12-bit
- DCI Crop - 30 fps @ 12-bit
- DCI Crop - 60 fps @ 10-bit (is this a binned mode?)
- Binned - up to 120 fps @ 10-bit

So, it appears based on the modes Canon has published* that both sampling and depth play a part in speed. I'm a designer who reads camera forums for fun, so I'm a nerd about this stuff, but by no means an expert. If anyone has an idiot's guide to this, I would love to be pointed to it.

*https://www.canon-europe.com/cameras/eos-r5/specifications/
 
  • Like
Reactions: 1 user
Upvote 0

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,722
2,655
It's not rolling shutter if it's electronically read...

The effect might be similar, but it's a different phenomenon.

It's the exact same effect: there's a temporal shift between when the top and when the bottom of the sensor (or vice versa, depending on which way the curtains move or the sensor is read out) records the scene.
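
To put a rough number on that shift, a sketch with assumed transit/readout times; neither figure is a measured R5 value.

[CODE]
# Hedged sketch: how a top-to-bottom transit (curtains or electronic scan)
# skews a horizontally moving subject. Durations and speed are assumptions.

def skew_px(readout_time_s: float, subject_speed_px_per_s: float) -> float:
    """Horizontal displacement of the subject between the first and last row."""
    return readout_time_s * subject_speed_px_per_s

subject_speed = 20_000   # hypothetical: the subject crosses 20,000 px/s in the frame
for label, transit in [("mechanical curtains", 1 / 300), ("electronic readout", 1 / 60)]:
    print(f"{label:19s}: {skew_px(transit, subject_speed):.0f} px of skew")
# The effect is the same in kind for both; only the transit time differs.
[/CODE]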
 
Upvote 0

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,722
2,655
I've been wondering, and I may as well voice it here, about the IBIS thing. The press releases said something like: we can do 8 stops because the lens's IS unit is constantly communicating back and forth with the body. But then some non-IS RF lenses can do it too, and it's something to do with a larger projected image circle. Does this imply that EF lenses with larger image circles will have the same outcome? Or do the RF lenses have gyroscopes even without an IS unit, to make it more accurate?

I've not seen any white papers or other technical accounts of how Canon is doing this with their hybrid IBIS/LBIS system. As the focal length increases, either the amount of sensor movement must increase or the lens-based IS unit must take more of the load than it would for a comparable number of stops at a shorter focal length.

Also, keep in mind that "8 stops" of compensation may mean you can do 1/3 second instead of 1/800 with an 800mm lens, but that does not necessarily translate to 8 seconds instead of 1/30 with a 30mm lens. The longer the total time, the more the maximum limits of overall movement in any single direction come into play. If one shooter can keep the camera within the "cone of correction" for a full 8 seconds, even while oscillating at a much higher frequency, they will be more successful than another shooter who drifts in a single direction at a much lower rate but does not stay within the "cone of correction".
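
A quick sketch of that arithmetic, using the old 1/focal-length rule of thumb as the baseline. That baseline is my assumption for illustration; CIPA stabilisation ratings are measured differently.

[CODE]
# Hedged sketch: each "stop" of stabilisation doubles the handholdable exposure
# time relative to a baseline (here the 1/focal-length rule of thumb).

def handhold_limit_s(focal_length_mm: float, stops_of_is: float) -> float:
    """Longest 'handholdable' shutter time given IS, with 1/f as the baseline."""
    return (1.0 / focal_length_mm) * (2 ** stops_of_is)

print(f"800mm, 8 stops: {handhold_limit_s(800, 8):.2f} s")   # ~0.32 s, i.e. about 1/3 s
print(f" 30mm, 8 stops: {handhold_limit_s(30, 8):.2f} s")    # ~8.5 s on paper
# ...but staying inside the correction range for 8 full seconds is a very
# different problem from doing so for 1/3 of a second, as noted above.
[/CODE]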
 
  • Like
Reactions: 1 user
Upvote 0

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,722
2,655
Thanks. It would be great if at some point (eventually) a table could be drawn up with the rough possibilities for each lens/focal length. I also wonder how EF IS lenses would cope - whether one would be obliged to turn off either ILIS or IBIS in order to prevent them fighting each other.

I had been hoping to get a 90D but the prices have remained static, and although the extra resolution still tempts me (for birds), I'm also drawn in the opposite direction with a 1-series equivalent sensor and this extra stabilisation (for macro and miscellaneous photography). The R6 is a good entry point for the R system for me, but there's no rush given there's no stock until August, according to the retailer I'd go with.

Canon has already pretty much said that IBIS can be used with IS in all EF lenses used with an R5 or R6 via a Canon EF→RF adapter. They have not said exactly how much additional benefit may be gained for each specific EF lens with IS in this scenario.

They've also hinted that some of the latest IS lenses have better support for the RF IBIS system already built into them, particularly the "III" versions of the Big Whites. Based on Canon's history, I consider this highly likely.

Canon has often silently included the capability for a planned future feature in current products several years before that feature is revealed.

When the first IS lens was unveiled in 1995, it turned out that all bodies introduced since 1992 already had the capability to control IS built into them. When the "self-pointing" 470EX-AI flash was introduced (admittedly something of a gimmick that hasn't caught on), most bodies released in the roughly three years prior, other than lower-end Rebels, already had the ability to work with it.
 
Upvote 0
But what the sensor sees and what the final image looks like aren't necessarily the same thing. For anyone who shoots raw and post-processes expertly to account for the unique properties of a specific scene, WYSIWYG isn't really true.

Does anyone know what the EVF lag/delay is? It has been around 110 ms in the past. Does the R5 or R6 have less lag?
 
Upvote 0

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,722
2,655
I keep reading mentions of similar QC issues with the newer "Global Vision" lines everywhere. Also, the 56mm is $400+; that's too much to pay for a third-party APS-C short tele that uses aspherics and yet has very heavy distortion. I believe that Canon can do much better for the same price.

Canon might "be able" to do better at that price, but so far they have chosen not to do so. The 1993 EF 50mm f/1.4 costs as much or more and doesn't perform as well as the Sigma.
 
Upvote 0

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,722
2,655
I like my old stuff. We have space and my wife also understands the value of a dollar. If I’m shooting some ridiculous time lapse and pull out the 40D or 70D and leave it clicking away somewhere, I’m less worried than if I leave my 5D4 or EOS R in the same situation.

My only point was that I’m not willing to give my items away for a pittance just so that entity (be it Adorama or whoever) can then turn it for a profit.

Yeah, I won't sell them to for-profit retailers who only offer $200-300 for a body they're currently selling used for $600. Especially when I hear reports (maybe well founded, maybe not) that they almost always lowball the online estimate when they actually receive the goods.

I do loan them out long term to my nephews and the art/photography department at a local high school. I haven't seen my Rebel XTi in several years, nor my 7D in over a year. Ditto for several EF-S lenses and old 90s era EF kit lenses (38-80mm, anyone?). The 7D or EF 28-135mm that's with it will probably be destroyed by some clueless sophomore one day, and I'm OK with that.

But I can't let go of my 50D!
 
Upvote 0

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,722
2,655
Point taken—was just rounding out the benefits a bit.



I actually might be totally wrong in my understanding of how this works. I thought both methods (binning and read depth) affected performance (thermal and read speed).

With depth, the way I've always thought about it was in terms of math: it's easier to add up a lot of low-precision numbers than a few high-precision numbers:

300+200+100+200 is faster to add than
683+539

Likewise, you can read 45 megapixels at 10, 12 or 14 bits. It takes the sensor less time to amplify and do the ADC at a lower precision, increasing read speed.

With binning, even though you've got to deal with photons hitting every pixel, the way a CMOS sensor reads means you can do a computationally cheap summing of values before the ADC. If you're summing 4 values, that's 1/4 the ADC work that needs to be done. As for demosaicing, my understanding is that binned photosites necessarily need to be *adjacent*, so you can bin the 4 red pixels together and then demosaic.

The R5 sensor appears to be able to do:

- Standard/Mechanical - 9 fps @ 14-bit
- H+ Mode (Mechanical) - 12 fps @ 13-bit
- Electronic - 20 fps @ 12-bit
- DCI Crop - 30 fps @ 12-bit
- DCI Crop - 60 fps @ 10-bit (is this a binned mode?)
- Binned - up to 120 fps @ 10-bit

So, it appears based on the modes Canon has published* that both sampling and depth play a part in speed. I'm a designer who reads camera forums for fun, so I'm a nerd about this stuff, but by no means an expert. If anyone has an idiot's guide to this, I would love to be pointed to it.

*https://www.canon-europe.com/cameras/eos-r5/specifications/

Re: Depth. There are no "numbers" until the signal is digitized. Analog signals are just levels of energy between a minimum and a maximum that can sit at any of a near-infinite number of levels in between. Those points are not rounded to the nearest integer as analog values; there can be no quantization of the signal until it is digitized. The time it takes to read out a sensor is based on how fast each voltage from each sensel (photosite/pixel well) can be measured just before it is digitized.
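
A toy illustration of that boundary, purely as a sketch: until the ADC runs, a photosite's output is just an analog voltage; only the conversion step turns it into one of 2^bits discrete codes. The full-scale voltage below is arbitrary.

[CODE]
# Hedged sketch: quantisation maps a continuous voltage onto one of 2**bits
# codes. Before this step there are no "numbers", just an analog level.

FULL_SCALE_V = 1.0   # arbitrary full-scale voltage for illustration

def quantise(voltage: float, bits: int) -> int:
    """Map an analog voltage in [0, FULL_SCALE_V] to an integer code."""
    levels = 2 ** bits
    code = int(voltage / FULL_SCALE_V * (levels - 1) + 0.5)
    return max(0, min(levels - 1, code))

v = 0.3137   # some analog level; it isn't "a number" until it is converted
for bits in (10, 12, 14):
    print(f"{bits}-bit code: {quantise(v, bits)} of {2 ** bits - 1}")
[/CODE]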

Re: Binning. CMOS sensors read out line by line. Each line must be read out from one end to the other for the image processor to be able to make sense of which photosite was in which location. Now consider that the four-sensel RG/GB blocks needed for the Bayer mask are not in sequence on the same line: they're in lines of RGRGRGRGRGRGRGRGRGRGRGRGRGRGRGRG sensels that sit directly above lines of GBGBGBGBGBGBGBGBGBGBGBGBGBGBGBGB sensels.

If you are binning by a factor of four, your first block of four "red" pixels is spread out, separated by "green" pixels, across the first and third lines on the sensor. With a standard Bayer mask there are no "red"-filtered sensels adjacent to another "red"-filtered sensel, and there are certainly not four adjacent "red"-filtered sensels on the same line. It's my understanding, and I could be wrong about this, that non-adjacent values can't be read out and combined before quantization, particularly if parallel processing is involved.
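
A small sketch of that geometry, assuming a standard RGGB layout, which shows why the nearest same-colour sensels are never side by side:

[CODE]
# Hedged sketch: build a standard RGGB Bayer pattern and show where the nearest
# "red"-filtered sensels actually sit.

def bayer_colour(row: int, col: int) -> str:
    """Colour of the filter over sensel (row, col) for a standard RGGB mosaic."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print the first four rows: RGRG... directly above GBGB..., and so on.
for r in range(4):
    print("".join(bayer_colour(r, c) for c in range(16)))

# The four nearest "red" sensels to the corner are not adjacent to one another:
reds = [(r, c) for r in range(4) for c in range(4) if bayer_colour(r, c) == "R"]
print("nearest 'red' sensels:", reds)   # [(0, 0), (0, 2), (2, 0), (2, 2)]
[/CODE]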

(I say "red" instead of red because, the near countless diagrams with Bayer masks made up of red, green, and blue squares that appear all over the internet notwithstanding, there are no color filter arrays in modern digital cameras that have filters centered on 640nm light that we perceive as red, just as our long wavelength "red" cones in our retinas are not most sensitive to 640nm light. In the case of our retinas, the L- cones we call "red" are most sensitive to light at around 564nm which we perceive as a kind of greenish yellow. In the case of our Bayer masks, most have "red" filters that are most transmissive at around 590-600nm, which we perceive as an orangish version of yellow. But the trichromatic theories of "Red, Green, and Blue" cones in our retinas, as well as our trichromatic color reproduction systems that use Red, Yellow, and Blue as primary colors, were well established long before we were able to accurately measure exactly which wavelengths of light to which each of the three types of cones in our retinas are most sensitive.)

When sampling and bit depth play a part in speed, it is related to the speed at which the image processor can process the data after it has been digitized as discrete values for each individual photosite on the sensor. It's not the speed at which the sensor can be read out that differs at different bit depths and resolutions, at least not for still images. Remember, the actual uncompressed raw data preserves the individual values for every photosite on the sensor for still images. (I have no idea how that works for video.)
 
Last edited:
Upvote 0

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,722
2,655
You are right about that, but I hope they find a formula where the effective aperture position is closer to the front lens in the wide position, which would allow a smaller diameter. If I look into my 70-200 f/4 lens, I "see" the aperture blades closer to the front element at the 70mm setting than at 200mm.


What you think you see is due more to differences in magnification between the front element and the physical aperture diaphragm at various zoom settings than to the actual distances involved. That's why the "true" aperture is the size of the entrance pupil as measured from in front of the lens, not the actual size of the physical diaphragm inside the lens.
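
A quick worked example of the entrance-pupil point, using nothing but the definition f-number = focal length / entrance-pupil diameter and the 70-200 f/4 mentioned above:

[CODE]
# Hedged sketch: the f-number is focal length divided by the entrance pupil
# diameter (the aperture as seen, magnified, through the front of the lens),
# not by the size of the physical diaphragm.

def entrance_pupil_mm(focal_length_mm: float, f_number: float) -> float:
    """Entrance pupil diameter implied by a focal length and f-number."""
    return focal_length_mm / f_number

for fl in (70, 200):
    print(f"{fl}mm at f/4 -> entrance pupil ~{entrance_pupil_mm(fl, 4):.1f} mm")
# ~17.5 mm at 70mm vs ~50 mm at 200mm: the diaphragm you "see" looks different at
# each zoom setting mostly because the magnification in front of it changes.
[/CODE]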
 
Upvote 0

koenkooi

CR Pro
Feb 25, 2015
3,569
4,109
The Netherlands
[..]The R5 sensor appears to be able to do:

- Standard/Mechanical - 9 fps @ 14-bit
- H+ Mode (Mechanical) - 12 fps @ 13-bit
- Electronic - 20 fps @ 12-bit
- DCI Crop - 30 fps @ 12-bit
- DCI Crop - 60 fps @ 10-bit (is this a binned mode?)
- Binned - up to 120 fps @ 10-bit

So, it appears based on the modes Canon has published* that both sampling and depth play a part in speed. I'm a designer who reads camera forums for fun, so I'm a nerd about this stuff, but by no means an expert. If anyone has an idiot's guide to this, I would love to be pointed to it.

*https://www.canon-europe.com/cameras/eos-r5/specifications/

Nice find on the 13-bit! I'm very curious how that affects the signal on the low end. My current struggle is a variant of "coal BBQ next to a white awning": a bumblebee on a flower in direct sunlight. The black "fur" on bumblebees is really dark, and trying to do ETTR tends to blow out one or more of the colour channels.
My current workaround is using white flowers; blowing out the highlights is less noticeable, especially when they're outside the DoF.
 
Upvote 0
Why wait? IBIS will probably give you at least two or three stops of Tv when using the EF 135mm f/2 L via an EF→RF spacer thingy.

The EF 135mm f/2 is 24 years old.

Lots of things have happened since then.

How about a dual AF motor: USM and STM?

How about no focus breathing?

How about the competition doing f/1.8?
 
  • Haha
Reactions: 1 user
Upvote 0
Mar 4, 2020
122
128
You won’t have that concern anymore when you hold the R5. I’ve held one and it’s as well built as you’d expect from Canon.
Why would a mirrorless camera be less rugged? There are fewer moving parts (no mirror mechanism), no separate focusing sensor, etc. It does have a small display for the EVF, but I have not heard anyone raise durability concerns about that. I know many mirrorless cameras have been built small, light, and neither rugged nor weather-sealed, but that doesn't mean there is an inherent problem with building them tough.
 
  • Like
Reactions: 1 user
Upvote 0