Canon EOS R5 Specifications

Hi, I've mixed replies to several of your messages together to avoid spamming.

If that were all it did, it would still alter SNR. But that's not an accurate summary of modern NR algorithms. And it's completely wrong for color NR, color noise being arguably the most intrusive component.

I'm not saying noise reduction shouldn't be used because of the loss of detail, I'm just saying it shouldn't be used in sensor performance comparison.

In the real world it plays out this way: the D850 owner does a hard shadow push and prints. The 5D4 owner does a hard shadow push, maybe bumps LNR/CNR a bit, and prints.

But the D850 owner can also bump LNR/CNR and print a certain range of shadows where the 5DIV produces a mess even after NR. Sometimes I struggle with unrecoverable shadows on the 5DIV: I can lift them to a certain level, but beyond that they become a mess. I could've lifted them a bit more on a D850. It doesn't happen too often, but why shouldn't I desire more from a very expensive next-gen camera? It happened on the 70D all the time, and the 5DIV was a significant leap for me. I want improvements from the R5 too.

Altering the view size simply trades spatial information for SNR. And it doesn't have to be through 'digital manipulation.' Make a print where the shadow noise seems unacceptable to you with your nose on the print. Now view it from 10 ft away.

Yes. As I've said before, the point is that this is an arbitrary normalisation; that's why absolute values from DxO or PTP are meaningless. Those figures are also not very usable in the field. In practice I'm more interested in the per-pixel DR, not the 'photographic' DR.

Of course you do. The sensor captured that data. Given the resolution of today's sensors, if anything one could argue that DxO's print scores are more relevant than their screen scores...or Photons to Photos graphs...because that's how people will view the image.

Who on earth will be viewing my images like that? It's a very specific normalisation. PTP also uses a similar normalisation, as if the image were printed and viewed at a certain distance, but they arrive at different absolute values. Therefore, those absolute DR values are meaningless in practice: if DxO shows 15 stops of DR for my 14-bit camera, I can't shoot real scenes with 15 stops of DR, because at the same time PTP says my sensor's DR is only 13 stops.
But again, they can be used for comparison between the sensors.
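For what it's worth, the 'photographic' normalisation is just per-pixel DR plus a resolution-dependent bonus, and the choice of reference size is where the sites differ. A rough sketch, assuming a DxO-style 8 MP reference (the per-pixel DR and pixel count below are made-up numbers):

```python
# Rough sketch: 'photographic' DR is per-pixel DR plus log2(sqrt(N / N_ref)) stops.
# The 8 MP reference mirrors DxO's print normalisation; PTP makes different
# assumptions, which is one reason the absolute numbers disagree.
import math

def normalised_dr(per_pixel_dr_stops, n_pixels, n_ref=8e6):
    return per_pixel_dr_stops + math.log2(math.sqrt(n_pixels / n_ref))

# Hypothetical 45 MP sensor with 12 stops of per-pixel DR:
print(round(normalised_dr(12.0, 45e6), 2))  # ~13.25 stops after normalisation
```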

One could argue that the 'absolute DR measurement' of a single photoreceptor is meaningless when evaluating a sensor with many millions of receptors.

A single-pixel DR, in my opinion, is more usable in the field. It affects how you interpret your histogram and how the image will look when viewed 1:1. It also affects how much you'll need to downsample in order to get satisfactory shadows.

Not arguing that at all. But the fact that it can work...sometimes...tells us that the 1 EV difference is not due to Canon's ADC design. It's due to the dual pixel arrangement.

I totally agree, maybe it's not the only reason, but the dual pixel design definitely contributes to the DR decrease. I'm just not taking it as an excuse from Canon, I don't care as a consumer why they lag behind, I want them to improve. From the graphs I quoted in one of the messages above, they still have some significant read noise, while Sony has it at literally 1 electron. That's probably room for improvement for Canon despite the dual pixel arrangement.
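To put rough numbers on that (the full-well and read-noise figures below are illustrative assumptions, not measurements of any particular body): per-pixel engineering DR is essentially log2(full well / read noise), so dropping read noise from a few electrons toward one electron is worth roughly a stop or more by itself.

```python
# Illustrative arithmetic only - the full-well and read-noise values are
# assumptions, not measurements of any particular sensor.
import math

def per_pixel_dr_stops(full_well_e, read_noise_e):
    # engineering DR: largest recordable signal over the read-noise floor
    return math.log2(full_well_e / read_noise_e)

full_well = 50_000  # electrons (assumed)
for read_noise in (3.0, 1.5, 1.0):
    print(f"read noise {read_noise} e-  ->  {per_pixel_dr_stops(full_well, read_noise):.1f} stops")
# 3.0 e- -> 14.0 stops, 1.5 e- -> 15.0 stops, 1.0 e- -> 15.6 stops
```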

Not a practical problem since today's sensors are 14-bit devices and we have both 16-bit and 32-bit processing on the desktop.

You still can't capture more than 14 stops with a 14-bit sensor and view 1:1. You can convert it to 16 or 32 bits, but you don't gain any additional information; you only reduce quantisation errors in further processing. However, almost any processing after that will come at the cost of information loss: almost any slider movement in Lightroom means information loss (in the final image! Lightroom changes are additive, applied on top of each other every time you change anything, so the original image itself is kept intact).
Downsampling to gain DR is also a lossy change; we obviously sacrifice resolution.
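A tiny sketch of the 'wider container, no new information' point, using toy numbers rather than any real raw file:

```python
# Toy illustration: converting 14-bit data to a wider type adds no information,
# while processing that rounds back to integer levels merges some of them.
import numpy as np

raw14 = np.arange(0, 2**14, dtype=np.uint16)        # every possible 14-bit value
as32 = raw14.astype(np.float32)                     # wider container, same 16384 distinct values
print(len(np.unique(raw14)), len(np.unique(as32)))  # 16384 16384

# A 1/3-stop exposure pull, rounded back to integer levels, merges adjacent values:
pulled = np.round(as32 * 2 ** (-1 / 3))
print(len(np.unique(pulled)))                       # ~13000 distinct levels left, down from 16384
```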
 
Actually I disagree. The R glass is quite different from EF glass due to the much shorter flange distance. I am no optical engineer, but I understand that new lens designs that cannot be done on the EF mount will be available for the R mount.

Yes, they can achieve focal length/aperture combos not available before. It's still an electronically controlled, autofocusing, IS-capable lens. Until they can make RF glass take 3D hologram pictures, it's still a minor evolution, and EF glass is still relevant now and will be for a while. They may not make many new ones, but they will all still work exceptionally well and seamlessly on RF bodies with adapters.
 
Dynamic range is a characteristic of a transmission system passing (some types of) signals. A signal is not just the sampled result of some measurement, but a message encoded in the measured value, carrying information that is of interest to us. For the same physical transmission channel, different kinds of signals of interest correspond to different values of DR.

That's exactly why absolute DxO or PTP values are meaningless - they're based on arbitrary choices for normalisation. If the measurement depends on one's arbitrary interest and definition of the DR, you can't do it scientifically. You get drastically different results from DxO and PTP (up to two stops, I believe). The DR of your sensor decreases as you get closer to the print. There's something intrinsically wrong about that.
Per-pixel DR, however, is invariant.
 
I'm not saying noise reduction shouldn't be used because of the loss of detail, I'm just saying it shouldn't be used in sensor performance comparison.

No one is debating measured sensor performance, they're debating the practical meaning and relevance of those numbers.

But the D850 owner can also bump LNR/CNR and print a certain range of shadows where the 5DIV produces a mess even after NR.

That's just not true between these two cameras. The gap is not that wide. If you're at the end of a 5D4 shadow push, the D850 is not going to give you a perfectly clean, award-winning print with vast increases in shadow detail. On either camera you're at the point where you should be blending two or more exposures if you care about shadow IQ. At the point the 5D4 is falling apart, the D850 is starting to fall apart, quality-wise.
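As a rough illustration of why blending helps shadow IQ, here's a toy simulation (arbitrary photon counts, shot noise only, no real camera data): the brighter frame collects more photons in the shadows, so once it's scaled back down its shadows are cleaner, and a blend can take its shadows along with the base frame's unclipped highlights.

```python
# Toy simulation (arbitrary numbers, shot noise only, no real camera data) of why
# blending a brighter frame into the shadows helps: the +2 EV frame collects 4x
# the photons there, so after scaling it back down its shadows are cleaner.
import numpy as np

rng = np.random.default_rng(0)
shadow_photons = 25  # assumed photons per pixel in a deep shadow at the base exposure

base = rng.poisson(shadow_photons, 100_000).astype(float)    # base exposure
plus2ev = rng.poisson(shadow_photons * 4, 100_000) / 4.0     # +2 EV frame, scaled back down

print(f"base frame shadow SNR:  {base.mean() / base.std():.1f}")       # ~5
print(f"+2 EV frame shadow SNR: {plus2ev.mean() / plus2ev.std():.1f}")  # ~10
# A blend takes these cleaner shadows from the +2 EV frame and the unclipped
# highlights from the base frame.
```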

As I've said before, the point is that this is an arbitrary normalisation,

Opening your eyes and looking at a photograph is an "arbitrary normalization." At least the arbitrary choice in DxO's print score is closer to your arbitrary choice when viewing photographs than measuring a single pixel would be.

Who on earth will be viewing my images like that?

You just asked "who on Earth will be viewing my images at anything less than 1:1 magnification?" And unless you've cornered a very unique niche in the art market, the answer would be "everyone."

I totally agree, maybe it's not the only reason, but the dual pixel design definitely contributes to the DR decrease. I'm just not taking it as an excuse from Canon, I don't care as a consumer why they lag behind, I want them to improve.

Where is Neuro when you need him? Canon's behavior and market share suggest that their consumer base doesn't care at all about 5D4 vs D850 DR. It's a point of obsession on photography forums for some reason, but that seems to be all.

If Canon were still stuck at the 5D3/6D2 level then it might start to impact their sales.

You still can't capture more than 14 stops with a 14-bit sensor and view 1:1.

No one but the original photographer views 1:1, and that's only when editing. For a 2D image, DR is relative to view size. There's no way around that. Our sensors have millions of pixels, and you literally cannot give an accurate measurement for the sensor without first stating the view size. And exchanging spatial sampling for SNR is not only something you can do, it is something that will happen for every viewer to a greater or lesser degree based on monitor/print size and viewing distance.
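Here's a quick, purely illustrative simulation of that trade (made-up signal and noise levels, not any camera's figures): averaging blocks of pixels, which is effectively what a smaller viewing size does, raises shadow SNR at the cost of resolution.

```python
# Purely illustrative: made-up signal and read-noise levels, no real sensor data.
# Averaging 2x2 blocks of pixels (what a smaller view size does, in effect)
# roughly doubles shadow SNR while halving linear resolution.
import numpy as np

rng = np.random.default_rng(0)
signal = 10.0       # mean shadow signal, arbitrary units
read_noise = 3.0    # per-pixel noise, arbitrary units

pixels = signal + rng.normal(0.0, read_noise, size=(4000, 6000))

def snr(img):
    return img.mean() / img.std()

downsampled = pixels.reshape(2000, 2, 3000, 2).mean(axis=(1, 3))  # 2x2 block average

print(f"per-pixel SNR:      {snr(pixels):.1f}")       # ~3.3
print(f"after 2x2 average:  {snr(downsampled):.1f}")  # ~6.7 (about sqrt(4) = 2x better)
```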
 
That's exactly why absolute DxO or PTP values are meaningless - they're based on arbitrary choices for normalisation. If the measurement depends on one's arbitrary interest and definition of the DR, you can't do it scientifically.

Choosing a view size is not an arbitrary definition of DR. It's an arbitrary but necessary input into the formula for DR.

The DR of your sensor decreases as you get closer to the print. There's something intrinsically wrong about that.

There's something intrinsically wrong about satellite clocks ticking at a different rate than clocks on Earth. But tick away they do. Think of DR as the theory of relativity for photography, only in this case the DR depends on the observer's distance from the image. It probably depends on their speed too. An observer passing your print at 100 mph is unlikely to notice as much shadow noise as one passing it at 1" per hour.

It could be worse. If it were like quantum physics, then nobody could say whether your Canon had more or less DR than a Nikon until you photographed a cat in a box.
 
I'd really be surprised if an RF 400mm f/2.8 L IS is introduced before an RF 300mm f/2.8 L IS and RF 500mm f/4 L IS.

The EF 400mm f/2.8 L IS II and EF 600mm f/4 L IS II both got total redesigns to "III" versions in 2018.

The EF 300mm f/2.8 L IS II and EF 500mm f/4 L IS II are 2011 designs. They'll be the first RF great whites.

I get your logic, but the 400mm f/2.8 is the biggest money lens. I still bet that is the first.
 
There's not one stop more DR to be found between current technology and theoretically perfect.

Explain the R6 then. Who on earth, owning a 6D or 6DII, would go for the R6 (if Canon tries to claim it is a kind of 6DIII) just because Canon thought they might reuse the 1DX III sensor while going down from 26 to 20 MP? I can imagine a special low-light machine as a complement to the R5, even going down to 14-18 MP, but it would have to be in a 2-3 stop ballpark. If the new R6 sensor is not much better than the RP or R, I can see a hard time for people going for the R6, unless it is extremely cheap or serves some other, yet unknown purpose...
 
I have said that having two different camera lines and two different lens lines makes no sense.

The R lenses that are out now and coming out this year will comprise the most popular focal lengths of EF lenses, so while there may not be 80 R lenses, they will probably cover 75% of the EF focal length models purchased, with the exception of long glass. There also won't be T&S lenses in R mount for some time.

Once there is decent market penetration of R cameras, they will cease making the duplicate EF lenses. For example, there will never be a compact 70-200 f/2.8 in the EF mount.

Under the other poster's scenario of 10 year switch over, you would have to make these duplicate lenses for a long time. That doesn't make economic sense.

That other 25% is still worth millions of dollars in sales. EF won't go away until those millions are not there anymore. I doubt it will be ten years (though I think many EF lenses will still receive service/support in 2030), but it will be far more than a year or two. It will be at least five years before Canon no longer sells EF lenses, probably longer, and they'll service whatever they sell for around seven additional years.
 
I get your logic, but the 400mm f/2.8 is the biggest money lens. I still bet that is the first.

I'd be very surprised if Canon sells significantly more EF 400mm f/2.8 lenses than EF 300mm f/2.8 lenses. The 400mm lenses may be more popular with birders, but around sports shooters one tends to see more 300/2.8 lenses than 400/2.8, though both are common enough.
 
Explain the R6 then. Who on earth, owning a 6D or 6DII, would go for the R6 (if Canon tries to claim it is a kind of 6DIII) just because Canon thought they might reuse the 1DX III sensor while going down from 26 to 20 MP? I can imagine a special low-light machine as a complement to the R5, even going down to 14-18 MP, but it would have to be in a 2-3 stop ballpark. If the new R6 sensor is not much better than the RP or R, I can see a hard time for people going for the R6, unless it is extremely cheap or serves some other, yet unknown purpose...
The A7S was the most successful model of the original A7 series (the A7SII also became popular until the A7III more or less took its place for now).
Canon has more or less left this segment since the 5D Mark III.
Now Canon is finally back with a more advanced equivalent model that will compete against the A7SIII and S1H (and later on Nikon might join in as well). These higher-end, video-focused stills cameras will continue to occupy a significant portion of the video market as it moves towards bigger sensors. The gap between this and the C500 Mark II is still huge.

And it will have a flip screen, EVF, IBIS, smaller size and weight, and a much more flexible RF mount (with the option of a V-ND EF adapter), all of which are missing from the 1DX Mark III in favour of ultimate speed and durability. Since that one costs $6,500, this still has to be over $4,000 (depending on which codec options they are going to keep), which is pretty expensive, but if we consider what was in the 1DC and 1DX Mark II, it is a significant step forward.
 
Actually I disagree. The R glass is quite different from EF glass due to the much shorter flange distance. I am no optical engineer, but I understand that new lens designs that cannot be done on the EF mount will be available for the R mount.

The end of EF will not be determined by what RF lenses are not available in EF versions, it will be determined by what EF lenses are no longer not available in RF mount. (I know, I know. Double negative and all, but it's the most compact way to say it.)
 
indoors 400 is a bit too long, outdoors 300 is a bit too short. :)

Not really. At venues where there are armies of photographers who use 300/2.8 or 400/2.8 lenses, there's more than enough light to use a 1.4X with a 300. Carrying a 300/2.8 + 1.4X is a LOT lighter, a LOT cheaper, and more flexible than carrying a 400/2.8, and it works almost as well for the few longest-distance shots. In the U.S., outdoors we're talking primarily baseball and American football. The pace of both of those and the way one shoots them give plenty of time to decide whether one wants 300mm or 420mm on the "long" body. The other bodies have 70-200s and/or 16-35s hanging on them. A 400 does make more sense for baseball, where 400 + 1.4X can be useful at times. But college baseball gets very little coverage, and there are only 30 MLB teams, compared to 32 NFL and 254 Division 1 (129 FBS + 125 FCS) football teams.

Not many shooters I know or have seen use 300s in indoor gyms for most sports. They're too long there too, unless one is shooting from the rafters in a very large arena (or from the upper seats at the end of a mid-size gym, as with volleyball). Maybe gymnastics, but there are a LOT more shooters on the baselines for basketball and on the sidelines and end lines for football, even at mid-level colleges, than the number shooting even major college gymnastics and volleyball. I don't know what hockey shooters use. But again, there aren't nearly as many hockey shooters in the U.S. as there are shooters covering football, basketball, and baseball.
 
Explain the R6 then. Who on earth, owning a 6D or 6DII, would go for the R6 (if Canon tries to claim it is a kind of 6DIII) just because Canon thought they might reuse the 1DX III sensor while going down from 26 to 20 MP? I can imagine a special low-light machine as a complement to the R5, even going down to 14-18 MP, but it would have to be in a 2-3 stop ballpark. If the new R6 sensor is not much better than the RP or R, I can see a hard time for people going for the R6, unless it is extremely cheap or serves some other, yet unknown purpose...

The 6D, 6D Mark II, and RP are not really current sensor technology. They haven't shown any real improvement over what has been available from Canon since 2012, when the 6D was introduced. Both the 5D Mark IV (and R) and the 1D X Mark II have much better low-ISO DR than the 6D Mark II does. When normalized for size, so do the 80D and 90D.

We haven't yet seen how the 1D X Mark III and R6 sensors perform. I'm guessing the answer to your question, once we've seen those sensors, will be, "Anyone who wants to update to current sensor technology."
 
Explain the R6 then. Who on earth, owning a 6D or 6DII, would go for the R6 (if Canon tries to claim it is a kind of 6DIII) just because Canon thought they might reuse the 1DX III sensor while going down from 26 to 20 MP? I can imagine a special low-light machine as a complement to the R5, even going down to 14-18 MP, but it would have to be in a 2-3 stop ballpark. If the new R6 sensor is not much better than the RP or R, I can see a hard time for people going for the R6, unless it is extremely cheap or serves some other, yet unknown purpose...

I really feel like we don't have enough to go on to determine the so-called R6's purpose, and I've said so numerous times in these threads. Is it a 6D successor? Is it a video-centric body? Is it an ultra-low-end body? We simply don't know at this time, though by the laws of probability, surely one (or several) of us has guessed right by now, somewhere over the last hundred pages :rolleyes:;)
 
However, I'm struggling with banding sometimes and would like to see improvements in this area too.
But I'm OK with that. More disturbing is the banding that appears in the shadows of long exposures at ISO 400-1600 (and is even detectable at ISO 100). Presumably it's thermal noise. It almost disappears when raising the ISO to 3200, but that ISO is very hard to deal with (talking about astro/nightscapes).
I have observed this banding in my 80D multiple times already, although only in extreme shadow pushes made necessary because I screwed up the exposure or tried to avoid blending multiple ones.

Anyway, I think this is an issue of the past. The EOS R already got a firmware update a while back that addressed the banding it inherited from the 5D IV. Unfortunately, I think no DSLR will now get that update. But the newer ones seem superior anyway. Despite searching, I have not found any reports of banding in the M6 II or 90D yet, and the 1DX III sensor will surely improve things even further.
 
Yes, they can achieve focal length/aperture combos not available before. It's still an electronically controlled, autofocusing, IS-capable lens. Until they can make RF glass take 3D hologram pictures, it's still a minor evolution, and EF glass is still relevant now and will be for a while. They may not make many new ones, but they will all still work exceptionally well and seamlessly on RF bodies with adapters.

Taking 3D hologram pictures is not a limitation of the lens but of the sensor. All it takes is a plenoptic filter on the sensor, very much like the one already used for dual-pixel autofocus but covering many more subpixels.
It's described in sufficient detail in this thesis, which was also the basis for the original Lytro camera: https://drum.lib.umd.edu/handle/1903/18735

I'm not buying an R5 even if it has the tilty-flippy screen that was the only unfilled item on my 5DmkIV wish list, since my 5DmkIV is such a performance monster for my purposes anyway. However, if the R5mkII were to have 'dual pixel' autofocus based on microlens arrays covering >256 subpixels, and each of those could be read out individually from the raw file... then I could construct true 3D images, and that would indeed be a reason to buy a new camera! All focus misses could be adjusted in post, just like DPRAW but with an actual chance of working properly. All it takes is a sensor with about 1 Gpx...
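For anyone curious what 'adjusting focus misses in post' would look like computationally, here's a bare-bones sketch of the shift-and-add refocusing idea used with light-field captures. The data layout (a grid of sub-aperture views) and all the names below are hypothetical; this is just the general technique, not any camera's actual raw format or Canon's DPRAW processing.

```python
# Bare-bones shift-and-add refocusing over a hypothetical light-field capture:
# an array of sub-aperture views with shape (U, V, H, W), one low-resolution
# view per sub-pixel position. Illustrates the general technique only.
import numpy as np

def refocus(subviews, alpha):
    """Shift each sub-aperture view in proportion to its offset from the lens
    centre, then average. 'alpha' selects the synthetic focal plane."""
    U, V, H, W = subviews.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(subviews[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Hypothetical 16x16 grid of sub-aperture views (the ">256 subpixels" case):
lightfield = np.random.default_rng(0).random((16, 16, 120, 180))
near_focus = refocus(lightfield, alpha=1.0)
far_focus = refocus(lightfield, alpha=-1.0)
```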
 