The Canon EOS R5 Mark II coming in Q2, 2023? [CR2]

Because it's comfortable.
Canon released the EOS DCS 3 back in 1995 with a hard drive inside. Today, in 2022, I don't want to insert a 2230 SSD into a box that is only a little thicker; I want the ability to insert an M.2 2280 SSD directly into the camera, with full temperature compatibility.
SSDs are small these days, so such a solution is possible. I would also like an additional battery grip that can be used not just as a battery holder but can hold two or three SSDs.
If Canon wants to release a camera with 8K60, how would I use it? 5 minutes per card?
8K30 DCI RAW internal can currently fit 51 minutes on a 1 TB card (see the R5 advanced user guide), so 8K60 would be ~25 minutes on a 1 TB card or ~50 minutes on a 2 TB card. External power would be needed. The thermal limit would be interesting for 8K60 RAW.
Delkin currently has a 2 TB card for USD 700. That said, I doubt that anyone will record long clips of 8K60.
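The arithmetic behind those figures can be sketched quickly. This is a back-of-the-envelope estimate only, assuming the guide's 51-minute 8K30 figure and that 8K60 simply doubles the data rate:

```python
# Rough recording-time arithmetic for the card-capacity figures above.
TB = 1e12  # card makers use decimal terabytes

def minutes_per_card(card_bytes, bitrate_bps):
    return card_bytes * 8 / bitrate_bps / 60

# Back out the 8K30 RAW bitrate from 51 minutes on a 1 TB card:
bitrate_8k30 = 1e12 * 8 / (51 * 60)   # ~2.6 Gbit/s
bitrate_8k60 = 2 * bitrate_8k30       # twice the frames, twice the data

print(round(minutes_per_card(1 * TB, bitrate_8k60), 1))  # ~25.5 min
print(round(minutes_per_card(2 * TB, bitrate_8k60), 1))  # ~51.0 min
```

That lines up with the ~25/~50 minute estimates quoted.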

8K60 could be recorded externally. HDMI 2.1 can support it, and I would be surprised if Canon sticks with HDMI 2.0 instead of upgrading the port.
If one of the USB-C ports is Thunderbolt 4, then it can handle 8K60 10-bit transfer (DisplayPort 2.0, 80 Gbps) and you can have all the storage you want within the 3 m maximum cable length :)

Note that it isn't clear whether the 8K60 would be based on a cropped sensor (i.e. 45 MP, 3:2 aspect) or oversampled from the full width of a 60 MP 3:2 sensor. Cropped would use far less processing power and generate far less heat.
 
Absolutely. Basic control theory tells us that the higher the sample frequency, the faster the loop response can be. The catch is that an R7 with a stacked sensor would be at a price point that would be a hard sell for the masses. At the current price, it will sell very well.
By the R series standards it's a bargain!
 
Oversampling is better than pixel binning. Two processors handle the additional computing load of the oversampling algorithms.
cRAW is lossy, and although oversampling isn't technically "raw", it would be much better than pixel binning for recording resolutions smaller than native.
Can you please explain - what is pixel binning, and what is oversampling? I thought I understood but now I'm confused again.
 
Can you please explain - what is pixel binning, and what is oversampling? I thought I understood but now I'm confused again.
Short answers from Wikipedia:
Pixel binning, often called binning, is the process of combining adjacent pixels throughout an image, by summing or averaging their values, during or after readout.
Charge from adjacent pixels in CCD image sensors and some other image sensors can be combined during readout, increasing the line rate or frame rate.
In the context of image processing, binning is the procedure of combining clusters of adjacent pixels, throughout an image, into single pixels. For example, in 2x2 binning, an array of 4 pixels becomes a single larger pixel,[1] reducing the number of pixels to 1/4 and halving the image resolution in each dimension. The result can be the sum, average, median, minimum, or maximum value of the cluster.[2]
This aggregation, although associated with loss of information, reduces the amount of data to be processed, facilitating analysis. The binned image has lower resolution, but the relative noise level in each pixel is generally reduced.
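The 2x2 case from the quote above is easy to sketch. A minimal plain-Python version, averaging each 2x2 cluster into one pixel (real sensors bin within colour channels of the Bayer grid; this toy ignores that):

```python
# Minimal 2x2 binning sketch: average each 2x2 cluster into one pixel.
def bin2x2(img):
    h, w = len(img), len(img[0])
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4
             for c in range(0, w - w % 2, 2)]
            for r in range(0, h - h % 2, 2)]

img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
print(bin2x2(img))  # [[2.5, 4.5], [10.5, 12.5]]
```

A 4x4 array becomes 2x2: 1/4 the pixels, half the resolution in each dimension, exactly as the quote describes.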

The other option is oversampling, where the entire sensor (at least its full width) is read out and downscaled to the final file size.
"In signal processing, oversampling is the process of sampling a signal at a sampling frequency significantly higher than the Nyquist rate. Theoretically, a bandwidth-limited signal can be perfectly reconstructed if sampled at the Nyquist rate or above it. The Nyquist rate is defined as twice the bandwidth of the signal. Oversampling is capable of improving resolution and signal-to-noise ratio, and can be helpful in avoiding aliasing and phase distortion by relaxing anti-aliasing filter performance requirements. "
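The SNR benefit mentioned in that quote can be seen with a toy experiment: sample a constant signal with additive noise at 4x the needed rate, then average groups of 4 samples down to the base rate. The noise standard deviation drops by roughly sqrt(4) = 2:

```python
import random

# Toy illustration of oversampling's noise benefit.
random.seed(0)
samples = [1.0 + random.gauss(0, 0.1) for _ in range(4000)]  # 4x oversampled

# Average each group of 4 to get the base-rate output:
averaged = [sum(samples[i:i + 4]) / 4 for i in range(0, 4000, 4)]

def stddev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(stddev(samples))   # ~0.10
print(stddev(averaged))  # ~0.05, roughly halved
```

This is of course a 1-D signal sketch, not a camera pipeline, but the same averaging principle is why oversampled video looks cleaner than line-skipped video.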
 
Can you please explain - what is pixel binning, and what is oversampling? I thought I understood but now I'm confused again.
Longer answer below, but I am sure that others will have additional information/corrections/clarifications.

A sensor's native resolution can be chopped up to provide smaller files and different aspect ratios.
ILC/35mm sensors are 3:2 aspect ratio, e.g. R5 = 8192 x 5464 pixels.
Using the R5 in crop/APS-C mode gives ~17 MP in 3:2 aspect ratio, and the pixels outside that segment are dropped. You can do this in post-processing as well, but the difference is the initial file size.

Video uses different aspect ratios to 3:2, e.g. DCI 8K uses the full width of the sensor at 8192 pixels but only 4320 pixels high, a 17:9 aspect ratio. The rest of the lines on the sensor are dropped/not recorded. If UHD 8K is recorded, then 7680 x 4320 pixels are recorded (16:9 aspect ratio). The simple choice is to crop the sensor's width (dropping 512 pixels), which slightly changes the field of view of your lenses.
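The DCI-vs-UHD width numbers check out as simple arithmetic:

```python
# DCI 8K vs UHD 8K width on an 8192-pixel-wide sensor.
dci_width, uhd_width = 8192, 7680
print(dci_width - uhd_width)            # 512 pixels of sensor width dropped
print(round(dci_width / uhd_width, 3))  # ~1.067x crop, hence the slight FoV change
```

A ~1.07x crop is small but visible, which is why the field-of-view change is worth mentioning.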

For 4K and smaller video implementations:
- Crop the sensor so that only the middle portion is recorded. The field of view will change accordingly.
- Line skip (as on the R5), where every 2nd line of the sensor is not recorded (down to the required 2160 lines for the DCI aspect ratio). The field of view does not change in this case.

In all these options, you could consider the files raw, as there is no computation changing the per-pixel measurement. That said, Canon uses codecs for resolutions below DCI 8K, so those files are not raw.

When you line skip, the frames end up with a weird aspect ratio, i.e. 3:1 vs 3:2, so you need to drop/bin alternate horizontal pixels as well to bring it back to 3:2. A similar process applies to video formats.
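The line-skipping steps above can be sketched with array slicing on a toy 6x4 "sensor" (3:2 aspect):

```python
# Line-skipping sketch: skip every 2nd row, then alternate columns.
# A 6-wide x 4-high grid is 3:2, like a stills sensor.
sensor = [[f"p{r}{c}" for c in range(6)] for r in range(4)]

rows_skipped = sensor[::2]                  # keep every 2nd line -> 6 wide x 2 high, i.e. 3:1
final = [row[::2] for row in rows_skipped]  # drop alternate columns -> 3 wide x 2 high, back to 3:2

print(len(final[0]), "x", len(final))  # 3 x 2, same 3:2 aspect as the sensor
```

Skipping rows alone doubles the aspect ratio to 3:1; dropping alternate columns restores 3:2, just as described.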

So the 4K HQ (high quality) option on the R5 is where the processor takes the 8K raw data, oversamples it, and outputs a 4K video file. It uses the processor for the calculations, hence the initial thermal limitations vs the non-thermally-limited line-skipped option.
The Sony A1 does the same with its 60mp sensor to output an 8K video codec. It doesn't/can't output 8K raw video. If it could, then it would need CFexpress Type B cards and there would be a significant crop (changed field of view).

Oversampling costs processing power/heat/battery life etc., so simpler techniques such as line skipping were used in the past and remain an option today. Oversampling should give you a sharper image and significantly reduce moiré/colour artifacts.

Still shots all use compression/oversampling except for full raw. High resolution is great except for file sizes (buffer/card capacity/post-processing/storage).
Options:
JPEG has the lowest quality and has compression settings. 8-bit colour, but ubiquitous as an output format.
HEIF has newer compression, handles 10-bit dynamic range/16-bit colour, and is ~half the file size of JPEG. It has yet to become a standard.
Canon decided that their slightly lossy cRAW format is preferable to the 5DS/5DSR mRAW/sRAW formats. I am not sure exactly what the difference is, though. Both give full resolution with smaller files and must use oversampling.

Pixel binning (especially on these >100 MP phone cameras) refers to using 2x2 (four-in-one) or 3x3 (nine-in-one) groups of pixels to represent one pixel: light sensitivity for night shots vs high resolution for day shots. My DJI M3P has a 48 MP sensor, but no one can really see any advantage in using the full resolution vs 12 MP stills (4-in-1 pixel binning). The only resolution options under this definition are 4:1, 9:1, etc.
 

koenkooi

[…]
Canon decided that their cRAW slightly lossy format is preferable to the 5DS/r mRAW/sRAW formats. I am not sure exactly what the difference is though. Both have full resolution and smaller files and must use oversampling.
[…]
mRAW/sRAW are debayered, downscaled TIFF images. cRAW isn't debayered and is only full size.
With cRAW still in the original Bayer grid, RAW converters can still work their colour, sharpness and denoising magic on it.
From what I've read, cRAW compression works by reducing bit depth in areas that don't need it, like the sky or bokeh. I'd love to see a proper analysis of cRAW, since it's like magic to me :)
 
Yes, much could be achieved via firmware. I think pre-burst would probably require more processing power though.
Why do you think so? It's basically just filling the buffer earlier. I see no reason why this wouldn't work within the recording limits (CPU power, write speeds, buffer size, etc.) of the R5 hardware, of course.
 

koenkooi

Why do you think so? It's basically just filling up the buffer earlier? I see no reason why this should not work in the recording limits (cpu power, write speeds, buffer size, etc.) of the R5 hardware of course
The M6II had that feature already, using an SD card, a smaller buffer and previous-gen Digic 8. I don't think it's a hardware limitation that prevents the R5 from having it. Canon almost never adds features from either lower end or more recent cameras to existing cameras. I think the original R still doesn't have focus stacking, while literally every other R camera has it.
When I buy electronics I pretend that firmware updates aren't a thing, that sets the expectation level correctly for nearly all electronics. And allows for pleasant surprises :)
 

AlanF

The M6II had that feature already, using an SD card, a smaller buffer and previous-gen Digic 8. I don't think it's a hardware limitation that prevents the R5 from having it. Canon almost never adds features from either lower end or more recent cameras to existing cameras. I think the original R still doesn't have focus stacking, while literally every other R camera has it.
When I buy electronics I pretend that firmware updates aren't a thing, that sets the expectation level correctly for nearly all electronics. And allows for pleasant surprises :)
It's their planned obsolescence programme. I like Canon's gear, but with full knowledge that they are ruthless.
 
Longer answer below, but I am sure that others will have additional information/corrections/clarifications.
[…]
Thanks for all this. So my understanding of binning was correct, but I'm still finding oversampling a challenge so I'd best go off and do some more reading :)
 
I think the original R still doesn't have focus stacking, while literally every other R camera has it.
Just as a point of interest, you mean focus bracketing, right? I have the R6 and it will shoot focus bracketed sequences but it won't stack the resulting images, whereas the R6II can do both (the latter referred to as 'depth compositing'). (It's a not inconsequential distinction for me, I don't have a computer so I can't do focus stacking - having it in-camera would be fantastic!).
 

koenkooi

Just as a point of interest, you mean focus bracketing, right? I have the R6 and it will shoot focus bracketed sequences but it won't stack the resulting images, whereas the R6II can do both (the latter referred to as 'depth compositing'). (It's a not inconsequential distinction for me, I don't have a computer so I can't do focus stacking - having it in-camera would be fantastic!).
Yes, I meant bracketing, not compositing, sorry for the confusion. The R3, R7, R10 and R6II can do in-camera compositing to give you a ready-to-print JPEG. The RP, R5, R6 and M6II can only do the bracketing. And as @neuroanatomist pointed out in a different thread: the R3 can actually use flash in that mode; all the others can't, since it's electronic shutter only and their readout speed is too slow to capture the flash.
 
Canon almost never adds features from either lower end or more recent cameras to existing cameras.
Just as a point of interest, you mean focus bracketing, right? I have the R6 and it will shoot focus bracketed sequences but it won't stack the resulting images, whereas the R6II can do both (the latter referred to as 'depth compositing'). (It's a not inconsequential distinction for me, I don't have a computer so I can't do focus stacking - having it in-camera would be fantastic!).
Just as a relevant exception that makes the point, at release the R3 could only collect the focus bracketed images. The Depth Compositing feature was added to the R3 with the v1.2.1 firmware update (along with the ability to use a flash with focus bracketing).
 
Yes, I meant bracketing, not compositing, sorry for the confusion. The R3, R7, R10 and R6II can do in-camera compositing to give you a ready-to-print JPEG. The RP, R5, R6 and M6II can only do the bracketing. And as @neuroanatomist pointed out in a different thread: the R3 can actually use flash in that mode; all the others can't, since it's electronic shutter only and their readout speed is too slow to capture the flash.
I didn't realise the R7 could do it; that's an increasingly tempting second body for me!
 

RMac

Longer answer below, but I am sure that others will have additional information/corrections/clarifications.
[…]
So if this is a 61MP camera, by my reckoning that'd be about 9600 pixels across. So I guess that means that 8K would be cropped or subject to either binning or oversampling? Or would they maybe go with 10K, which would match the 9600-pixels-across spec? And I'd guess that if it does offer 8K raw output, that would necessarily involve a modest crop to get to a 1:1 pixel readout?
Also, I think the A1 is 50MP, and it's advertised as oversampling 8K footage from an 8.6K readout. The A7RV is the 61MP Sony body, and glancing at their marketing materials they don't say what they do on that camera for 8K (I gave up on trying to stay abreast of Sony specs long ago).
 

koenkooi

So if this is a 61MP camera, by my reckoning that'd be about 9600 pixels across. So I guess that means that 8K would be cropped or subject to either binning or oversampling? [...]
Binning works on every pixel equally, so you'd get divide-by-integer results: 4800, 3200, 1920, 1600 or fewer pixels across. Downscaling can work with arbitrary output sizes, some yielding better results than others, depending on the algorithm.
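Those divide-by-integer widths fall straight out of the hypothetical 9600-pixel width discussed above:

```python
# Integer-binned output widths from a hypothetical 9600-pixel-wide sensor.
width = 9600
print([width // n for n in (2, 3, 5, 6)])  # [4800, 3200, 1920, 1600]
```

None of those is the 8192 pixels DCI 8K needs, which is why binning alone can't produce 8K from a 9600-wide readout; it takes a crop or a downscale.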
 