Can you please explain - what is pixel binning, and what is oversampling? I thought I understood but now I'm confused again.
Longer answer below, but I am sure that others will have additional information/corrections/clarifications.
A sensor's native resolution can be chopped up to provide smaller files and different aspect ratios.
ILCs/35mm sensors have a 3:2 aspect ratio eg R5 = 8192 x 5464 pixels.
Using the R5 in crop/APS-C mode gives ~17mp in 3:2 aspect ratio; the pixels outside that segment are dropped. You can do this in post processing as well, but the difference is the initial file size.
Video uses different aspect ratios to 3:2, eg DCI 8k uses the full width of the sensor @ 8192 pixels but only 4320 pixels high, a 17:9 aspect ratio. The rest of the lines on the sensor are dropped/not recorded. If UHD 8k is recorded then 7680 x 4320 pixels are recorded (16:9 aspect ratio); that is the simple choice of cropping the sensor's width (dropping 512 pixels), which slightly changes the field of view of your lenses.
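To make the cropping concrete, here is a minimal numpy sketch of those record modes as different windows on the same readout. The centred window position and the exact APS-C dimensions are my assumptions for illustration; only the published pixel counts are Canon's.

```python
import numpy as np

full_frame = np.zeros((5464, 8192))      # R5 native 3:2 readout (rows, columns)

def centre_crop(frame, height, width):
    """Return a centred window of the requested size (position is assumed)."""
    top = (frame.shape[0] - height) // 2
    left = (frame.shape[1] - width) // 2
    return frame[top:top + height, left:left + width]

aps_c  = centre_crop(full_frame, 3414, 5120)   # ~17mp 1.6x crop mode (approximate dimensions)
dci_8k = centre_crop(full_frame, 4320, 8192)   # full width, 17:9
uhd_8k = centre_crop(full_frame, 4320, 7680)   # 512 columns dropped, 16:9

print(aps_c.shape, dci_8k.shape, uhd_8k.shape)
```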
For 4k and smaller video implementations:
- Crop the sensor so that only the middle portion is recorded. The field of view will change accordingly.
- Line skip on the R5, where every 2nd line of the sensor is not recorded (down to the required 2160 lines for the DCI aspect ratio). The field of view does not change in this case (sketched below).
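A rough sketch of the line-skip readout, again just numpy on a dummy array: every 2nd row of the 17:9 window is dropped and the full width is kept, so the field of view does not change.

```python
import numpy as np

dci_window = np.zeros((4320, 8192))   # 17:9 window from the full sensor
line_skipped = dci_window[::2, :]     # keep every 2nd row only
print(line_skipped.shape)             # (2160, 8192) - DCI 4K line count, full width
```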
In all these options, you can consider the files as raw, since there is no computation to change the per-pixel measurement. That said, Canon uses codecs for resolutions below DCI 8k, so the files are not raw.
When you line skip, the still files end up having a weird aspect ratio, ie 3:1 vs 3:2, so you need to drop/bin alternate columns of pixels as well to bring it back to 3:2. A similar process applies to the video formats.
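Sketch of that aspect-ratio fix (my illustration, not Canon's actual pipeline): skipping alternate columns as well as alternate rows brings the geometry back while quartering the pixel count.

```python
import numpy as np

full_frame = np.zeros((5464, 8192))   # 3:2 stills readout
rows_only = full_frame[::2, :]        # (2732, 8192) - stretched to ~3:1
both_axes = full_frame[::2, ::2]      # (2732, 4096) - back to ~3:2, a quarter of the pixels
print(rows_only.shape, both_axes.shape)
```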
So the 4kHQ (high quality) option on the R5 is where the processor takes the 8k raw data, oversamples it and outputs a 4k resolution video file. It uses the processor for the calculations, hence the (initial) thermal limitations vs the non-thermally-limited line-skipped option.
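Canon has not published how the 4kHQ scaler works, so treat this as a stand-in: plain 2x2 block averaging of 8k data down to 4k shows the idea of every output pixel being computed from several input pixels, which is where the extra processing load comes from.

```python
import numpy as np

raw_8k = np.random.rand(4320, 8192)                           # stand-in for an 8k luma plane
hq_4k = raw_8k.reshape(2160, 2, 4096, 2).mean(axis=(1, 3))    # average each 2x2 block
print(hq_4k.shape)                                            # (2160, 4096) - DCI 4K
```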
The Sony A1 does the same with its 50mp sensor to output an 8k video codec. It doesn't/can't output 8k raw video. If it could, it would need CFexpress Type B cards and there would be a significant crop (changed field of view) from the camera.
Oversampling costs processing power/heat/battery life etc, so simpler techniques such as line skipping were used in the past and remain an option today. Oversampling should give you a sharper image and reduce moire/colour artifacts significantly.
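A purely synthetic toy showing why: decimate a pattern near the pixel limit and the false detail (aliasing/moire) survives at full strength, but average neighbours first and it is strongly attenuated. The numbers are illustrative only.

```python
import numpy as np

x = np.arange(1000)
fine_detail = np.sin(2 * np.pi * 0.45 * x)           # detail close to the sampling limit

skipped  = fine_detail[::2]                           # "line skip": alias keeps full amplitude
averaged = fine_detail.reshape(-1, 2).mean(axis=1)    # "oversample": alias is heavily attenuated

print(np.abs(skipped).max(), np.abs(averaged).max())  # ~1.0 vs ~0.16
```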
Still shots all use compression/oversampling except for full raw. High resolution is great except for file sizes (buffer/card capacity/post processing/storage).
Options:
Jpeg has the lowest quality and has settings for compression. 8 bit colour, but ubiquitous as an output file format.
HEIF format has newer compression, handles 10 bit dynamic range/16 bit colour, and gives ~half the file size of jpg. Still to become a standard.
Canon decided that their slightly lossy cRAW format is preferable to the 5DS/5DS R mRAW/sRAW formats. I am not sure exactly what the difference is though. Both give smaller files; cRAW keeps the full resolution, while mRAW/sRAW output reduced resolutions and so must use some form of oversampling.
Pixel binning (especially for these >100mp phone cameras) refers to using 2x2 (four in one) or 3x3 (nine in one) groups of pixels to represent 1 pixel: light sensitivity for night shots vs high res for day shots. My DJI M3P has a 48mp sensor but no one can really see any advantage in using the full resolution vs 12mp stills (4 in 1 pixel binning). The only resolution options using this definition are 4:1, 9:1 etc.
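A simple sketch of the 4-in-1 case, assuming plain summing of a single-channel array (real sensors bin in the analogue domain and per colour channel, so this is only the arithmetic idea):

```python
import numpy as np

def bin_pixels(frame, factor=2):
    """Sum factor x factor blocks of photosites into one output pixel."""
    h, w = frame.shape
    h, w = h - h % factor, w - w % factor            # trim to a multiple of the bin size
    blocks = frame[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.sum(axis=(1, 3))

sensor_48mp = np.random.rand(6000, 8000)             # ~48mp single-channel stand-in
binned_12mp = bin_pixels(sensor_48mp, factor=2)      # 2x2, four-in-one
print(binned_12mp.shape)                             # (3000, 4000) ≈ 12mp
```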