Here are some crazy Canon EOS R1 specifications [CR0]

Talys

Canon R5
CR Pro
Feb 16, 2017
Vancouver, BC
My wedding was shot in 360i 30+ years ago, as that was the standard of the day. It looks pretty average played back now; 720p would have been very high end at that time. The A1's 8K record rate of 200 Mb/s means it can be used as an original source without major storage drama. In 30 years' time 8K will surely be standard for delivery/streaming and display, although we are reaching the limits of our eyes and viewing distance. The R5's firehose data rates in 8K raw and 4K/120 cause the overheating problem: they can't be recorded externally because the HDMI interface is limited, though the CFexpress card will support them. If Canon had released the R5 with Cinema-lite codecs and it could record externally without overheating, the major issue would have been overcome: raw internally for special purposes in sub-20-minute bursts, but compressed raw externally.
I dream of more for video thirty years from now. Hopefully it's a matrix of palm-sized drones that can create an immersive experience where you can see anything from anywhere. Maybe a holographic or virtual-reality recreation :)

And yet, I suspect, 30 years from now, stills will be stills and capturing the moment, creating a story from a single image that can spark the imagination.
 
I've used the 4K120 and, granted, it's not summertime (though we do have spring conditions right now in sunny Southern California), and I haven't had any issues recording 4K120. I am mindful of not just recording continuously in that mode, but I haven't hit the limit. I think when people record for slow motion, they aren't recording for long periods of time. No one wants to sit through a wedding in slo-mo, so why would you record 4K120 for half an hour for anything?

And after learning that you can get 4KHQ quality with regular 4K if you enable crop mode, then I don't worry at all about overheating because 4K in crop mode won't overheat under any kind of usage. You still get the great oversampling down to 4K (6K to 4K?).

It would be nice to record in 8K raw or 4K120 without having to think about things. But if you are aware and plan for it, the time limits aren't that bad anymore. And 200 Mb/s for 8K seems awfully low. At least I don't have to worry about sunlight causing overheating, as on the A7S3 (see Dan Watson's videos - just bizarre).
I've used 4K/120 for short clips, mostly for underwater shooting where there is a lot of movement (you, the subject, etc.). I wouldn't record a wedding in 4K/120... that would be painful, and sound isn't recorded anyway. Recording in 8K would be more realistic, I think, if Canon were to release the awaited Cinema-lite compression options, especially for external recording. At my daughter's wedding a couple of years ago, I set up my iPhone and let it run for 90 minutes recording in 4K/30 with an external mic. Pretty good quality, as the lighting was reasonable in the church, and the iPhone was only slightly warm afterwards. No issues with 30-minute limitations either.
I'm sure that 60 GB file will sit somewhere for a long time doing nothing, but there will come a time when they look at it, or parts of it, again :)
If I understand correctly, the A1's 8K mode records at 200 Mb/s. The slots are dual UHS-II/CFexpress Type A. Continuous 200 Mb/s is possible with good (mostly V90-rated) UHS-II cards and easy for CFexpress Type A cards. The R5's 8K/30 raw and 4K/120 are 2,600 and 1,880 Mb/s respectively, which is huge in comparison. That said, the R5's 8K IPB video can be written to a UHS-II card: V60 for 8-bit and V90 for 10-bit.
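To put those numbers in perspective, here's some quick storage arithmetic (a sketch assuming the bitrates quoted above and decimal units, 1 Mb = 10^6 bits):

```python
# Rough storage math for the bitrates quoted above.
# Assumes the quoted figures and decimal units (1 Mb = 10^6 bits, 1 GB = 10^9 bytes).

def gb_per_minute(mbit_per_s: float) -> float:
    """Convert a video bitrate in Mb/s to GB written per minute."""
    return mbit_per_s * 1e6 / 8 * 60 / 1e9

# Sony A1 8K at ~200 Mb/s
print(round(gb_per_minute(200), 1))    # 1.5 GB/min
# Canon R5 8K/30 raw at ~2600 Mb/s
print(round(gb_per_minute(2600), 1))   # 19.5 GB/min
# Canon R5 4K/120 at ~1880 Mb/s
print(round(gb_per_minute(1880), 1))   # 14.1 GB/min
```

The same arithmetic shows why the card requirements differ so much: 200 Mb/s is only 25 MB/s sustained, comfortably inside a V90 (or even V60) guarantee, while 2,600 Mb/s is 325 MB/s, beyond any SD card and into CFexpress territory.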
 
I dream of more for video thirty years from now. Hopefully it's a matrix of palm-sized drones that can create an immersive experience where you can see anything from anywhere. Maybe a holographic or virtual-reality recreation :)

And yet, I suspect, 30 years from now, stills will be stills and capturing the moment, creating a story from a single image that can spark the imagination.
Images have stood the test of time so far, and image depictions go back tens of thousands of years. A still image crystallises a moment in time for people to process.
You are right that humans struggle to comprehend exponential growth, so what has happened in the last 30 years is light years from what will be possible in 30 years' time.
Video will move towards realism - whether in fps, resolution, 3D, holograms, etc. - and maybe towards different input methods, i.e. not via our limited eyes.
 

GoldWing

Canon EOS 1DXMKII
Oct 19, 2013
Los Angeles, CA
Indeed.

How do you put an OVF on a mirrorless camera, unless, of course, you're willing to tolerate it being off-axis?

Which, if such a thing were tolerable, would have meant no one would have bothered to invent such a kludgy thing as an SLR.
To my point: no technology today can replace an OVF for professional sports, where a gimbal or monopod suffices to follow the action.
 

koenkooi

CR Pro
Feb 25, 2015
The Netherlands
ISO 160? I would go to ISO 25... although I am not sure how that can be implemented
It would open up possibilities for very-large-aperture lenses on sunny days for stills and video, but I found ISO 25 hard to use for casual things. The picture in my avatar was shot on Rollei RPX 25 film (120 format), outside in the sun :)
 
Mar 26, 2014
I dream of more for video thirty years from now. Hopefully it's a matrix of palm-sized drones that can create an immersive experience where you can see anything from anywhere. Maybe a holographic or virtual-reality recreation :)

And yet, I suspect, 30 years from now, stills will be stills and capturing the moment, creating a story from a single image that can spark the imagination.
That goes too deep into simulacrum territory for me, crossing the line from memory aid to fake-reliving the moment.
 
Probably because, to date, all raw-processing software (at least that I know of) only sees Bayer arrays, and that would be Canon's default output. Even now, with the ability to save dual-pixel raw, Canon only uses that info so you can do micro-adjustments after the fact, not to actually generate a file with more resolution from the two sub-pixels.

Going to quad-pixel AF still allows very easy Bayer output and, if done, does allow easy spatial-resolution bumps by not combining the sub-pixels, but it significantly bumps up the post-processing requirements. I suspect part of the reason Canon went to the CR3 file format over CR2 is to make it easier to store non-standard pixel arrays. You can save dual-pixel raw files in CR2 files (as the 5DIV does), but it basically stores them as two Bayer-array images in sub-chunks. The CR3 format stores each colour discretely in its own chunk, and the raw processor then has to read each colour chunk, combine it into a Bayer array, and demosaic it. The CR3 format is a pretty big deviation from how Canon stored its sensor data in the CR2 format.

I also suspect Canon has a very good reason to go quad-pixel AF: it allows them to have more than 2 output gains. This is how they were able to get the DR increases and noise improvements in recent dual-pixel bodies; each sub-pixel is actually 1 stop different from the other one. The way they store it in CR2 files, again, is less than ideal: they store the first Bayer array as they normally would, with both sub-pixels combined, and the second Bayer array with just the output of the second sub-pixel.

With a quad-pixel array, they'd have pretty good reason to store each sub-pixel by itself when saving quad-pixel files, as it would mean a lot more flexibility when generating a full-colour image. They could also have a quad-gain structure where each sub-pixel has 1 stop more gain than the next, giving a 3-stop spread between the highest- and lowest-gain sub-pixels from which to generate an image. This could be how they get to 15.5+ stops (if outputting a ~24MP Bayer array where all the sub-pixels are combined). They could keep a 12- or 14-bit ADC and have 4 gain outputs. If they stored each sub-pixel separately, they wouldn't even have to store 16 bits per pixel; they could still do 12 or 14 bits, then, when generating the full-colour image after the fact in their DPP software, store the resulting RGB as a 16-bit TIFF. I wouldn't be surprised in the least if it was just a straight 12-bit ADC (for speed) and they used the multi-gain to get the DR.
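The multi-gain idea is easy to sketch. The following toy model is not Canon's actual pipeline - the gain values, the 12-bit ADC, and the noise-free sensor are all assumptions for illustration - but it shows how reading one scene value through several gains and keeping the highest unclipped read extends usable dynamic range:

```python
# Toy model of multi-gain readout extending dynamic range.
# NOT Canon's actual pipeline: the gains, ADC depth, and noise-free
# assumption are illustrative only.

ADC_MAX = 4095          # 12-bit ADC full scale
GAINS = [8, 4, 2, 1]    # one gain per sub-pixel, 1 stop apart, highest first

def read_subpixel(signal: float, gain: float) -> int:
    """Ideal ADC read: amplify, clip to full scale, quantise."""
    return min(int(signal * gain), ADC_MAX)

def reconstruct(reads) -> float:
    """Pick the highest gain that didn't clip and divide the gain back out.
    High gain preserves shadow detail; low gain preserves highlights."""
    for gain, code in zip(GAINS, reads):
        if code < ADC_MAX:
            return code / gain
    return ADC_MAX / GAINS[-1]   # everything clipped: the highlight is lost

signal = 3000.0  # brighter than ADC_MAX/8, so only the low-gain reads survive
reads = [read_subpixel(signal, g) for g in GAINS]
print(reads)                # [4095, 4095, 4095, 3000] -> only the unity-gain read is unclipped
print(reconstruct(reads))   # 3000.0
```

For a dim signal (say 10.0) the gain-8 read is used instead, giving 3 stops finer shadow quantisation than a single-gain 12-bit read would.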
The microlenses above each pixel (divided into 2 sub-pixels for dual-pixel AF or, in the future, 4 sub-pixels for quad-pixel AF) are designed so that each of the sub-pixels covers exactly the same surface area on an in-focus subject. That is the whole point of DPAF or QPAF. Therefore, you cannot increase the resolution by simply outputting the information from every one of the sub-pixels, because you'd simply end up with 2 (resp. 4) identical images of the in-focus areas.

The out-of-focus areas of the image appear shifted (to the left and right for dual pixel, and to top-left / top-right / bottom-left / bottom-right for quad pixel), and the amount of shift depends on the distance of the out-of-focus point from the focus plane: the further from the focus plane, the larger the shift. Furthermore, each of the sub-images only covers half (or a quarter) of the aperture, so the sub-images have less background blur than the set aperture would give, and the background (or foreground) blur in the sub-images is shifted with respect to each other (see the bokeh-shift function on DPRAW files).

By adding the individual sub-images you get the image you'd have got without splitting the pixels in the first place. I don't see that there's anything to gain with respect to resolution.
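The shift-versus-defocus relationship described above can be sketched with a thin-lens toy model (all numbers are illustrative, and the 0.42 factor approximating the centroid separation of two half-apertures is an assumption):

```python
# Toy thin-lens model of why DPAF sub-images shift with defocus.
# Assumptions: ideal thin lens, millimetre units, and sub-image shift taken
# as ~0.42x the blur-circle diameter (rough centroid separation of the two
# half-aperture images).

def blur_circle_mm(f: float, N: float, focus_dist: float, subject_dist: float) -> float:
    """Signed blur-circle diameter on the sensor.
    Positive = subject beyond the focus plane, negative = in front of it."""
    A = f / N                                   # aperture diameter
    v0 = f * focus_dist / (focus_dist - f)      # sensor position for the focus distance
    v = f * subject_dist / (subject_dist - f)   # where the subject actually focuses
    return A * (v0 - v) / v

def subimage_shift_mm(f: float, N: float, focus_dist: float, subject_dist: float) -> float:
    """Approximate lateral shift between the left and right sub-images."""
    return 0.42 * blur_circle_mm(f, N, focus_dist, subject_dist)

# A 50mm f/1.8 lens focused at 2m: zero shift at 2m, opposite signs in
# front of and behind the focus plane, growing with defocus.
for d in (2000.0, 1500.0, 4000.0):
    print(d, round(subimage_shift_mm(50, 1.8, 2000, d), 4))
```

The shift is zero at the focus distance and grows (with opposite signs for front- and back-focus) as the subject moves away from the focus plane - which is exactly the disparity signal DPAF uses to drive focus.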
 

Bert63

What’s in da box?
CR Pro
Dec 3, 2017
The point I was trying to make is that even if you're watching in 720p, it was likely filmed at a higher resolution. You don't even need to do that for the whole video: if you have a segment that needs the higher resolution, you can use the full-resolution source for just that part, making it easier to do some magic in post.

Locally, at the moment, it's almost impossible to buy anything but UHD TVs. The few FHD models left are either small or very cheap (less than €200), often from no-name brands.

The problem with streaming is that the companies often have to pay extra for each version. Locally, Amazon at times doesn't even buy the English-language rights to Hollywood movies. OTOH, if you watch something produced for Amazon, like American Gods, it's in full UHD.
“American Gods”

But it still looks like crap while burning ridiculous bandwidth...
 
Jan 29, 2011
Have you actually tried an A9/A9ii/A1? You might be surprised.
I have, and I still don't get on with EVFs as well as OVFs. I accept there are some benefits to EVFs, but after looking through an OVF for 40-odd years I still find looking through an EVF nauseating after several hours; I get eye strain, and the latency, even on high-refresh-rate EVFs, doesn't match what my brain expects.

Personally, I think it is down to my age and generation; I believe newer and younger photographers can embrace EVFs without the brain memory (like muscle memory, but different!). I have used 1-series cameras since forever and so have been very spoilt with OVFs, but I will be very interested to see what Canon can do with the EVF in the R1.
 

Bert63

What’s in da box?
CR Pro
Dec 3, 2017
I have, and I still don't get on with EVFs as well as OVFs. I accept there are some benefits to EVFs, but after looking through an OVF for 40-odd years I still find looking through an EVF nauseating after several hours; I get eye strain, and the latency, even on high-refresh-rate EVFs, doesn't match what my brain expects.

Personally, I think it is down to my age and generation; I believe newer and younger photographers can embrace EVFs without the brain memory (like muscle memory, but different!). I have used 1-series cameras since forever and so have been very spoilt with OVFs, but I will be very interested to see what Canon can do with the EVF in the R1.

I’m 57, have crap eyes, and love my EVF cameras more every time I pick them up.
 

slclick

EOS 3
Dec 17, 2013
How often do you look through them for hours at a time? Genuinely hours at a time.

I'm not quite 57, and I also have pretty crap eyes; I just don't get on with EVFs the same way.
Same: 56, semi-crap eyes with retinal scarring (from AMPPE), and EVFs really mess with me. That's the #1 reason I'm still using my 5D3. I keep trying out new bodies, but after more than a couple of minutes I have a terrible adjustment period with the camera away from my eye. OVF? I can shoot and shoot with only minimal issues.
 

Bert63

What’s in da box?
CR Pro
Dec 3, 2017
How often do you look through them for hours at a time? Genuinely hours at a time.

I'm not quite 57, and I also have pretty crap eyes; I just don't get on with EVFs the same way.

You'd have to define 'hours at a time', because I have never held a camera of any kind up to my eye continuously for hours at a time.

On a typical shooting day - out hunting wildlife or whatnot - I'll be out anywhere from 2 to 12 hours. Depending on what I'm looking for, my camera will be up and down from my eye constantly through that period, sometimes for minutes at a time, sometimes only for seconds. I don't know how that compares to what you're saying; it's really hard to equate the two.

The EVF transition was truly seamless for me - I honestly didn’t know I was supposed to have trouble until I started reading about it on the internet.

The first thing I notice when I shoot my 5D4 or 7D2 now is how dull and dim the OVF is, and I have to remind myself to take test shots to make sure my settings are close before I start trying to move in on a subject. With the EVF I just lift the camera and keep walking, adjusting as I go, knowing beforehand what I'm going to get...
 