The Canon EOS R5 likely won’t be announced next week

May 7, 2020
3
3
Agreed, no idea why they just aren't announcing it. The hype is dying down and people want to know the specifics.
I have a feeling that the real R5 probably won't live up to expectations (maybe low-noise performance or dynamic range not on par with the competition, or sensor resolution lower than hoped for, with 8K just done by using IBIS, ...). So I guess Canon wants to keep potential buyers interested for as long as possible until larger quantities of the product are available. If they let the cat out of the bag right now, they could lose that interest.
 

SecureGSM

2 x 5D IV
Feb 26, 2017
1,944
865
I have a feeling that the real R5 probably won't live up to expectations (maybe low-noise performance or dynamic range not on par with the competition, or sensor resolution lower than hoped for, with 8K just done by using IBIS, ...). So I guess Canon wants to keep potential buyers interested for as long as possible until larger quantities of the product are available. If they let the cat out of the bag right now, they could lose that interest.
+++++ I have a feeling that the real R5 probably won't live up to expectations ... sensor resolution lower than hoped for, with 8K just done by using IBIS ...

8K done by IBIS? ... I have a feeling that someone has no idea what he or she is talking about.
 
May 7, 2020
3
3
+++++ I have a feeling that the real R5 probably won't live up to expectations ... sensor resolution lower than hoped for, with 8K just done by using IBIS ...

8K done by IBIS? ... I have a feeling that someone has no idea what he or she is talking about.
Canon could use the concept of a high-resolution mode for video: basically capturing 4K video at 120 fps, shifting the sensor for each frame, and computing a 30 fps 8K video from it.
 
May 7, 2020
3
3
That seems like it would take much more tech than just reading 8K out from the sensor.
First of all, this is wild speculation, and it is not even my assumption (I think it was a post on Reddit). But I do not see why it would take more tech to use sensor shift for high-resolution imaging, which is already implemented in other cameras (for stills, though). Canon could even just store the RAW 4K 120 fps stream (with sensor shift) and reconstruct the 8K movie offline on a PC.
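For what it's worth, the reconstruction step being speculated about here can be sketched in a few lines. This is a minimal illustration assuming four sub-frames per output frame, each offset by half a pixel; all names and numbers are illustrative, not anything Canon has described:

```python
import numpy as np

def interleave_8k(f00, f01, f10, f11):
    """Combine four HxW sub-frames into one 2Hx2W frame.

    f00: no shift, f01: half-pixel right, f10: half-pixel down,
    f11: half-pixel right+down (names are purely illustrative).
    """
    h, w = f00.shape
    out = np.empty((2 * h, 2 * w), dtype=f00.dtype)
    out[0::2, 0::2] = f00  # even rows, even cols
    out[0::2, 1::2] = f01  # even rows, odd cols
    out[1::2, 0::2] = f10  # odd rows, even cols
    out[1::2, 1::2] = f11  # odd rows, odd cols
    return out

# 4K UHD is 3840x2160; interleaving four shifted captures
# yields 7680x4320, i.e. 8K UHD.
frames = [np.random.rand(2160, 3840).astype(np.float32) for _ in range(4)]
frame_8k = interleave_8k(*frames)
print(frame_8k.shape)  # (4320, 7680)
```

The catch, as discussed later in the thread, is that the four captures are taken at different moments, so any subject motion between them shows up as artifacts.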
 

derpderp

Pixel Peeper
Jan 31, 2020
122
121
First of all, this is wild speculation, and it is not even my assumption (I think it was a post on Reddit). But I do not see why it would take more tech to use sensor shift for high-resolution imaging, which is already implemented in other cameras (for stills, though). Canon could even just store the RAW 4K 120 fps stream (with sensor shift) and reconstruct the 8K movie offline on a PC.
Why bother reconstructing when you could just have actual 8K from the sensor (which Canon has actually confirmed)? No need for you to cripple the R5 when Canon themselves aren't doing it.
 

SteveC

M6 mk II
Sep 3, 2019
749
564
There is a physical reason: the sensor would have to be shifted 120 times per second. Moving parts. 120 quarter-frames per second to be assembled into 8K frames. Alignment, heat, wear and tear. No, not for video.
How often does the sensor shift just doing normal image stabilization?

Unless those barriers you mention are at least a "we need ten times the processor" or "we need ten times the heat dissipation" someone* is probably thinking they'll try it in about three to five years with newer tech. Of course by that time we'll be talking about 16K or even 32K video and it will have become harder.

*"someone" does not necessarily mean "someone sane."
 

amorse

EOS 7D MK II
Jan 26, 2017
614
700
www.instagram.com
There is a physical reason: the sensor would have to be shifted 120 times per second. Moving parts. 120 quarter-frames per second to be assembled into 8K frames. Alignment, heat, wear and tear. No, not for video.
Not just shifted - shifted, stopped, capture a frame, then repeated 120 times per second. Imagine the vibration coming off of that thing!
 

amorse

EOS 7D MK II
Jan 26, 2017
614
700
www.instagram.com
How often does the sensor shift just doing normal image stabilization?

Unless those barriers you mention are at least a "we need ten times the processor" or "we need ten times the heat dissipation" someone* is probably thinking they'll try it in about three to five years with newer tech. Of course by that time we'll be talking about 16K or even 32K video and it will have become harder.

*"someone" does not necessarily mean "someone sane."
I think not only would it be technically challenging (putting it mildly) to use sensor shift to capture higher-resolution video, it would be a whole lot easier, and would provide a far better result, to just do it without shifting the sensor, using a sensor designed for the resolution you need...

Putting aside all the mechanical or processing reasons why that would be difficult, consider what would be required to output higher-resolution video using sensor shifting. For 30 fps output, you'd need 4 captures per frame of video, so you need to capture at 120 fps. That means the shutter speed of each capture needs to be faster than 1/120th of a second (leaving some amount of time for the sensor to move from one position to another), and each output frame may be a bit blurry because it is actually 4 captures stitched together, between which subjects in the video may have moved. So you couldn't use a shutter speed slower than 1/120, or more realistically 1/240, which will make the video look more jittery on output.

So even if there were no mechanical, power, wear-and-tear, or processing limitations, you'd still be left with video that is blurry and jittery as a best-case scenario. I'd bet making a sensor which can output that video resolution natively would be much less technically challenging and produce a far better result in the end.
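The timing budget in this argument is easy to check with back-of-the-envelope arithmetic. The 4 ms move-and-settle time below is a made-up illustrative figure, not a measured value:

```python
# Timing budget for hypothetical sensor-shift 8K at 30 fps,
# assuming 4 sub-frame captures per output frame.
output_fps = 30
subframes_per_frame = 4
capture_fps = output_fps * subframes_per_frame   # 120 captures per second

time_per_subframe = 1.0 / capture_fps            # total budget: ~8.33 ms
shift_settle_time = 0.004                        # assumed 4 ms to move + settle
max_exposure = time_per_subframe - shift_settle_time

print(capture_fps)                     # 120
print(round(max_exposure * 1000, 2))   # 4.33 (ms), i.e. faster than 1/230 s
```

Under these assumptions the usable exposure per capture is roughly half the 1/120 s slot, consistent with the "more realistically 1/240" estimate above.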
 

SteveC

M6 mk II
Sep 3, 2019
749
564
So even if there were no mechanical, power, wear-and-tear, or processing limitations, you'd still be left with video that is blurry and jittery as a best-case scenario. I'd bet making a sensor which can output that video resolution natively would be much less technically challenging and produce a far better result in the end.
That makes some degree of sense. The other stuff someone (not necessarily someone sane) will regard as a technical challenge, but having to stitch together four pictures not taken at quite the same time will leave you with, basically, crap image quality.
 

SecureGSM

2 x 5D IV
Feb 26, 2017
1,944
865
How often does the sensor shift just doing normal image stabilization?

Unless those barriers you mention are at least a "we need ten times the processor" or "we need ten times the heat dissipation" someone* is probably thinking they'll try it in about three to five years with newer tech. Of course by that time we'll be talking about 16K or even 32K video and it will have become harder.

*"someone" does not necessarily mean "someone sane."
I would imagine that IBIS is typically engaged for a few seconds while you are focusing or tracking, not continuously for an extended period of time. 30 minutes? That's crazy...
 
  • Like
Reactions: amorse

Doug7131

EOS 7D
Jul 21, 2019
20
41
Not just shifted - shifted, stopped, capture a frame, then repeated 120 times per second. Imagine the vibration coming off of that thing!
Not saying Canon would ever do this, but projectors have been using this technique in reverse for years to get a 4K image from 1920x1080 DLP/LCD chips. So it would certainly be doable in a camera. Again, not saying it's what Canon has done here, just that it's not anywhere near as far-fetched an idea as you seem to think.
 

amorse

EOS 7D MK II
Jan 26, 2017
614
700
www.instagram.com
Not saying Canon would ever do this, but projectors have been using this technique in reverse for years to get a 4K image from 1920x1080 DLP/LCD chips. So it would certainly be doable in a camera. Again, not saying it's what Canon has done here, just that it's not anywhere near as far-fetched an idea as you seem to think.
I actually had no idea that projectors did that - very interesting. I learned something! With that said, I believe there is a considerable difference between doing that for projection of video and doing it for capture of video.

In projection, the projector is repeating parts of the same source frame at a refresh rate faster than human perception - it plays 4 parts of each frame that line up perfectly. In capture, the 4 pieces of each frame are not the same, because the subject can move during capture. Also, to capture 8K at 30 fps, you'd be limited to a shutter speed faster than 1/120th of a second (four 4K captures per frame of 8K), which doesn't line up with common practice. I've always heard that the typically desired shutter speed is twice as fast as the frame rate - so for 30 fps you'd likely want a 1/60th of a second exposure per frame, not the 1/120th of a second (or faster, from a practical perspective) forced here.
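The "twice as fast as the frame rate" rule of thumb being invoked here is the 180-degree shutter rule; a quick comparison against what a four-capture shift scheme would force (illustrative arithmetic only):

```python
# 180-degree shutter rule: exposure of roughly 1 / (2 * fps) gives
# natural-looking motion blur. Compare with the ceiling imposed by
# a 4-sub-frame sensor-shift scheme (before even budgeting shift time).
fps = 30
preferred_exposure = 1 / (2 * fps)   # 1/60 s, per the rule of thumb
forced_ceiling = 1 / (fps * 4)       # 1/120 s, four captures per frame

print(preferred_exposure / forced_ceiling)  # 2.0: half the desired blur
```

So even in the best case, each capture gets at most half the exposure the rule of thumb calls for, which is where the "jittery" look comes from.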

Also, those projectors move millions of micro mirrors to a limited number of set positions to achieve that outcome. Since digital projectors use a moving micro mirror system anyway, adding additional positions to each micro mirror was not likely a quantum leap in technology. The above proposal for capture was to move the whole sensor using the IBIS system to capture the extra pixels, which would be a very different proposal as you're moving a whole lot more machinery than millions of tiny mirrors.

But let's assume you could overcome all those issues. To output 8K at 30 fps, you'd need the same data throughput as 4K at 120 fps: to capture the four 4K frames needed to make one 8K frame, that 4K sensor would have to record at 120 fps. So using sensor shift to capture higher-resolution video wouldn't actually reduce the data-throughput requirement; it would only reduce the sensor-resolution requirement.

So not only would it be super hard to get a camera to capture in this way, and likely produce an inferior output, there would be no data-throughput benefit to doing it; the only saving would be sensor resolution. That is a lot of engineering to overcome for a pretty limited benefit, all things considered (in my opinion, anyway!).
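The throughput equivalence claimed here checks out with a quick pixel-rate calculation (UHD resolutions assumed):

```python
# Pixel throughput: four shifted 4K frames per 8K frame means the
# sensor-shift scheme reads exactly as many pixels per second as a
# native 8K/30 readout.
def pixel_rate(width, height, fps):
    return width * height * fps

rate_8k30 = pixel_rate(7680, 4320, 30)    # native 8K UHD at 30 fps
rate_4k120 = pixel_rate(3840, 2160, 120)  # shifted 4K UHD at 120 fps

print(rate_8k30 == rate_4k120)  # True
print(rate_8k30)                # 995328000 pixels/s, roughly 1 Gpx/s
```

Halving each linear dimension quarters the pixel count, and quadrupling the frame rate exactly cancels it out, so the readout and processing load is unchanged.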
 

Doug7131

EOS 7D
Jul 21, 2019
20
41
Also, those projectors move millions of micro mirrors to a limited number of set positions to achieve that outcome. Since digital projectors use a moving micro mirror system anyway, adding additional positions to each micro mirror was not likely a quantum leap in technology. The above proposal for capture was to move the whole sensor using the IBIS system to capture the extra pixels, which would be a very different proposal as you're moving a whole lot more machinery than millions of tiny mirrors.
Both the DLP and LCD systems I know of use an optical actuator between the image chip and the lens to perform some or all of the image shifting. Although you can use multiple positions on the DMD chip's mirrors, you could only ever double the resolution that way, since the mirrors can move on only one axis (usually diagonally). Doing 4 positions requires the image to shift on 2 axes, so it can't be done with wobulation alone. LCD projectors have no choice, since they don't use moving mirrors.

https://www.ti.com/lit/ml/ssnb002/ssnb002.pdf?&ts=1589472572152 - TI document showing the optical path.

How realistic this would be in a camera is obviously debatable. As you said, the shutter speed would have to be either 2x or 4x your desired shutter speed, depending on how many shifts you do. I believe a few cameras can do this for stills using the IBIS system. And as you said, the camera still has to process an 8K image, so the only possible saving would be in the sensor.
Overall, I don't think anyone will ever do this; it's just not practical. But it is doable.
 
  • Like
Reactions: amorse

amorse

EOS 7D MK II
Jan 26, 2017
614
700
www.instagram.com
Both the DLP and LCD systems I know of use an optical actuator between the image chip and the lens to perform some or all of the image shifting. Although you can use multiple positions on the DMD chip's mirrors, you could only ever double the resolution that way, since the mirrors can move on only one axis (usually diagonally). Doing 4 positions requires the image to shift on 2 axes, so it can't be done with wobulation alone. LCD projectors have no choice, since they don't use moving mirrors.

https://www.ti.com/lit/ml/ssnb002/ssnb002.pdf?&ts=1589472572152 - TI document showing the optical path.

How realistic this would be in a camera is obviously debatable. As you said, the shutter speed would have to be either 2x or 4x your desired shutter speed, depending on how many shifts you do. I believe a few cameras can do this for stills using the IBIS system. And as you said, the camera still has to process an 8K image, so the only possible saving would be in the sensor.
Overall, I don't think anyone will ever do this; it's just not practical. But it is doable.
Interesting! The world of projectors is new to me, so it's good to see how they increase resolution.

I agree with you: it could be done in theory, but there would be very limited practical benefit to doing it this way, and I suspect just creating a sensor which can read that fast at its native resolution would likely be easier and produce a superior result than making this system work.

Pixel shift for photos, for instance, does exist in a number of cameras, but it fails to increase quality if the subject is moving or if the camera isn't completely still. Tony Northrup did a video on this system in the A7R III, which can do it, but he stresses that movement between captures creates problems. For a static subject like a still life, where you've got complete control over the subject/environment/light, the system can do a great job, but where there is any movement there will be ghosting or blurring. In video, where motion is the point, I would suspect that artifacting from movement would defeat the purpose of increasing resolution. Who knows, though!