What will Canon bring to the table with the EOS R1?

Chig

Birds in Flight Nutter
Jul 26, 2020
545
821
Orewa, New Zealand
--- I AM ----NOT---- A PROFESSIONAL ENGINEER --- !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Ergo, it's my opinion and it can be as crappy, illogical and wrong as I want it to be ..... i.e. You're THE Engineer! YOU fix it!

I'll go have my double mocha and let the eggheads work it all out! As the saying goes .... IT AIN'T MY PROBLEM!

I'm just the graphics programmer!


:) :) ;-) ;-) :) :)
All good Harry, and my head is very pointy! :geek: :coffee:
 
Upvote 0

usern4cr

R5
CR Pro
Sep 2, 2018
1,376
2,308
Kentucky, USA
Yes, a slightly oversized hexagonal wafer would probably be easier to manufacture, with Hexa-Pixel AutoFocus for 3-axis focusing, of course.
In effect the image would still be circular to match the lens's image circle: the pixels would only be in the circular active area, with blank wafer outside it :cool:
Wow - If one-dimensional Dual Pixel AF is good, and two-dimensional Quad Pixel AF is better, why not three-dimensional Octa-Pixel AF?!!!
That's it!!!
Why didn't I think of it?!!!

Hey Harry, you got this all covered in next week's announcement, right? :ROFLMAO:
 
Upvote 0

Chig

Birds in Flight Nutter
Jul 26, 2020
545
821
Orewa, New Zealand
Wow - If one-dimensional Dual Pixel AF is good, and two-dimensional Quad Pixel AF is better, why not three-dimensional Octa-Pixel AF?!!!
That's it!!!
Why didn't I think of it?!!!

Hey Harry, you got this all covered in next week's announcement, right? :ROFLMAO:
Hope Canon's engineers read this site.
Maybe four-dimensional time-shifting autofocus, or is that too much?
 
  • Haha
Reactions: 1 user
Upvote 0

wickedac

I'm old here.
Jul 19, 2017
22
32
Worcester, MA
This is something many of us identify as a shortcoming of the R5, and it's rather upsetting. I hope that Canon reconsiders the importance of cRAW and brings back an mRAW format that keeps things in the 12-15 MP and 20-26 MP sweet spots. The argument for always shooting at maximum resolution isn't exactly true for all of us. When I cover events I can shoot thousands of images a day for 3-4 days at a time and have to turn those around same day... and I still want the benefits of RAW, just not the resolution or file size. Yes, cRAW is roughly the size of a standard 20 MP RAW out of the R6, but those images do NOT process as easily as a true 20 MP file: the software chugs along reading the 45 MP file, versus blazing through a normal CR2/CR3 RAW. Not sure why, but please give us back smaller RAW.

I am so with you on this - I couldn't believe it when I found out the R5 didn't have s/mRAW. That huge resolution and no option for lower res? Every camera I've had since my 40D has had sRAW. NOW they drop it, when they come out with a 45MP cam? Makes zero sense.

It's FIRMWARE. Just give me the OPTION!
 
  • Like
Reactions: 1 user
Upvote 0

Chig

Birds in Flight Nutter
Jul 26, 2020
545
821
Orewa, New Zealand
I am so with you on this - I couldn't believe it when I found out the R5 didn't have s/mRAW. That huge resolution and no option for lower res? Every camera I've had since my 40D has had sRAW. NOW they drop it, when they come out with a 45MP cam? Makes zero sense.

It's FIRMWARE. Just give me the OPTION!
Have you tried using the HEIF format?
I think you can still edit it like a RAW file, but it's only the size of a JPEG.
Not sure if Lightroom etc. support it yet, though.
 
Last edited:
Upvote 0

Aussie shooter

https://brettguyphotography.picfair.com/
Dec 6, 2016
1,188
1,857
brettguyphotography.picfair.com
For the cost, size & weight of having an embedded bottom dual grip in every R1, imagine if Canon saved all that and instead put it towards a square sensor, so that you did not have to rotate the camera anymore. They could size the square sensor to fit the existing image circle (OK, which I agree is not 100% ideal) for consumer cameras. They could also allow their top-end R1 camera to take the full-size FF shot in both orientations by designing a 36 x 36 mm chip in place of the 36 x 24 one. Yes, it would be very expensive to do this (and we all know it won't happen) but it is something that could happen. You would get a better ergonomic R1 with a (probably) slightly smaller & lighter body (and more expensive initially). But imagine if they later came out with a few $$$ pro lenses to fit the larger 36 x 36 sensor image circle - that'd be awesome (and, well, really expensive, but still awesome!). And some of their lenses are said to have an unusually large image circle, so they might even fit the larger image circle already!
A 36x36 sensor would require a larger mount. So it cannot and will not happen on an RF body.
 
Upvote 0
The only reason the A1 made me happy is that I think it definitely speeds up the timeline on the R1, no matter if it's a rough year for sports.

I'll admit, I would be slightly disappointed if the R1 is higher than 40 megapixels. I already have 45 megapixels on my R5, so I would far prefer 60-120 FPS still photographs at 30 megapixels to 30 FPS stills at 50 megapixels. That said, if the market is truly changing so drastically in this way (24 megapixels has not caused any issue on the 1D or A9), I guess I would appreciate both of my cameras being similar in resolution, like the 5D3 and 1DX2 were.


The R1 seriously can't come fast enough. I'm ready to sell all of my EF glass and EF bodies to complete the switch the moment the pre-orders go live. My R5 has been simply incredible and somehow makes my 1DX2 feel like a dinosaur.

I'm not quite sure why this would matter much IMO. Feel free to correct me if I'm wrong, but couldn't one simply adjust the resolution (file size) in the settings as per their use case? That should impact FPS, shouldn't it? Especially if FPS is affected by how fast the data can be written to the SD cards.

Personally I would love to have the higher resolutions as I do intend to print large. However, I would reduce the file size as necessary if speed was a necessity, and if the default FPS for the highest resolution was too slow for my needs.

If the FPS is fixed then I guess I have to agree that it is disappointing to have the slower FPS - but I don't have any issues with the higher resolution as that should be adjustable in the settings.
 
Upvote 0
Yep, I can confirm that if someone writes the forbidden D word it gets changed to "Canon the best."

Ha-ha, yup, I don't care for the video
AND please ALSO NOTE Fuji's introduction of the GFX-100s camera for $5999 USD which offers a full 100 megapixel MF sensor!

See review:

Comparing the Sony A1 to the Canon R5 and the Fuji GFX-100s, I feel soooooo sorry for Sony, which showed an UNDERWHELMING CAMERA compared to the $3899, proper DCI-8K Hollywood Cinema/Netflix/Amazon/Apple production-friendly 8192 x 4320 pixel Canon R5 camera and the 100 megapixel Medium Format Fuji GFX-100s for $5999 USD.

The ONLY THING that gets me going for the $6498 USD Sony A1 is the 30 fps Burst Rate at the full 3:2 still photo size image frame and the 120 fps 4K video!

---

For the Best-Bang-For-The-Buck race, the winner is the Canon R5 for being $2590 cheaper than the Sony A1 and having full DCI 8K Hollywood Cinema production friendly video capture.

For the megapixel and light sensitivity race, the Fuji GFX 100s with 100 megapixels for $5999 WINS HANDS DOWN!

For compactness and toughness, the it's-a-steal for $349 GoPro Hero-8 Black with 60 fps 4K and 240 fps 1080p video is clipped to my helmet!

The Sony has 30 fps full-size still photos as 3:2 and 120 fps 4K video -- That's it!
Is that worth the $1500 Price Premium over the Canon R5?

If you're a TRUE Sony fan, then YES, it's a MUCH better camera than the Sony A9 II, but that $6499 price just kills me!

If someone gave me $6500 today, which of the above cameras would I buy?

The answer is the Fuji GFX-100s cuz I want a truly LIGHT SENSITIVE MF sensor AND I WANT 100 megapixels!

V

I'm really excited about the GFX-100s as well! I'm happy to see the prices are starting to go down, although it is still a bit too rich for my blood.

As someone who uses both analog and digital, I've been waiting for medium format to become more affordable. This is a positive step in that direction.

I don't care much for the video capabilities as I'm primarily into stills, but the R5 specs (even with the overheating limitations) are very impressive. I can't wait to see how the supposed R1 and R5S will compare in specs and price (if those are even going to be the model names).
 
Upvote 0
RAW is a misnomer! It's almost ALWAYS run-length encoded (RLE) and compressed with a WinZIP-like lossless LZW-style algorithm to get you ABOUT 2:1 compression. And whenever you see 2:1, 3:1, 4:1 or 5:1 RAW as the output file format, it usually means they are throwing away or moving X number of low-order bits on every 2nd or 3rd pixel sample, or at pixels with similar luminance values, and THEN RLE/LZW-compressing that bitmap to get a form of compressed RAW. It's actually kind of like printer-based "dithering" and "error diffusion", but applied for compression rather than for display.
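The lossless half of that claim is easy to illustrate. Below is a toy run-length encoder in Python — purely illustrative, not Canon's actual cRAW codec (which is proprietary) — showing why flat regions like sky compress so well before any bits are thrown away:

```python
# Toy illustration of run-length encoding, the simplest lossless trick
# used in "compressed RAW" formats. Real codecs are proprietary; this
# only demonstrates why flat image regions compress well.

def rle_encode(samples):
    """Collapse runs of identical values into (value, count) pairs."""
    if not samples:
        return []
    runs = [[samples[0], 1]]
    for s in samples[1:]:
        if s == runs[-1][0]:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back to the original sample stream."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

# A flat sky region: eight identical 14-bit samples shrink to one pair.
row = [4095] * 8 + [4096, 4100]
encoded = rle_encode(row)
assert rle_decode(encoded) == row   # lossless round trip
assert len(encoded) == 3            # 10 samples -> 3 runs
```

The lossy part described above (dropping low-order bits before compressing) is what separates a "3:1 RAW" from a truly lossless one.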

V

First off, Google Translate can't figure out what language this was written in, so my apologies, I'm unable to respond.

Second, you seem to be missing my point, explain how this camera is worth dishing out another $2800? If you ask me, Sony just made the R5 a bargain.
 
Upvote 0
First off, Google Translate can't figure out what language this was written in, so my apologies, I'm unable to respond.

Second, you seem to be missing my point, explain how this camera is worth dishing out another $2800? If you ask me, Sony just made the R5 a bargain.


I was trying to say that the RAW image data isn't really RAW at all: all that image data may NOT be truly uncompressed, ULTRA HIGH QUALITY stills or video, but rather may have some data missing, which may be an issue for people who want ONLY the best image quality possible. Basically, they ain't getting what they paid for when it says 3:1 or 5:1 RAW!

--

And right now, YES, I do agree that Sony made the Canon R5 a bargain: you get nearly the same still/video features AND you get Hollywood production-friendly TRUE DCI 8K video at such a low price that there is enough money left over in the price difference that you could (i.e. SHOULD!) get the Canon R5 and the BEST f/1.2 lens for Astrophotography and night-time Street Photography ever created!

Canon RF 50mm f/1.2L USM Lens: ($2299 USD)

Now you have your pro-level Camera + BEST LENS EVER! powerhouse photo combo!

V
 
Last edited:
  • Like
Reactions: 1 user
Upvote 0
8K on a FF sensor requires a sensor of at least around 8192 x 5461, or 44.7 MP. With IBIS I could see this approaching 50 MP. With quad-pixel technology it would capture so much detail that I don't really think it will happen, but it might.
6K on a FF sensor requires a sensor of at least around 6144 x 4096, or 25.2 MP. With IBIS I could see this more like 28-30 MP. I could definitely see this happening with quad pixels.
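For what it's worth, the megapixel arithmetic quoted above checks out. A quick Python check, assuming a full-frame 3:2 aspect ratio:

```python
# Sanity check of the quoted megapixel figures: an 8K-wide (8192 px)
# full-frame sensor at 3:2, and a 6K-wide (6144 px) one.

def min_sensor_mp(width_px, aspect_w=3, aspect_h=2):
    """Minimum pixel count (in MP) for a given horizontal resolution."""
    height_px = round(width_px * aspect_h / aspect_w)
    return width_px * height_px / 1e6

print(round(min_sensor_mp(8192), 1))  # 8192 x 5461 -> 44.7 MP
print(round(min_sensor_mp(6144), 1))  # 6144 x 4096 -> 25.2 MP
```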

Why would the presence or absence of IBIS make any difference to the resolution?
 
Upvote 0

usern4cr

R5
CR Pro
Sep 2, 2018
1,376
2,308
Kentucky, USA
Why would the presence or absence of IBIS make any difference to the resolution?
I think I read that, because of possible drift as the user points the lens, they have to track the drift up to a point where they can't continue, then reposition and repeat; having extra pixels beyond the normal requirement gives them the ability to do this without the corners going toward black. But the more I think about this, the less sure I am that it makes sense (yes, sometimes that happens to me! :ROFLMAO: ). So if you see any documentation on this, I would (sincerely) like to see what they say. :)
 
  • Like
Reactions: 1 user
Upvote 0
I kind of now know WHY they are using dissimilar compounds to throttle/expand a thermal path and i ALSO KNOW that what they do has applications in consumer electronics. (i.e. which they have demonstrated via the building of a 16K camera body!)

The key with using a POOR thermal conductor is similar to using valves in piping. You are throttling thermal throughput in specific ways to ensure waste heat gets stored and/or moved around via a specific pathway at specific times using a passive means of heat movement regulation.

You can certainly use thermal conduction paths to keep heat away from certain areas; for example, you wouldn't want to build a high-conductance pathway from the processor directly to the handgrip. But there's no reason you wouldn't want the best possible heat conduction path from your source (the processor) to your desired sink (generally the chassis). Stainless steel is most likely used for tripod plates simply because of its mechanical properties and manufacturing cost, not specifically for its thermal properties.

Outside of exotic applications like electronics that are expected to function in extremely cold environments (say, satellites and probes), cost-saving measures, or product differentiation, you don't want to restrict heat flow in electronics. In consumer products, thermal design is generally pretty far down the list of priorities. In your cooking pot example, for instance, the reason they use stainless steel, or aluminum, has nothing to do with thermal design and everything to do with keeping costs down and non-reactivity. Copper and cast iron cookware have much better thermal characteristics, which manifest as a more even cooking surface, but tend to require more maintenance. Most copper pots, for instance, have a thin stainless steel layer bonded onto the actual cooking surface to try and get the best of both worlds, but those pots tend to be quite expensive. I know there are full copper pots on the market, but all the ones I've seen very clearly warn not to use them for general cooking, only for things like melting sugar; if you cook anything acidic you can quickly destroy the cookware.

It is INSIDE the camera where magnesium, aluminum and copper blocks will be used to initially ABSORB HEAT and a heat pipe will TRANSFER that heat out to a slower acting ceramic or composite heat sink block and outwards to atmosphere via a metallic plate or finned metal style of heat dissipation system. Look at modern desktop computer GPU graphics cards and their array of heat sinks/heat pipes for inspiration as to what is NEEDED to cool the 300+ watts being used by high end graphics cards!

Most of what you wrote here is fine, it's just the bolded portion that is generally incorrect. There are situations where your heat sink may have poor thermal conductivity, but that's usually because the sink works via phase change. I did the early turbomachinery design for the now defunct Google X's Project Malta where they were using molten salt as a thermal battery for storing solar energy. That's a pretty different scenario from what you see in consumer electronics though. In astro applications, you can indeed see scenarios where you want to very precisely tune heat transfer rates radiated out of the system and I'm guessing that's probably what you're thinking about but that also bears little resemblance to consumer electronics.
 
Upvote 0
I think I read that, because of possible drift as the user points the lens, they have to track the drift up to a point where they can't continue, then reposition and repeat; having extra pixels beyond the normal requirement gives them the ability to do this without the corners going toward black. But the more I think about this, the less sure I am that it makes sense (yes, sometimes that happens to me! :ROFLMAO: ). So if you see any documentation on this, I would (sincerely) like to see what they say. :)


Technically, you COULD use IBIS as a form of pixel shift, using the PRECISE TIMING of the piezo-electric 3D-XYZ sensor movement system to take RGB readouts during half-pixel shifts in the CMOS sensor position, and use fancy software to create a best-guess set of RGB pixel values during each IBIS up-down/left-right movement, which could increase resolution to as much as 4x the actual sensor pixel count.

This means your 8192 by 4320 video file could be turned into beautiful-looking DCI 16K resolution imagery at 16,384 by 8640 pixels without having to do too much work. So with a bit of firmware update trickery, your Canon R5 is now a 140+ Megapixel Medium Format monster!

The faster the IBIS motors are, the more one-half pixel shift or even one-third pixel shift RGB readouts could be done with merely a FIRMWARE UPDATE to the CURRENT MODELS of the Canon R5 or the 1Dx mk3 camera!
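Purely as a sketch of the idea (not anything Canon has announced): if firmware could tag two consecutive readouts as lying half a pixel apart, merging them is just interleaving the two sample streams. The 1-D toy below, with made-up sample values, shows how the sampling rate doubles:

```python
import numpy as np

# Hypothetical sketch of the "virtual pixel shift" idea: two exposures
# of the same scene, the second sampled half a pixel to the right (the
# offset coming from IBIS telemetry rather than a dedicated shift
# mechanism). Interleaving the two streams doubles the horizontal
# sampling rate. A 1-D toy, not Canon firmware.

def interleave_half_pixel(frame_a, frame_b):
    """Merge two 1-D scans offset by half a pixel into one 2x-res scan."""
    out = np.empty(frame_a.size * 2, dtype=frame_a.dtype)
    out[0::2] = frame_a   # samples at integer pixel positions
    out[1::2] = frame_b   # samples at half-pixel positions
    return out

a = np.array([10, 30, 50, 70])   # scene sampled at x = 0, 1, 2, 3
b = np.array([20, 40, 60, 80])   # same scene at x = 0.5, 1.5, 2.5, 3.5
merged = interleave_half_pixel(a, b)
print(merged)  # [10 20 30 40 50 60 70 80] -- 8 samples from 4-pixel scans
```

In 2-D the same interleave would be applied on both axes, which is where the "4x the pixel count" figure comes from.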


V
 
Last edited:
  • Haha
Reactions: 1 user
Upvote 0
--- I AM ----NOT---- A PROFESSIONAL ENGINEER --- !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Ergo, it's my opinion and it can be as crappy, illogical and wrong as I want it to be ..... i.e. You're THE Engineer! YOU fix it!

I'll go have my double mocha and let the eggheads work it all out! As the saying goes .... IT AIN'T MY PROBLEM!

I'm just the graphics programmer!


:) :) ;-) ;-) :) :)

It never ceases to amaze me how much time and effort it takes to build GUI interfaces; it's so simple conceptually but the actual implementation is very laborious.
 
  • Like
Reactions: 1 user
Upvote 0
Technically, you COULD use IBIS as a form of pixel shift, using the PRECISE TIMING of the piezo-electric 3D-XYZ sensor movement system to take RGB readouts during half-pixel shifts in the CMOS sensor position, and use fancy software to create a best-guess set of RGB pixel values during each IBIS up-down/left-right movement, which could increase resolution to as much as 4x the actual sensor pixel count.

This means your 8192 by 4320 video file could be turned into beautiful-looking DCI 16K resolution imagery at 16,384 by 8640 pixels without having to do too much work. So with a bit of firmware update trickery, your Canon R5 is now a 140+ Megapixel Medium Format monster!

The faster the IBIS motors are, the more one-half pixel shift or even one-third pixel shift RGB readouts could be done with merely a FIRMWARE UPDATE to the CURRENT MODELS of the Canon R5 or the 1Dx mk3 camera!


V

It might look nice at first glance, but there are some laws of physics which are difficult to overcome: the mass of the sensor to move, its inertia, and the amplitude of the force pulse that must be applied to move the sensor to the required position in the required slice of time.
Applying a strong and very short pulse of force to a piece of glass (the sensor) could damage it (create cracks) or create volume ultrasonic waves instead of moving it. This is why pixel shift cannot be done too fast, which would be ideal for handheld shooting in a pixel-shift mode.
By the way, ultrasonic sensor cleaning is done by creating ultrasonic waves on the surface of the sensor, driven by special patterned electrodes on one side of the sensor surface.
But on a sensor with a fast global shutter or extremely fast readout, pixel shift could be done without moving the sensor at all.
Take four shots quickly and combine them using camera firmware: just put each image on top of the others with a one-pixel shift in each direction and do the required merging. You just lose one line of pixels at each side of the sensor. This could also be done in external software. I do not know why Sony did not do this in the latest Sony A1. This is also applicable for a night/low-light mode: just multishot without pixel shift, the same as done in the Canon 1DX, 1DX II and 1DX III, though not fast and requiring a tripod. I had an article about this here many years back.
As for a low-light mode, Sony has a nice feature which it is possibly holding back, as there was not enough competition on the market. It is available on the A9 sensor but not used: a low-noise mode for low light. I always wished that they would enable it via some firmware update, if it is not hardware dependent. With the processing power in the A1 and the future Canon R1, a lot of computational things could be done inside the camera, similar to what is done in smartphones.
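The four-shot merge described above can be sketched quite literally. This is a hypothetical illustration (the `merge_four` helper and the offsets are my own, and whether this recovers real detail on a Bayer sensor is the poster's claim, not an established fact):

```python
import numpy as np

# Literal sketch of the four-shot merge: stack four quick exposures,
# offsetting each by one pixel, and average the overlap. The border
# rows/columns are lost, as the post notes.

def merge_four(shots):
    """Average 4 frames at offsets (0,0),(0,1),(1,0),(1,1); crop 1 px."""
    offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
    h, w = shots[0].shape
    acc = np.zeros((h - 1, w - 1), dtype=np.float64)
    for shot, (dy, dx) in zip(shots, offsets):
        acc += shot[dy:dy + h - 1, dx:dx + w - 1]
    return acc / len(offsets)

frames = [np.full((4, 4), v, dtype=np.float64) for v in (8, 8, 12, 12)]
merged = merge_four(frames)
print(merged.shape)   # (3, 3): one line lost on each axis
print(merged[0, 0])   # 10.0: mean of the four overlapping samples
```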
 
Upvote 0
It might look nice at first glance, but there are some laws of physics which are difficult to overcome: the mass of the sensor to move, its inertia, and the amplitude of the force pulse that must be applied to move the sensor to the required position in the required slice of time.
Applying a strong and very short pulse of force to a piece of glass (the sensor) could damage it (create cracks) or create volume ultrasonic waves instead of moving it. This is why pixel shift cannot be done too fast, which would be ideal for handheld shooting in a pixel-shift mode.
By the way, ultrasonic sensor cleaning is done by creating ultrasonic waves on the surface of the sensor, driven by special patterned electrodes on one side of the sensor surface.
But on a sensor with a fast global shutter or extremely fast readout, pixel shift could be done without moving the sensor at all.
Take four shots quickly and combine them using camera firmware: just put each image on top of the others with a one-pixel shift in each direction and do the required merging. You just lose one line of pixels at each side of the sensor. This could also be done in external software. I do not know why Sony did not do this in the latest Sony A1. This is also applicable for a night/low-light mode: just multishot without pixel shift, the same as done in the Canon 1DX, 1DX II and 1DX III, though not fast and requiring a tripod. I had an article about this here many years back.
As for a low-light mode, Sony has a nice feature which it is possibly holding back, as there was not enough competition on the market. It is available on the A9 sensor but not used: a low-noise mode for low light. I always wished that they would enable it via some firmware update, if it is not hardware dependent. With the processing power in the A1 and the future Canon R1, a lot of computational things could be done inside the camera, similar to what is done in smartphones.

===

I think I may not have made myself very clear.

The IBIS is ALREADY moving the sensor left/right and up/down in its duty to STABILIZE images.

The CPU/DSP just needs to RESCHEDULE (i.e. via a hard interrupt call) the pixel readouts so that one set of still-photo frame readouts occurs at 1/20th of a second and the second set occurs at 2/20ths of a second, and then combine the two images, which would contain TIME-based differences in 2D-XY and 3D-XYZ movement that could then be interpolated to create a "Virtual Pixel Shift": the odd/even frame pair is treated as a single 1/10th-of-a-second frame when combined (i.e. stacked together and virtually pixel-shifted!)

Basically, you are sub-dividing your frames-per-second still photo or video capture rate into TWO segments. The first frame (i.e. odd numbered frame) and the second frame (i.e. even numbered frame) gets combined together as a stacked photo with ONE MAJOR DIFFERENCE!

In this method, the IBIS movement is taken into account so we can determine the amount of TIME that has elapsed per each frame capture so we can then create a TIME difference map that tells us where an IN-BETWEEN PIXEL SHIFT should have occurred in the real world. That time difference is translated into interpolated RGB pixel luminance and colour values that are HIGHLY ACCURATE because we take into account both TIME and the 2D/3D movement of the IBIS in order to create new "Virtual In-Between Pixels" that represent what a real pixel-shift SHOULD BE in terms of brightness and colour!

For the 20 fps Burst rate of the Canon R5, the even and odd frames have a SMALL time difference which can be measured as 1/20th of a second. By combining the first and second frames and interpolating the IBIS movement on the X, Y and Z axis, you could create a set of internal software rules that says for this set of two Burst Rate Still Photo Frames, the IBIS moved X, Y and Z-number of Microns on each axis which can be divided up into a set of HALF-PIXEL, ONE-THIRD-PIXEL or even ONE QUARTER PIXEL SHIFTS which can then be interpolated to create IN-BETWEEN RGB pixels that would be converted to incredibly accurate percentage-based differences in luminance and colour for each output pixel.

While this method does turn your 20 fps camera into a 10 fps camera, you can create at least TWICE the pixel resolution on each axis, for at least FOUR TIMES the pixel count, which means the R5 becomes a 180 megapixel medium format monster of a stills camera!

The IBIS is already moving. We don't have to change anything! We just need to TIME the pixel readouts with precision and then measure the AMOUNT of movement on each IBIS axis so we can create a PROPER interpolation equation for each in-between pixel of our virtually pixel-shifted photo!

You usually do this interpolation as a division by 2, by 3 or by 4 to get you one, two or three interpolated pixels that represent 2x, 3x and even 4x pixel resolution increases on the horizontal and vertical axis of your camera's image output!

This type of virtual pixel shift works only on cameras WITH IBIS, and that means the Canon R5, which means it's very likely just a simple firmware update to make it all work!
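The bookkeeping step this scheme relies on is turning a measured IBIS displacement into a sub-pixel phase between two frames. A hedged sketch, where the ~4.39 µm pitch is simply 36 mm / 8192 px for a 45 MP full-frame sensor, and the displacement value is invented for illustration:

```python
# Convert a measured IBIS sensor displacement (in microns) into the
# fractional-pixel offset between two frames. The pitch is 36 mm / 8192
# px for a hypothetical 45 MP full-frame sensor; the displacement
# numbers below are made up.

PIXEL_PITCH_UM = 36000 / 8192   # ~4.39 um per pixel

def subpixel_phase(displacement_um, pitch_um=PIXEL_PITCH_UM):
    """Fractional pixel offset (0..1) implied by a sensor displacement."""
    return (displacement_um / pitch_um) % 1.0

# A drift of ~2.2 um between two frames is roughly half a pixel -- the
# exact offset the interpolation scheme wants.
phase = subpixel_phase(2.197)
print(round(phase, 2))  # ~0.5
```

Whether the IBIS position can actually be read out with this precision, at these frame rates, is the open question the physics reply above raises.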

V
 
Last edited:
Upvote 0

Chig

Birds in Flight Nutter
Jul 26, 2020
545
821
Orewa, New Zealand
A 36x36 sensor would require a larger mount. So it cannot and will not happen on an RF body.
Why would it need a larger mount? Even if it did, then just make the square sensor slightly smaller to match the image circle, or better still use a round sensor of exactly the same size as the image circle.
 
  • Like
Reactions: 1 user
Upvote 0

Sporgon

5% of gear used 95% of the time
CR Pro
Nov 11, 2012
4,722
1,542
Yorkshire, England
This type of virtual pixel shift works only on cameras WITH IBIS and that means the Canon R5 and 1Dx3 which means it's very likely just a simple firmware update to make it all work!

V

Steady on Harry, you're really getting carried away now........unless you know something about the 1DXIII we don't
 
Upvote 0