Are these the 7 RF lenses Canon will be announcing in 2020? [CR1]

Ruiloba

80D + R5
Feb 13, 2020
24
34
Spain
Weird to have the 1.4x/2x TCs without lenses to use them on, i.e. only the 100-500mm would be compatible, and at 1000mm f/14 (with the 2x) there would be few use cases.

Can we confirm that the RF TCs are RF-to-RF? If they are RF-to-EF TCs then it would make some sense, i.e. avoiding the RF-EF adapter + EF 1.4x/2x TC and using the existing big whites (and the EF 70-200mm :) )

[two image attachments]

This is an interesting and probably accurate comparison. It matches perfectly.
 
Upvote 0
There is no room for the extender; the RF 70-200 has its rear element very close to the mount, unlike the other 70-200s.
Interesting. I hadn't seen a photo of the extenders and thought maybe it was a design that didn't protrude so much. That is curious. There must be an RF Big White coming.
 
  • Like
Reactions: 1 user
Upvote 0
DIGIC is most definitely an SoC (system on chip), more than just an ARM processor.

Yes, it may have additional instructions/specialised processing modules, but that doesn't change anything in what I said in a couple of previous messages; it doesn't help to explain why the EVF/LiveView lag can be longer than one frame duration. Multithreading does, but with caveats. I'd be interested to see the actual design of the sensor-to-EVF pipeline.
 
Upvote 0

tron

CR Pro
Nov 8, 2011
5,222
1,616
Interesting. I hadn't seen a photo of the extenders and thought maybe it was a design that didn't protrude so much. That is curious. There must be an RF Big White coming.
The issue is what, and when. Unless it is a DO design it will not be of much interest to me.
However, I would say there is a chance that they will not start with that (unfortunately) but with something maybe more … basic, like a 300mm f/2.8. I hope it is a long-focal-length DO though...
 
  • Like
Reactions: 1 user
Upvote 0
Yes, it may have additional instructions/specialised processing modules, but that doesn't change anything in what I said in a couple of previous messages; it doesn't help to explain why the EVF/LiveView lag can be longer than one frame duration. Multithreading does, but with caveats. I'd be interested to see the actual design of the sensor-to-EVF pipeline.
The entire system could be lagged; frame-to-frame timing may only be part of the problem.

i.e.:

consider three frames.

Code:
   F1 ----------F2---------F3

At the EVF it arrives at:

                           F1 ----------F2---------F3
Something like that. There's still 1/30th of a second between frames, but there's a systematic time offset from the moment of capture.

The EVF video is not a steady stream. It can't be: you have to take a picture using the same sensor, so there are going to be some starts and stops along the way in the EVF/LCD feed.
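
To make that concrete, here's a minimal toy sketch (Python, with made-up numbers; Canon doesn't publish its pipeline timings): frames leave the sensor every 1/30 s and each one spends the same assumed end-to-end delay in processing, so the displayed feed keeps its 30 fps cadence while every frame carries the same systematic offset.

Code:
FRAME_INTERVAL = 1 / 30   # seconds between sensor readouts (30 fps feed)
PIPELINE_DELAY = 0.08     # assumed end-to-end processing latency

for n in range(3):
    captured = n * FRAME_INTERVAL
    displayed = captured + PIPELINE_DELAY     # same offset for every frame
    print(f"F{n + 1}: captured {captured:.3f} s, shown {displayed:.3f} s "
          f"(lag {PIPELINE_DELAY:.3f} s)")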
 
Last edited:
Upvote 0
My point was, if you only have one thread, that diagram doesn't answer the question. With one thread, the average processing time must be less than 1/30s, or the lag will grow indefinitely. But if it's less than 1/30s and the system lags like in your diagram (because of a temporary spike, for example), we can just skip a few frames and catch up to the shortest possible delay (a toy sketch below, after the quote, illustrates the catch-up).

The entire system could be lagged; frame-to-frame timing may only be part of the problem.

i.e.:

consider three frames.

Code:
   F1 ----------F2---------F3

At the EVF it arrives at:

                           F1 ----------F2---------F3
Something like that. There's still 1/30th of a second between frames, but there's a systematic time offset from the moment of capture.

The EVF video is not a steady stream. It can't be: you have to take a picture using the same sensor, so there are going to be some starts and stops along the way in the EVF/LCD feed.
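
Here's a toy model of that catch-up behaviour (Python; the 25 ms and 100 ms costs are assumptions, not measurements): a single thread processes frames in arrival order, and whenever it falls behind it jumps to the newest available frame, so after a temporary spike the lag snaps back to the minimum instead of growing.

Code:
ARRIVAL = 1 / 30          # a new frame is available every ~33.3 ms
costs = [0.025] * 10      # normal per-frame processing time: 25 ms
costs[2] = 0.100          # temporary spike on the third frame

t = 0.0                   # wall-clock time when the thread becomes free
frame = 0
while frame < len(costs):
    start = max(t, frame * ARRIVAL)   # wait until the frame has arrived
    t = start + costs[frame]          # single thread processes it
    lag_ms = (t - frame * ARRIVAL) * 1000
    print(f"F{frame + 1}: lag {lag_ms:.1f} ms")
    # drop any frames that went stale while we were busy; take the newest
    frame = max(frame + 1, int(t / ARRIVAL))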
 
Upvote 0
My point was, if you only have one thread, that diagram doesn't answer the question. With one thread, the average processing time must be less than 1/30s, or the lag will grow indefinitely. But if it's less than 1/30s and the system lags like in your diagram (because of a temporary spike, for example), we can just skip a few frames and catch up to the shortest possible delay.
There are frames skipped no matter what, but the lag is the delta-T from real life to the first frame being seen in the viewfinder. Again, the video feed is not continuous; that is impossible. It's "virtually" seamless.
 
Upvote 0

koenkooi

CR Pro
Feb 25, 2015
3,610
4,190
The Netherlands
[..]
But as far as I understand Canon's architecture, DIGIC is the one that executes all of those DLO, debayering, etc. steps. As a side note, is DLO even applied to EVF/LV?
No, Canon specifically separates vignetting correction, distortion correction and DLO in the menus and manuals. On DIGIC 8 cameras you only get DLO in the JPEGs, not in the video clips or EVF.
This might change with DIGIC X; the 1DX III manual might have some wording on it for Live View and movie mode.
 
  • Like
Reactions: 1 user
Upvote 0
There are frames skipped no matter what, but the lag is the delta-T from real life to the first frame being seen in the viewfinder. Again, the video feed is not continuous; that is impossible. It's "virtually" seamless.

But I never claimed it was continuous. :) The time T is measured from the beginning of the exposure till the image is rendered in the EVF. It obviously includes the exposure itself, readout, internal processing and data transfer, rendering of the HUD interface and copying the rendered frame to the EVF buffer. Or maybe it renders right into the buffer.

What I'm saying is that if the processing is single-threaded and sequential (and it's mostly sequential), T must not be longer than 1/30s, or you inevitably get the lag growing indefinitely. That would simply mean there's not enough processing power to render at 30fps. There are ways to optimise it, such as rendering during exposure, but that's not the point here, as we're trying to explain how T can be more than 1/30s with the EVF still rendering at 30fps.
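
A back-of-the-envelope latency budget shows how both can hold at once (every stage time below is invented for illustration; none of them are published by Canon): T is the sum of the sequential stages for one frame, yet a pipelined feed only needs the slowest single stage to pass before it can emit the next frame.

Code:
# every number below is a made-up assumption, purely for illustration
stages_ms = {
    "exposure": 10,
    "sensor readout": 8,
    "processing": 12,
    "transfer": 3,
    "HUD render": 4,
    "copy to EVF buffer": 2,
}

T = sum(stages_ms.values())            # per-frame lag: 39 ms > 33.3 ms
bottleneck = max(stages_ms.values())   # slowest stage: 12 ms
print(f"T = {T} ms per frame (more than the 33.3 ms frame interval)")
print(f"but a pipelined feed can emit a frame every {bottleneck} ms, i.e. 30+ fps")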
 
Upvote 0

Joules

doom
CR Pro
Jul 16, 2017
1,801
2,247
Hamburg, Germany
But that's not the point here, as we're trying to explain how T can be more than 1/30s with the EVF still rendering at 30fps.
I bet the answer could be found if one were willing to dig through enough posts on the Magic Lantern forum. Those guys had Live View figured out, at least for the models that still use the older DIGICs.

Without that insight, there's only speculation left. What really matters is how the situation looks on the R5.
 
Upvote 0

SecureGSM

2 x 5D IV
Feb 26, 2017
2,360
1,231
Well, smaller than a traditional 70-200 f/2.8, which I don't regard as XL. The price will be high, but will it be dearer than a 70-200 f/2.8? The size will be much shorter than a 70-200 f/2.8, and with a smaller front element. I'm not sure it'll be heavier than the 28-70 f/2, which would be a more complex design.
Front element: estimated 77-82mm, likely 82mm (A.M.). Size and weight: around the EF 70-200/2.8, give or take. So... it is a trade-off again: a shorter focal range for an extra stop of light.
In the majority of my use cases I would certainly benefit from an extra stop of light in the 28-70 range, but I need 70-200 covered. 135mm is way too short for events; I am forced to shoot from a distance and need that 200mm end. So... it looks like I will end up with a pair of R5s with the 28-70/2.0 + 70-200/2.8 attached, hanging around my hips.
Let's look at the bright side: this is going to be a much better balanced combo, as the lenses will be of similar size and weight. Oh, dear :))
 
Last edited:
  • Like
Reactions: 1 user
Upvote 0
Indeed, but my concern is more that there's nothing on this list that will be cheap enough for some people, including those on this forum, who would like to move into the R ecosystem without spending a lot of money. IS drives up the cost, as I'm sure we all know, and I don't know how valuable it is in a 50mm f/1.8 design that should, but is not guaranteed to, come in under $150.

But this is where I have to say you are wrong.

Most RF bodies come with an adapter included. Anyone who wants a cheap way in should just get the EF 50 STM; you can have it for around $80. I have been saying for a while that I am not interested in an RF 50 f/1.8 that is just the same as the EF version. And suggesting that people buying into the RF system should only have the choice between a cheap (performance-wise as well) $100 f/1.8 and a $2.5k f/1.2 50mm lens is just silly.

We buy into the RF system because it is supposed to be new. It is Canon's vision of their next camera system, right? We expect good sharpness wide open even from their f/1.8 lenses. They don't have to be at the same price level as the Nikon S lenses, but they shouldn't be like any ol' EF lens you can simply adapt to the EOS R system.

Let me put it this way: if I, as an entry user, had to choose a native-mount lens that does exactly the same as the EF version, I would be better served buying the EF lens on the second-hand market at a greatly reduced price. Take the EF 70-200 f/2.8 III as an example. Current new prices place it at around $1800 (though its MSRP was about what the current RF version is). If it weren't for the smaller size of the RF version, and the fact that I would like to have the HSD setting, I would simply get the EF, adapt it, and save myself $600. If I went second-hand, the savings would likely be close to $800 if not more.

Therefore, going full budget, considering what the other options are, doesn't exactly make sense. It looks like the RF 50 f/1.8 macro will essentially be like the 35 f/1.8 macro. But since 50s tend to be easier to make, I won't be surprised if it comes out at around $250 (maybe $300-350), which in my opinion is definitely doable. And the way Canon tends to price things over time, it would probably come down to $200 or so; again, quite doable in my opinion. Better to have the 35 f/1.8's sharpness and IQ for $250 than the performance of the EF 50 f/1.8, which needs to be stopped down anyway, since it is noticeably soft wide open. Canon seems to add the macro for a little extra, which I think makes quite a difference in versatility. Entry users wouldn't necessarily ever have to get a macro lens to dabble in close-up shots.
 
Upvote 0
Apr 25, 2011
2,519
1,898
I believe DIGIC is just ARM with extended instruction set and any DSP modules are in its firmware;
The DSP is specialized hardware. The AI module is another piece of specialized hardware.

but wherever the DSP modules are, the execution should be parallelised/threaded if the EVF lag is longer than 1 frame.
It doesn't help. If the time needed to process one frame on the existing hardware is 1/15s, you can do nothing about the lag. But you may still be able to increase the fps if your hardware is heterogeneous and parts of it can be used only at specific stages.

Or do you mean "let's use two DIGICs instead of one"? Then, yes, processing can be made faster, if power dissipation and battery capacity are not a problem. But that alone won't make the EVF lag one frame. You would also need to decrease the lag of the sensor and EVF interfaces.
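
A rough sketch of that two-DIGIC idea (a hypothetical design with assumed timings, not anything Canon has described): the chips take alternate frames, so finished frames come out twice as often, but each frame still needs the full processing time, so the lag doesn't shrink.

Code:
PROC = 0.050      # assumed per-frame processing time: 50 ms
ARRIVAL = 0.025   # frames arrive every 25 ms; one chip alone couldn't keep up

for i in range(6):
    chip = "AB"[i % 2]              # even frames -> DIGIC A, odd -> DIGIC B
    done = i * ARRIVAL + PROC       # each chip becomes free just in time
    print(f"frame {i} on DIGIC {chip}: ready at {done * 1000:.0f} ms "
          f"(lag {PROC * 1000:.0f} ms)")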
 
  • Like
Reactions: 1 user
Upvote 0
I bet the answer could be found if one were willing to dig through enough posts on the Magic Lantern forum. Those guys had Live View figured out, at least for the models that still use the older DIGICs.

Without that insight, there's only speculation left. What really matters is how the situation looks on the R5.

Good point. I've skimmed through the Magic Lantern forum and tried to search for 'live view' etc., but couldn't find anything useful. Maybe there is something, but I'm not that deep into this matter. :)
On the practical side, I haven't seen the R's EVF, and I'm too lazy to go to the shop and pretend I want to buy it in order to check it out. I will wait for the R5 to come out. I'm more into landscapes, and EVF lag isn't a huge concern for me.
 
Upvote 0
Every single time I read the specs on the 100-500 I die a little inside.

It could have been amazing. Considering the 500mm F4 I want is $10K, I'd have paid a pretty penny for a nice 100-500.

This thing is a doorstop.

Actually, I think the 500 f/4 is more suited to holding doors open, being bigger and heavier :p
 
  • Like
Reactions: 2 users
Upvote 0
The DSP is specialized hardware. The AI module is another piece of specialized hardware.
I'm not familiar with Canon's architecture (especially the architecture of an unreleased camera), but specialised or not, it doesn't really matter. What matters is whether it runs in parallel.

It doesn't help. If the time needed to process one frame on the existing hardware is 1/15s, you can do nothing about the lag. But you may still be able to increase the fps if your hardware is heterogeneous and parts of it can be used only at specific stages.

Nope, the point is, you cannot increase the FPS if processing is sequential and takes longer than 1/30s on average. But if some modules run in parallel (any sort of multithreading), you can improve the FPS while having a lag > 1/30s.
 
Upvote 0

Joules

doom
CR Pro
Jul 16, 2017
1,801
2,247
Hamburg, Germany
Nope, the point is, you cannot increase the FPS if processing is sequential and takes longer than 1/30s on average. But if some modules run in parallel (any sort of multithreading), you can improve the FPS while having a lag > 1/30s.
Could the wording around sequential and parallel be the issue here? I don't think I'm disagreeing with you.

But one way a big lag can arise is by having n small processing steps that each run on a specific unit, each requiring t/n time to finish its task. An image has to go through all n steps before it is displayed, meaning each image takes time t to go from the first to the last step. And the steps are applied to the image in a fixed sequence, one after another.

But the first processing unit doesn't have to wait for the last step to finish before it can start work on the next image. Each processing unit is only present once, but they are all active at the same time. So they can be seen as parallel, as you say. It is just not the kind of parallelism one may talk about with more general processing.

So what you get is a finished image every t/n seconds, but each image has a delay of t seconds.

If they use a similar technique, it may not necessarily all be processing for the sake of display; things like metering and object recognition for tracking could also be incorporated into this pipeline.
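
A minimal sketch of that n-stage pipeline (Python; n and t are arbitrary assumptions): frame k enters stage 1 at k·t/n and leaves stage n a full t later, so the output rate is one frame per t/n seconds while every frame is delayed by t.

Code:
n = 4         # number of pipeline stages (assumption)
t = 0.060     # total per-frame processing time, seconds (assumption)
step = t / n  # one frame enters (and one leaves) every t/n seconds

for k in range(5):
    enters = k * step      # frame k enters the first stage
    leaves = enters + t    # and leaves the last stage t seconds later
    print(f"frame {k}: enters {enters * 1000:.0f} ms, "
          f"displayed {leaves * 1000:.0f} ms (delay {t * 1000:.0f} ms)")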
 
Upvote 0
Apr 25, 2011
2,519
1,898
I'm not familiar with Canon's architecture (especially the architecture of an unreleased camera), but specialised or not, it doesn't really matter. What matters is whether it runs in parallel.
There is nothing Canon-specific about it. Any pipeline is both parallel and sequential at the same time; for example, the sensor acquires a new image in parallel with the EVF displaying an older one.
 
Upvote 0