Are these the 7 RF lenses Canon will be announcing in 2020? [CR1]

Interesting comment from you

Have you also used the EF 35mm f/2 IS in the past?

If so, how would you compare the image quality of the two lenses?

I ask because I own the EF 35mm and am very happy with it; almost "L" quality, in my opinion...

Hi, I have NOT used the EF 35 f/2 IS, but I had the chance to get the RP WITH the RF 35 "for free" during a very good offer. I had no stabilized wide-angle lens, and I saw a need for one for vlogging, so that offer came just in time.

I am sure both lenses perform similarly, which can be seen at


while that comparison is a little tricky because the two lenses are tested on different cameras. I try to "average" the EF lens results between the 5Ds and the 1Ds Mark III for better comparability, just looking at both sensors' output.

Optically the EF version seems to be a little crisper in the corners. If you don't need the 1:2 macro mode and don't need the minimum size, it might be good to stay with the EF version, which works on M cameras too. The first RF lens brings more incompatibilities into the bag!
 
It's the time period between two distinct sequential images rendered in the EVF/LiveView.
Then the question sounds like "why not slow the frame rate down from the one the camera is naturally capable of to one that would better represent the total capture, processing and display lag?"

And the answer would be: "Eh? What for?".

If you claim your camera does 30fps, you should capture frames 30 times a second.

The delta time between the beginnings of exposures must be 1/30s.

Your system should be able to process the raw data and feed the rendered frame to the EVF before the next capture.
Yes. Yes. No (that would be wishful thinking).

OK. So you suggest doing the exposure but holding it in the sensor because there's a previous capture in memory.
No. I suggest starting the exposure just in time for it to finish at the moment we can read it out again.
 

SecureGSM

2 x 5D IV
I just wish it were a 70-150 f/2, but I guess 70-135 is still a pretty amazing f/2 zoom range and would probably be all you would ever need for portrait work.

Speaking of which... I would be absolutely delighted with a 90-180/f2.0... R5 + 28-70/f2.0 on the left and R5 + 90-180/f2.0 on the right.
 
Then the question sounds like "why not slow the frame rate down from the one the camera is naturally capable of to one that would better represent the total capture, processing and display lag?"

What is this 'natural capability'?
A camera's achievable frame rate depends on the exposure time, the readout, processing the raw data and feeding the EVF. If you suggest that processing can be done during the exposure: it could be, if the camera architecture allows that. It may depend on the available memory buffers and the way the sensor gets reset before the capture. I mentioned that myself earlier, but it doesn't matter too much for this discussion. If processing can overlap with exposure, the overlap reduces the effective processing time, but there will still be a part of the processing that doesn't overlap (where processing = conversion of raw data to RGB, applying filters and gamut, the HUD, and preparing the frame for the EVF).
Again, if you want to do 30fps, you have to do 30 exposures per second and display 30 frames in the EVF, and digitally process 30 frames. If you can't process 30 frames, you can't do 30fps; it's pretty obvious, isn't it? That means each frame, including processing, should take less than 1/30s.

If you optimise it so that it takes less than 1/60s, fine, you can increase the frame rate. But you'll be switching between fixed frame rates. As in my previous message, most likely you don't want a variable frame rate, as it is hard on the eyes.
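To put rough numbers on that switching idea (just a sketch; the set of supported rates here is my own assumption, not anything Canon documents):

Code:
FIXED_RATES_HZ = (120, 60, 30, 15)   # hypothetical set of fixed EVF refresh rates

def best_fixed_rate(per_frame_seconds):
    """Return the fastest fixed rate whose frame budget (1/rate) still covers the work."""
    for rate in FIXED_RATES_HZ:
        if per_frame_seconds <= 1.0 / rate:
            return rate
    return None  # can't even sustain the slowest rate

print(best_fixed_rate(0.014))  # 14 ms of work per frame -> 60
print(best_fixed_rate(0.030))  # 30 ms of work per frame -> 30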

Yes. Yes. No (that would be wishful thinking).
"No" to the ability to feed the EVF? I thought it was quite obvious, but apparently not for everyone. It's not a wishful thinking, it's a technical requirement. If you don't feed the EVF, you have the previous frame displayed again which means stuttering. If it happens too often, your EVF is screwed and Sony users are laughing.

No. I suggest starting the exposure just in time for it to finish at the moment we can read it out again.
Errm what's the purpose of that? You don't adjust the beginning of exposure to some arbitrary moment. On the contrary. You want to capture frames at regular intervals and display them at regular intervals at the same rate. So you're trying to fit processing in between the regular frame captures, not vice versa.
 
Again, if you want to do 30fps, you have to do 30 exposures per second and display 30 frames in the EVF, and digitally process 30 frames. If you can't process 30 frames, you can't do 30fps; it's pretty obvious, isn't it? That means each frame, including processing, should take less than 1/30s.
Why would the frame rate have anything to do with the processing time and hence the lag between capture and display?
Frame capture and frame display can both be held at a constant 30 frames per second, even if the processing in between takes an arbitrary amount of time. There could be a 1 second delay between capture and display, and still both could operate at a constant frame rate of 30 frames per second without dropping any frames.
In this case the processor would have to simply hold 30 consecutive frames in memory simultaneously, each of which is at a different stage of processing, where every stage of processing is running in parallel (e.g., in its own thread). After one stage has finished, it hands the frame on to the next stage. Each of these stages, of course, would have to take less than 1/30 of a second (provided that each stage always operates on a full frame) to be ready for the next frame in time. I would assume that neither holding frames in memory nor splitting up the processing into small chunks with a duration of less than 1/30 of a second would be a problem. Maybe some stages don't even have to operate on the whole frame at once but only on a smaller region, in which case more than one processing stage could operate on one image in memory simultaneously.
Of course, a 1 second delay would not be very useful in practice; that was just to illustrate the point. But I don't see the absolute requirement that processing a frame takes less than 1/30 s when capture and display operate at 30 frames per second. When you let water run through a pipe at a certain flow rate (of course with an identical rate going in and coming out), you can always add more pipes to it and make the time a single water molecule spends in the pipe longer, while keeping the flow rate the same.
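Here's a toy version of that pipe analogy in Python (purely illustrative; the stage count and stage times are my own made-up numbers): each stage runs in its own thread, frames go in at 30 fps and come out at 30 fps, yet every frame spends several stage-times inside the pipeline.

Code:
import threading, queue, time

FPS = 30
FRAME_INTERVAL = 1.0 / FPS
STAGE_TIME = 0.030       # each stage takes 30 ms (< 1/30 s); purely illustrative
NUM_STAGES = 5           # 5 stages -> roughly 150 ms capture-to-display latency
NUM_FRAMES = 30

def stage(inbox, outbox):
    """One pipeline stage: take a frame, 'process' it, hand it on."""
    while True:
        frame = inbox.get()
        if frame is None:        # sentinel: shut down and propagate
            outbox.put(None)
            return
        time.sleep(STAGE_TIME)   # stand-in for real per-stage work
        outbox.put(frame)

# Chain the stages together with queues.
queues = [queue.Queue() for _ in range(NUM_STAGES + 1)]
for i in range(NUM_STAGES):
    threading.Thread(target=stage, args=(queues[i], queues[i + 1]), daemon=True).start()

def capture():
    """'Sensor': push a timestamped frame every 1/30 s."""
    for n in range(NUM_FRAMES):
        queues[0].put((n, time.monotonic()))
        time.sleep(FRAME_INTERVAL)
    queues[0].put(None)

threading.Thread(target=capture, daemon=True).start()

# 'EVF': frames still arrive about 1/30 s apart, but each is ~NUM_STAGES * STAGE_TIME old.
prev = None
while True:
    item = queues[-1].get()
    if item is None:
        break
    n, captured_at = item
    now = time.monotonic()
    gap = now - prev if prev is not None else float("nan")
    prev = now
    print(f"frame {n:2d}: latency {now - captured_at:5.3f} s, gap {gap:5.3f} s")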
 
There could be a 1 second delay between capture and display, and still both could operate at a constant frame rate of 30 frames per second without dropping any frames.
While it's feasible in theory, as I showed earlier in this thread here and also here, it's unlikely to be what happens in the actual in-camera implementation.

In this case the processor would have to simply hold 30 consecutive frames in memory simultaneously, each of which is at a different stage of processing, where every stage of processing is running in parallel (e.g., in its own thread).

Yep, if you read a couple of pages back, I wrote the same thing. In theory it's possible if we have multithreading and parallelise multiple different frames. If all processing is done in one thread, it's not possible at all.

But I don't think that's the case in real-world cameras. I don't think they run parallel processing of different frames at the same time for the EVF/LiveView. All I could find was this knowledge base https://chdk.fandom.com/wiki/Digic_6_Porting, plus a short description of the LiveView buffers https://chdk.fandom.com/wiki/Frame_buffers#Viewport
They certainly don't have 30 threads processing 30 consecutive frames at the same time. Two frames/threads at most, but I doubt even two frames.

The thing is, LiveView works alongside autofocus and video recording, which will take all the available cores. They'd rather parallelise video recording, not LiveView. Processing for the LiveView should be fairly straightforward: there's no compression needed, just simple downsampling. It should take less than the exposure time.
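For a feel of how cheap "just simple downsampling" can be, here is a minimal 2x2 binning sketch in NumPy (illustrative only; the resolution and bit depth are invented, and a real camera would do this in dedicated hardware, not in Python):

Code:
import numpy as np

def bin2x2(raw):
    """Downsample a 2D image by averaging each 2x2 block (simple binning)."""
    h, w = raw.shape
    h, w = h - h % 2, w - w % 2              # drop an odd row/column if any
    blocks = raw[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

frame = np.random.randint(0, 2**14, size=(4000, 6000), dtype=np.uint16)  # fake 24 MP readout
small = bin2x2(frame)
print(small.shape)  # (2000, 3000)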
 

Rivermist

Mirrorless or bust.
I dunno.

I read “ultra-wide prime L” and got rather happy about it.

But none of these seem to be an ultra-wide prime by my reckoning.

Guess some will make the case for the 35, but to me it would be a 24 or less, preferably.

I was thinking that an ultra-wide prime might drive the sale of a few Ra’s.

And all that.
Out of curiosity, are you interested in it being a prime for optical reasons, for astrophotography or star pictures? Is it that there are fewer glass elements, delivering a sharper picture, or something? For a time in the EF system I tried traveling with the 24-105, a telephoto zoom and the 14mm f/2.8L as that ultra-wide prime. The truth is it was not compact or light, and in many situations 14mm was not the right focal length. I have since had the 16-35 IS and more recently the 11-24 (OK, very heavy and very bulky), but in the realm of super-wide-angle focal lengths I find that a zoom really helps explore the framing of a picture without swapping lenses.
 
Out of curiosity, are you interested in it being a prime for optical reasons, for astrophotography or star pictures? Is it that there are fewer glass elements, delivering a sharper picture, or something? For a time in the EF system I tried traveling with the 24-105, a telephoto zoom and the 14mm f/2.8L as that ultra-wide prime. The truth is it was not compact or light, and in many situations 14mm was not the right focal length. I have since had the 16-35 IS and more recently the 11-24 (OK, very heavy and very bulky), but in the realm of super-wide-angle focal lengths I find that a zoom really helps explore the framing of a picture without swapping lenses.
I would guess it's about the maximum aperture, which will usually be larger in a prime than in a zoom.
 
What is this 'natural capability'?
The highest possible FPS rate of a pipeline is the highest possible FPS rate of its slowest stage.
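In other words (the per-stage times below are invented, only there to make the claim concrete):

Code:
# Hypothetical per-stage times in seconds for one frame.
stage_times = {"exposure": 1/60, "readout": 1/45, "processing": 1/40, "display": 1/60}

slowest = max(stage_times.values())                    # the bottleneck stage
print(f"max sustainable rate ~ {1/slowest:.0f} fps")   # limited by 'processing' here: ~40 fps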

Again, if you want to do 30fps, you have to do 30 exposures per second and display 30 frames in the EVF, and digitally process 30 frames. If you can't process 30 frames, you can't do 30fps; it's pretty obvious, isn't it?
It is pretty obvious that each stage of the pipeline must be able to do its part of the work at 30 frames per second.

That means each frame, including processing, should take less than 1/30s.
That "should" is wishful thinking. There is no requirement for that, and in practice, it takes longer.

"No" to the ability to feed the EVF?
"No" to the ability to have 1/30s of maximum total delay in a multi-stage pipeline whose slowest stage is only 30fps.

I thought it was quite obvious, but apparently not for everyone.
It's quite obvious that the Earth is flat, but apparently not for everyone.

Errm what's the purpose of that?
To increase the FPS, to reduce the lag, and to have less shot noise in the EVF.

You don't adjust the beginning of exposure to some arbitrary moment.
What makes you think that the moment is "arbitrary"?

On the contrary. You want to capture frames at regular intervals and display them at regular intervals at the same rate. So you're trying to fit processing in between the regular frame captures, not vice versa.
You obviously don't understand how a pipeline works.

You are not "trying to fit processing in between", you are trying to do the processing in parallel.

And please don't start telling me that instead of the sensor doing the exposure, the DIGIC doing the processing and the EVF doing the displaying for each frame, you would make each of them do a third of the exposure, a third of the processing and a third of the displaying for the same frame.
 

Phil

EOS R, RF24-105 f4, RF35 1.8, RF50 1.2, RF85 1.2
Speaking of which... I would be absolutely delighted with a 90-180/f2.0... R5 + 28-70/f2.0 on the left and R5 + 90-180/f2.0 on the right.
Yeah, I'm actually debating whether to keep my 1.2 primes or sell them to buy the f/2 zooms. I used to have 2.8 zooms and do miss their versatility; it's a really tough decision. Initially I thought the 28-70 f/2 was too big and heavy, but after having the 85 1.2 I don't think the extra size and weight will make much difference. Unfortunately, I can't afford both the primes and the zooms, so I have to pick one or the other. Lol, first world problems to the max.
 

Phil

EOS R, RF24-105 f4, RF35 1.8, RF50 1.2, RF85 1.2
No doubt a fast prime has its place, for sure. As a portrait/fashion/model boot camp shooter, I also know the value of a fast zoom in high-pressure production situations. f/2 is no slouch when it comes to portrait work. Of course, you will pick what you know you need. For me, an f/2 that covers a wider focal range in a single lens than a prime is invaluable and much more convenient than swapping around three different lenses when I need a different perspective. Especially when I can't keep an eye on a bag full of gear. In my situation, f/2 is kind of a sweet spot. $3k would be very worth it to me.
I think you have confirmed what I’ve been thinking and I’m going to sell my primes and get the F2 zoom. It’s the first time I’ve had mostly primes and I’m really missing the flexibility of zooms.
Question: what are your thoughts on the coming 70-135 f/2 vs the RF 70-200 f/2.8 for portrait work?
 

MadScotsman

EOS R / RP
Out of curiosity, are you interested in it being a prime for optical reasons, for astrophotography or star pictures? Is it that there are fewer glass elements, delivering a sharper picture, or something? For a time in the EF system I tried traveling with the 24-105, a telephoto zoom and the 14mm f/2.8L as that ultra-wide prime. The truth is it was not compact or light, and in many situations 14mm was not the right focal length. I have since had the 16-35 IS and more recently the 11-24 (OK, very heavy and very bulky), but in the realm of super-wide-angle focal lengths I find that a zoom really helps explore the framing of a picture without swapping lenses.

Absolutely, astrophotography. Common wisdom holds that primes are superior to zooms for the sharpness coveted by Milky Way and star photographers, and the wider the better, and the faster the better.

The current EF version is ~$2,100.

I'd rather put that money toward an RF version if the wait were going to be short. Samyang's cheapie gets mixed reviews, and some of them are pretty harsh.

Unfortunately, I suffer from a genetic abnormality common among men of Scottish descent.

I have deep pockets, but extraordinarily short arms. :)
 

Ozarker

Love, joy, and peace to all of good will.
I think you have confirmed what I’ve been thinking and I’m going to sell my primes and get the F2 zoom. It’s the first time I’ve had mostly primes and I’m really missing the flexibility of zooms.
Question: what are your thoughts on the coming 70-135 f/2 vs the RF 70-200 f/2.8 for portrait work?
Hi Phil,
For me, the 70-135 f/2. I have never used the RF 70-200mm f/2.8L, so I cannot speak to how the bokeh is. On the EF 70-200mm f/2.8L IS II the bokeh was busy or nervous-looking at times. That doesn't mean I didn't like the lens; I really did. It was a fantastic lens. I really don't see a need for me to go beyond 135mm for a portrait, and f/2 is just much nicer. Both lenses I am sure would be great, but I will go for the 70-135. Primes are nice, but for the stuff I do a fast f/2 zoom is a better fit.
 
What makes you think that the moment is "arbitrary"?
By your definition.
Anyway, for a smooth video stream to be shown in the EVF, you must start the captures at regular intervals and you must provide the rendered buffer to the EVF at regular intervals, namely every 1/30s if your goal is 30fps.

Here Cn is the beginning of frame capture (exposure), On is readout and output to the raw buffer, Pn is the end of processing, and Rn are the moments we refresh the rendered buffer for the EVF. Normally it's done via two alternating buffers, and you just switch the buffer, so the R's are instantaneous.

Code:
Fig.1
C1-O1------C2-O2------C3-O3------C4-O4------C5-O5------C6-O6
------P1---------P2---------P3---------P4---------P5--------
--------R1---------R2---------R3---------R4---------R5------

It's obvious that no matter what parallel processing you use, there will be some gap between any Cn and Rn, determined by the time between Cn and On plus Pn.
Now you can offset the timeline with Rn but the distance between any Rn and Rn+1 will be fixed. If processing takes too long and the h/w architecture allows that, you can have an overlap of processing with captures:

Code:
Fig.2
C1-O1------C2-O2------C3-O3------C4-O4------C5-O5------C6-O6
P0---------P1---------P2---------P3---------P4---------P5---
--R0---------R1---------R2---------R3---------R4---------R5-

Still each Rn must finish before On+1.
Also, you can't have more than 1/30s between any Pn and Pn+1 (if that happens, you skip the frame). Bear in mind Pn in these diagrams is the end of processing, not the whole thing. The beginning of n-th processing is right after On.

So in this simple case the processing time (Pn - On) is also < 1/30s.

Now we've already discussed the possibility for (Pn - On) to be > 1/30s, in case there's some parallel processing.

Code:
Fig.3
C1-O1------C2-O2------C3-O3------C4-O4------C5-O5------C6-O6
-----------------P1--------------------P3-------------------
------P0--------------------P2--------------------P4--------
--------R0---------R1---------R2---------R3---------R4------

But as I explained in a couple of previous messages, I don't think that's what's happening. I don't know for sure, though.
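To attach a timescale to Fig.1/Fig.2 (all numbers below are invented, not measured): the capture-to-EVF lag is roughly exposure + readout + processing, and the Fig.1/Fig.2 layout only works while the processing itself fits inside one frame interval.

Code:
# Invented timings, just to put numbers on the Fig.1/Fig.2 layout.
FPS = 30
FRAME_INTERVAL = 1 / FPS      # spacing of C1, C2, ... (~33.3 ms)
EXPOSURE = 1 / 60             # Cn -> start of readout On
READOUT = 0.008               # On: dumping the frame into the raw buffer
PROCESSING = 0.012            # end of On -> Pn

lag = EXPOSURE + READOUT + PROCESSING   # Cn -> Rn, if Rn follows Pn immediately
print(f"capture-to-EVF lag ~ {lag*1000:.1f} ms ({lag/FRAME_INTERVAL:.2f} frame intervals)")

# Without the Fig.3-style overlap, processing has to fit into one frame interval:
assert PROCESSING < FRAME_INTERVAL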

You obviously don't understand how a pipeline works.

My primary work is data processing, multithreading, rendering, etc.; I think I do understand how a pipeline may work at a high level.
But no, I don't know how the Canon pipeline in cameras works exactly. I guess you don't know it either, unless you work at Canon.
 
By your definition.
No.

Anyway, for a smooth video stream to be shown in the EVF, you must start the captures at regular intervals and you must provide the rendered buffer to the EVF at regular intervals, namely every 1/30s if your goal is 30fps.
Yes.

Here Cn is the beginning of frame capture (exposure), On is readout and output to the raw buffer, Pn is the end of processing, and Rn are the moments we refresh the rendered buffer for the EVF. Normally it's done via two alternating buffers, and you just switch the buffer, so the R's are instantaneous.
Lol, you must be a game developer. No.

The EVF is connected to the processing unit (LCD controller) with a serial interface similar to the sensor's. The processing unit does not do its processing in the EVF's own memory.

Code:
Fig.1
C1-O1------C2-O2------C3-O3------C4-O4------C5-O5------C6-O6
------P1---------P2---------P3---------P4---------P5--------
--------R1---------R2---------R3---------R4---------R5------

Actually, it looks more like this (for example):
Code:
C 11111111--------22222222--------33333333--------44444444--------
O --------11111111--------22222222--------33333333--------44444444
P --------11111111--------22222222--------33333333--------44444444
Q ----------------11111111--------22222222--------33333333--------
R ------------------------11111111--------22222222--------33333333
C is the first curtain of the electronic shutter running across the sensor.
O is the second curtain of the electronic shutter running across the sensor, digitizing and sending out the exposure.
P is processing unit doing processing that is possible to do incrementally (debayering, for example).
Q is processing unit doing processing that requires the whole picture (AWB, for example).
R is processing unit (LCD controller) sending the data to the LCD.

Timings may vary. For example, a 180-degree shutter is shown here. Technically, if we need more light, the shutter can be more than 180 degrees; then the first curtain starts its next run before the second curtain has finished its previous one. The same goes if we need the fastest FPS and the shortest lag (in milliseconds, not in frames): then we go for a 360-degree shutter, and it will look like this (for example):

Code:
C 1111111122222222333333334444444455555555666666667777777788888888
O --------11111111222222223333333344444444555555556666666677777777
P --------11111111222222223333333344444444555555556666666677777777
Q ----------------111111112222222233333333444444445555555566666666
R ------------------------1111111122222222333333334444444455555555

And there could be more than one stage of Q (if different specialized hardware modules are used at different stages), and the EVF may have its own (synchronization) delays.
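For reference, the shutter "angle" in those diagrams maps to exposure time in the usual way (nothing Canon-specific here, just the standard definition):

Code:
def exposure_time(shutter_angle_deg, fps):
    """Standard relation: exposure = (angle / 360) * frame period."""
    return (shutter_angle_deg / 360.0) / fps

for angle in (180, 270, 360):
    print(f"{angle:3d} deg at 30 fps -> {exposure_time(angle, 30) * 1000:.1f} ms exposure")
# 180 deg -> 16.7 ms; 360 deg -> the full 33.3 ms frame period,
# so each row's next exposure starts right as its previous readout finishes.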

My primary work is data processing, multithreading, rendering, etc.; I think I do understand how a pipeline may work at a high level.
Looks like that's not enough. What kind of hardware are you working with?

But no, I don't know how the Canon pipeline in cameras works exactly. I guess you don't know it either, unless you work at Canon.
I don't think they are reinventing the wheel there.
 
The EVF is connected to the processing unit (LCD controller) with a serial interface similar to the sensor's. The processing unit does not do its processing in the EVF's own memory.
In Canon's case it looks like it does. I've already quoted this: https://chdk.fandom.com/wiki/Frame_buffers#Viewport
Unfortunately those pages all come from hacky reverse engineering and are not very reliable.
However, you can also find what processors they have under the DIGIC umbrella: https://chdk.fandom.com/wiki/Digic_6_Porting

In your description there are many hardware processing units that all work in parallel, but that doesn't look like the case with the architecture from the links above. There are just three CPUs inside a DIGIC; in DIGIC 7, one of the CPUs has two cores. It doesn't look like very low-level specialised FPGA-like hardware, but resembles a small general-purpose computer with some specific interfaces, external devices and a specialised GPU.

Actually, it looks more like this (for example):
That's essentially the same as my diagram #2, only you're suggesting a greater overlap.
I'm not sure about simultaneously reading from the sensor and processing data in the same physical memory. Reading from the sensor is actually writing to memory, and processing is both reading and writing (to somewhere else, perhaps). If the main CPU does the readout and the GPU does the processing at the same time, then, depending on how fast the memory is, it may be faster to do these operations consecutively.

Also, I'm not sure such extensive multithreading in the EVF pipeline is possible at all: when you start focusing, you'll also need to process dual-pixel data and run AI somewhere to recognise motion/eyes/faces.

Looks like that's not enough. What kind of hardware are you working with?

Sorry, NDA. But it's different operating systems on PC and mobile, and in the past some unusual input devices. In the remote past, microcontrollers, but that was too long ago. It looks like you're much closer to the low-level hardware; I'm more on the software side.
 