There will not be an EOS 5D Mark V [CR2]

dtaylor

Canon 5Ds
Jul 26, 2011
1,805
1,433
No it doesn't work like that because it's meaningless, and also requires multithreading.

It's not meaningless, it does work like that, and multithreading is a separate and higher level issue. (Threads are abstractions for programmers.)

With a single thread, the processing MUST take less than 1/120 s; it's simple math.

When you talk about threading, you are talking about the programmer's view of the middle of what I'm calling the pipeline. In the middle the raw sensor data is in memory and has to be processed to a form that is used by the EVF display.

Even if this middle was a single thread/single core stage that completed a frame every 1/120, it still would be coordinated with other hardware events. And latency would still be >1/120. And I doubt this is a single thread/single core event. It's a pretty good bet that at 60 or 120 Hz multiple EVF frames are being worked on by multiple cores...at different stages...simultaneously.

What happens in reality is that the whole pipeline after the capture is basically readout plus applying some filters (de-mosaicing plus the current picture style), after which the frame is ready for the EVF right away. There's no point in breaking it down into many steps separated in time.

You have far too simplistic an understanding of the hardware events which must take place and the time which would be involved.

Of course it's shared memory/direct memory access by EVF. They do processing in a buffer and the EVF switches to the buffer right away, there's no additional copying/writing anywhere. Basically it takes two alternating buffers.

Again, I think you're confusing high level abstraction with what actually happens on the silicon. Either the EVF reads from the same RAM used by the ARM processor (modern DIGIC uses ARM cores with added instructions), or it reads from its own display buffer on a discrete bus. If the former, memory I/O must be coordinated with the processor. If the latter, it still has to be coordinated but at far fewer points for less memory contention. Basically with the latter the processor and EVF can be performing memory I/O simultaneously.

Given that there's already memory contention with multiple cores, sensor readout, etc., I would be shocked if there aren't separate buffers and discrete buses at certain points. The EVF would be the first candidate for such a buffer to reduce contention for the main memory bus.
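The alternating-buffer scheme described above can be sketched in a few lines of C. This is a hypothetical illustration, not Canon firmware: the buffer size, function names, and swap policy are all made up for the example.

```c
/* Hypothetical sketch of the two-buffer ("ping-pong") scheme discussed
 * above: the pipeline renders into the back buffer while the EVF scans
 * out the front buffer, and a swap at vsync flips the roles without
 * copying any pixels. Sizes and names are illustrative, not Canon's. */
#include <stdint.h>

#define FRAME_PIXELS 8              /* tiny stand-in for a real frame */

static uint8_t frames[2][FRAME_PIXELS];
static int back = 0;                /* index of the buffer being written */

/* Pipeline writes one processed frame into the back buffer. */
void render_frame(uint8_t value) {
    for (int i = 0; i < FRAME_PIXELS; i++)
        frames[back][i] = value;
}

/* At vsync the roles flip; no pixel data is copied. */
void swap_buffers(void) {
    back ^= 1;
}

/* The EVF always scans out the front buffer (the one not being written). */
const uint8_t *evf_front(void) {
    return frames[back ^ 1];
}
```

With this arrangement the EVF never reads a half-written frame, and handing over a finished frame costs one index flip rather than a memory copy, which is why the scheme by itself adds no milliseconds.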
 
Yes, I am sure all birders could use an EVF instead of an OVF, carry a ton of batteries, and use the big whites with adapters.
Not only that: a long mountain camp with a 5D-series body might need only 4 batteries for a two-week or even one-month camp for amazing landscapes, but with a MILC... good luck with power.
 
Not me, I will just use them with the EF/RF adapter. Over time (years) I will replace some as significantly better RF lenses become available.
Agreed with you.
There are some benefits to adapting EF lenses to the RF mount using the drop-in filter adapter.

My remaining L lenses are the 16-35mm f/4 and 70-200mm f/2.8, and the RF 70-200mm offering is impressive due to its smaller size and lighter weight.

I don't see myself using filters much, as I'm not really into landscape photography.

Therefore I would slowly let go of my EF lenses and switch to RF over the years to come.
 

stevelee

FT-QL
CR Pro
Jul 6, 2017
2,383
1,064
Davidson, NC
This is why there are two readings for the number four, shi and yon. Whenever possible, people try to avoid using the death-sounding one (shi). The Japanese also have two readings for nine, as one sounds like the word for agony and torture.
I used to know the numbers from one to fifteen, but have forgotten them. In grad school in Dallas, I had a friend from Japan. He and my other friends had a house rule when shooting pool that we had to call our shots in Japanese.
 
It's not meaningless, it does work like that, and multithreading is a separate and higher level issue. (Threads are abstractions for programmers.)

There's threading API as abstraction for programmers, and there's hardware threads/CPU cores.

When you talk about threading, you are talking about the programmer's view of the middle of what I'm calling the pipeline. In the middle the raw sensor data is in memory and has to be processed to a form that is used by the EVF display.

Nope, I'm talking about both. With DIGIC, the processing pipeline is all software, whether it resides in RAM or ROM or executes on different DIGIC components. There are hardwired modules within DIGIC, but that doesn't change anything for the purposes of this discussion.

You have far too simplistic an understanding of the hardware events which must take place and the time which would be involved.

In DIGIC everything is processed as software, just running on different CPUs. There are also programmable ASIC modules to speed some operations up, but that doesn't change anything conceptually.

Again, I think you're confusing high level abstraction with what actually happens on the silicon.

Anything that happens in the silicon is digital signal processing, that is, a program. We don't know the exact architecture, but the only thing that actually matters for this discussion is how much parallel processing is happening there - parallel in the sense that two or more consecutive captured frames are processed at the same time. I don't think that's what's happening: it would be pointless and take more memory.

Given that there's already memory contention with multiple cores, sensor readout, etc., I would be shocked if there aren't separate buffers and discrete buses at certain points. The EVF would be the first candidate for such a buffer to reduce contention for the main memory bus.

There could be separate buffers. Or it may be the same shared memory. Neither will add milliseconds to post-processing latency.
 
The optical path from a photon hitting the front of a lens to an OVF has effectively 0 ms of latency. The digital path of photon > sensor > read > de-bayer > framebuffer > EVF is non-zero. It doesn't matter if it's a 1 fps or a 1000 fps camera. Is that latency enough to matter?

There's a threshold below which the latency doesn't matter, because typical human reaction time before pressing the shutter button is about 0.1 s. So 1 s of latency obviously matters, 0.033 s (30 Hz EVF) perhaps matters, and 0.0083 s (120 Hz EVF) is probably negligible. That applies to cases where you take a single shot and your camera stays still. When you do continuous shooting, a good focusing system matters a lot more than the EVF's delay.

When you pan and follow your subject, it'll heavily depend on the angular speed of the subject. I guess for extreme cases even 0.0083 s may be too much.
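The comparison above is easy to put in numbers. A small sketch (the 0.1 s reaction time is the commonly cited ballpark from the post, not a measured figure):

```c
/* Express an EVF lag as a fraction of a ballpark 0.1 s human reaction
 * time, as in the argument above. All figures are illustrative. */
static const double REACTION_S = 0.100;   /* assumed reaction time */

double lag_fraction(double lag_s) {
    return lag_s / REACTION_S;
}
```

By this yardstick a 30 Hz EVF lags by roughly a third of the reaction time, hard to dismiss, while a 120 Hz EVF lags by under a tenth of it.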
 

davidhfe

CR Pro
Sep 9, 2015
346
518
There's a threshold below which the latency doesn't matter, because typical human reaction time before pressing the shutter button is about 0.1 s. So 1 s of latency obviously matters, 0.033 s (30 Hz EVF) perhaps matters, and 0.0083 s (120 Hz EVF) is probably negligible. That applies to cases where you take a single shot and your camera stays still. When you do continuous shooting, a good focusing system matters a lot more than the EVF's delay.

When you pan and follow your subject, it'll heavily depend on the angular speed of the subject. I guess for extreme cases even 0.0083 s may be too much.

I will take one more stab at this:

Latency vs Throughput - you keep calculating “Latency” in terms of Frame rate. The refresh of the OLED is a component of latency but it is only one component. You cannot just say 120hz=5ms=no problem.

Human Performance - Reaction time varies wildly depending on the task. But given that you're adding the lag of the digital system to the human reaction time, every ms just adds up. And again, if you've been shooting motor sports or falcons diving at 200 mph for over a decade, you've learned to anticipate that shot. An EVF delay may require you to relearn that anticipation. (And this is additive to shutter release delays as well.)

Thresholds and the point of this - I agree there is a latency so low it doesn't matter. OVFs still have latency, just very, very low latency, because photons are pretty quick little suckers ;)

The point is that you don't know the latency of this system. I do not know it. Only some engineers at Canon do, along with any reviewers who bothered to rig a test in the last few days. All we really know is that it's non-zero, and that it's larger than the latency of an OVF. You and I are unconcerned, assuming it's good enough or perhaps even imperceptible. Others are concerned.
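The latency-vs-throughput distinction can be made concrete with toy numbers: if the digital path has several overlapped stages, the EVF can refresh every 1/120 s while each frame still spends several frame times in flight. The three stages and their durations below are assumptions for illustration, not measured values for any camera.

```c
/* Toy staged pipeline: stages overlap, so a new frame finishes every
 * "slowest stage" interval (throughput), but one frame's photon-to-
 * display latency is the SUM of the stages. Durations are made up. */
static const double STAGE_S[] = { 1.0 / 120.0, 1.0 / 120.0, 1.0 / 120.0 };
enum { N_STAGES = sizeof STAGE_S / sizeof STAGE_S[0] };

/* Interval between finished frames: what "120 Hz" measures. */
double frame_interval_s(void) {
    double slowest = 0.0;
    for (int i = 0; i < N_STAGES; i++)
        if (STAGE_S[i] > slowest)
            slowest = STAGE_S[i];
    return slowest;
}

/* End-to-end delay of a single frame: what the eye experiences. */
double photon_to_display_s(void) {
    double total = 0.0;
    for (int i = 0; i < N_STAGES; i++)
        total += STAGE_S[i];
    return total;
}
```

With these made-up numbers the display still refreshes at 120 Hz, yet the end-to-end latency is 25 ms: the refresh rate bounds throughput, not latency.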
 

Soren Hakanlind

Professional photographer
Feb 1, 2020
12
11
Sweden
hakanlind.com
The gap between the R5 and R6 is too big. As a professional with a lot of EF lenses, the 5D Mark IV and 1DX are perfect for me. I don't need the video stuff in the R5, and the R6 has only around 20 megapixels. If the R6 had around 30 megapixels it would have been an easy choice. But now? I'm still waiting for the EOS 5D Mark V. Come on! We are thousands of professional photographers waiting for the EOS 5D Mark V! We want 30-35 MP and the same specs as the R6.
 
Last edited:
Jan 30, 2020
410
513
The gap between the R5 and R6 is too big. As a professional with a lot of EF lenses, the 5D Mark IV and 1DX are perfect for me. I don't need the video stuff in the R5, and the R6 has only around 20 megapixels. If the R6 had around 30 megapixels it would have been an easy choice. But now? I'm still waiting for the EOS 5D Mark V. Come on! We are thousands of professional photographers waiting for the EOS 5D Mark V! We want 30-35 MP and the same specs as the R6.
Updated R with joystick and dual card slot...same cropped video and 32 mp.
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,722
2,655
I guess I don't understand this line of reasoning. To me, that's like saying that Canon couldn't possibly have made a 7D because the Rebels were not big enough or robust enough. The decision will be driven by the sensor size and the lens mount. The M mount is the mount Canon made for APS-C sensors in mirrorless cameras, for a specific market segment desiring compact, lightweight, and affordable cameras with a limited number of lightweight, compact, and affordable lenses. Canon can make any size and style of body they choose for the M mount. As I said, I'd prefer an RF mount, but I'm not sure Canon will agree.
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,722
2,655
This actually makes me really sad. I understand why, but I've been shooting with their 6D series for a while now and I've wanted to get into the 5D series, and I put off buying a Mark IV in hopes of the Mark V coming out as their last 5D-series body.

There's never been a better time to buy a 5D Mark IV. Canon authorized dealers in the U.S. were recently selling them for $1,999 USD. Those factory sponsored "instant rebates" have expired, but they'll probably return with the fall rebate season. The 5D Mark IV is a LOT of camera for less than $2K!
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,722
2,655
Actually, I think that is the most likely scenario as well. Cropping to 1.6 is so easy with the R (In fact, I'm embarrassed to admit that I inadvertently turned it on once during an event) that the only reason for a dedicated APS-C sensor R camera that I can imagine would be cost and I'm not sure Canon would have any incentive to make or sell an APS-C R camera at a substantial discount.

When the original 7D first came out, the production cost between full frame and crop sensor was substantial and the 7D offered people a premium camera at a significantly more affordable price than full frame. Now, with bargain full frames available, that market isn't as significant.

In my mind, that only leaves the birding, wildlife and sports enthusiast market that wants a crop sensor for more perceived reach (often as a second body). That market is not very price sensitive, so I'm not sure Canon really has to offer a dedicated APS-C body at all, if a high megapixel body can meet the same need. Particularly if people are trading two camera purchases for one.

In my experience the birding, wildlife, and sports market are the only folks that bought the 7D line in any significant numbers. Even more so with the 7D Mark II than with the original 7D.

For sports, at least, it was not just driven by enthusiasts but also by semi-pros on a very tight budget shooting youth league, high school, and even small college sports and trying to make more in sales than they spent on gear. Admittedly, that class of photographers has been slowly disappearing since the 7D Mark II was rolled out in late 2014 as it seems harder and harder to get folks to actually buy event-based photographs.
 
Latency vs Throughput - you keep calculating “Latency” in terms of Frame rate.

No I don't. There's delay after the capture/exposure and there's delay between the physical event and displaying it in the EVF.

The refresh of the OLED is a component of latency but it is only one component. You cannot just say 120hz=5ms=no problem.

120hz ≘ 1/120s = 8.3ms and I've never said it wasn't a problem. The argument above was mostly about the possibility of the delay being more than 1/120 s. I was arguing that there are no mysterious pipelines holding several frames in a queue such that the actual latency is more than 1/120 s.

Also, I was saying that the gap between OVFs and EVFs is narrowing, especially with 120 Hz EVFs. A lag of less than 10% of human reaction time is negligible in most practical cases.

But given that you're adding the lag of the digital system to the human reaction time, every ms just adds up. And again, if you've been shooting motor sports or falcons diving at 200 mph for over a decade, you've learned to anticipate that shot. An EVF delay may require you to relearn that anticipation. (And this is additive to shutter release delays as well.)

200 mph by itself doesn't matter; what matters is the angular speed. Some objects moving relatively slowly but close to the camera may have high angular speed. And a high-speed object moving directly towards the camera is more challenging for the AF than for the EVF with its latency.

And that's exactly what I said in the previous message - for objects with high angular speed even 1/120 s may be a problem. It is a problem, but in niche applications. Yes, you may need to re-learn how to frame properly and/or zoom out a bit, etc.
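The angular-speed point lends itself to quick arithmetic: what matters is how far the subject sweeps across the field of view during one frame of lag. A sketch with made-up numbers (the subject speeds, distances, and lag in the examples are illustrative, not from any real shooting scenario):

```c
/* Degrees of angular travel during lag_s seconds for a subject moving
 * broadside at v_mps metres per second at distance d_m metres.
 * Angular speed for the broadside case is approximately v / d rad/s. */
static const double DEG_PER_RAD = 57.29577951308232;

double sweep_deg(double v_mps, double d_m, double lag_s) {
    return (v_mps / d_m) * lag_s * DEG_PER_RAD;
}
```

For example, a car at 30 m/s passing 10 m away sweeps about 1.4 degrees during an 8.3 ms lag, noticeable when panning at long focal lengths, while a bird at 20 m/s but 100 m out sweeps under 0.1 degrees, which is negligible.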
 
Last edited:

stevelee

FT-QL
CR Pro
Jul 6, 2017
2,383
1,064
Davidson, NC
In my experience the birding, wildlife, and sports market are the only folks that bought the 7D line in any significant numbers. Even more so with the 7D Mark II than with the original 7D.

For sports, at least, it was not just driven by enthusiasts but also by semi-pros on a very tight budget shooting youth league, high school, and even small college sports and trying to make more in sales than they spent on gear. Admittedly, that class of photographers has been slowly disappearing since the 7D Mark II was rolled out in late 2014 as it seems harder and harder to get folks to actually buy event-based photographs.
Some years back I went to a Kelby seminar at the convention center in Charlotte. At a break I found out that folks around me had sons playing high school football. They all either had a 7D or hoped to buy one soon. One guy sold photos to other parents and did pretty well with it for a hobby, plus he was shooting his own son anyway.

At the other end of the scale, I have a friend who shoots college sports professionally. He has contracts with various schools in the area and sells photos from his web site. His photos also appear in newspapers and on their web sites. He said that cancellation of spring sports had cost him $50k in income up to that point. Fall sports don't sound promising. His work is excellent. He shoots Nikons, so I don't know the models, but I would assume something top end. He obviously has some really long lenses.

He has flash guns stationed in the rafters of the basketball arena here. He says that is how he gets such good color balance, yet I never notice the flashes going off during games. The arena has installed new lights that look better on TV, so he might not need the flash as much. With the old lights when I shot video it had a bit of sickly green cast to it. I was shooting pick-up games, so the camera was seeing all the empty red seats, and AWB shifted toward cyan, I guess. I did try one night using a white card to set a custom balance, but it didn't help much. I'm not great working with color grading in FCP X.

Last year I was in Denmark when the games were played, and of course they didn't happen this year. So I've not tried shooting under the new lights. I'm not sure whether they would use them during pick-up games anyway, maybe just for real games for TV. If life is closer to normal next year, maybe I'll find out, or I'll feel let out of a cage and leave for Norway or somewhere else cool that I haven't been to.
 

bbb34

5D mk V
Jul 24, 2012
156
173
Amsterdam
120hz = 1/120s = 8.3ms

I assume you know this is wrong notation, but in any case: please don't write it. It causes a migraine looking at it! :eek:

120 Hz = 120 1/s

1 / 120 Hz = (1 / 120) s = 8.3 ms​

To keep it short, one may use one of the symbols that are used for "corresponds to", like ≙ or ≘

120 Hz ≘ 8.3 ms​
 
I assume you know this is wrong notation, but in any case: please don't write it. It causes a migraine looking at it! :eek:

120 Hz = 120 1/s

1 / 120 Hz = (1 / 120) s = 8.3 ms​

To keep it short, one may use one of the symbols that are used for "corresponds to", like ≙ or ≘

120 Hz ≘ 8.3 ms​

Ok, I fixed it for you! Of course it was wrong notation because of different units.
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,722
2,655
Some years back I went to a Kelby seminar at the convention center in Charlotte. At a break I found out that folks around me had sons playing high school football. They all either had a 7D or hoped to buy one soon. One guy sold photos to other parents and did pretty well with it for a hobby, plus he was shooting his own son anyway.

At the other end of the scale, I have a friend who shoots college sports professionally. He has contracts with various schools in the area and sells photos from his web site. His photos also appear in newspapers and on their web sites. He said that cancellation of spring sports had cost him $50k in income up to that point. Fall sports don't sound promising. His work is excellent. He shoots Nikons, so I don't know the models, but I would assume something top end. He obviously has some really long lenses.

He has flash guns stationed in the rafters of the basketball arena here. He says that is how he gets such good color balance, yet I never notice the flashes going off during games. The arena has installed new lights that look better on TV, so he might not need the flash as much. With the old lights when I shot video it had a bit of sickly green cast to it. I was shooting pick-up games, so the camera was seeing all the empty red seats, and AWB shifted toward cyan, I guess. I did try one night using a white card to set a custom balance, but it didn't help much. I'm not great working with color grading in FCP X.

Last year I was in Denmark when the games were played, and of course they didn't happen this year. So I've not tried shooting under the new lights. I'm not sure whether they would use them during pick-up games anyway, maybe just for real games for TV. If life is closer to normal next year, maybe I'll find out, or I'll feel let out of a cage and leave for Norway or somewhere else cool that I haven't been to.

If a school installs new lighting in their gym to make it better for TV cameras, it's not really a "small" college in my mind. It may be small for a D1 program, but that's a far cry from truly small non-scholarshipped D3 programs or even NAIA programs.
 