120 cycles per second is 120Hz, not 1/120Hz.
bbb34 was referring to the fact that 1 / (120 Hz) is 8.3 ms. In terms of physics units, 1/Hz = second, and 1/second = Hz.
There's threading API as abstraction for programmers, and there's hardware threads/CPU cores.
Nope, I'm talking about both. With DIGIC, the processing pipeline is all software.
We don't know the exact architecture, but the only thing that actually matters for this discussion is how much parallel processing is happening there.
Parallel in the sense that two or more consecutively captured frames are processed at the same time. I don't think that's what's happening there; it's meaningless and takes more memory.
There could be separate buffers, or it may be the same shared memory. Neither will add milliseconds to the postprocessing latency.
I was arguing that there are no mysterious pipelines holding several frames in a queue such that the actual latency is more than 1/120 s.
It is a small school in terms of enrollment, but plays D1 sports in the Atlantic 10. My friend's clients are all Division 1 schools, mostly in the Southern Conference. The pick-up games I have shot came after hours at basketball camp. Former players, pros from Europe, current players, incoming freshmen, prospects, friends of some of the above, and sometimes some Charlotte Hornets would play. So the games are private, and I promised the coach that any video I post on YouTube will have restricted circulation. Before I started shooting the videos, Steph Curry, who was already a pro, and his little brother Seth, who was still playing for Duke, both played one night. Their dad Dell came, too, I guess to pick up Seth. We walked out together. I wish I had video of that night. Anyhow, my videos have a dedicated following, mostly people wanting to see how the new freshmen stack up. I would use the occasion to learn how to shoot video on whatever new camera I had, including my iPhone 6S one year. I shot 4K and edited it down to 1080p using FCP X, in essence for digital zooming. I always apologized for my bad color casts, but my audience didn't care.

If a school installs new lighting in their gym to make it better for TV cameras, it's not really a "small" college in my mind. It may be small for a D1 program, but that's a far cry from truly small non-scholarshipped D3 programs or even NAIA programs.
There are no "hardware threads." Threads are entirely an abstraction for programmers. There are silicon features to make that abstraction work better and more efficiently, but a processor does not know what a "thread" is nor does it manage them apart from executing the related functions in an OS.
Again, your understanding of what's happening at the hardware level...the pieces of silicon and the buses they use to communicate...is far too simplistic. To give a view that's still simplified but illustrates what I mean: while DIGIC does its thing on bytes in RAM (the middle), the sensor is exposing/reading/resetting, and the EVF is refreshing its physical elements.
It's quite meaningful if a single core cannot prepare 120 frames for the EVF every second. The frames are roughly 1.92mp (3 dots in an EVF = 1 pixel) and have to be debayered, brightness adjusted, and color adjusted at the minimum. Hard to say without knowing core frequency, memory bus speed, what other steps are involved, and what special instructions have been added to the ARM instruction set in DIGIC. But I would guess more than one core is involved.
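To put rough numbers on that guess, here's a back-of-the-envelope budget for a single core feeding the EVF. The clock speed is purely an assumed figure for illustration; Canon doesn't publish DIGIC X core frequencies.

```python
# Back-of-the-envelope budget for one core feeding a 120 Hz EVF.
# EVF_PIXELS matches the ~1.92 MP figure above; CLOCK_HZ is a
# hypothetical 1 GHz core clock, purely an assumption.
EVF_PIXELS = 1_920_000
FPS = 120
CLOCK_HZ = 1_000_000_000

pixels_per_second = EVF_PIXELS * FPS              # 230,400,000 pixels/s
cycles_per_pixel = CLOCK_HZ / pixels_per_second   # ~4.3 cycles per pixel
```

Roughly four cycles per pixel would have to cover debayering plus brightness and color adjustment, which supports the guess that more than one core (or dedicated silicon, or special instructions) is involved.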
Memory contention can produce those kinds of performance impacts. RAM is slow, which is why processors have three levels of cache, typically two private to each core and a third shared by all cores. And RAM is really slow when multiple cores all stall waiting for access to the memory controller and bus. Now I could be wrong and the EVF could use main memory directly to refresh its physical pixels. I'm not saying it's impossible to coordinate the memory accesses and keep it all smooth. But the odds are there's a discrete buffer.
Latency from the moment the event occurred to the moment you see it on screen would be 16.6ms even if DIGIC broke the laws of physics and did its work instantaneously.
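Reading that figure as frame-period arithmetic (my interpretation of the claim, not an official spec): one full 1/120 s capture interval plus one full 1/120 s EVF refresh interval sets the floor even at zero processing time.

```python
# Latency floor for a 120 Hz capture + 120 Hz display chain,
# assuming zero processing time in between.
frame_period_ms = 1000 / 120            # one 120 Hz cycle: ~8.33 ms
latency_floor_ms = 2 * frame_period_ms  # capture window + refresh window
# latency_floor_ms is ~16.7 ms (the 16.6 ms above, rounded differently)
```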
I'm sorry. I have no idea what message long ago in this thread got me talking about video color casts and got you talking about the definition of the word "small." I think at some point I had posted about meeting parents of high school football players who either had or wanted the 7D.

Regardless of enrollment, a D1 athletic program is not what I consider "small". D2/D3/NAIA is what I'm talking about.
Of course there are, lol. I've worked with multithreading and real-time systems for many years, including non-Intel h/w architectures.
Try drawing a timeline diagram and you'll see that your pipeline can't be taking more than 1/120s if there's only one frame's data in the pipeline.
All these adjustments can be done in one go, but crucially, with no more than one captured frame in the pipeline.
Have you ever heard of CPU caches? L1, L2?
There are L1 and L2 caches in ARM CPUs, and I guess DIGIC X isn't an exception. In fact, the feed for the EVF is very small and lightweight compared to the full video data or continuous shooting.
Tell me what instructions in x86, x86-64, or ARM allow you to spawn threads that are managed by the silicon without an OS. Which instructions do I use to set thread stack size, priority, pause/resume threads, kill them? (Hint: they're not there.)
Again, there are instructions and features on the silicon designed to make thread switching and management more efficient, but that's not the same thing as a "hardware thread." When people use that term they are talking about those features, they are not implying that the CPU can spawn/manage/retire a pool of threads all on its own.
I can draw a timeline with 120 fps throughput and a 120 SECOND latency. Nothing about 120 fps throughput demands a latency of any given length of time. Each STEP has to be 1/120th or less, but not all the steps combined.
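A toy model of that point (the stage count is arbitrary; nobody in this thread knows DIGIC's real pipeline depth): if every stage takes exactly one 1/120 s tick, throughput is 120 fps no matter how deep the pipeline is, while latency grows with depth.

```python
# Toy pipeline: each stage takes exactly one 1/120 s tick.
# In steady state one frame completes per tick, so throughput is
# fixed at 120 fps; latency is depth * tick.
def pipeline_timing(num_stages, tick_s=1 / 120):
    latency_s = num_stages * tick_s
    throughput_fps = 1 / tick_s  # one frame out per tick, regardless of depth
    return latency_s, throughput_fps

lat, fps = pipeline_timing(num_stages=1)
deep_lat, deep_fps = pipeline_timing(num_stages=14_400)
# deep_fps is still 120 fps; only the latency stretched to ~120 seconds
```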
I have a paper on my drive describing one of the fastest demosaicing algorithms known. The algorithm is not "one go"; it compiles to many instructions for the CPU that have to be repeated for each pixel. And I don't see how brightness or color could be worked into the algorithm without increasing instruction count and execution time.
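As a concrete illustration, here is a deliberately naive nearest-neighbor demosaic, far simpler than anything a camera would actually ship: even the cheapest debayer is a per-pixel loop, and folding brightness or color math into it adds instructions to that loop rather than coming for free.

```python
# Naive nearest-neighbor demosaic of an RGGB Bayer mosaic.
# bayer: flat list of sensor values, row-major, RGGB 2x2 tiling.
def demosaic_nearest(bayer, width, height):
    out = []
    for y in range(height):
        for x in range(width):
            # Top-left corner of the 2x2 Bayer cell containing (x, y).
            cx, cy = x - (x % 2), y - (y % 2)
            r = bayer[cy * width + cx]            # R sits at (even, even)
            g = bayer[cy * width + cx + 1]        # G sits at (odd, even)
            b = bayer[(cy + 1) * width + cx + 1]  # B sits at (odd, odd)
            out.append((r, g, b))  # brightness/color math would add work here
    return out

# One 2x2 Bayer cell: R=10, G=20, G=30, B=40.
rgb = demosaic_nearest([10, 20, 30, 40], width=2, height=2)
```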
Caches are small and do not make memory contention magically disappear.
"Managed by silicon" - why would we need that, especially in the context of this discussion?
Each thread has its own set of registers.
They are not implying that, but a hardware thread means a separate instruction pipeline and a separate set of registers.
Or are you inventing new terminology just in order to prove your apparently incorrect statement that there are no hardware threads?
You missed the point. Then you will have 120 frames processed at the same time in a single thread.
'In one go' = 'you don't need to switch context' and 'you don't need to walk the buffer several times'.
Well it looks like we have our answer:
There doesn't seem to be appreciable lag while not shooting. (Which does not mean latency = throughput. It just means latency is too small to be a human problem.) But when shooting, both Tony and Chelsea struggled to keep the subject in frame. As I said earlier in this long-winded debate, the bubbles introduced by full resolution capture/processing/storage are what most people notice and complain about. (Though general latency has been a problem as well on some EVFs.) I notice this stutter even on A9 bodies, and while I think I can anticipate/track reasonably well with those, it's still not as nice as an OVF.
Chelsea also noted battery life and thermal issues while shooting stills. Her battery only lasted 1 hour. And while the camera did not shut down shooting stills, on switching to video she realized that shooting stills did drive up temperatures, because she had only 2 minutes of recording time available. This raises another issue in my mind that nobody seems to be looking at with mirrorless: what are the DR and high ISO measurements after the camera has been on with EVF and shooting for a while? I suspect it's one set of values in an air conditioned lab, and another after several hours of EVF and shooting. This may not matter for most shots, but for long exposure night/astro photography it could be significant.
Now I don't want to sound like a Sony fan claiming 'Canon is doomed.' These are still very good cameras IMHO. The issues people are discovering are issues which exist across the mirrorless world. But those of us who complain that we prefer an OVF for some situations are vindicated (yet again). And until EVFs truly match OVFs, I would love to see DSLR versions of some R bodies.
Because you keep insisting there are "hardware threads" and there are not.
Hyperthreaded cores do not have distinct execution pipelines for threads. Execution units, caches, and the system bus are all shared while some resources, like the register set, are duplicated. Hyperthreading is basically a way to keep execution pipelines better fed with instructions since any thread will tend to have stalls or bubbles which leave units idle.
Much more likely to have 120 separate threads. (Actually 120x120 since my example was 120s latency and 120 fps throughput.)
You're going to 'walk the buffer' (i.e. iterate over the bytes in the frame you're working on) more than once.
To be fair, I couldn't keep a bird in frame with a 2x TC on a 500mm L IS (first version) using a DSLR - too heavy and too narrow a FOV. Chelsea is quite a woman to manage that.
Thanks for pointing this out. I am wondering if the same issue was confirmed for the R6?
How many here used CPS, KC, MC when in high school? They changed to HZ and KHZ and more we did not have much call to use GHZ back then.

I assume you know this is wrong notation, but in any case: please don't write it. It causes a migraine looking at it!
120 Hz = 120 · 1/s
1 / 120 Hz = (1 / 120) s = 8.3 ms
To keep it short, one may use one of the symbols that are used for "corresponds to", like ≙ or ≘
120 Hz ≘ 8.3 ms
How many here used CPS, KC, MC when in high school? They changed to HZ and KHZ and more we did not have much call to use GHZ back then.
Right, this is exactly what I don't like about mirrorless. The lag is fine for me 80% of the time, but once in a while I do shoot sports where that lag is a problem; it's the varying lag that causes the human to misinterpret the actual movement when keeping the subject in frame. And battery in real life is a real problem. Imagine on vacation I normally walk the whole day without any chance to charge; that battery life makes it mandatory for me to carry some 4-5 batteries daily, which is 1) expensive and 2) takes forever to charge.
When I was in high school, humans had 48 chromosomes and we had nine planets.