There will not be an EOS 5D Mark V [CR2]

dtaylor

Canon 5Ds
Jul 26, 2011
1,805
1,433
There's a threading API as an abstraction for programmers, and there are hardware threads/CPU cores.

There are no "hardware threads." Threads are entirely an abstraction for programmers. There are silicon features to make that abstraction work better and more efficiently, but a processor does not know what a "thread" is nor does it manage them apart from executing the related functions in an OS.

Nope, I'm talking about both. With DIGIC, the processing pipeline is all software,

Again, your understanding of what's happening at the hardware level...the pieces of silicon and the buses they use to communicate...is far too simplistic. To give a view that's still simplified but illustrates what I mean: while DIGIC does its thing on bytes in RAM (the middle), the sensor is exposing/reading/resetting, and the EVF is refreshing its physical elements.

While some of what you said about DIGIC is accurate, it's still the middle part, and it still says nothing about how long it takes to process a frame nor whether or not multiple frames are worked on simultaneously to sustain 120 fps throughput.

We don't know the exact architecture, but the only thing that actually matters for this discussion is how much parallel processing is happening there.

That's only part of what matters. We keep losing sight of what people really notice, and that's stutter while burst shooting: the 'bubbles' in the pipeline when the sensor has to expose and read out a full resolution frame and the DIGIC processor has to work on that full resolution frame.

Parallel in the sense that two or more consecutive captured frames are processed at the same time. I don't think that's what's happening there; it's meaningless and takes more memory.

It's quite meaningful if a single core cannot prepare 120 frames for the EVF every second. The frames are roughly 1.92mp (3 dots in an EVF = 1 pixel) and have to be debayered, brightness adjusted, and color adjusted at the minimum. Hard to say without knowing core frequency, memory bus speed, what other steps are involved, and what special instructions have been added to the ARM instruction set in DIGIC. But I would guess more than one core is involved.
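To put rough numbers on it (a back-of-the-envelope sketch; the 1 GHz clock is purely an assumption, not a known DIGIC spec):

    /* Rough per-pixel budget for one core preparing EVF frames at 120 fps.
     * The 1 GHz clock is an assumed figure for illustration only. */
    #include <stdio.h>

    int main(void) {
        const double pixels_per_frame = 1.92e6;  /* ~5.76M dots / 3 dots per pixel */
        const double fps = 120.0;
        const double clock_hz = 1.0e9;           /* assumed core clock */

        double cycles_per_frame = clock_hz / fps;                  /* ~8.3M cycles */
        double cycles_per_pixel = cycles_per_frame / pixels_per_frame;

        printf("per-frame budget: %.2f ms\n", 1000.0 / fps);
        printf("cycles per pixel on one core: %.1f\n", cycles_per_pixel);
        return 0;
    }

That works out to roughly 4 cycles per pixel for debayer, brightness, and color combined, which is why I doubt a single core handles it alone.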

There could be separate buffers, or it may be the same shared memory. Neither will add milliseconds to postprocessing latency.

Memory contention can produce those kinds of performance impacts. RAM is slow which is why processors have three levels of cache with typically two that are discrete to each core (third shared for all cores). And RAM is really slow when multiple cores all stall waiting for access to the memory controller and bus. Now I could be wrong and the EVF could use main memory directly to refresh its physical pixels. I'm not saying it's impossible to coordinate the memory accesses and keep it all smooth. But the odds are there's a discrete buffer.
 
  • Like
Reactions: 1 user
Upvote 0

dtaylor

Canon 5Ds
Jul 26, 2011
1,805
1,433
I was arguing there are no mysterious pipelines holding several frames in a queue so that the actual latency is more than 1/120s.

We know what you're arguing, but your argument rests on a very simple and inaccurate view of what's happening on the silicon. Right off the bat there's a sensor, DIGIC, and an EVF. So even with that still oversimplified view there are 3 frames in play at any given moment: the frame being captured, the frame being processed, and the frame being refreshed to the EVF. Latency from the moment the event occurred to the moment you see it on screen would be 16.6ms even if DIGIC broke the laws of physics and did its work instantaneously.
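Here's a toy model of that point (the per-stage times are assumed for illustration; the real numbers are unknown):

    /* Toy 3-stage pipeline: expose -> process -> display, one frame per stage.
     * Throughput is set by the slowest stage; latency is the sum of all stages. */
    #include <stdio.h>

    int main(void) {
        const double stage_ms[3] = { 8.33, 8.33, 8.33 };  /* sensor, DIGIC, EVF (assumed) */
        double latency_ms = 0.0, slowest_ms = 0.0;

        for (int i = 0; i < 3; i++) {
            latency_ms += stage_ms[i];
            if (stage_ms[i] > slowest_ms) slowest_ms = stage_ms[i];
        }

        printf("throughput: %.0f fps\n", 1000.0 / slowest_ms);  /* 120 fps */
        printf("latency:    %.1f ms\n", latency_ms);            /* 25 ms */
        return 0;
    }

Make the middle stage take zero time and you still get roughly two frame periods of latency at a steady 120 fps.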
 
  • Like
Reactions: 1 user
Upvote 0

stevelee

FT-QL
CR Pro
Jul 6, 2017
2,383
1,064
Davidson, NC
If a school installs new lighting in their gym to make it better for TV cameras, it's not really a "small" college in my mind. It may be small for a D1 program, but that's a far cry from truly small non-scholarshipped D3 programs or even NAIA programs.
It is a small school in terms of enrollment, but plays D1 sports in the Atlantic 10. My friend's clients are all Division 1 schools, mostly in the Southern Conference. The pick-up games I have shot came after hours at basketball camp. Former players, pros from Europe, current players, incoming freshmen, prospects, friends of some of the above, and sometimes some Charlotte Hornets would play. So the games are private, and I promised the coach that any video I post on YouTube will have restricted circulation. Before I started shooting the videos, Steph Curry, who was already a pro, and his little brother Seth, who was still playing for Duke, both played one night. Their dad Dell came, too, I guess to pick up Seth. We walked out together. I wish I had video of that night. Anyhow, my videos have a dedicated following, mostly people wanting to see how the new freshmen stack up. I would use the occasion to learn how to shoot video on whatever new camera I had, including my iPhone 6S one year. I shot 4K and edited it down to 1080p, using FCP X, in essence for digital zooming in. I always apologized for my bad color casts, but my audience didn't care.
 
Upvote 0
There are no "hardware threads." Threads are entirely an abstraction for programmers. There are silicon features to make that abstraction work better and more efficiently, but a processor does not know what a "thread" is nor does it manage them apart from executing the related functions in an OS.

Of course there are lol, I've worked with multithreading and real-time systems for many years, including non-Intel h/w architectures. Maybe do some googling on 'software vs hardware threads'. On certain devices and corresponding operating systems you have direct control over h/w threads through the application-level API (there's always control at the operating system level).

Again, your understanding of what's happening at the hardware level...the pieces of silicon and the buses they use to communicate...is far too simplistic. To give a view that's still simplified but illustrates what I mean: while DIGIC does its thing on bytes in RAM (the middle), the sensor is exposing/reading/resetting, and the EVF is refreshing its physical elements.

It is simplistic in terms of Canon h/w architecture because we don't know everything. But what you're saying here doesn't change anything about my point, and I've already highlighted several times before that most likely the system is doing postprocessing while the sensor is doing exposure. From the beginning I was calculating the delay between readout and displaying the processed buffer in the EVF. It didn't include exposure time.

Try drawing a timeline diagram and you'll see that your pipeline can't be taking more than 1/120s if there's only one frame's data in the pipeline.

It's quite meaningful if a single core cannot prepare 120 frames for the EVF every second. The frames are roughly 1.92mp (3 dots in an EVF = 1 pixel) and have to be debayered, brightness adjusted, and color adjusted at the minimum. Hard to say without knowing core frequency, memory bus speed, what other steps are involved, and what special instructions have been added to the ARM instruction set in DIGIC. But I would guess more than one core is involved.

All these adjustments can be done in one go, and I suspect there's some parallel processing happening, but crucially, no more than one captured frame in the pipeline.

Memory contention can produce those kinds of performance impacts. RAM is slow which is why processors have three levels of cache with typically two that are discrete to each core (third shared for all cores). And RAM is really slow when multiple cores all stall waiting for access to the memory controller and bus. Now I could be wrong and the EVF could use main memory directly to refresh its physical pixels. I'm not saying it's impossible to coordinate the memory accesses and keep it all smooth. But the odds are there's a discrete buffer.

Have you ever heard of CPU cache? L1, L2?.. There are L1 and L2 caches in ARM CPUs; I guess DIGIC X isn't an exception. In fact the feed for the EVF is very small and lightweight compared to the full video data or continuous shooting.

Latency from the moment the event occurred to the moment you see it on screen would be 16.6ms even if DIGIC broke the laws of physics and did its work instantaneously.

Again, try drawing a timeline diagram. My point is, the delay between readout and EVF shouldn't be longer than 1/120s. That's an upper limit; in fact it can be less than that. Now the delay between the physical event and the EVF may be longer (and I covered it in this thread already), but the added time can't be longer than the exposure time. If a flash happens during exposure, you'll see it in the EVF in the nearest frame update.
 
  • Haha
Reactions: 1 user
Upvote 0

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,722
2,655
It is a small school in terms of enrollment, but plays D1 sports in the Atlantic 10. My friend's clients are all Division 1 schools, mostly in the Southern Conference. The pick-up games I have shot came after hours at basketball camp. Former players, pros from Europe, current players, incoming freshmen, prospects, friends of some of the above, and sometimes some Charlotte Hornets would play. So the games are private, and I promised the coach that any video I post on YouTube will have restricted circulation. Before I started shooting the videos, Steph Curry, who was already a pro, and his little brother Seth, who was still playing for Duke, both played one night. Their dad Dell came, too, I guess to pick up Seth. We walked out together. I wish I had video of that night. Anyhow, my videos have a dedicated following, mostly people wanting to see how the new freshmen stack up. I would use the occasion to learn how to shoot video on whatever new camera I had, including my iPhone 6S one year. I shot 4K and edited it down to 1080p, using FCP X, in essence for digital zooming in. I always apologized for my bad color casts, but my audience didn't care.


Regardless of enrollment, a D1 athletic program is not what I consider "small". D2/D3/NAIA is what I'm talking about.
 
  • Like
Reactions: 1 user
Upvote 0

stevelee

FT-QL
CR Pro
Jul 6, 2017
2,383
1,064
Davidson, NC
Regardless of enrollment, a D1 athletic program is not what I consider "small". D2/D3/NAIA is what I'm talking about.
I'm sorry. I have no idea what message long ago in this thread got me talking about video color casts and got you talking about the definition of the word "small." I think at some point I had posted about meeting parents of high school football players who either had or wanted the 7D.

But yes, in terms of budgets, D1 costs a lot more than other divisions. That's why many very fine schools choose not to go the D1 scholarship route. I have met the men's basketball coaches at Swarthmore and Emory. Both are doing excellent work where they are in D3, vying for national championships. Both would make fine D1 coaches. I met the Emory coach several years ago when his son was in basketball camp here and we were watching the after-hours pick-up game together. I know much better an assistant coach at Dartmouth and the head coach at Macalester. Both would call me by name if they saw me. That all strikes me as odd, since I don't even consider myself a sports fan.
 
  • Like
Reactions: 1 user
Upvote 0

dtaylor

Canon 5Ds
Jul 26, 2011
1,805
1,433
Of course there are lol, I've worked with multithreading and real-time systems for many years, including non-Intel h/w architectures.

Tell me what instructions in x86, x86-64, or ARM allow you to spawn threads that are managed by the silicon without an OS. Which instructions do I use to set thread stack size, priority, pause/resume threads, kill them? (Hint: they're not there.)
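For contrast, here's what creating a thread actually looks like: a call into the OS/threading library, not a machine instruction (a minimal pthread sketch, nothing camera-specific; build with -lpthread):

    /* Threads are created and managed through the OS (pthreads / syscalls),
     * not through a dedicated CPU instruction. */
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) {
        printf("hello from a thread the OS scheduled (%ld)\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        /* Stack size, priority, affinity are set via OS-level attributes
         * (e.g. pthread_attr_setstacksize), not via the instruction set. */
        if (pthread_create(&t, NULL, worker, (void *)1L) != 0) return 1;
        pthread_join(t, NULL);
        return 0;
    }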

Again, there are instructions and features on the silicon designed to make thread switching and management more efficient, but that's not the same thing as a "hardware thread." When people use that term they are talking about those features, they are not implying that the CPU can spawn/manage/retire a pool of threads all on its own.

Try drawing a timeline diagram and you'll see that your pipeline can't be taking more than 1/120s if there's only one frame's data in the pipeline.

I can draw a timeline with 120 fps throughput and a 120 SECOND latency. Nothing about 120 fps throughput demands a latency of any given length of time. Each STEP has to be 1/120th or less, but not all the steps combined.

All these adjustments can be done in one go,

I have a paper on my drive describing one of the fastest demosaicing algorithms known. The algorithm is not "one go", it would compile to many instructions for the CPU that have to be repeated for each pixel. And I don't see how brightness or color could be worked into the algorithm without increasing instruction count and execution time.
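Even a toy bilinear interpolation of just the green plane (nothing like the algorithm in that paper, and obviously not Canon's code) shows what the per-pixel work looks like:

    /* Toy green-channel interpolation over an RGGB mosaic: several loads,
     * adds, and a divide per pixel, before brightness or color are touched. */
    #include <stdint.h>

    void interp_green(const uint16_t *raw, uint16_t *green, int w, int h) {
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int i = y * w + x;
                if (((x + y) & 1) == 1) {
                    green[i] = raw[i];  /* native green sample in RGGB */
                } else {
                    /* average the four green neighbours */
                    green[i] = (uint16_t)((raw[i - 1] + raw[i + 1] +
                                           raw[i - w] + raw[i + w]) / 4);
                }
            }
        }
    }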

but crucially, no more than one captured frame in the pipeline.

This entire debate boils down to you being unable to imagine a timeline that maintains 120 fps yet has a latency longer than 1/120th. Which I suspect is because you've never dealt with computing below a certain level. (Latency vs throughput is common in assembly because for the vast majority of execution units those are two different times. A 'pipeline' from sensor to EVF is no different in concept.) I've said from early on that your view of how this works is very high level and abstract.

Have you ever heard of CPU cache? L1, L2?..

Did you really ask that after quoting me saying "RAM is slow which is why processors have three levels of cache with typically two that are discrete to each core (third shared for all cores)."? Really???

There are L1 and L2 caches in ARM CPUs; I guess DIGIC X isn't an exception. In fact the feed for the EVF is very small and lightweight compared to the full video data or continuous shooting.

Caches are small and do not make memory contention magically disappear. In fact, high level languages and design patterns often hamstring cache prediction algorithms which are, by nature of being part of the silicon, very simple. If the EVF does not have its own buffer then getting everything to work smoothly would likely involve some very careful fine tuning to make sure data is where it needs to be at certain clock cycles. OTOH just giving the EVF a buffer makes synchronization extremely simple.
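Something along these lines, purely as an assumed design sketch (the EVF dimensions and the double-buffer scheme are my guesses, not known Canon internals):

    /* Double-buffered EVF feed: DIGIC writes the back buffer while the EVF
     * scans out the front buffer; a pointer flip at vsync avoids contention. */
    #include <string.h>

    #define EVF_W 1600
    #define EVF_H 1200   /* 1600*1200 = 1.92 MP, x3 dots = 5.76M dots (assumed) */

    typedef struct {
        unsigned char buf[2][EVF_W * EVF_H * 3];  /* two RGB frame buffers */
        int front;                                /* index the EVF is displaying */
    } evf_feed_t;

    /* DIGIC side: write the next frame into the buffer the EVF isn't using. */
    void evf_submit_frame(evf_feed_t *f, const unsigned char *rgb) {
        memcpy(f->buf[1 - f->front], rgb, EVF_W * EVF_H * 3);
    }

    /* At EVF vsync: flip so the freshly written frame is scanned out next. */
    void evf_vsync_flip(evf_feed_t *f) {
        f->front = 1 - f->front;
    }

In a real design those buffers would live in dedicated display RAM, but the point is the same: the EVF never reads memory DIGIC is currently writing.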
 
  • Like
Reactions: 1 user
Upvote 0

dtaylor

Canon 5Ds
Jul 26, 2011
1,805
1,433
Well it looks like we have our answer:

There doesn't seem to be appreciable lag while not shooting. (Which does not mean latency = throughput. It just means latency is too small to be a human problem.) But when shooting, both Tony and Chelsea struggled to keep the subject in frame. As I said earlier in this long-winded debate, the bubbles introduced by full resolution capture/processing/storage are what most people notice and complain about. (Though general latency has been a problem as well on some EVFs.) I notice this stutter even on A9 bodies, and while I think I can anticipate/track reasonably well with those, it's still not as nice as an OVF.

Chelsea also noted battery life and thermal issues while shooting stills. Her battery only lasted 1 hour. And while the camera did not shut down shooting stills, switching to video she realized that shooting stills did drive up temperatures because she had only 2 minutes recording time available. This raises another issue in my mind that nobody seems to be looking at with mirrorless: what are the DR and high ISO measurements after the camera has been on with EVF and shooting for a while? I suspect it's one set of values in an air conditioned lab, and another after several hours of EVF and shooting. This may not matter for most shots, but for long exposure night/astro photography it could be significant.

Now I don't want to sound like a Sony fan claiming 'Canon is doomed.' These are still very good cameras IMHO. The issues people are discovering are issues which exist across the mirrorless world. But those of us who complain that we prefer an OVF for some situations are vindicated (yet again). And until EVFs truly match OVFs, I would love to see DSLR versions of some R bodies.
 
  • Like
Reactions: 5 users
Upvote 0
Tell me what instructions in x86, x86-64, or ARM allow you to spawn threads that are managed by the silicon without an OS. Which instructions do I use to set thread stack size, priority, pause/resume threads, kill them? (Hint: they're not there.)

"Managed by silicon" - why would we need that, especially in terms of this talk? Everything is controlled from code. In Intel CPUs, the threads are controlled through interruptions initially from the main thread/core. Each thread has its own set of registers. It's all supported 'in silicon', that's why there are hardware threads.

Again, there are instructions and features on the silicon designed to make thread switching and management more efficient, but that's not the same thing as a "hardware thread." When people use that term they are talking about those features, they are not implying that the CPU can spawn/manage/retire a pool of threads all on its own.

They are not implying that, but a hardware thread means separate instruction pipeline and separate set of registers. That's why it's 'hardware'. The threads are obviously managed by the OS and nobody's ever implied the silicon would spawn threads on its own.

Also they're called 'threads' in the Intel CPU instruction manuals - so for manufacturers there are hardware threads but there are none for you? Or you're inventing new terminology just in order to prove your apparently incorrect statement that there are no hardware threads? Very interesting twist.

I can draw a timeline with 120 fps throughput and a 120 SECOND latency. Nothing about 120 fps throughput demands a latency of any given length of time. Each STEP has to be 1/120th or less, but not all the steps combined.

You missed the point. Then you will have 120 frames processed at the same time in a single thread. In discrete steps, but you will have 120 frames in your pipeline, you will need 120x the memory buffers, and you will need context switching every time you go from one frame's data to another. Which makes such a system absolutely pointless.

I have a paper on my drive describing one of the fastest demosaicing algorithms known. The algorithm is not "one go", it would compile to many instructions for the CPU that have to be repeated for each pixel. And I don't see how brightness or color could be worked into the algorithm without increasing instruction count and execution time.

'In one go' = 'you don't need to switch context' and 'you don't need to walk the buffer several times'. Obviously you will need more instructions.

Caches are small and do not make memory contention magically disappear.

An L2 cache can be as big as half the EVF buffer, worst case 5-10% of the buffer, which still drastically increases performance if you don't do context switching and processing of multiple frames in the pipeline.
 
Upvote 0

dtaylor

Canon 5Ds
Jul 26, 2011
1,805
1,433
"Managed by silicon" - why would we need that, especially in terms of this talk?

Because you keep insisting there are "hardware threads" and there are not.

Each thread has its own set of registers.

At best a core has two sets of registers (hyperthreaded) and at 'worst' only one. The OS manages what's swapped in/out of those registers at any given moment as thread switching occurs.

They are not implying that, but a hardware thread means separate instruction pipeline and separate set of registers.

Hyperthreaded cores do not have distinct execution pipelines for threads. Execution units, caches, and the system bus are all shared while some resources, like the register set, are duplicated. Hyperthreading is basically a way to keep execution pipelines better fed with instructions since any thread will tend to have stalls or bubbles which leave units idle.

Or you're inventing new terminology just in order to prove your apparently incorrect statement that there are no hardware threads?

No, I am explaining what the terminology actually means. And that's relevant because when you first used the term you did so in a manner that implied the processors could do something that they cannot, like you do in this post.

You missed the point. Then you will have 120 frames processed at the same time in a single thread.

Much more likely to have 120 separate threads. (Actually 120x120 since my example was 120s latency and 120 fps throughput.) But the point isn't that any camera would have such extreme latency or such an extreme configuration. The point is that latency is not limited or locked by throughput. They are separate measurements of time.

'In one go' = 'you don't need to switch context' and 'you don't need to walk the buffer several times'.

You're going to 'walk the buffer' (i.e. iterate over the bytes in the frame you're working on) more than once. And I kind of doubt that each core has a massive L2 cache. Perhaps I'm wrong and there are cores with large L2 caches to speed up EVF frame processing. That's possible (EVF frame should be just under 6 MB) and would be cool. But without a discrete EVF buffer you still have contention for the main memory bus because the EVF can't read those caches. If you have an EVF buffer that contention goes way down.

Whatever they did, however they designed it, I can tell you this: latency is almost certainly >8.3ms but low enough that experienced photographers (Tony and Chelsea) did not notice it. And burst shooting disrupts the flow severely enough to make tracking more difficult than an OVF, which they did notice.
 
Upvote 0

SecureGSM

2 x 5D IV
Feb 26, 2017
2,360
1,231
Well it looks like we have our answer:

There doesn't seem to be appreciable lag while not shooting. (Which does not mean latency = throughput. It just means latency is too small to be a human problem.) But when shooting, both Tony and Chelsea struggled to keep the subject in frame. As I said earlier in this long-winded debate, the bubbles introduced by full resolution capture/processing/storage are what most people notice and complain about. (Though general latency has been a problem as well on some EVFs.) I notice this stutter even on A9 bodies, and while I think I can anticipate/track reasonably well with those, it's still not as nice as an OVF.

Chelsea also noted battery life and thermal issues while shooting stills. Her battery only lasted 1 hour. And while the camera did not shut down shooting stills, switching to video she realized that shooting stills did drive up temperatures because she had only 2 minutes recording time available. This raises another issue in my mind that nobody seems to be looking at with mirrorless: what are the DR and high ISO measurements after the camera has been on with EVF and shooting for a while? I suspect it's one set of values in an air conditioned lab, and another after several hours of EVF and shooting. This may not matter for most shots, but for long exposure night/astro photography it could be significant.

Now I don't want to sound like a Sony fan claiming 'Canon is doomed.' These are still very good cameras IMHO. The issues people are discovering are issues which exist across the mirrorless world. But those of us who complain that we prefer an OVF for some situations are vindicated (yet again). And until EVFs truly match OVFs, I would love to see DSLR versions of some R bodies.

+++ But when shooting both Tony and Chelsea struggled to keep the subject in frame.
Thanks for pointing this out. I am wondering if the same issue was confirmed for the R6?
 
Upvote 0
Because you keep insisting there are "hardware threads" and there are not.

And that's a very amusing statement. The term 'hardware threads' doesn't imply they're 'managed by silicon' - in terms of modern CPU architecture(s), hardware threads imply independent instruction pipelines and/or independent contexts (that is, registers). You can go here for example and figure out if there are any 'threads' in the CPU instructions.
Obviously there are software threads also, as an abstraction provided by the operating system for applications. There may be more software threads spawned than there are h/w threads.

Hyperthreaded cores do not have distinct execution pipelines for threads. Execution units, caches, and the system bus are all shared while some resources, like the register set, are duplicated. Hyperthreading is basically a way to keep execution pipelines better fed with instructions since any thread will tend to have stalls or bubbles which leave units idle.

So you're mentioning hardware threads here but insist they don't exist.


Much more likely to have 120 separate threads. (Actually 120x120 since my example was 120s latency and 120 fps throughput.)

That may also be the case (totally unrealistic but feasible), but that's what I stated myself at the beginning of this conversation.

You're going to 'walk the buffer' (i.e. iterate over the bytes in the frame you're working on) more than once

I guess it's a more complex process than simple iteration; likely it processes the buffer in chunks, say 2x2 or 8x8 pixels. Almost certainly DIGIC, with its special instructions, can prepare the data for the EVF in one go and very fast, faster than 8.3ms.
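Something like this tiled traversal is what I have in mind (an assumed sketch, not known DIGIC behaviour): each block stays hot in L1/L2 while all the per-pixel steps are applied to it, instead of streaming the whole frame through the cache once per processing pass:

    /* Process the frame in 8x8 tiles so each tile stays in cache while
     * all per-pixel steps run on it. The gain step is just a stand-in. */
    #include <stdint.h>

    #define TILE 8

    void process_tiled(uint16_t *frame, int w, int h) {
        for (int ty = 0; ty < h; ty += TILE) {
            for (int tx = 0; tx < w; tx += TILE) {
                for (int y = ty; y < ty + TILE && y < h; y++) {
                    for (int x = tx; x < tx + TILE && x < w; x++) {
                        uint16_t *p = &frame[y * w + x];
                        *p = (uint16_t)((*p * 3u) >> 2);  /* placeholder gain */
                    }
                }
            }
        }
    }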
 
Upvote 0

AlanF

Desperately seeking birds
CR Pro
Aug 16, 2012
12,343
22,520
+++ But when shooting both Tony and Chelsea struggled to keep the subject in frame.
Thanks for pointing this out. I am wondering if the same issue was confirmed for the R6?
To be fair, I couldn't keep a bird in frame with a 2xTC on a 500mm L IS (first version) using a DSLR - too heavy and too narrow a FOV. Chelsea is quite a woman to manage that.
 
  • Like
Reactions: 2 users
Upvote 0
Mar 25, 2011
16,848
1,835
I assume you know this is wrong notation, but in any case: please don't write it. It causes a migraine looking at it! :eek:

120 Hz = 120 1/s

1 / 120 Hz = (1 / 120) s = 8.3 ms​

To keep it short, one may use one of the symbols that are used for "corresponds to", like ≙ or ≘

120 Hz ≘ 8.3 ms​
How many here used CPS, KC, MC when in high school? They changed to Hz, kHz, and more; we did not have much call to use GHz back then.
 
Upvote 0

bbb34

5D mk V
Jul 24, 2012
156
173
Amsterdam
How many here used CPS, KC, MC when in high school? They changed to Hz, kHz, and more; we did not have much call to use GHz back then.

Not me. I went to school in Germany. There, the hertz was accepted as part of the MKS system in 1935. I'm a tad too young to have witnessed that.

When did schools in the USA and in the UK switch from CPS to Hz?
 
Upvote 0
Well it looks like we have our answer:

There doesn't seem to be appreciable lag while not shooting. (Which does not mean latency = throughput. It just means latency is too small to be a human problem.) But when shooting, both Tony and Chelsea struggled to keep the subject in frame. As I said earlier in this long-winded debate, the bubbles introduced by full resolution capture/processing/storage are what most people notice and complain about. (Though general latency has been a problem as well on some EVFs.) I notice this stutter even on A9 bodies, and while I think I can anticipate/track reasonably well with those, it's still not as nice as an OVF.

Chelsea also noted battery life and thermal issues while shooting stills. Her battery only lasted 1 hour. And while the camera did not shut down shooting stills, switching to video she realized that shooting stills did drive up temperatures because she had only 2 minutes recording time available. This raises another issue in my mind that nobody seems to be looking at with mirrorless: what are the DR and high ISO measurements after the camera has been on with EVF and shooting for a while? I suspect it's one set of values in an air conditioned lab, and another after several hours of EVF and shooting. This may not matter for most shots, but for long exposure night/astro photography it could be significant.

Now I don't want to sound like a Sony fan claiming 'Canon is doomed.' These are still very good cameras IMHO. The issues people are discovering are issues which exist across the mirrorless world. But those of us who complain that we prefer an OVF for some situations are vindicated (yet again). And until EVFs truly match OVFs, I would love to see DSLR versions of some R bodies.
Right, this is exactly what I don't like about mirrorless. The lag is fine for me 80% of the time, but once in a while I do shoot sports where that lag is a problem; it's the varying lag that causes the human to misinterpret the actual movement when trying to keep the subject in frame. And battery life in real life is a real problem. Imagine on vacation: I normally walk the whole day without any chance to charge, so that battery life makes it mandatory for me to carry some 4-5 batteries daily, which is 1) expensive and 2) takes forever to charge.
 
  • Like
Reactions: 1 user
Upvote 0

SteveC

R5
CR Pro
Sep 3, 2019
2,678
2,592
When I was in high school, humans had 48 chromosomes and we had nine planets.

From Wikipedia: "The number of human chromosomes was published in 1923 by Theophilus Painter. By inspection through the microscope, he counted 24 pairs, which would mean 48 chromosomes. His error was copied by others and it was not until 1956 that the true number, 46, was determined by Indonesia-born cytogeneticist Joe Hin Tjio."

In other words, that was just one of those errors that got propagated endlessly.

Pluto, on the other hand, was a matter of changing the definition of a "planet" (the original definition by the ancient Greeks included the Sun and Moon). Pluto remains what it was beforehand, it's our labels that changed. I do think it doesn't belong in the same "bucket" as the eight currently recognized planets, but I also think those eight planets don't themselves belong in the same bucket as each other either; there are fundamental differences between gas giants and "terrestrial" planets far greater than between the terrestrials and Pluto. Once that rebucketing occurs, and we speak of terrestrials, asteroids, gas giants, and kuiper belt objects as separate classes of things, perhaps the word "planet" will be a sort of bucket for buckets, and we'll use it to group different classes of things that orbit stars together, and plutons will be included. Or maybe not. Scientists are all about classifying things they observe, and sometimes the classification scheme in use starts to break down. We've seen it in biology (where "kingdom" is no longer the top-level phylogenetic division, and many more levels have been established between things like class, order, and family), we're seeing it now in astronomy.
 
  • Like
Reactions: 1 user
Upvote 0