Canon EOS R1 Specifications [CR2]

Larger pixels capture more photons. This is why the Sony A7S3 has a 12MP sensor, to allow it to capture better low-light video.
Just to drive the point home, compare the DR of the a7S III to the contemporary a7R IV (the a7R V has the same plot). Same size sensor, but one is 60 MP and the other is 12 MP and therefore has significantly larger pixels. By your 'large pixel' logic the a7S III should have lower noise and better DR. Except that...it doesn't. Even at high ISO (typically used in low light), it's no better than a sensor with much smaller pixels but having the same total area.

[Attached chart: a7S III vs. a7R IV dynamic range comparison]
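The point can be sketched numerically. A minimal shot-noise model (hypothetical photon counts in arbitrary units, not measured data) shows that once images are compared at the same viewing size, total collected light, not pixel size, sets the noise floor:

```python
import math

# Hypothetical uniform exposure: photons arriving per mm^2 of sensor.
photons_per_mm2 = 1_000_000
sensor_area_mm2 = 36 * 24  # full-frame, same for both cameras

total_photons = photons_per_mm2 * sensor_area_mm2

for mp in (12, 61):  # a7S III-class vs. a7R IV-class resolutions
    n = total_photons / (mp * 1e6)           # photons per pixel
    snr_per_pixel = n / math.sqrt(n)         # shot noise = sqrt(N)
    snr_whole_image = math.sqrt(total_photons)  # identical for both
    print(f"{mp} MP: per-pixel SNR {snr_per_pixel:5.1f}, "
          f"image-level SNR {snr_whole_image:.0f}")
```

Larger pixels do win per pixel, but downsampling the 61 MP image to the same viewing size averages that advantage away; the image-level figure depends only on total area.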

Now look at it from the other side. Here's the R5 in regular vs. crop mode, same size pixels (the exact same pixels, in fact), same sensor, but imaging with a smaller area of the sensor. If pixel size were the determinant, DR would be the same since the pixels are identical. Clearly, it's not. Using an APS-C area of the FF sensor costs you about a stop of DR. Less total light gathered means more image noise means less DR.

[Attached chart: R5 full-frame vs. APS-C crop mode dynamic range]
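A rough sketch of the arithmetic behind that "about a stop" (a shot-noise-only model; real sensors add read noise, so treat the numbers as approximate):

```python
import math

crop_factor = 1.6                 # Canon APS-C crop
area_ratio = crop_factor ** 2     # FF area / crop area ~= 2.56

# Total light advantage of the full sensor, in stops:
light_stops = math.log2(area_ratio)   # ~1.36

# At equal viewing size, SNR scales with sqrt(light), so the
# DR penalty of the crop is half that in stops:
dr_stops = light_stops / 2            # ~0.68

print(f"{area_ratio:.2f}x light ({light_stops:.2f} stops), "
      f"~{dr_stops:.2f} stop DR cost for the crop")
```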
 
I've watched both Tool's 24MP head to head R3 vs A9III test shot comparison and Jared Polin's real world sports photo comparison and the A9III does suffer as a result of the global shutter. No doubt about it.
I will never watch or believe anything from Northrup, though it seems he's right about this (then again, a stopped analog clock is right twice a day). I lost all respect for him after watching the first and last of his infomercials I've seen, a decade ago. He compared the D810 to the 5DIII, where he used inappropriate settings (i.e., settings the manual recommends against) for the 5DIII AF system, and then "demonstrated" the D810's better performance in a "sports test" consisting of a subject walking slowly toward him. He stated that cropping the 22 MP 5DIII to the framing of the 7DII yields a 14 MP image (either he can't do basic math or, more likely, he doesn't understand how cropping affects resolution and simply divided 22 by the 1.6x crop factor). He concluded, "The 5DIII is ok if all you’re doing is posting to Facebook."
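For the record, the crop arithmetic works like this: a linear crop factor shrinks both image dimensions, so pixel count falls with the square of the factor, not the factor itself:

```python
ff_mp = 22.3   # 5D Mark III sensor resolution
crop = 1.6     # APS-C linear crop factor

naive = ff_mp / crop        # dividing MP by the linear factor: ~13.9
actual = ff_mp / crop ** 2  # area shrinks as crop^2: ~8.7 MP

print(f"naive: {naive:.1f} MP, actual crop: {actual:.1f} MP")
```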
 
tbh I am thinking about a lens I have, the 28-70. A respected YT lens tester, Chris Frost, tested the lens 4 years ago on an R, then recently again on the R5 and R7. His conclusion was that corner sharpness was noticeably worse on the higher-MP R5 (than on the 30MP R), and the R7 struggled with this lens. Portrait photographers may not care; landscape photographers clearly do care about the ability to resolve detail across the frame. So to answer your question, it is corner performance (which could matter in sports too, depending on the composition).
Keep in mind that Chris uses SOOC jpegs for the resolution tests and assumes zero field curvature. I don’t disagree with using SOOC jpegs for testing, it levels the playing field a bit, especially with 3rd party software taking months to catch up with lens profiles.
For some lenses he does (re)focus on the corners, but those tend to be ones with really bad corners.

You might be able to get more mileage out of the corners by shooting RAW and leaning on LR/DxO/C1/DPP4 to outperform the in-camera JPEG engine. Those engines are very good nowadays, so I’m not expecting miracles for low-ISO shots.
 
L lens quality control is high (I imagine).
Doesn't mean you can't get a bad lens. Read some of the articles on lens QC from Roger Cicala (LensRentals founder). As an anecdotal example, when Bryan/TDP tested the EF 24-70/2.8L II, he ended up trying four copies, since the first two he tried had optical problems (different ones for each of them).

Any lens I buy, from the $300 28/2.8 to the $9500 100-300/2.8 and $13K 600/4 II, I thoroughly test using a setup similar to Bryan's (including the same 'enhanced' ISO 12233-type charts).
 
Neuro, we all know that lower base ISO increases DR (as per the charts above), so why did Sony's engineers, when optimizing their new A9III sensor, choose a base ISO of 250? Is it purely for digital scaling purposes, i.e., they are assuming higher shutter speeds, etc.?

The Nikon D850 has a base ISO of 64, the Sony A7S3 base ISO is 80 (640 for dual active ISO).
 
Semantics. When you have millions of them in a sensor, the wall size is as important as the gaps between the pixels ;)
I should elaborate. You can't necessarily compare pixel size between two different fabs, much less between two different companies. Canon might have thinner walls than Sony, or vice versa, allowing larger pixels in the same combination of sensor size and resolution.
 
I should elaborate. You can't necessarily compare pixel size between two different fabs, much less between two different companies. Canon might have thinner walls than Sony, or vice versa, allowing larger pixels in the same combination of sensor size and resolution.
Okay. Really alluding to the old analogy of photosites as buckets, capturing photons as they fall through the lens onto the sensor cavities like water droplets.
 
Neuro, we all know that lower base ISO increases DR (as per the charts above), so why did Sony's engineers, when optimizing their new A9III sensor, choose a base ISO of 250? Is it purely for digital scaling purposes, i.e., they are assuming higher shutter speeds, etc.?
Much more likely, it's because the global-shutter pixel has roughly half the well depth of a conventional ("rolling") shutter pixel, all other things being equal.

The global shutter pixel just cannot hold the ISO 100 amount of photoelectrons; it will saturate at a higher ISO.
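A sketch of that relationship (the full-well numbers here are hypothetical; Sony doesn't publish the a9 III's): with the same pixel area and quantum efficiency, the saturation-limited base ISO scales inversely with how many photoelectrons the pixel can hold before clipping.

```python
# Hypothetical conventional sensor: saturates at base ISO 100.
conventional_full_well = 50_000   # electrons (illustrative)
conventional_base_iso = 100

# A global-shutter pixel that holds 2.5x fewer electrons clips
# sooner, pushing the saturation-limited base ISO up in proportion:
gs_full_well = 20_000             # electrons (illustrative)
gs_base_iso = conventional_base_iso * conventional_full_well / gs_full_well

print(gs_base_iso)  # 250.0
```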
 
I should elaborate. You can't necessarily compare pixel size between two different fabs, much less between two different companies. Canon might have thinner walls than Sony, or vice versa, allowing larger pixels in the same combination of sensor size and resolution.
But adding to the complexity, microlenses... What you refer to above was a significant issue up until about a decade ago, when manufacturers started adding microlenses, which direct light from the edges of a pixel toward the center. So, how good the microlenses are matters as much as pixel size, etc.

As for the bucket analogy, imagine a funnel that covers the edge of the bucket. Now, water that hits your bucket edge is redirected toward the center and not lost.

As for the A9III, I get the physical space taken up by flash memory required for a global shutter, but have wondered if microlenses could mitigate that issue. But, I also believe that each step adds noise and global shutters add a step where the photon count is stored instantaneously in flash memory. So, the noise difference may not be due to surface area, but rather additional steps. Plus, from what I've seen...it is present, but not horrible. Whether the R1 is 0.8 milliseconds or microseconds, either is more than sufficient for my purposes and should make rolling shutter irrelevant for the masses.
 
But adding to the complexity, microlenses... What you refer to above was a significant issue up until about a decade ago, when manufacturers started adding microlenses, which direct light from the edges of a pixel toward the center. So, how good the microlenses are matters as much as pixel size, etc.

As for the bucket analogy, imagine a funnel that covers the edge of the bucket. Now, water that hits your bucket edge is redirected toward the center and not lost.
Exactly my point. A 'long time' ago (>15 years), there was a substantial penalty for smaller pixels in the same size sensor. That is essentially gone thanks to gapless microlenses, as the DR data show.

Diagram courtesy of Nikon, touting new tech in 2008:
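A toy geometric model of that penalty (the pitch and gap numbers are made up for illustration): without microlenses, a fixed dead zone between photodiodes eats a much larger fraction of a small pixel than of a large one, while gapless microlenses push the collected fraction toward 1 for both.

```python
def collected_fraction(pitch_um, gap_um, gapless_microlens=False):
    """Fraction of light landing on a square pixel cell that reaches
    the photodiode, in a simple fill-factor model."""
    if gapless_microlens:
        return 1.0  # edge light is funneled onto the photodiode
    active = (pitch_um - gap_um) ** 2
    return active / pitch_um ** 2

# Same 1 um dead border, two pixel sizes (illustrative numbers):
print(collected_fraction(8.4, 1.0))   # large pixel: ~0.78
print(collected_fraction(4.2, 1.0))   # small pixel: ~0.58
print(collected_fraction(4.2, 1.0, gapless_microlens=True))  # 1.0
```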
 
Now look at it from the other side. Here's the R5 in regular vs. crop mode, same size pixels (the exact same pixels, in fact), same sensor, but imaging with a smaller area of the sensor. If pixel size were the determinant, DR would be the same since the pixels are identical. Clearly, it's not. Using an APS-C area of the FF sensor costs you about a stop of DR. Less total light gathered means more image noise means less DR.

[Attached chart: R5 full-frame vs. APS-C crop mode dynamic range]

Is it implied that this chart represents imaging the same scene with two different sensor sizes, or alternatively up-scaling the cropped image?
 
Now look at it from the other side. Here's the R5 in regular vs. crop mode, same size pixels (the exact same pixels, in fact), same sensor, but imaging with a smaller area of the sensor. If pixel size were the determinant, DR would be the same since the pixels are identical. Clearly, it's not. Using an APS-C area of the FF sensor costs you about a stop of DR. Less total light gathered means more image noise means less DR.

[Attached chart: R5 full-frame vs. APS-C crop mode dynamic range]
I thought I’d read on Bill Claff’s site that the difference between the FF and APS-C crop readings on his PDR charts is due to the crop having to be enlarged more for equivalent viewing.
 
Isn't that essentially the same thing?

Going with buckets, the same size of buckets (R5, per Neuro's post). In APS-C mode, the R5 has 17 million of those buckets capturing light; in FF mode, it has 45 million. Thus, more light is captured by 45 million same-sized buckets than by 17 million. The 17 million buckets can be framed to show the exact same image as the FF capture, using different focal lengths (in camera) or print-outs, which some reviewers do. But it is still 45 million buckets vs. 17 million buckets: more overall light gathered vs. less; even though each bucket gathers the same amount, you simply have more buckets with FF.
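The bucket arithmetic above, as a quick sketch (identical buckets, so total light scales with the bucket count):

```python
import math

ff_buckets = 45_000_000     # R5 full-frame pixel count
crop_buckets = 17_000_000   # R5 APS-C crop mode pixel count

# Same-sized buckets, so total light scales with how many there are:
light_ratio = ff_buckets / crop_buckets
stops = math.log2(light_ratio)

print(f"FF gathers {light_ratio:.2f}x the light (~{stops:.1f} stops more)")
```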
 
Isn't that essentially the same thing?

Going with buckets, the same size of buckets (R5, per Neuro's post). In APS-C mode, the R5 has 17 million of those buckets capturing light; in FF mode, it has 45 million. Thus, more light is captured by 45 million same-sized buckets than by 17 million. The 17 million buckets can be framed to show the exact same image as the FF capture, using different focal lengths (in camera) or print-outs, which some reviewers do. But it is still 45 million buckets vs. 17 million buckets: more overall light gathered vs. less; even though each bucket gathers the same amount, you simply have more buckets with FF.
Yes, that's what's happening. But not everyone has the technical acumen to understand that when looking at a chart.
 