The Canon EOS R3 is out in the wild

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,206
2,383
Except that the preview would still be of what the JPEG would look like if you were shooting JPEGs, not how the Raw file will come up in Lightroom or ACR before you start twiddling, wouldn't it?

Yep. But then shooters who haven't developed an eye in their own brain to see how an image will look before ever raising the camera to their eye and shooting it are the same ones who tend to shoot JPEG, in my experience. They're the ones that think a WYSIWYG EVF is such a revolutionary tool.
 
Last edited:
  • Like
Reactions: 1 user

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,206
2,383
The EVF and LCD screen are physically composed of single color dots, not pixels. If you want to divide the number of dots by three and call it a pixel, feel free to.

The commonly used Bayer sensor is composed of single color receptors, which are single color sensing dots. For every set of 2 green, 1 red, and 1 blue dots, the marketing department chooses to call all of them "pixels", implying that each of the 4 dots senses red, green, and blue. They do not. The fact that subsequent software interpolates neighboring dots together to create additional estimated color values and provide 3 colors per dot on output does not mean that the sensor actually sensed 3 colors per dot - it did not. The only camera sensor in which each dot actually senses all 3 colors is the Foveon sensor from Sigma, which has 3 vertical sensing layers per dot. The camera companies with Bayer sensors call a 20MDot sensor a 20MPixel sensor to increase sales. If they chose to further use software to upres a 20MDot (declared 20MPixel) sensor by 2x2, they could present you with an 80MPixel file from the sensor and call it an 80MPixel camera - but it's still not. Most of the CR members here probably already know this topic and would say it's old news and not worth arguing over. I agree with that. You (and much of the public) can feel free to think a 20MDot sensor is really 20MPixels - enjoy.

They're not even single color sensing dots. All three have some sensitivity to the entire visible spectrum. Putting a green color filter over a sensel on a digital sensor does not eliminate all red and blue light from entering it, any more than putting a green filter over a lens eliminates all red and blue light from reaching black and white film, making any blue or red object in the scene pure black in the photo. It just makes the blue and red things look darker than the green things that are the same brightness in the actual scene.

ALL THREE colors have to be interpolated when demosaicing is done. Not only because of the overlapping way each filtered pixel is sensitive to the rest of the visible spectrum, but also because different light sources emit different parts of the visible spectrum in differing amounts. They all have to be interpolated as well because the three colors that most Bayer masks use are not the same three colors that RGB monitors emit, either. The three colors of Bayer mask filters are closer to the three colors to which each type of our retinal cones is most sensitive: a slightly violet blue, a slightly yellow green, and a slightly green yellow, though most Bayer masks use an orangish yellow color instead of the lime color to which our L cones are most sensitive.
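The "everything is interpolated" point can be sketched in a few lines. This is a toy neighborhood-average demosaic under an assumed RGGB layout, not any camera's actual pipeline (real demosaicing is edge-aware and is followed by a color correction matrix), but it shows that every channel value at every site is an estimate built from nearby sensels:

```python
import numpy as np

def demosaic_toy(mosaic):
    """Toy demosaic of an RGGB Bayer mosaic.

    Every output pixel gets three channel values, each estimated by
    averaging the nearby sensels that carry that channel's filter,
    i.e. all three colors at a site are interpolated; none is sensed
    as a full RGB triple the way a Foveon layer stack senses it.
    """
    h, w = mosaic.shape
    masks = np.zeros((3, h, w), dtype=bool)
    masks[0, 0::2, 0::2] = True            # R sensels
    masks[2, 1::2, 1::2] = True            # B sensels
    masks[1] = ~(masks[0] | masks[2])      # G sensels
    out = np.zeros((h, w, 3))
    for ch in range(3):
        vals = np.where(masks[ch], mosaic, 0.0)
        cnt = masks[ch].astype(float)
        acc_v = np.zeros((h, w))
        acc_c = np.zeros((h, w))
        for dy in (-1, 0, 1):              # 3x3 neighborhood,
            for dx in (-1, 0, 1):          # wrap-around at the edges
                acc_v += np.roll(vals, (dy, dx), axis=(0, 1))
                acc_c += np.roll(cnt, (dy, dx), axis=(0, 1))
        out[..., ch] = acc_v / acc_c
    return out

# A flat mid-gray scene: every sensel reads 0.5 through its filter,
# and the demosaic reconstructs 0.5 in all three channels everywhere.
rgb = demosaic_toy(np.full((4, 4), 0.5))
```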

1623856704715.png

We named the cones "red", "green", and "blue" decades before we managed to measure the exact response of each type of cone to various wavelengths of light.

The reason trichromatic color masks on cameras and trichromatic reproduction systems (RGB monitors or CMY three color printing) work is because our retinas and brains do the same thing. Our Medium wavelength and Long wavelength cones have a very large amount of overlapping sensitivity. Our short wavelength cones have less overlap with the M and L cones, but there is still some overlap there. If there were no overlapping sensitivity between the S, M, and L cones our brains could not create colors. Colors do not exist in the electromagnetic spectrum. There are only wavelengths and frequencies. It's our brains that create color as a response to certain wavelengths/frequencies in the electromagnetic spectrum.


1623856780683.png

The "red" filtered part of this Sony IMX249 sensor peaks at 600nm, which is what we call "orange", rather than 640nm, which is the color we call "red".

All of the cute little drawings on the internet notwithstanding, the actual colors of a Bayer filter are not Red (640nm), Green (530nm), and Blue (465nm).

1623857306775.jpeg

Actual image of an actual Bayer mask and a sensor. Part of the mask has been peeled away.
 
Last edited:
  • Like
  • Love
Reactions: 3 users

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,206
2,383
The EVF and LCD screen are physically composed of single color dots, not pixels. If you want to divide the number of dots by three and call it a pixel, feel free to.

The commonly used Bayer sensor is composed of single color receptors, which are single color sensing dots. For every set of 2 green, 1 red, and 1 blue dots, the marketing department chooses to call all of them "pixels", implying that each of the 4 dots senses red, green, and blue. They do not. The fact that subsequent software interpolates neighboring dots together to create additional estimated color values and provide 3 colors per dot on output does not mean that the sensor actually sensed 3 colors per dot - it did not. The only camera sensor in which each dot actually senses all 3 colors is the Foveon sensor from Sigma, which has 3 vertical sensing layers per dot. The camera companies with Bayer sensors call a 20MDot sensor a 20MPixel sensor to increase sales. If they chose to further use software to upres a 20MDot (declared 20MPixel) sensor by 2x2, they could present you with an 80MPixel file from the sensor and call it an 80MPixel camera - but it's still not. Most of the CR members here probably already know this topic and would say it's old news and not worth arguing over. I agree with that. You (and much of the public) can feel free to think a 20MDot sensor is really 20MPixels - enjoy.

Then you need to start counting all three dots that make up a screen like a computer monitor as well. 4K is really 12K. 8K is really 24K, and so on.

FHD is 5,760 x 1080 (because the dots on most monitors are three times as tall as they are wide).
 

neuroanatomist

I post too Much on Here!!
CR Pro
Jul 21, 2010
27,556
7,341
The world is moving on, EVF and mirrorless is the future.
Moving on, at a snail's pace. In 2020, MILCs comprised 55% of ILC sales, DSLRs comprised 45%. In Jan-Apr of 2021, MILCs comprised 56% of ILC sales, DSLRs comprised 44%. At that rate of change, the fully mirrorless future will arrive in a few short decades.
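For what it's worth, the "few short decades" arithmetic is easy to reproduce from the shares quoted above. Linear extrapolation is an assumption, and a crude one, but it shows the order of magnitude:

```python
# Back-of-envelope extrapolation of the quoted CIPA share figures:
# MILC share went from 55% (2020) to 56% (Jan-Apr 2021), i.e. roughly
# one percentage point per year. Assuming that rate stays constant:
share = 56.0          # MILC share of ILC units, percent
rate = 1.0            # percentage points gained per year (assumed)
years_to_full = (100.0 - share) / rate
print(years_to_full)  # 44.0 years until a "fully mirrorless" market
```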
 
  • Like
Reactions: 2 users

SteveC

R5
CR Pro
Sep 3, 2019
2,577
2,475
Then you need to start counting all three dots that make up a screen like a computer monitor as well. 4K is really 12K. 8K is really 24K, and so on.
Not really (even following their "logic"), because "4K" is a reference to the number of pixels in the horizontal dimension of the display (either 3840 or 4096, depending on exactly what kind of 4K it is). Tripling the number of pixels will not triple the horizontal pixel count; it would likely add ~73.2 percent to it.
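The ~73.2 percent figure is just √3 at work: tripling the total pixel count at a fixed aspect ratio scales each axis by √3 ≈ 1.732. A quick check:

```python
import math

# Tripling the *total* pixel count while keeping the aspect ratio
# means each dimension grows by sqrt(3), not by 3.
growth = math.sqrt(3) - 1
print(round(growth * 100, 1))            # 73.2 (percent per axis)

for width in (3840, 4096):               # UHD and DCI flavors of "4K"
    print(width, round(width * math.sqrt(3)))
```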
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,206
2,383
Not really (even following their "logic"), because "4K" is a reference to the number of pixels in the horizontal dimension of the display (either 3840 or 4096, depending on exactly what kind of 4K it is). Tripling the number of pixels will not triple the horizontal pixel count; it would likely add ~73.2 percent to it.

Since most monitors actually have R, G, and B rectangles that are three times as tall as they are wide, it wouldn't change the vertical resolution at all, but the horizontal resolution would be tripled. Each monitor "pixel" is a square composed of three vertical 3:1 rectangles: one red, one green, and one blue.

1623867626310.png

FHD would be 5,760x1080 rather than 1920x1080.

4K would be 12,288x2160 instead of 4096x2160.
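Counting each R, G, B stripe subpixel separately, the way EVF "dot" specs count, is a one-liner per format:

```python
# On a standard stripe panel, each pixel is three side-by-side
# subpixels, so counting "dots" triples the horizontal figure.
formats = {"FHD": (1920, 1080), "DCI 4K": (4096, 2160)}
for name, (w, h) in formats.items():
    dots = w * h * 3
    print(f"{name}: {w}x{h} pixels = {w * 3}x{h} subpixels "
          f"= {dots / 1e6:.1f}M dots")
```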
 
Last edited:

Toglife_Anthony

Hit the G.A.S. & pump the brakes at the same time!
Apr 2, 2020
60
77
Moving on, at a snail's pace. In 2020, MILCs comprised 55% of ILC sales, DSLRs comprised 45%. In Jan-Apr of 2021, MILCs comprised 56% of ILC sales, DSLRs comprised 44%. At that rate of change, the fully mirrorless future will arrive in a few short decades.
Seems a little silly to make comparisons to a pandemic year when the world basically fell flat, don't cha think? ;-) BTW, what's your source for those numbers?
 

usern4cr

R5
CR Pro
Sep 2, 2018
1,284
2,160
Kentucky, USA
They're not even single color sensing dots. All three have some sensitivity to the entire visible spectrum. Putting a green color filter over a sensel on a digital sensor does not eliminate all red and blue light from entering it, any more than putting a green filter over a lens eliminates all red and blue light from reaching black and white film, making any blue or red object in the scene pure black in the photo. It just makes the blue and red things look darker than the green things that are the same brightness in the actual scene.

ALL THREE colors have to be interpolated when demosaicing is done. Not only because of the overlapping way each filtered pixel is sensitive to the rest of the visible spectrum, but also because different light sources emit different parts of the visible spectrum in differing amounts. They all have to be interpolated as well because the three colors that most Bayer masks use are not the same three colors that RGB monitors emit, either. The three colors of Bayer mask filters are closer to the three colors to which each type of our retinal cones is most sensitive: a slightly violet blue, a slightly yellow green, and a slightly green yellow, though most Bayer masks use an orangish yellow color instead of the lime color to which our L cones are most sensitive.

View attachment 198366

We named the cones "red", "green", and "blue" decades before we managed to measure the exact response of each type of cone to various wavelengths of light.

The reason trichromatic color masks on cameras and trichromatic reproduction systems (monitors or three color printing) work is because our retinas and brains do the same thing. Our Medium wavelength and Long wavelength cones have a very large amount of overlapping sensitivity. Our short wavelength cones have less overlap with the M and L cones, but there is still some overlap there. If there were no overlapping sensitivity between the S, M, and L cones our brains could not create colors. Colors do not exist in the electromagnetic spectrum. There are only wavelengths and frequencies. It's our brains that create color as a response to certain wavelengths/frequencies in the electromagnetic spectrum.


View attachment 198367

The "red" filtered part of this Sony IMX249 sensor peaks at 600nm, which is what we call "orange", rather than 640nm, which is the color we call "red".

All of the cute little drawings on the internet notwithstanding, the actual colors of a Bayer filter are not Red (640nm), Green (530nm), and Blue (465nm).

View attachment 198368

Actual image of an actual Bayer mask and a sensor. Part of the mask has been peeled away.
Thanks for the detailed post. I did know the QE curves of the eyes as you've shown. But I didn't know what the QE of the sensor color filters were in general. I'm surprised (and impressed) that they are similar to that of the eyes.
 
  • Like
Reactions: 1 user

usern4cr

R5
CR Pro
Sep 2, 2018
1,284
2,160
Kentucky, USA
Then you need to start counting all three dots that make up a screen like a computer monitor as well. 4K is really 12K. 8K is really 24K, and so on.

FHD is 5,760 x 1080 (because the dots on most monitors are three times as tall as they are wide).
It would be nice if everyone had to use a clear and "honest" standard to make their marketing claims:
A "pixel" would be all 3 colors on top of each other (in the same space), without interpolation (e.g. a Foveon sensor, but not applicable to Bayer style sensors or emitters).
A "dot" would be 1 color element (without the other 2 color elements at that location, which is applicable to Bayer style sensors or emitters).
An "interpolated pixel" would be a dot with 2 interpolated colors added in software (so a 10M Dot Bayer array could output a 10M InterpolatedPixel image file).
But I know this won't happen, and we just have to live with whatever companies want to claim.

It would also be nice if a "1 inch sensor" had a diagonal measurement of 1". It doesn't - it's not even close. How that is possible to advertise is beyond me!
 

SteveC

R5
CR Pro
Sep 3, 2019
2,577
2,475
Since most monitors actually have R,G, and B rectangles that are three times as tall as they are wide, then it wouldn't change the vertical resolution at all. But the horizontal resolution would be tripled. Each monitor "pixel" is a square comprised of three vertical 3:1 rectangles, one red, one green, and one blue.

View attachment 198376

FHD would be 5,760x1080 rather than 1920x 1080.

4K would be 12,288x2160 instead of 4096x2160.
I was thinking of the camera sensor. Good catch.
 
  • Like
Reactions: 1 user

dtaylor

Canon 5Ds
Jul 26, 2011
1,795
1,418
To cut a long story short, you've convinced that EF and optical is better and will argue until death that you're right.
To cut a long story short, you're very unhappy that I pointed out some of your assumptions about how things work are dead wrong. So you're going to misrepresent my position and the things I've said, and resort to personal attacks and fallacies.

Nothing more needs to be said to you until you grow up.
 

dtaylor

Canon 5Ds
Jul 26, 2011
1,795
1,418
The EVF and LCD screen are physically composed of single color dots, not pixels.
So are monitors. But they are reported accurately in pixels since it takes an R, G, and B display element or 'dot' to form a single image element. This is how EVFs and rear LCDs should be reported so the comparison is consistent to monitors, phones, etc.

The commonly used Bayer sensor is composed of single color receptors, which are single color sensing dots.
This is false. A sensor element senses luminance and is filtered to be biased towards R, G, or B. There is considerable overlap in wavelength sensitivity to reduce noise and facilitate better color detection. The more important point to this conversation is that these screens are not reported in 'dots' because of Bayer. One 'dot' does not correspond to one sensor element, nor could it. Screen dots are not arranged the same way as a Bayer sensor color filter array. Screen dots are R, G, or B only while sensor pixels have wavelength overlap. And sensor data is fully demosaiced before being displayed.

The camera companies with Bayer sensors call a 20MDot sensor a 20MPixel sensor to increase sales.
Also false. Each sensor element produces an image element...even if it borrows some color data from neighbors to do so...and the observable resolution of a 20mp sensor is consistent with a 20mp sampling rate across most of its color gamut. Put another way, if you photograph a resolution chart and work backwards using Nyquist, you can predict the MP of the sensor to within a couple percentage points. If what you claimed were true your final result would be 1/3rd the published MP. (The only time this is not true is with a monochromatic test designed to hit a peak filtration point in the CFA. In such a test you end up with 1/4 or 1/2 the expected sampling rate because you've designed a test to exploit the CFA and effectively 'mask' some of the elements.)

By analogy if sensor manufacturers wanted to play the same game as EVF manufacturers, they would say that since each pixel uses data from neighboring pixels a 20mp sensor is really an 80mp one. (Actually 180mp since modern demosaicing algorithms look at 8 neighbors and not a simple quad.)

You (and much of the public) can feel free to think a 20MDot sensor is really 20MPixels - enjoy.
Nyquist is observable, repeatable science. You can choose to ignore it if you like, but don't expect other people to ignore it with you.
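The "work backwards using Nyquist" step looks like this in rough numbers. The chart reading below is hypothetical, chosen only to show the arithmetic:

```python
# Working backwards from a resolution-chart measurement via Nyquist.
# The measured value here is illustrative, not a real test result.
aspect = 3 / 2                 # typical full-frame aspect ratio
measured_lpph = 1800           # hypothetical chart reading, line pairs
                               # per picture height near extinction
height_px = measured_lpph * 2  # Nyquist: 2 samples per line pair
width_px = height_px * aspect
megapixels = width_px * height_px / 1e6
print(round(megapixels, 1))    # ~19.4 MP: close to a "20MP" sensor,
                               # not one third of it
```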
 

neuroanatomist

I post too Much on Here!!
CR Pro
Jul 21, 2010
27,556
7,341
Seems a little silly to make comparisons to a pandemic year when the world basically fell flat, don't cha think? ;-) BTW, what's your source for those numbers?
Perhaps. But going back to 2019 paints a very different picture, with MILCs comprising 46% and DSLRs comprising 54% of ILCs. In 2018 it was 38% MILC and 62% DSLR. Of course, that supports keeping DSLRs around even more strongly, right? The pandemic does complicate interpretation, certainly. Perhaps everyone was hit hard, (total ILCs were 10M in 2018, 8M in 2019, 5M in 2020), but the DSLR-buying market was hit harder? Average unit price of DSLRs is lower, so that actually sort of makes sense (i.e. people with less disposable income prior to the pandemic were hit harder by the economic changes).

Data are from CIPA (cipa.jp).
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,206
2,383
It would be nice if everyone had to use a clear and "honest" standard to make their marketing claims:
A "pixel" would be all 3 colors on top of each other (in the same space), without interpolation (e.g. a Foveon sensor, but not applicable to Bayer style sensors or emitters).
A "dot" would be 1 color element (without the other 2 color elements at that location, which is applicable to Bayer style sensors or emitters).
An "interpolated pixel" would be a dot with 2 interpolated colors added in software (so a 10M Dot Bayer array could output a 10M InterpolatedPixel image file).
But I know this won't happen, and we just have to live with whatever companies want to claim.

It would also be nice if a "1 inch sensor" had a diagonal measurement of 1". It doesn't - it's not even close. How that is possible to advertise is beyond me!

All three RGB values have to be interpolated, not just two of them. Mainly because the colors of the filters are not the same colors as those emitted by RGB screens, but also to compensate for light sources with different spectral distributions.
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
4,206
2,383
It would also be nice if a "1 inch sensor" had a diagonal measurement of 1". It doesn't - it's not even close. How that is possible to advertise is beyond me!

The 1" sensor is a legacy thing. Back in the early days of television, sensors were at the end of vacuum tubes. A 1" sensor is the size that, including all of the needed things around the edges of the actual sensing surface, would fit inside a 1" glass vacuum tube.
 
  • Like
Reactions: 1 user

AlanF

Stay at home
CR Pro
Aug 16, 2012
10,126
16,197
The 1" sensor is a legacy thing. Back in the early days of television sensors were at the end of vacuum tubes. A 1" sensor is the size, including all of the needed things around the edges of the actual sensing surface, that would fit inside a 1" glass vacuum tube.
True, like 2/3" and 1/3" sensors are smaller than those dimensions. Interestingly, the crop factor of the 1" sensor relative to Canon APS-C (1.68x) is close to that of APS-C to FF (1.6x).
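That near-coincidence is easy to verify from nominal sensor dimensions (assumed here: 36x24mm FF, 22.2x14.8mm Canon APS-C, 13.2x8.8mm for the 1"-type):

```python
import math

# Nominal sensor dimensions in mm; actual Canon APS-C sensors vary
# slightly (22.2-22.3mm wide) between models.
sensors = {"FF": (36.0, 24.0), "APS-C": (22.2, 14.8), '1"': (13.2, 8.8)}
diag = {name: math.hypot(w, h) for name, (w, h) in sensors.items()}

print(round(diag["APS-C"] / diag['1"'], 2))  # 1.68: 1" -> Canon APS-C
print(round(diag["FF"] / diag["APS-C"], 2))  # 1.62: APS-C -> full frame
```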
 

privatebydesign

I post too Much on Here!!
CR Pro
Jan 29, 2011
10,673
6,112
True, like 2/3" and 1/3" sensors are smaller than those dimensions. Interestingly, the crop factor of the 1" sensor relative to Canon APS-C (1.68x) is close to that of APS-C to FF (1.6x).
The blue rectangle is the dimension (relative) of a FF sensor. I was kind of excited when the G series cameras were getting 1” sensors then realized they could never compete with the M cameras I already had.


0E9CFCA4-E3B9-4265-AE22-0EB9FB22CD98.jpeg
 

AlanF

Stay at home
CR Pro
Aug 16, 2012
10,126
16,197
The blue rectangle is the dimension (relative) of a FF sensor. I was kind of excited when the G series cameras were getting 1” sensors then realized they could never compete with the M cameras I already had.


View attachment 198393
They don't compete with the M on S/N, but they are a good example of how you can squeeze out "reach". I have a Sony RX10 IV which I use occasionally. It's incredibly good, and its zoom at f/4 and 220mm has an FF fov of 600mm and resolves as well as my 5DIV with the 100-400mm II at 400mm. Interestingly, Sony, which used to bring out new models yearly, hasn't updated the RX10 range since 2017. It has the A9 AF system, and Sony added eyeAF in a firmware upgrade. Canon never competed with it. Before the RX10 IV, I had a G3X, but the lens wasn't as good and the AF was a league or two below.
 

stevelee

FT-QL
CR Pro
Jul 6, 2017
2,314
1,009
Davidson, NC
The blue rectangle is the dimension (relative) of a FF sensor. I was kind of excited when the G series cameras were getting 1” sensors then realized they could never compete with the M cameras I already had.
OTOH, my G7X II was replacing my S120 and not my Rebel, so a 1" sensor was a nice step up. And I took some rather nice photographs with the S120, some of which hang on my walls. As a practical matter for me anyway, sensor size is a matter of noise. The G5X II does pretty well up to ISO 1600 in most cases, comparable to 6400 on my 6D2, more or less. My pictures of Venice by night from the balcony of my stateroom were shot at ISO 2500 and 3200. The sky was a little noisy, but that was easily correctible, since there was no detail to lose. I have made some rather nice 13" x 19" prints of one of the shots.
 

usern4cr

R5
CR Pro
Sep 2, 2018
1,284
2,160
Kentucky, USA
All three RGB values have to be interpolated, not just two of them. Mainly because the colors of the filters are not the same colors as those emitted by RGB screens, but also to compensate for light sources with different spectral distributions.
Yes, all 3 have to be interpolated. But the fact that there are no sensors at that location for the other 2 colors is what I'm talking about. You have to infer what the value would be by considering the neighbors. You only have one piece of data at that particular spot, and that piece has an appreciable sensitivity peak for one color over the others.

If you had a Foveon sensor, with all 3 colors in one spot, you could "interpolate" 3 additional intermediate spots of color by using the neighbors, and thus a 10MP Foveon sensor could output a 40MP file. Does that mean it's a 40MP sensor? If all the manufacturers had Foveon type sensors and did this and it was widely accepted, would you also be saying it's a 40MP sensor?
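That hypothetical 10MP-to-40MP inflation is just upsampling, and a minimal sketch shows why the extra "pixels" carry no new information. This is a toy nearest-neighbor version on a tiny array:

```python
import numpy as np

# Upsampling a "full color at every site" image 2x2: the array gets
# four times as many values, but every added value is computed from
# existing neighbors, so no new detail is recorded.
image = np.arange(12.0).reshape(2, 2, 3)   # 2x2 "sensor", 3 colors/site
upres = image.repeat(2, axis=0).repeat(2, axis=1)
print(image.shape, "->", upres.shape)      # (2, 2, 3) -> (4, 4, 3)
```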