The Canon EOS R3 is out in the wild

dtaylor

Canon 5Ds
Jul 26, 2011
1,706
1,266
Really? Measuring the EVF or back LCD in "dots" is exactly what those displays are made of. Maybe you'd rather they call them "pixels" and use the same number?
A screen pixel would be composed of three 'dots' (RGB). They switched to reporting dots for camera screens and EVFs for pure marketing reasons to make the screens sound higher resolution than they really are.

That would be what they're doing with the (Bayer) sensor!
No it is not.

the sensor is actually "dots" and not "pixels".
The final output from a sensor has one full RGB pixel per sensor pixel, with the additional color information derived from neighboring pixels. This is distinctly different from camera screens, where the reported number of dots leads a consumer to believe the screen is comparable to a monitor of a particular class when in fact it's comparable to a much lower resolution monitor. It has no relationship to Bayer whatsoever. Your EVF/rear screen shows a fully demosaiced JPEG.
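To put numbers on the dot-vs-pixel bookkeeping, here's a quick sketch in Python (the 3.69M-dot figure is a common EVF spec used purely as an illustration, not something from this thread):

```python
# An EVF spec of "3.69 million dots" counts every R, G, and B element
# separately; a monitor spec counts one full RGB triplet as one pixel.
def dots_to_pixels(dots: int) -> int:
    """Convert an advertised dot count to an RGB pixel count."""
    return dots // 3

evf_dots = 3_686_400                 # a typical "3.69M-dot" EVF
print(dots_to_pixels(evf_dots))      # 1228800, i.e. a 1280 x 960 panel
print(1280 * 960 * 3 == evf_dots)    # True
```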
 

dilbert

EOS M6 Mark II
Aug 12, 2010
89
74
From what I've seen, the RF interface adds pins for a serial channel for exchanging more complex data like lens corrections. Otherwise it's EF. And the EF pins/protocol would never prove to be the upper bound on AF performance. Computing distance and moving lens elements would always be slower than the frequency at which EF signals are exchanged.

To cut a long story short, you've convinced yourself that EF and optical are better and will argue until death that you're right. You're a few years late with this perspective and your arguments are similarly dated. It's 2021, not 2015.

If you want the truth behind the questions in your response, I'm sure you know how to use Google.

The world is moving on, EVF and mirrorless is the future. That train has left the station and there's no going back. Innovation will see that EVF cameras are just as good, if not better, than OVF. Of course there will be some folks that are wedded to their DSLRs, just like there are some that are wedded to film.

There are multiple threads here with you about EVFs and mirrorless. I would encourage you to step back for a few days and come back. This thread does not look like how you imagine it does.
 

usern4cr

R5
CR Pro
Sep 2, 2018
1,153
1,901
Kentucky, USA
A screen pixel would be composed of three 'dots' (RGB). They switched to reporting dots for camera screens and EVFs for pure marketing reasons to make the screens sound higher resolution than they really are.


No it is not.


The final output from a sensor has one full RGB pixel per sensor pixel, with the additional color information derived from neighboring pixels. This is distinctly different from camera screens, where the reported number of dots leads a consumer to believe the screen is comparable to a monitor of a particular class when in fact it's comparable to a much lower resolution monitor. It has no relationship to Bayer whatsoever. Your EVF/rear screen shows a fully demosaiced JPEG.
The EVF and LCD screen are physically composed of single color dots, not pixels. If you want to divide the number of dots by three and call it a pixel, feel free to.

The commonly used Bayer sensor is composed of single color receptors, which are single color sensing dots. For every 2 green, 1 red, and 1 blue dot, the marketing department chooses to call them all "pixels", implying that all 4 dots each have red, green, and blue. They do not. The fact that subsequent software interpolates neighboring pixels together to create additional estimated color values and provide 3 colors per dot on output does not mean that the sensor actually sensed 3 colors per dot - it did not. The only camera sensor that actually has each dot sense all 3 colors is the Foveon sensor from Sigma, which has 3 vertical sensing layers per dot.

The camera companies with Bayer sensors call a 20MDot sensor a 20MPixel sensor to increase sales. If they chose to further use software to upres a 20MDot (declared 20MPixel) sensor by 2x2, then they could present you with an 80MPixel file from the sensor and call it an 80MPixel camera - but it's still not.

Most of the CR members here probably already know this topic and would say it's old news and not worth arguing over. I agree with that. You (and much of the public) can feel free to think a 20MDot sensor is really 20MPixels - enjoy.
 
Last edited:

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
3,657
2,142
Except that the preview would still be of what the JPEG would look like if you were shooting JPEGs, not how the Raw file will come up in Lightroom or ACR before you start twiddling, wouldn't it?

Yep. But then shooters who haven't developed an eye in their own brain to see how an image will look before ever raising the camera to their eye and shooting it are the same ones who tend to shoot JPEG, in my experience. They're the ones that think a WYSIWYG EVF is such a revolutionary tool.
 
Last edited:

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
3,657
2,142
The EVF and LCD screen are physically composed of single color dots, not pixels. If you want to divide the number of dots by three and call it a pixel, feel free to.

The commonly used Bayer sensor is composed of single color receptors, which are single color sensing dots. For every 2 green, 1 red, and 1 blue dot, the marketing department chooses to call them all "pixels", implying that all 4 dots each have red, green, and blue. They do not. The fact that subsequent software interpolates neighboring pixels together to create additional estimated color values and provide 3 colors per dot on output does not mean that the sensor actually sensed 3 colors per dot - it did not. The only camera sensor that actually has each dot sense all 3 colors is the Foveon sensor from Sigma, which has 3 vertical sensing layers per dot.

The camera companies with Bayer sensors call a 20MDot sensor a 20MPixel sensor to increase sales. If they chose to further use software to upres a 20MDot (declared 20MPixel) sensor by 2x2, then they could present you with an 80MPixel file from the sensor and call it an 80MPixel camera - but it's still not.

Most of the CR members here probably already know this topic and would say it's old news and not worth arguing over. I agree with that. You (and much of the public) can feel free to think a 20MDot sensor is really 20MPixels - enjoy.

They're not even single color sensing dots. All three have some sensitivity to the entire visible spectrum. Putting a green color filter over a sensel on a digital sensor does not eliminate all red and blue light from entering it, any more than putting a green filter over a lens eliminates all red and blue light from reaching black and white film, rendering any blue or red object in the scene pure black in the photo. It just makes the blue and red things look darker than the green things that are the same brightness in the actual scene.

ALL THREE colors have to be interpolated when demosaicing is done. Not only because of the overlapping way each filtered pixel is sensitive to the rest of the visible spectrum, but also because different light sources emit different parts of the visible spectrum in differing amounts. They all have to be interpolated as well because the three colors that most Bayer masks use are not the same three colors that RGB monitors emit, either. The three colors of Bayer mask filters are closer to the three colors to which each type of our retinal cones are most sensitive: a slightly violet blue, a slightly yellow green, and a slightly green yellow, though most Bayer masks use an orangish yellow color instead of the lime color to which our L cones are most sensitive.

1623856704715.png


We named the cones "red", "green", and "blue" decades before we managed to measure the exact response of each type of cone to various wavelengths of light.

The reason trichromatic color masks on cameras and trichromatic reproduction systems (RGB monitors or CMY three color printing) work is because our retinas and brains do the same thing. Our Medium wavelength and Long wavelength cones have a very large amount of overlapping sensitivity. Our short wavelength cones have less overlap with the M and L cones, but there is still some overlap there. If there were no overlapping sensitivity between the S, M, and L cones our brains could not create colors. Colors do not exist in the electromagnetic spectrum. There are only wavelengths and frequencies. It's our brains that create color as a response to certain wavelengths/frequencies in the electromagnetic spectrum.


1623856780683.png


The "red" filtered part of this Sony IMX249 sensor peaks at 600nm, which is what we call "orange", rather than 640nm, which is the color we call "Red."

All of the cute little drawings on the internet notwithstanding, the actual colors of a Bayer filter are not Red (640nm), Green (530nm), and Blue (465nm).

1623857306775.jpeg


Actual image of an actual Bayer mask and a sensor. Part of the mask has been peeled away.
 
Last edited:

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
3,657
2,142
The EVF and LCD screen are physically composed of single color dots, not pixels. If you want to divide the number of dots by three and call it a pixel, feel free to.

The commonly used Bayer sensor is composed of single color receptors, which are single color sensing dots. For every 2 green, 1 red, and 1 blue dot, the marketing department chooses to call them all "pixels", implying that all 4 dots each have red, green, and blue. They do not. The fact that subsequent software interpolates neighboring pixels together to create additional estimated color values and provide 3 colors per dot on output does not mean that the sensor actually sensed 3 colors per dot - it did not. The only camera sensor that actually has each dot sense all 3 colors is the Foveon sensor from Sigma, which has 3 vertical sensing layers per dot.

The camera companies with Bayer sensors call a 20MDot sensor a 20MPixel sensor to increase sales. If they chose to further use software to upres a 20MDot (declared 20MPixel) sensor by 2x2, then they could present you with an 80MPixel file from the sensor and call it an 80MPixel camera - but it's still not.

Most of the CR members here probably already know this topic and would say it's old news and not worth arguing over. I agree with that. You (and much of the public) can feel free to think a 20MDot sensor is really 20MPixels - enjoy.

Then you need to start counting all three dots that make up a screen like a computer monitor as well. 4K is really 12K. 8K is really 24K, and so on.

FHD is 5,760 x 1080 (because the dots on most monitors are three times as tall as they are wide).
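Counting subpixels the way dot specs do gives exactly those figures (a quick sketch, assuming the common vertical-stripe RGB layout):

```python
# If monitor resolutions were quoted in single-color dots the way EVFs are,
# only the horizontal count would triple: each pixel is an R+G+B stripe trio.
def dot_resolution(width_px: int, height_px: int) -> tuple[int, int]:
    return width_px * 3, height_px

print(dot_resolution(1920, 1080))    # (5760, 1080) -- "FHD" in dots
print(dot_resolution(4096, 2160))    # (12288, 2160) -- DCI 4K in dots
```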
 

SteveC

R5
CR Pro
Sep 3, 2019
2,315
2,175
Then you need to start counting all three dots that make up a screen like a computer monitor as well. 4K is really 12K. 8K is really 24K, and so on.
Not really (even following their "logic"), because "4K" is a reference to the number of pixels in the horizontal dimension of the display (either 3840 or 4096, depending on exactly what kind of 4K it is). Tripling the total number of pixels would not triple the horizontal pixel count; it would add roughly 73.2 percent (a factor of √3) to it.
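That ~73.2 percent is √3 minus one: tripling the total pixel count at a fixed aspect ratio scales each linear dimension by √3. A quick check:

```python
import math

# Tripling the total pixel count at a constant aspect ratio multiplies
# each linear dimension (including the horizontal count) by sqrt(3).
scale = math.sqrt(3)
print(round((scale - 1) * 100, 1))   # 73.2 (percent added horizontally)
print(round(3840 * scale))           # 6651 -- nowhere near 3 x 3840
```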
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
3,657
2,142
Not really (even following their "logic"), because "4K" is a reference to the number of pixels in the horizontal dimension of the display (either 3840 or 4096, depending on exactly what kind of 4K it is). Tripling the total number of pixels would not triple the horizontal pixel count; it would add roughly 73.2 percent (a factor of √3) to it.

Since most monitors actually have R,G, and B rectangles that are three times as tall as they are wide, then it wouldn't change the vertical resolution at all. But the horizontal resolution would be tripled. Each monitor "pixel" is a square comprised of three vertical 3:1 rectangles, one red, one green, and one blue.

1623867626310.png


FHD would be 5,760x1080 rather than 1920x1080.

4K would be 12,288x2160 instead of 4096x2160.
 
Last edited:

Toglife_Anthony

Hit the G.A.S. & pump the brakes at the same time!
Apr 2, 2020
26
28
Moving on, at a snail's pace. In 2020, MILCs comprised 55% of ILC sales, DSLRs comprised 45%. In Jan-Apr of 2021, MILCs comprised 56% of ILC sales, DSLRs comprised 44%. At that rate of change, the fully mirrorless future will arrive in a few short decades.
Seems a little silly to make comparisons to a pandemic year when the world basically fell flat, don't cha think? ;-) BTW, what's your source for those numbers?
 

usern4cr

R5
CR Pro
Sep 2, 2018
1,153
1,901
Kentucky, USA
They're not even single color sensing dots. All three have some sensitivity to the entire visible spectrum. Putting a green color filter over a sensel on a digital sensor does not eliminate all red and blue light from entering it, any more than putting a green filter over a lens eliminates all red and blue light from reaching black and white film, rendering any blue or red object in the scene pure black in the photo. It just makes the blue and red things look darker than the green things that are the same brightness in the actual scene.

ALL THREE colors have to be interpolated when demosaicing is done. Not only because of the overlapping way each filtered pixel is sensitive to the rest of the visible spectrum, but also because different light sources emit different parts of the visible spectrum in differing amounts. They all have to be interpolated as well because the three colors that most Bayer masks use are not the same three colors that RGB monitors emit, either. The three colors of Bayer mask filters are closer to the three colors to which each type of our retinal cones are most sensitive: a slightly violet blue, a slightly yellow green, and a slightly green yellow, though most Bayer masks use an orangish yellow color instead of the lime color to which our L cones are most sensitive.

View attachment 198366

We named the cones "red", "green", and "blue" decades before we managed to measure the exact response of each type of cone to various wavelengths of light.

The reason trichromatic color masks on cameras and trichromatic reproduction systems (monitors or three color printing) work is because our retinas and brains do the same thing. Our Medium wavelength and Long wavelength cones have a very large amount of overlapping sensitivity. Our short wavelength cones have less overlap with the M and L cones, but there is still some overlap there. If there were no overlapping sensitivity between the S, M, and L cones our brains could not create colors. Colors do not exist in the electromagnetic spectrum. There are only wavelengths and frequencies. It's our brains that create color as a response to certain wavelengths/frequencies in the electromagnetic spectrum.


View attachment 198367

The "red" filtered part of this Sony IMX249 sensor peaks at 600nm, which is what we call "orange", rather than 640nm, which is the color we call "Red."

All of the cute little drawings on the internet notwithstanding, the actual colors of a Bayer filter are not Red (640nm), Green (530nm), and Blue (465nm).

View attachment 198368

Actual image of an actual Bayer mask and a sensor. Part of the mask has been peeled away.
Thanks for the detailed post. I did know the QE curves of the eyes as you've shown. But I didn't know what the QE of the sensor color filters were in general. I'm surprised (and impressed) that they are similar to that of the eyes.
 
  • Like
Reactions: Michael Clark

usern4cr

R5
CR Pro
Sep 2, 2018
1,153
1,901
Kentucky, USA
Then you need to start counting all three dots that make up a screen like a computer monitor as well. 4K is really 12K. 8K is really 24K, and so on.

FHD is 5,760 x 1080 (because the dots on most monitors are three times as tall as they are wide).
It would be nice if everyone had to use a clear and "honest" standard to make their marketing claims:
A "pixel" would be all 3 colors on top of each other (in the same space), without interpolation (e.g. a Foveon sensor, but not applicable to Bayer style sensors or emitters).
A "dot" would be 1 color element (without the other 2 color elements at that location, which is applicable to Bayer style sensors or emitters).
An "interpolated pixel" would be a dot with 2 interpolated colors added in software (so a 10M Dot Bayer array could output a 10M InterpolatedPixel image file).
But I know this won't happen, and we just have to live with whatever companies want to claim.

It would also be nice if a "1 inch sensor" had a diagonal measurement of 1". It doesn't - it's not even close. How that is possible to advertise is beyond me!
 

SteveC

R5
CR Pro
Sep 3, 2019
2,315
2,175
Since most monitors actually have R,G, and B rectangles that are three times as tall as they are wide, then it wouldn't change the vertical resolution at all. But the horizontal resolution would be tripled. Each monitor "pixel" is a square comprised of three vertical 3:1 rectangles, one red, one green, and one blue.

View attachment 198376

FHD would be 5,760x1080 rather than 1920x1080.

4K would be 12,288x2160 instead of 4096x2160.
I was thinking of the camera sensor. Good catch.
 
  • Like
Reactions: Michael Clark

dtaylor

Canon 5Ds
Jul 26, 2011
1,706
1,266
To cut a long story short, you've convinced yourself that EF and optical are better and will argue until death that you're right.
To cut a long story short, you're very unhappy that I pointed out some of your assumptions about how things work are dead wrong. So you're going to misrepresent my position and the things I've said, and resort to personal attacks and fallacies.

Nothing more needs to be said to you until you grow up.
 

dtaylor

Canon 5Ds
Jul 26, 2011
1,706
1,266
The EVF and LCD screen are physically composed of single color dots, not pixels.
So are monitors. But they are reported accurately in pixels since it takes an R, G, and B display element or 'dot' to form a single image element. This is how EVFs and rear LCDs should be reported so the comparison is consistent with monitors, phones, etc.

The commonly used Bayer sensor is composed of single color receptors, which are single color sensing dots.
This is false. A sensor element senses luminance and is filtered to be biased towards R, G, or B. There is considerable overlap in wavelength sensitivity to reduce noise and facilitate better color detection. The more important point to this conversation is that these screens are not reported in 'dots' because of Bayer. One 'dot' does not correspond to one sensor element, nor could it. Screen dots are not arranged the same way as a Bayer sensor color filter array. Screen dots are R, G, or B only while sensor pixels have wavelength overlap. And sensor data is fully demosaiced before being displayed.

The camera companies with Bayer sensors call a 20MDot sensor a 20MPixel sensor to increase sales.
Also false. Each sensor element produces an image element...even if it borrows some color data from neighbors to do so...and the observable resolution of a 20mp sensor is consistent with a 20mp sampling rate across most of its color gamut. Put another way, if you photograph a resolution chart and work backwards using Nyquist, you can predict the MP of the sensor to within a couple percentage points. If what you claimed were true your final result would be 1/3rd the published MP. (The only time this is not true is with a monochromatic test designed to hit a peak filtration point in the CFA. In such a test you end up with 1/4 or 1/2 the expected sampling rate because you've designed a test to exploit the CFA and effectively 'mask' some of the elements.)

By analogy, if sensor manufacturers wanted to play the same game as EVF manufacturers, they would say that since each pixel uses data from neighboring pixels, a 20mp sensor is really an 80mp one. (Actually 180mp, since modern demosaicing algorithms look at 8 neighbors and not a simple quad.)

You (and much of the public) can feel free to think a 20MDot sensor is really 20MPixels - enjoy.
Nyquist is observable, repeatable science. You can choose to ignore it if you like, but don't expect other people to ignore it with you.
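The working-backwards-from-Nyquist check reads like this as a sketch (the 3:2 aspect ratio and 20 MP figure are illustrative, matching the example in the post):

```python
# Nyquist: the finest resolvable detail is half the sampling rate, so a
# resolution chart showing N line pairs per picture height implies a
# vertical pixel count of about 2 * N.
def implied_megapixels(lp_per_picture_height: float, aspect: float = 1.5) -> float:
    vertical_px = 2 * lp_per_picture_height
    horizontal_px = vertical_px * aspect
    return vertical_px * horizontal_px / 1e6

# A 20 MP 3:2 sensor is roughly 5477 x 3651, so its Nyquist limit is
# about 1826 lp/ph; working backwards recovers the published MP count.
print(round(implied_megapixels(1826)))   # 20
```

If the "dots, not pixels" claim were right, the same measurement would come out at about a third of the published figure instead.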
 

neuroanatomist

I post too Much on Here!!
CR Pro
Jul 21, 2010
25,005
2,922
Seems a little silly to make comparisons to a pandemic year when the world basically fell flat, don't cha think? ;-) BTW, what's your source for those numbers?
Perhaps. But going back to 2019 paints a very different picture, with MILCs comprising 46% and DSLRs comprising 54% of ILCs. In 2018 it was 38% MILC and 62% DSLR. Of course, that supports keeping DSLRs around even more strongly, right? The pandemic does complicate interpretation, certainly. Perhaps everyone was hit hard (total ILCs were 10M in 2018, 8M in 2019, and 5M in 2020), but the DSLR-buying market was hit harder? Average unit price of DSLRs is lower, so that actually sort of makes sense (i.e. people with less disposable income prior to the pandemic were hit harder by the economic changes).

Data are from CIPA (cipa.jp).
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
3,657
2,142
It would be nice if everyone had to use a clear and "honest" standard to make their marketing claims:
A "pixel" would be all 3 colors on top of each other (in the same space), without interpolation (e.g. a Foveon sensor, but not applicable to Bayer style sensors or emitters).
A "dot" would be 1 color element (without the other 2 color elements at that location, which is applicable to Bayer style sensors or emitters).
An "interpolated pixel" would be a dot with 2 interpolated colors added in software (so a 10M Dot Bayer array could output a 10M InterpolatedPixel image file).
But I know this won't happen, and we just have to live with whatever companies want to claim.

It would also be nice if a "1 inch sensor" had a diagonal measurement of 1". It doesn't - it's not even close. How that is possible to advertise is beyond me!

All three RGB values have to be interpolated, not just two of them. Mainly because the colors of the filters are not the same colors as those emitted by RGB screens, but also to compensate for light sources with different spectral distributions.
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
3,657
2,142
It would also be nice if a "1 inch sensor" had a diagonal measurement of 1". It doesn't - it's not even close. How that is possible to advertise is beyond me!

The 1" sensor is a legacy thing. Back in the early days of television, sensors were at the ends of vacuum tubes. A 1" sensor is the size that, including all of the needed components around the edges of the actual sensing surface, would fit inside a 1" glass vacuum tube.
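The mismatch is easy to put numbers on (13.2 × 8.8 mm is the commonly published active area of a 1"-type sensor):

```python
import math

# Active area of a "1-inch type" sensor, in mm.
width_mm, height_mm = 13.2, 8.8
diagonal_mm = math.hypot(width_mm, height_mm)

print(round(diagonal_mm, 2))          # 15.86 mm
print(round(diagonal_mm / 25.4, 2))   # 0.62 -- the "1 inch" diagonal, in inches
```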
 
  • Like
Reactions: stevelee

AlanF

Stay at home
CR Pro
Aug 16, 2012
8,230
10,236
The 1" sensor is a legacy thing. Back in the early days of television sensors were at the end of vacuum tubes. A 1" sensor is the size, including all of the needed things around the edges of the actual sensing surface, that would fit inside a 1" glass vacuum tube.
True, just as 2/3" and 1/3" sensors are smaller than those nominal dimensions. Interestingly, the crop factor of the 1" sensor relative to Canon APS-C (1.68x) is close to that of APS-C to FF (1.6x).
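Those crop factors fall straight out of the sensor diagonals (a quick sketch; the dimensions are the commonly published ones):

```python
import math

# Diagonals (mm) of full frame, Canon APS-C, and 1"-type sensors.
full_frame = math.hypot(36.0, 24.0)   # ~43.27
canon_apsc = math.hypot(22.3, 14.9)   # ~26.82
one_inch = math.hypot(13.2, 8.8)      # ~15.86

print(round(full_frame / canon_apsc, 2))   # 1.61 -- FF to APS-C
print(round(canon_apsc / one_inch, 2))     # 1.69 -- APS-C to 1"
```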
 

privatebydesign

I post too Much on Here!!
CR Pro
Jan 29, 2011
10,209
5,265
True, like 2/3" and 1/3" sensors are smaller than those dimensions. Interestingly, the crop factor of the 1" sensor relative to Canon APS-C (1.68x) is close to that of APS-C to FF (1.6x).
The blue rectangle is the dimension (relative) of a FF sensor. I was kind of excited when the G series cameras were getting 1” sensors then realized they could never compete with the M cameras I already had.


0E9CFCA4-E3B9-4265-AE22-0EB9FB22CD98.jpeg