Update: The Canon EOS R3 will be officially announced on June 29th

Danglin52

Wildlife Shooter
Aug 8, 2018
314
340
First, and this is absolutely no insult, if you knew which was which, a subconscious assumption may have biased your analysis. That's why real science is based on blind or even double-blind testing: make people rate the photos NOT knowing which camera took which. And maybe even have the person making the test not know which is which, so they don't fall into some unspoken subconscious ordering.

Second, Canon has always had sensors with the wiring on the front of the sensor (front-side illuminated). A sensor with double the MP has twice the wires interfering with light, so it is NOT true that a hi-MP front-side sensor will pick up as many TOTAL photons as a lo-MP sensor. In contrast, the R3 (and, one assumes, all future Canons) is back-side illuminated. The entire front of the sensor is sensitive to photons, whether there's 1 big pixel or a billion. In this case, while every single pixel in the hi-MP sensor will have more noise, the scene as a whole won't, so down-sampling (especially downsampling by an integer factor, like exactly 4 pixels to 1) should give an identical image with identical noise characteristics as well as identical resolution.
I actually shared the files with a couple of other folks and just labeled them the A & B versions. Certainly not a scientific measurement, but everyone picked the R6 shots on an iMac 5K monitor. I am not a sensor geek, but I am aware of the difference between the sensors and the potential advantages.
 
Last edited:
  • Like
Reactions: 1 user
Upvote 0

Danglin52

Wildlife Shooter
Aug 8, 2018
314
340
It's a property of math, relied upon in statistics.

Say we're imaging something half-reflective of light, and it's so dark that even a pure white object will only yield one photon in each pixel of a hi-MP sensor. So our gray object should yield half a photon per pixel. NO single pixel will have the correct answer of half-reflective, because there's no such thing as a half-photon. Instead they'll either have twice the real value, or a zero value. Noise is ±100%, basically! (This is like: we know the odds when flipping a coin are 50% either way, but if we then just flip a coin one time, we cannot get 50%, we only get 100% heads or 0% heads.)

Now, take 4 pixels of this hi-MP sensor, either getting 0 photons (black) or 1 (white), and sum them. Consider receipt of a photon as a coin flip. Giving 0 for no photon, and 1 for a photon, the equally likely possibilities are:

0000
0001
0010
0011
0100
0101
0110
0111
1000
1001
1010
1011
1100
1101
1110
1111

Now you'll see a total of:

0 -- 1 time in 16. We're reporting this larger pixel to be black, 100% off its true value.
1 -- 4 times in 16. We're reporting this larger pixel to be 25, 50% off its true value.
2 -- 6 times in 16. We're reporting this larger pixel to be 50, its true value.
3 -- 4 times in 16. We're reporting this larger pixel to be 75, 50% off its true value.
4 -- 1 time in 16. We're reporting this larger pixel to be 100, 100% off its true value.

Instead of being off the correct value of .5 by 100% as before, now we're off by: 100 * 1/16 + 50 * 4/16 + 0 * 6/16 + 50 * 4/16 + 100 * 1/16 = 37.5%.

Now, take a lo-MP sensor with 1/4 the resolution. Its pixels are big enough they'll get 4 photons from a white object. Our gray object should return 2 photons. The math works identically to the above five cases and their chances of happening, giving the same 37.5% noise.

So, back to sensors. An 80MP back-side sensor should capture as many photons in total as a 20MP sensor, though that means only 1/4 the photons per pixel and thus far higher noise per pixel. But then average four neighboring pixels together and the noise level comes down to exactly the same as the 20MP sensor.
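
To make the coin-flip argument concrete, here is a quick simulation sketch in Python (purely an illustration; the trial count and random seed are arbitrary, and real sensors add read noise on top of the photon statistics):

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 1_000_000

# Coin-flip model: each small hi-MP pixel catches 0 or 1 photon with
# probability 0.5 (the half-reflective gray object in near darkness).
small = rng.integers(0, 2, size=(n_trials, 4))   # 4 small pixels per big pixel
binned = small.sum(axis=1) / 4.0                 # downsample 4:1, value in [0, 1]

# Lo-MP sensor: one big pixel covering the same area gets up to 4 photons,
# which is mathematically the same situation.
big = rng.integers(0, 2, size=(n_trials, 4)).sum(axis=1) / 4.0

true_value = 0.5
print(f"binned hi-MP pixels: average error {np.abs(binned - true_value).mean() / true_value:.1%}")
print(f"single lo-MP pixel:  average error {np.abs(big - true_value).mean() / true_value:.1%}")
# Both print ~37.5%, matching the hand calculation above.
```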
This one I will have to give some consideration and research since I am not a sensor geek and hated statistics. Reminds me too much of the discussion I had with the design engineers when we bought the Image Sensor Division from Kodak. I have no issues admitting I don't know, which was a surprisingly effective approach in business.
 
Last edited:
  • Like
Reactions: 1 user
Upvote 0
Jul 21, 2010
31,088
12,851
If the RAW file does not contain any interpolated colours, that would mean that each pixel only contains either a red, a blue or a green colour value. Doesn't that mean that the whole information contained in one red, one blue and one of the green pixels could be saved in a single RGB pixel? I think that depends on how the interpolation works. Is each colour interpolated separately, or do the blue pixels, for example, help interpolate the red pixels?
Reducing the data from adjacent pixels with different color masks would mean a loss of spatial resolution. Also, following that processing you would no longer have a RAW file.

A simple algorithm using only adjacent photosites would interpolate color poorly. Current RAW conversion algorithms use many surrounding photosites of all three color masks to better estimate the color value of each pixel. Demosaicing is more than just simple interpolation among immediately adjacent pixels.
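
As a rough sketch of what that "simple algorithm using only adjacent photosites" might look like, here is a toy bilinear interpolation in Python (the function name and synthetic mosaic are just for illustration; real converters use far more sophisticated demosaicing than this):

```python
import numpy as np
from scipy.ndimage import convolve

def naive_demosaic_rggb(bayer):
    """Toy bilinear demosaic of an RGGB mosaic - the 'simple algorithm'
    discussed above, not what any real converter actually does."""
    h, w = bayer.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    def interp(channel_mask):
        # Weighted average of the available same-colour samples
        # in each pixel's 3x3 neighbourhood.
        k = np.array([[0.25, 0.5, 0.25],
                      [0.5,  1.0, 0.5],
                      [0.25, 0.5, 0.25]])
        vals = convolve(bayer * channel_mask, k, mode='mirror')
        wts = convolve(channel_mask, k, mode='mirror')
        return vals / wts

    return np.dstack([interp(r_mask), interp(g_mask), interp(b_mask)])

# Example with a tiny synthetic 4x4 mosaic
mosaic = np.arange(16, dtype=float).reshape(4, 4)
rgb = naive_demosaic_rggb(mosaic)
print(rgb.shape)  # (4, 4, 3): full spatial resolution, interpolated colours
```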
 
  • Like
Reactions: 1 user
Upvote 0

entoman

wildlife photography
May 8, 2015
1,998
2,438
UK
Yes, at the current rate of market change that should only take a decade or so. Did you know that the best-selling ILC in Japan for the last two months was a Canon DSLR?
..... well I succumbed to the irresistible R5, but only because Canon didn't produce a DSLR successor to the 5DS and 5DMkiv.

I've owned the R5 for a couple of months, shooting about 8000 shots so far, of birds (incl. BIF), mammals, insects and landscapes. It's a fantastic camera, but I still greatly prefer the DSLR viewfinder experience...
 
  • Like
Reactions: 2 users
Upvote 0
Jul 21, 2010
31,088
12,851
..... well I succumbed to the irresistible R5, but only because Canon didn't produce a DSLR successor to the 5DS and 5DMkiv.

I've owned the R5 for a couple of months, shooting about 8000 shots so far, of birds (incl. BIF), mammals, insects and landscapes. It's a fantastic camera, but I still greatly prefer the DSLR viewfinder experience...
I have gotten used to my EOS R viewfinder but I do still prefer the 1D X. But the advantages of the R3 outweigh that preference, and even though I prefer the OVF, an EVF certainly has some advantages.
 
  • Like
Reactions: 1 users
Upvote 0
I have gotten used to my EOS R viewfinder but I do still prefer the 1D X. But the advantages of the R3 outweigh that preference, and even though I prefer the OVF, an EVF certainly has some advantages.
I think the image stabilization is a big point, especially if you have some older, non-stabilized lenses. The focusing accuracy and the ability to focus just a little bit further off center are huge. I do a lot of low-light people work, and the Live View focusing on my 5D IV has sold me on mirrorless.
 
  • Like
Reactions: 1 user
Upvote 0
Just my guess but a lot of professional sports shooters would prefer 40fps and 24mp (or even 45fps and 20mp) over 30fps and 30mp as capturing the exact crucial moment like a bat hitting a ball or a diver just entering the water is more important to them than a small gain in resolution which has no noticeable effect on the final image but does slow down your workflow.
If this is going to be an even higher-end sports camera, then frankly, the R3 was a waste of time. That development time and money could have been better spent elsewhere. If Canon is following their old model of development, as some have suggested, then the R1 will not be another sports camera.
 
  • Like
Reactions: 1 user
Upvote 0
It's a property of math, relied upon in statistics.

Say we're imaging something half-reflective of light, and it's so dark that even a pure white object will only yield one photon in each pixel of a hi-MP sensor. So our gray object should yield half a photon per pixel. NO single pixel will have the correct answer of half-reflective, because there's no such thing as a half-photon. Instead they'll either have twice the real value, or a zero value. Noise is ±100%, basically! (This is like: we know the odds when flipping a coin are 50% either way, but if we then just flip a coin one time, we cannot get 50%, we only get 100% heads or 0% heads.)

Now, take 4 pixels of this hi-MP sensor, either getting 0 photons (black) or 1 (white), and sum them. Consider receipt of a photon as a coin flip. Giving 0 for no photon, and 1 for a photon, the equally likely possibilities are:

0000
0001
0010
0011
0100
0101
0110
0111
1000
1001
1010
1011
1100
1101
1110
1111

Now you'll see a total of:

0 -- 1 time in 16. We're reporting this larger pixel to be black, 100% off its true value.
1 -- 4 times in 16. We're reporting this larger pixel to be 25, 50% off its true value.
2 -- 6 times in 16. We're reporting this larger pixel to be 50, its true value.
3 -- 4 times in 16. We're reporting this larger pixel to be 75, 50% off its true value.
4 -- 1 time in 16. We're reporting this larger pixel to be 100, 100% off its true value.

Instead of being off the correct value of .5 by 100% as before, now we're off by: 100 * 1/16 + 50 * 4/16 + 0 * 6/16 + 50 * 4/16 + 100 * 1/16 = 37.5%.

Now, take a lo-MP sensor with 1/4 the resolution. Its pixels are big enough they'll get 4 photons from a white object. Our gray object should return 2 photons. The math works identically to the above five cases and their chances of happening, giving the same 37.5% noise.

So, back to sensors. An 80MP back-side sensor should capture as many photons in total as a 20MP sensor, though that means only 1/4 the photons per pixel and thus far higher noise per pixel. But then average four neighboring pixels together and the noise level comes down to exactly the same as the 20MP sensor.
This is why we love CR forum.
 
  • Like
Reactions: 1 user
Upvote 0

unfocused

Photos/Photo Book Reviews: www.thecuriouseye.com
Jul 20, 2010
7,184
5,483
70
Springfield, IL
www.thecuriouseye.com
I have gotten used to my EOS R viewfinder but I do still prefer the 1D X. But the advantages of the R3 outweigh that preference, and even though I prefer the OVF, an EVF certainly has some advantages.
Based on using both the R and the R5, I think you’ll find the R3 viewfinder won’t be as noticeably different from a DSLR as the R viewfinder is. R5 seems less laggy and less problematic in bright sunlight.
 
  • Like
Reactions: 1 users
Upvote 0
Jul 21, 2010
31,088
12,851
The definition of RAW has changed. I’m not sure if any camera manufacturer outputs “pure RAW” anymore. It’s getting to the point where anything other than JPEGs or HEIFs will be considered to be RAW.
RAW is still RAW. The fact that commonly used RAW conversion software applications apply various corrections by default does not change the underlying data, which can be viewed in ‘pristine’ form with apps like RawTherapee and Rawnalyze.
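
For anyone who wants to look at the undemosaiced values directly from Python rather than via those apps, something along these lines should work (a sketch assuming the rawpy/LibRaw bindings are installed; the file name is just a placeholder):

```python
import rawpy  # Python bindings for LibRaw; pip install rawpy

# Placeholder file name - substitute one of your own CR2/CR3 files.
with rawpy.imread("IMG_0001.CR3") as raw:
    bayer = raw.raw_image_visible          # undemosaiced sensor values, one per photosite
    print(bayer.shape, bayer.dtype)        # e.g. uint16 counts
    print(raw.color_desc, raw.raw_pattern) # CFA layout, e.g. RGGB
    rgb = raw.postprocess()                # demosaic + white balance happen here,
                                           # not in the RAW data itself
```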
 
  • Like
Reactions: 1 user
Upvote 0

usern4cr

R5
CR Pro
Sep 2, 2018
1,376
2,308
Kentucky, USA
If the RAW file does not contain any interpolated colours, that would mean that each pixel only contains either a red, a blue or a green colour value. Doesn't that mean that the whole information contained in one red, one blue and one of the green pixels could be saved in a single RGB pixel? I think that depends on how the interpolation works. Is each colour interpolated separately, or do the blue pixels, for example, help interpolate the red pixels?
We've had previous posts in great detail that show the QE curves for each R, G, B filter. The R & G filtered dots share a lot of the same information, with some blue in them. The blue filter peak is further away but still shares some R & G information. The responses of the 3 filters are reported to be similar to the responses of the human eye. It is *not* true that the R G B dots are just pure R, G, and B information.

If your question is: can't 3 dots (R G B) be considered a single pixel? Well, sure (but that'd reduce the MP bragging rights by 3x). Also, the Bayer array has 2 G's for every R & B, so do you reduce 4 dots to 1 pixel or to 2 pixels? I still wish they'd call a 20MDot sensor just that, but they call it a 20MPixel sensor and do a pretty good job of interpolation.
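
For what it's worth, the "4 dots to 1 pixel" option is trivial to express (a toy sketch; quad_to_rgb is a made-up name, and actual in-camera binning is more involved than this):

```python
import numpy as np

def quad_to_rgb(bayer):
    """Collapse each RGGB 2x2 quad into one RGB pixel, averaging the two
    greens: '4 dots -> 1 pixel'. Quarter the pixel count, no interpolation."""
    r = bayer[0::2, 0::2]
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0
    b = bayer[1::2, 1::2]
    return np.dstack([r, g, b])

# Example with a synthetic 8x8 mosaic of 12-bit values
mosaic = np.random.default_rng(0).integers(0, 4096, size=(8, 8)).astype(float)
print(quad_to_rgb(mosaic).shape)  # (4, 4, 3): 64 'dots' become 16 RGB pixels
```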
 
Last edited:
  • Like
Reactions: 1 users
Upvote 0
I don't think they will ever give us the real RAW images. For example, there is one quite extreme transformation they apply before creating the RAW: the Bayer filter only lets through either red, green or blue light for each pixel. However, the RAW file already seems to contain an interpolated version of that real raw data that comes from the camera. Each colour is interpolated to neighbouring pixels to create the final image. So combining two green, one red and one blue pixel into a single pixel of an sRAW image would even be more accurate than the big RAW image that contains a lot of interpolated colour information. Even some noise reduction that can't be disabled is used to create the RAW file. So it is quite far away from real raw data.
Exactly! And some years ago both Nikon and Sony began to lossily compress parts of their RAW files, Sony in the highlights, and I don’t remember exactly what Nikon is doing. And then, Sony has offered 12-bit RAW files, down from the 14-bit capture. Even making a 16-bit RAW file isn’t giving the real data, since the files are really 14 bits. There’s a lot going on in RAW files these days, with some manufacturers putting lens corrections in those files that can’t be modified or removed later.

That’s why I mentioned “pure RAW” files earlier. Maybe manufacturers should just stop using the term RAW. And look at what Apple is doing with their Apple RAW files. It’s just a marketing term nowadays. It’s lost its original meaning.
Back in “the old days”, a friend and I used to beta test backs for a couple of companies. The RAW files would come out of the camera back into the computer, and then into their demosaicing software. That was a true RAW file. I don’t think any other files count as really being RAW.
 
Upvote 0
Yes, at the current rate of market change that should only take a decade or so. Did you know that the best-selling ILC in Japan for the last two months was a Canon DSLR?
If I remember correctly, this "best-selling" DSLR is one of the very cheapest. You should keep in mind that you can sometimes get new Canon DSLRs for under €300 (with lens and accessories), while the cheapest Canon mirrorless (DSLM) is twice as expensive, so of course the large group of budget-minded consumers still buys a lot of DSLRs.
 
Upvote 0

Bahrd

Red herrings...
Jun 30, 2013
252
186
Things are probably a lot more complicated than the above; however, does it mean that a higher-MP sensor would have worse low-light performance unless its technology, processing etc. can sufficiently compensate?
I assume you are 21+ and can read this explanation (a takeaway version: the more pixels, the larger the additional read-noise error ;))
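
A tiny numerical illustration of that takeaway, since read noise is added once per readout (the photon count and read-noise figure below are invented for the example, and real sensors don't have identical per-pixel read noise at every pixel size):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
signal = 400.0      # mean photons over the area of one big pixel
read_noise = 3.0    # electrons RMS per readout (made-up figure)

# One big pixel: one Poisson draw, one readout.
big = rng.poisson(signal, n) + rng.normal(0, read_noise, n)

# Four small pixels covering the same area: four readouts, then binned.
small = rng.poisson(signal / 4, (n, 4)) + rng.normal(0, read_noise, (n, 4))
binned = small.sum(axis=1)

print("big pixel   std: %.2f" % big.std())     # ~sqrt(400 + 3^2)   ~ 20.2
print("4-pixel bin std: %.2f" % binned.std())  # ~sqrt(400 + 4*3^2) ~ 20.9
# Shot noise is identical; the extra readouts add a (usually small) penalty.
```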
 
  • Like
Reactions: 1 user
Upvote 0
Jul 21, 2010
31,088
12,851
If I remember correctly this "best-selling" DSLR ist one of the very cheapest. You should keep in mind, that u can get new Canon DSLRs sometimes for under 300€ (with Lens and accessories), while the cheapest Canon DSLM is twice as expensive -> of course the big group of cheap consumers still buys a lot of DSLRs.
You should keep in mind that it’s always best to rely on facts and data; memory is prone to error, as is the case for you here.

The list to which I refer is for Japan and the Kiss X10 (250D) tops it. The second-highest Canon ILC on the list is the Kiss M (M50, the original not the MkII, though the latter is in the top ten as well).

Currently on amazon.co.jp:
  • Kiss X10 + 18-55mm — ¥82,500
  • Kiss M + 15-45mm — ¥68,800
A lot of DSLRs are being bought even though they cost more than a similar MILC. Another bogus claim debunked by real data.
 
  • Like
Reactions: 1 user
Upvote 0
Aug 7, 2018
598
549
It seems an sRAW file is already demosaiced in the camera and is more similar to a JPEG. Like a JPEG with 14 bits or so instead of 8 bits. It seems some sRAWs even have a similar file size to the full RAW, although they contain less image data. This article also says that hardly any software can work with sRAW files. So it seems it really is not a good idea to use it.
 
Upvote 0
Jul 21, 2010
31,088
12,851
It seems an sRAW file is already demosaiced in the camera and is more similar to a JPEG. Like a JPEG with 14 bits or so instead of 8 bits. It seems some sRAWs even have a similar file size to the full RAW, although they contain less image data. This article also says that hardly any software can work with sRAW files. So it seems it really is not a good idea to use it.
sRAW is not RAW. If you’re interested in a more in-depth examination of Canon’s sRAW and mRAW formats (with a short description of RAW to start, for comparison), see the excellent article by Doug Kerr.
 
  • Like
Reactions: 1 user
Upvote 0