Another mention of a 70+ megapixel EOS R camera

Don Haines

Beware of cats with laser eyes!
So the EOS R was Canon's answer to the a7 III and we won't be seeing a REAL pro body? Just a budget turd and a 75 MP camera that's sure to cost over $4K that no one is asking for or wants? The 5Ds sold next to nothing. What makes Canon think a mirrorless version of it will sell any better? Where is the 30-35 MP pro mirrorless version of the 5DIV and the 20-25 MP pro mirrorless version of the 1DX II? Canon is a joke.
There won’t be one “pro” body, there will be several high end bodies.

Expect a high megapixel body, a good wedding shooter body (like 5D4), and a sports body (like the 1DX2). Obviously, they are not all being released at the same time.

Note that sales volumes for the lower-end bodies should be far higher, and some who buy the lower-end units will then go on to buy higher-end ones. For sales reasons, it makes sense to release the lower-end units first.

Remember too, these are rumours. It is rumoured that a high megapixel body is coming. That does not mean it is next, or even that it will arrive at all.

P.S. when you use words like turd and joke in a post you essentially remove yourself from a reasoned discussion and invite attacks.
 
I've read that the ADCs aren't linear, in which case that's probably not it. But I couldn't say for certain.

I’m not sure they’d need to be linear. It would be enough if they are not matched to a sensor.

That would allow multiple cameras to use similar circuit designs and bills of material, and would simplify the tuning process.
Let’s say the cameras can use their full well capacity. If so, at the base ISO setting, shouldn’t the 5Div (larger wells) take longer to overexpose than the 5Ds (smaller wells), all else being equal?

It doesn’t; they apply the speed ratings such that full exposure is roughly the same between models, meaning that for a given focal-plane exposure and the same sensor size, the same photon count converts to charge.
 
Fair enough, but you could equally well make a histogram and highlight/lowlight warnings based on the RAW data, couldn't you?

Just because the CPU doesn't show the user that data on a graph doesn't mean the CPU doesn't have access to it, does it?
Who is ‘you’? I can’t. Canon could, if they chose to do so. I wouldn’t hold my breath on that one... RAW files have been around for a long time, and histograms are still based on JPEGs.
 
What makes you think it's not already done this way? How do you think it's done, if not this way?
As stated above, the review image/histogram/highlight warning are all based on the processed data (8-bit converted, most in-camera settings applied), not the RAW image/stream.
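
To make the distinction concrete, here is a minimal desktop sketch (Python/NumPy with a synthetic mosaic standing in for real sensor data; the white balance gain and gamma are illustrative assumptions, not Canon's actual pipeline):

```python
import numpy as np

# Hypothetical 14-bit Bayer mosaic (RGGB), values 0..16383.
rng = np.random.default_rng(0)
raw = rng.integers(0, 2**14, size=(4000, 6000), dtype=np.uint16)

# Per-channel RAW histograms straight off the mosaic: no white balance,
# no tone curve, no 8-bit conversion.
r  = raw[0::2, 0::2]   # R sites   (assumed RGGB layout)
g1 = raw[0::2, 1::2]   # G1 sites
g2 = raw[1::2, 0::2]   # G2 sites
b  = raw[1::2, 1::2]   # B sites
raw_hist_r, _ = np.histogram(r, bins=256, range=(0, 2**14))

# What the camera actually shows: a histogram of the processed 8-bit
# preview, after white balance, a tone curve, and picture style.
def to_jpeg_preview(channel, wb_gain=2.0, gamma=1/2.2):
    x = np.clip(channel.astype(np.float64) * wb_gain / (2**14 - 1), 0, 1)
    return (x ** gamma * 255).astype(np.uint8)

jpg_hist_r, _ = np.histogram(to_jpeg_preview(r), bins=256, range=(0, 256))
# The two can disagree about clipping: WB gains and the tone curve can push
# the preview to 255 while the RAW channel still has headroom.
```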
 
Why do you think those have any importance to the camera's internal methodology? I think those are just features on the periphery built for you the photographer, not for the camera's internal processing. What I suggest might be less than 20 lines of code. You wouldn't say OMG, we're doing this whole JPG conversion and making a histogram for the user, so we have to use that histogram, even though it totally doesn't aid us in figuring out how to expose to maximize detail captured, and even though the tiny bit of extra code that would do the job exactly could be written while waiting for the bus.
What are you arguing here? Of course Canon could provide a RAW histogram, as I mentioned several posts back. They could have implemented that feature at any time, as I also mentioned several posts back. But they haven’t...as I mentioned several posts back. Those are the facts. If your point is that Canon should give us a RAW histogram option, I’d certainly use one if they did. But I’m not going to hold my breath waiting for one, and I would not recommend that you do so, either (as...wait for it...I mentioned several posts back).
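
For what it's worth, the naive version of the check really is short on a desktop; a rough sketch (Python purely for illustration, with an assumed 14-bit saturation point; nothing here reflects how DIGIC firmware is actually structured, a point taken up below):

```python
import numpy as np

SAT = 2**14 - 1   # assumed 14-bit saturation level

def raw_clipping_fraction(raw, margin=16):
    """Fraction of photosites at or within `margin` counts of saturation."""
    return np.count_nonzero(raw >= SAT - margin) / raw.size
```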
 

Ozarker

Love, joy, and peace to all of good will.
Why do you think those have any importance to the camera's internal methodology? I think those are just features on the periphery built for you the photographer, not for the camera's internal processing. What I suggest might be less than 20 lines of code. You wouldn't say OMG, we're doing this whole JPG conversion and making a histogram for the user, so we have to use that histogram, even though it totally doesn't aid us in figuring out how to expose to maximize detail captured, and even though the tiny bit of extra code that would do the job exactly could be written while waiting for the bus.
Then write the code. You should be able to knock out those 20 lines in a few minutes. Let us know how it works when you are done. Harry could probably help you. Honestly, a tool like that would have been more helpful in the film era. Personally, perfect shadow detail ain't real high on my wish list. Knock yourself out. I didn't realize we had so many coders with so much knowledge around here.
 

Don Haines

Beware of cats with laser eyes!
Then write the code. You should be able to knock out those 20 lines in a few minutes. Let us know how it works when you are done. Harry could probably help you. Honestly, a tool like that would have been more helpful in the film era. Personally, perfect shadow detail ain't real high on my wish list. Knock yourself out. I didn't realize we had so many coders with so much knowledge around here.
BTW, a DIGIC processor does not run an interpreter; it requires code that has been compiled into machine language. Your "20 lines of code" becomes quite large at the machine level. Plus, you are going to need a huge amount of memory for the 70+ megaword array holding that image, and since this requires real-time operation you cannot interfere with the resources and CPU cycles required by other processes.

It also takes a lot more computing power to analyze a 70+ megapixel RAW file for histograms and zebras than it does to analyze a 1 megapixel JPG, so not only does he have to write the code and determine whether it reacts badly to other code, he will also have to somehow find more CPU cycles... a lot more!

There is always a reason why things are done the way that they are.
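
(For scale: the usual way image pipelines bound that cost is to histogram a strided subsample rather than every photosite. A desktop sketch of the idea, with no claim that DIGIC has even this much to spare:)

```python
import numpy as np

rng = np.random.default_rng(0)
raw = rng.integers(0, 2**14, size=(6826, 10240), dtype=np.uint16)  # ~70 MP stand-in

sub = raw[::8, ::8]   # every 8th row and column: ~1.1 MP, 1/64 of the data
hist, _ = np.histogram(sub, bins=256, range=(0, 2**14))
# The histogram shape converges quickly under subsampling, and blown-highlight
# regions are rarely smaller than 8x8 photosites, so little is missed.
```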
 
BTW, a DIGIC processor does not run an interpreter; it requires code that has been compiled into machine language. Your "20 lines of code" becomes quite large at the machine level. Plus, you are going to need a huge amount of memory for the 70+ megaword array holding that image, and since this requires real-time operation you cannot interfere with the resources and CPU cycles required by other processes.

It also takes a lot more computing power to analyze a 70+ megapixel RAW file for histograms and zebras than it does to analyze a 1 megapixel JPG, so not only does he have to write the code and determine whether it reacts badly to other code, he will also have to somehow find more CPU cycles... a lot more!

There is always a reason why things are done the way that they are.

I’m not sure there is a huge amount of value in a raw histogram. I don’t think I would change my shooting style much. Using the raw data would only affect extreme exposures. It wouldn’t make much difference in the midtones, so ETTR (or ETTL, I suppose) is the likely application.

Since people who shoot raw by definition develop after the fact, a programmatically lighter and less data-intensive method might be an automated ETTR function: flag the hottest pixel in each channel after quantization, and automagically back the exposure off a fraction. No need to read the whole file and map the distribution.
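
A sketch of that idea (desktop Python, obviously not DIGIC firmware; the saturation point and headroom margin are assumptions, and as pointed out further down the thread, any correction could only be applied to the next exposure, not the one already quantized):

```python
import numpy as np

def ettr_suggestion_stops(raw, sat=2**14 - 1, margin=0.95):
    """Stops of exposure change that would put the hottest photosite just
    below saturation on the NEXT frame. Positive = room to push right."""
    hottest = int(raw.max())      # hottest pixel in any channel
    if hottest >= sat:
        return -1.0               # clipped: back off a stop and re-meter
    return float(np.log2(margin * sat / max(hottest, 1)))
```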
 

Michael Clark

Now we see through a glass, darkly...
I think writing to the card takes more time than in-camera processing; that's why in-camera downsampling may help increase the burst rate. Roughly speaking, say you have internal in-memory RAW data of size S and it takes time T to write. If we downsample and shrink it to size S/2, it will now take only T/2 to write. In-camera downsampling will take much less than T/2 to process, so in total, writing the downsampled data will take between T/2 and T. I guess it'll be closer to T/2.

If that were the case, then frame rates for higher MP cameras could be closer to the rates of lower MP cameras until the buffer is full. That does not happen.

In fact, the opposite is closer to the truth. A 7D Mark II can go at 10 fps for almost 30 raw frames, but once it bogs down, it is not much faster than a bogged-down 5Ds.

The 5Ds, on the other hand, goes only half as fast as the 7D Mark II prior to filling the buffer, then goes at about the same much slower rate.
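
A toy model shows why the sustained rate ends up write-limited (all numbers invented for illustration; none are measured Canon figures):

```python
frame_mb   = 65.0    # rough size of one ~50 MP 14-bit raw frame, MB
write_mbps = 130.0   # assumed sustained card write speed, MB/s
buffer_mb  = 1000.0  # assumed buffer size, MB
burst_fps  = 5.0     # rated burst rate

drain_per_frame = write_mbps / burst_fps                    # MB written per frame interval
frames_to_stall = buffer_mb / (frame_mb - drain_per_frame)  # ~26 frames before the stall
sustained_fps   = write_mbps / frame_mb                     # ~2 fps once the buffer is full
print(round(frames_to_stall), round(sustained_fps, 1))
```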


I'm not an expert on this, but downsampling 4:1 would mean you get four doses of "read noise", and while those often partially cancel, statistically speaking four of them are on average going to be twice as bad as one. So while 4 small pixels could theoretically capture the exact same photons as 1 big one, and the downsampling (let us say) perfectly averages those out to the right photon count, you'll still have more noise.

You are correct about read noise. But Poisson distribution ("shot") noise is just the opposite, since it is totally random. Averaging 4 pixels into one will always decrease the amount of shot noise. There would also be no difference in terms of off-sensor dark current noise.

At low ISOs, read noise is more of a concern. At high ISOs, though, shot noise is what dominates.
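
A quick Monte Carlo shows both effects (NumPy; the electron counts are assumed round numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
signal_e = 1000   # mean photons collected over one big-pixel area
read_e   = 3.0    # read noise per readout, e- RMS (assumed)

# One big pixel: one dose of shot noise, one dose of read noise.
big = rng.poisson(signal_e, n) + rng.normal(0, read_e, n)

# Four small pixels summed: same total photons, four doses of read noise.
small = (rng.poisson(signal_e / 4, (n, 4)) + rng.normal(0, read_e, (n, 4))).sum(axis=1)

print(big.std(), small.std())
# Both are dominated by sqrt(1000) ~ 31.6 e- of shot noise; the read-noise
# term grows from 3 to sqrt(4)*3 = 6 e-, which only shows in deep shadows.
```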


For both off-chip ADC sensors (Canon prior to 5D4) and on-chip ADC sensors (Nikon/Sony) the highest DR sensors are also the highest resolution ones. One can point out that Canon's 5Ds sensor isn't exactly the same generation as a 6D II, but with Nikon/Sony we see variable resolutions within the same generation.

For the record, larger pixels should result in higher DR not because of read noise but because of well capacity. It's odd that this is not the case right now.

Well capacity affects the highlight end of things. When most people speak of DR, what they really mean is pushing underexposed shadows. They couldn't care less about the highlights because they never get close to FWC if they are limiting exposure based on the jpeg preview "blinkies" that are 1-2 stops or more below FWC. If you don't push to the right to exploit the higher full well capacity of larger pixels when shooting, it makes no real difference when you start pushing the shadows in post instead of when you set your Tv and Av (unless the manufacturer has radically altered the "ISO rating" for the same amount of amplification).


That makes sense, if the old 1/FL was from the days of film or low-density FF sensors. A rule that worked for an 18mpx crop sensor won't necessarily work for 24mpx...


The 1/FL rule was for viewing an approximately 8.5X enlargement ratio: 36x24mm cropped to 30x24mm (for the aspect ratio), enlarged to an 8x10" display size. Pixel peeping a 50MP sensor at 100% on a 24" HD monitor is like looking at a piece of a 120x80" enlargement! Enlarge a 35mm negative to 120x80" and you'll see a lot of blur you couldn't at 8x10".
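
The arithmetic, for anyone checking (the ~72 ppi effective display density is an assumption that reproduces the 120x80" figure):

```python
sensor_w_mm = 30.0                # 36x24 mm cropped to the 8x10 aspect ratio
print(10 * 25.4 / sensor_w_mm)    # enlargement to a 10"-wide print: ~8.5x

img_px = (8688, 5792)             # 5Ds-class 50 MP sensor dimensions
ppi = 72                          # assumed effective display density
print(img_px[0] / ppi, img_px[1] / ppi)   # ~120" x 80" virtual print at 100%
```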


MRAW has never provided those options in the past (i.e., on the 5Ds/5Ds R, etc.), so why would they magically appear now?

It's still the reading of the sensor, and the initial processing of 75 MP, that is the problem.

There's zero in the way of evidence that Canon can or will do a higher-speed MRAW version of a 70+ MP sensor.

BINGO!

Every single speed issue Canon has right now can be explained by slow sensor readout times. All of them.

Still frame rates for high rez sensors.
Full frame 4K video.
AI Servo tracking between each frame for mirrorless.

It's all about sensor readout speed at Canon right now. Accept it or move on.


Here is a half-baked thought: maybe they aren’t actually saturating the sensor at native response (e.g., “base ISO”) before they hit the quantization limit (2^14 in most cases).

Or maybe those who are obsessed with DR think the sensor is already saturated when the JPEG preview blinkies start flashing?
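
Rough numbers for that half-baked thought (the full-well capacity is an assumed round figure; real values vary by model and are not published):

```python
fwc_e = 60000          # assumed full-well capacity at base ISO, electrons
codes = 2**14          # 14-bit ADC levels
print(fwc_e / codes)   # ~3.7 e-/DN: the ADC range *can* span the full well
# Whether a given body actually maps saturation to code 16383, or leaves
# analog headroom below it, is a calibration choice we can't see from specs.
```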


But it's not a magnification. It's exactly what you said, a 1:1 view where 1 pixel from the image corresponds to 1 pixel on the screen. "Comp" option shows the 5DSr's image downsampled and D810's image as 1:1.

So you don't have to magnify a 17.14µm² pixel more than a 23.81µm² pixel to see both at the same size? What kind of sorcery is that?

So the EOS R was Canon's answer to the a7 III and we won't be seeing a REAL pro body? Just a budget turd and a 75 MP camera that's sure to cost over $4K that no one is asking for or wants? The 5Ds sold next to nothing. What makes Canon think a mirrorless version of it will sell any better? Where is the 30-35 MP pro mirrorless version of the 5DIV and the 20-25 MP pro mirrorless version of the 1DX II? Canon is a joke.

Yep. Just like we didn't see the 1D X Mark II and 5D Mark IV a few months after the 5Ds/5Ds R.


I’m not sure they’d need to be linear. It would be enough if they are not matched to a sensor.

That would allow multiple cameras to use similar circuit designs and bills of material, and would simplify the tuning process.
Let’s say the cameras can use their full well capacity. If so, at the base ISO setting, shouldn’t the 5Div (larger wells) take longer to overexpose than the 5Ds (smaller wells), all else being equal?

It doesn’t; they apply the speed ratings such that full exposure is roughly the same between models, meaning that for a given focal-plane exposure and the same sensor size, the same photon count converts to charge.

No, because those larger wells are also collecting photons at a higher rate for the same light density per unit area. If you've got cylindrical rain buckets and one has twice the diameter of four others, all will still fill at the same rate in terms of inches per hour. That's because the large bucket has four times as much rain falling into it, the same as it has four times the surface area and four times the volume per inch of height. But it will take the rain in all four of the smaller buckets to equal the volume of the water in the larger one.

The advantage of larger wells is that the randomness of photon arrival (Poisson distribution) is averaged out better the larger your sample size is.

In the rain bucket analogy, since water drops are not distributed perfectly evenly as they fall (just as photons are not), each of the four small buckets will end up with slightly different amounts of water. That difference is what we call "shot noise". But when we dump the water from the four small buckets into another large bucket, most of those differences average out. We will very likely see less deviation between the large bucket that collected rain directly and a large bucket filled from the four small ones than we see among the four small buckets themselves.
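
The bucket analogy is easy to simulate (the drop counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
mean_drops = 400                                 # expected drops per small bucket
small = rng.poisson(mean_drops, (100_000, 4))    # four small buckets, many showers

per_bucket_rel = small.std() / mean_drops        # ~sqrt(400)/400 = 5% per bucket
pooled = small.sum(axis=1)                       # dump all four into one big bucket
pooled_rel = pooled.std() / pooled.mean()        # ~sqrt(1600)/1600 = 2.5%
print(per_bucket_rel, pooled_rel)   # pooling four buckets halves the relative noise
```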
 

Michael Clark

Now we see through a glass, darkly...
What makes you think it's not already done this way? How do you think it's done, if not this way?

Because if you do it for the highlights, you'd also need to do it for the shadows, which means you'd have to process the entire image extremely flat and it would look like crap on the rear LCD. Then the other 98% of potential buyers besides the 2% DRone crowd would look at that crappy flat image on the LCD when they shoot three or four shots under the really crappy lighting at the trade show or on the sales floor at the camera store and say, "There's no way I'm buying a camera that takes pictures that look that crappy!"
 

Michael Clark

Now we see through a glass, darkly...
Why do you think those have any importance to the camera's internal methodology? I think those are just features on the periphery built for you the photographer, not for the camera's internal processing. What I suggest might be less than 20 lines of code. You wouldn't say OMG, we're doing this whole JPG conversion and making a histogram for the user, so we have to use that histogram, even though it totally doesn't aid us in figuring out how to expose to maximize detail captured, and even though the tiny bit of extra code that would do the job exactly could be written while waiting for the bus.

"the camera's internal methodology?"

What internal methodology would that be?

Metering?

How is the camera going to examine the raw histogram of an exposed shot when it is calculating Tv and/or Av and/or ISO before the shot is taken?

Analog amplification?

Are you suggesting the camera should somehow miraculously adjust the analog amplification of the sensor based on the data it reads after the analog information has been amplified and converted by the ADC?

ISO "override?"

So if I dial in my Tv, Av, and ISO in full manual exposure mode you want the camera to alter the analog amplification of the sensor so that the highlights are always just shy of saturation on every shot? Again, how will the camera adjust analog amplification of data it can't process until after it has already been amplified?
 

Michael Clark

Now we see through a glass, darkly...
I’m not sure there is a huge amount of value in a raw histogram. I don’t think I would change my shooting style much. Using the raw data would only affect extreme exposures. It wouldn’t make much difference in the midtones, so ETTR (or ETTL, I suppose) is the likely application.

Since people who shoot raw by definition develop after the fact, a programmatically lighter and less data-intensive method might be an automated ETTR function: flag the hottest pixel in each channel after quantization, and automagically back the exposure off a fraction. No need to read the whole file and map the distribution.

How can you "back off exposure in each channel after quantization" when the analog amplification, which is the only way to increase the SNR in the shadows, must occur before quantization? Once you do ADC, the SNR is locked in. Any adjustments to signal are also made to noise.
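
It's easy to demonstrate numerically that a digital push after the ADC moves signal and noise together (all values assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
# Shadow signal (shot noise) plus read noise, in electron-equivalent units.
shadow = rng.poisson(20.0, 100_000) + rng.normal(0, 3.0, 100_000)

snr_raw    = shadow.mean() / shadow.std()
snr_pushed = (shadow * 8).mean() / ((shadow * 8).std())   # +3 stop digital push
print(snr_raw, snr_pushed)   # identical: a digital push cannot add SNR
```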
 