Another mention of a 70+ megapixel EOS R camera

Don Haines

Beware of cats with laser eyes!
Jun 4, 2012
8,192
1,773
Canada
Then write the code. You should be able to knock out those 20 lines in a few minutes. Let us know how it works when you are done. Harry could probably help you. Honestly, a tool like that would have been more helpful in the film era. Personally, perfect shadow detail ain't real high on my wish list. Knock yourself out. I didn't realize we had so many coders with so much knowledge around here.
BTW, a DIGIC processor does not run an interpreter; it requires code that has been compiled into machine language. Your "20 lines of code" becomes quite large at the machine level, plus you are going to need a huge amount of memory for the 70+ megaword array holding that image, and since it requires real-time operation you cannot interfere with the resources and CPU cycles required by other processes.

It also takes a lot more computing power to analyze a 70+ megapixel RAW file for histograms and zebras than it does to analyze a 1 megapixel JPG file, so not only does he have to write the code and determine if it reacts badly to other code, but he will have to somehow find more CPU cycles.... a lot more!
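To put rough numbers on that, here's a minimal desktop-Python sketch (not DIGIC firmware; the array sizes are just stand-ins) of why histogram cost scales with pixel count:

```python
# Rough desktop-Python sketch (not DIGIC firmware): histogram cost is linear
# in pixel count, so a 70 MP 14-bit raw frame means ~70x the memory traffic
# and arithmetic of a ~1 MP 8-bit JPEG preview.
import numpy as np

def raw_histogram(raw14, buckets=256):
    """Histogram of 14-bit raw samples; every pixel has to be read once."""
    return np.bincount(raw14 >> 6, minlength=buckets)  # collapse 14-bit values into 256 bins

# Stand-in data sizes for comparison only.
jpeg_preview = np.random.randint(0, 256, size=1_000_000, dtype=np.uint16)      # ~1 MP, 8-bit
raw_frame    = np.random.randint(0, 2**14, size=70_000_000, dtype=np.uint16)   # ~70 MP, 14-bit

h_small = np.bincount(jpeg_preview, minlength=256)   # ~1 million reads
h_big   = raw_histogram(raw_frame)                   # ~70 million reads, ~140 MB touched
```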

There is always a reason why things are done the way that they are.
 
  • Like
Reactions: Michael Clark

3kramd5

EOS 5D MK IV
Mar 2, 2012
3,083
404
BTW, a DIGIC processor does not run an interpreter; it requires code that has been compiled into machine language. Your "20 lines of code" becomes quite large at the machine level, plus you are going to need a huge amount of memory for the 70+ megaword array holding that image, and since it requires real-time operation you cannot interfere with the resources and CPU cycles required by other processes.

It also takes a lot more computing power to analyze a 70+ megapixel RAW file for histograms and zebras than it does to analyze a 1 megapixel JPG file, so not only does he have to write the code and determine if it reacts badly to other code, but he will have to somehow find more CPU cycles.... a lot more!

There is always a reason why things are done the way that they are.
I’m not sure there is a huge amount of value in a raw histogram. I don’t think I would change my shooting style much. Using the raw data would only affect extreme exposures. It wouldn’t make much difference in the midtones, so ETTR (or ETTL, I suppose) is the likely application.

Since people who shoot raw by definition develop after the fact, a programmatically lighter and less data intensive method might be an automated ETTR function. Flag the hottest pixel in each channel after quantization, and automagically back exposure off a fraction. No need to read the whole file and map the distribution.
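Something like this sketch, to be concrete (the names, the 14-bit saturation point, and the 1/3-stop step are my assumptions, not Canon's):

```python
# Minimal sketch: check the hottest value in each channel after quantization
# and, if anything clipped, back the next exposure off a fraction of a stop.
import numpy as np

SAT = 2**14 - 1   # assumed 14-bit clipping point

def ettr_backoff(channels, backoff_stops=1/3):
    """channels: iterable of per-channel raw planes (numpy arrays).
    Returns an EV delta for the next frame: 0 if nothing clipped, negative otherwise."""
    hottest = max(int(c.max()) for c in channels)
    return -backoff_stops if hottest >= SAT else 0.0

# Example: a synthetic green channel with one clipped pixel triggers a -1/3 EV nudge.
g = np.zeros((100, 100), dtype=np.uint16)
g[0, 0] = SAT
print(ettr_backoff([g, g, g]))   # -0.333...
```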
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
1,155
578
I think writing to the card takes more time than in-camera processing; that's why in-camera downsampling may help increase the burst rate. Roughly speaking, say you have internal in-memory RAW data of size S and it takes time T to write. If we downsample and shrink it to size S/2, it will now only take T/2 to write. But in-camera downsampling will take much less than T/2 to process, so in total, writing the downsampled data will take between T/2 and T. I guess it'll be closer to T/2.
If that were the case, then frame rates for higher MP cameras could stay closer to the rates of lower MP cameras until the buffer is full. That does not happen.

In fact, the opposite is more true. A 7D Mark II can go at 10 fps for almost 30 raw frames, but once it bogs down it is not much faster than a bogged-down 5Ds.

The 5Ds, on the other hand, goes only half as fast as the 7D Mark II prior to filling the buffer, then goes at about the same much slower rate.


I'm not an expert on this, but downsampling 4:1 would mean you get four doses of "read noise," and while some of that cancels out, statistically speaking 4 of them on average are going to be twice as bad as 1. So while 4 small pixels could theoretically capture the exact same photons as 1 big one, and the downsampling (let us say) perfectly averages those out to the right photon count, you'll still have more noise.
You are correct about read noise. But Poisson distribution ("shot") noise is just the opposite, since it is totally random. Averaging 4 pixels into one will always decrease shot noise relative to the signal. There would also be no difference in terms of off-sensor dark current noise.

At low ISOs, read noise is more of a concern. At high ISOs, though, shot noise is what dominates.
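A quick back-of-the-envelope check of both points, with made-up numbers:

```python
# Back-of-the-envelope check (all numbers invented): binned read noise adds in
# quadrature, while binning improves shot-noise SNR.
import math

read_noise_small = 3.0                                   # e- RMS per small pixel (assumed)
read_noise_binned = math.sqrt(4) * read_noise_small      # independent noise adds in quadrature -> 6 e-

photons_small = 1000                                     # mean signal per small pixel (assumed)
snr_small  = photons_small / math.sqrt(photons_small)            # ~31.6
snr_binned = 4 * photons_small / math.sqrt(4 * photons_small)    # ~63.2: binning 4:1 doubles shot-noise SNR

print(read_noise_binned, snr_small, snr_binned)
```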


For both off-chip ADC sensors (Canon prior to 5D4) and on-chip ADC sensors (Nikon/Sony) the highest DR sensors are also the highest resolution ones. One can point out that Canon's 5Ds sensor isn't exactly the same generation as a 6D II, but with Nikon/Sony we see variable resolutions within the same generation.

For the record, larger pixels should result in higher DR not because of read noise but because of well capacity. It's odd that this is not the case right now.
Well capacity affects the highlight end of things. When most people speak of DR, what they really mean is pushing underexposed shadows. They couldn't care less about the highlights because they never get close to FWC if they are limiting exposure based on the jpeg preview "blinkies" that are 1-2 stops or more below FWC. If you don't push to the right to exploit the higher full well capacity of larger pixels when shooting, it makes no real difference when you start pushing the shadows in post instead of when you set your Tv and Av (unless the manufacturer has radically altered the "ISO rating" for the same amount of amplification).
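For a rough sense of scale (the numbers here are invented, not measurements): engineering DR is about log2 of full-well capacity over read noise, and the extra headroom of a bigger well only gets used if the exposure actually approaches saturation.

```python
# Invented numbers, just to illustrate the FWC point: engineering DR is roughly
# log2(full-well capacity / read noise). Bigger wells raise the top end, but
# that headroom only matters if exposure actually approaches saturation.
import math

def dr_stops(full_well_e, read_noise_e):
    return math.log2(full_well_e / read_noise_e)

print(dr_stops(60_000, 4.0))   # ~13.9 stops, hypothetical large-pixel sensor
print(dr_stops(30_000, 3.0))   # ~13.3 stops, hypothetical small-pixel sensor
```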


That makes sense, if the old 1/FL was from the days of film or low-density FF sensors. A rule that worked for an 18mpx crop sensor won't necessarily work for 24mpx...

The 1/FL rule was for viewing an approximately 8.5X enlargement ratio from 36x24mm cropped to 30x24mm (for the aspect ratio) to 8x10" display size. Pixel peeping a 50MP sensor at 100% on a 24" HD monitor is like looking at a piece of a 120x80" enlargement! Enlarge a 35mm negative to 120x80 and you'll see a lot of blur you couldn't at 8x10".
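Rough numbers behind those ratios, if you want to check them (the monitor pixel pitch here is an assumption):

```python
# Quick arithmetic behind the enlargement ratios above (display values assumed).
crop_width_mm  = 30.0                 # 36x24mm cropped to 30x24mm for the 8x10 aspect ratio
print_width_mm = 10 * 25.4            # long side of an 8x10" print
print(print_width_mm / crop_width_mm)             # ~8.5x enlargement, the basis of 1/FL

sensor_px_wide = 8688                 # e.g. a ~50 MP full-frame sensor
pixel_pitch_mm = 0.28                 # assumed pitch of a 24" 1080p monitor
print(sensor_px_wide * pixel_pitch_mm / 36.0)     # ~68x enlargement at a 100% view
print(sensor_px_wide * pixel_pitch_mm / 25.4)     # ~96" wide equivalent print
# The exact figure depends on your monitor's pixel pitch, but either way the
# 100% view corresponds to a print many feet across, not an 8x10.
```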


MRAW has never provided those options in the past, i.e., on the 5Ds / 5Ds R, etc., so why would they magically appear now?

It's still the reading of the sensor, and the initial processing of 75 MP, that is the problem.

There's zero in the way of evidence that Canon can or will do a higher-speed MRAW mode on a 70+ MP sensor.
BINGO!

Every single speed issue Canon has right now can be explained by slow sensor readout times. All of them.

Still frame rates for high rez sensors.
Full frame 4K video.
AI Servo tracking between each frame for mirrorless.

It's all about sensor readout speed at Canon right now. Accept it or move on.


Here is a half-baked thought: maybe they aren’t actually saturating the sensor at native response (e.g., “base ISO”) before they hit the quantization limit (2^14 in most cases).
Or maybe those who are obsessed with DR think the sensor is already saturated when the JPEG preview blinkies start flashing?


But it's not a magnification. It's exactly what you said, a 1:1 view where 1 pixel from the image corresponds to 1 pixel on the screen. "Comp" option shows the 5DSr's image downsampled and D810's image as 1:1.
So you don't have to magnify a 17.14µm² pixel more than a 23.81µm² pixel to see both at the same size? What kind of sorcery is that?

So the EOS R was Canon's answer to the a7III and we won't be seeing a REAL pro body? Just a budget turd and a 75 MP camera that's sure to cost over $4k that no one is asking for or wants? The 5Ds sold next to nothing. What makes Canon think a mirrorless version of it will sell any better? Where is the 30-35MP pro, mirrorless version of the 5DIV and the 20-25MP pro, mirrorless version of the 1DxII? Canon is a joke.
Yep. Just like we didn't see the 1D X Mark II and 5D Mark IV a few months after the 5Ds/5Ds R.


I’m not sure they’d need to be linear. It would be enough if they are not matched to a sensor.

That would allow multiple cameras to use similar circuit designs and bills of material, and simplify the tuning process.
Let’s say the cameras can use their full well capacity. If so, at the base ISO setting, shouldn’t the 5Div (larger wells) take longer to overexpose than the 5Ds (smaller wells), all else being equal?

It doesn’t; they apply the speed ratings such that full exposure is roughly the same between models, meaning the same photon count converts to charge for a given focus plane exposure and same sensor size.
No, because those larger wells are also collecting photons at a higher rate for the same light intensity per unit area. If you've got cylindrical rain buckets and one has twice the diameter of four others, all will still fill at the same rate in terms of inches per hour. That's because the large bucket has four times as much rain falling into it, just as it has four times the surface area and four times the volume per inch of height. But it will take the rain in all four of the smaller buckets to equal the volume of water in the larger one.

The advantage of larger wells is that the randomness of photon arrival (a Poisson distribution) is averaged out better the larger your sample size is.

In the rain bucket analogy, since water drops are not distributed perfectly evenly as they fall (just as photons are not), each of the four small buckets will have a slightly different amount of water in it. That difference is what we call "shot noise." But when we dump the water from the four small buckets into another large bucket, most of those differences average out. We will very likely see less deviation between the large bucket that collected rain directly and a large bucket that was filled from the four small buckets than we will see between the four smaller buckets themselves.
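If you want to see that averaging in numbers, here's a tiny Monte Carlo of the bucket analogy (all counts invented):

```python
# Tiny Monte Carlo of the bucket analogy: four small "buckets" each catch a
# Poisson number of drops; pooling them shows less relative spread than any
# single small bucket does.
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000
mean_per_small = 2_500                       # assumed mean drops per small bucket

small = rng.poisson(mean_per_small, size=(trials, 4))
pooled = small.sum(axis=1)                   # "pour four small buckets into one big one"

rel_spread_small  = small[:, 0].std() / small[:, 0].mean()   # ~1/sqrt(2500)  = 2.0%
rel_spread_pooled = pooled.std() / pooled.mean()             # ~1/sqrt(10000) = 1.0%
print(rel_spread_small, rel_spread_pooled)
```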
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
1,155
578
What makes you think it's not already done this way? How do you think it's done, if not this way?
Because if you do it for the highlights, you'd also need to do it for the shadows, which means you'd have to process the entire image extremely flat and it would look like crap on the rear LCD. Then the other 98% of potential buyers besides the 2% DRone crowd would look at that crappy flat image on the LCD when they shoot three or four shots under the really crappy lighting at the trade show or on the sales floor at the camera store and say, "There's no way I'm buying a camera that takes pictures that look that crappy!"
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
1,155
578
Why do you think those have any importance to the camera's internal methodology? I think those are just features on the periphery built for you the photographer, not for the camera's internal processing. What I suggest might be less than 20 lines of code. You wouldn't say OMG, we're doing this whole JPG conversion and making a histogram for the user, so we have to use that histogram, even though it totally doesn't aid us in figuring out how to expose to maximize detail captured, and even though the tiny bit of extra code that would do the job exactly could be written while waiting for the bus.
"the camera's internal methodology?"

What internal methodology would that be?

Metering?

How is the camera going to examine the raw histogram of an exposed shot when it is calculating Tv and/or Av and/or ISO before the shot is taken?

Analog amplification?

Are you suggesting the camera should somehow miraculously adjust the analog amplification of the sensor based on the data it reads after the analog information has been amplified and converted by the ADC?

ISO "override?"

So if I dial in my Tv, Av, and ISO in full manual exposure mode you want the camera to alter the analog amplification of the sensor so that the highlights are always just shy of saturation on every shot? Again, how will the camera adjust analog amplification of data it can't process until after it has already been amplified?
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
1,155
578
I’m not sure there is a huge amount of value in a raw histogram. I don’t think I would change my shooting style much. Using the raw data would only affect extreme exposures. It wouldn’t make much difference in the midtones, so ETTR (or ETTL, I suppose) is the likely application.

Since people who shoot raw by definition develop after the fact, a programmatically lighter and less data intensive method might be an automated ETTR function. Flag the hottest pixel in each channel after quantization, and automagically back exposure off a fraction. No need to read the whole file and map the distribution.
How can you "back off exposure in each channel after quantization" when the analog amplification, which is the only way to increase the SNR in the shadows, must occur before quantization? Once you do ADC, the SNR is locked in. Any adjustments to signal are also made to noise.
 

3kramd5

EOS 5D MK IV
Mar 2, 2012
3,083
404
How can you "back off exposure in each channel after quantization" when the analog amplification, which is the only way to increase the SNR in the shadows, must occur before quantization? Once you do ADC, the SNR is locked in. Any adjustments to signal are also made to noise.
I expect we were discussing the histogram you see before you take a photo.

My (also half baked) thought was: the camera could evaluate a raw data feed to establish max exposure, and back off by adjusting exposure time when you press the shutter release. In AE modes, the shutter release triggers metering. Same thing here. Whatever you’ve set, back off enough to not clip the brightest channel. If nothing has clipped, no change.
 

Michael Clark

Now we see through a glass, darkly...
Apr 5, 2016
1,155
578
I expect we were discussing the histogram you see before you take a photo.

My (also half baked) thought was: the camera could evaluate a raw data feed to establish max exposure, and back off by adjusting exposure time when you press the shutter release.
Again, that would take TONS more processing of each frame displayed on the rear LCD screen/EVF at 15-30 fps.
 

koenkooi

EOS 7D MK II
Feb 25, 2015
527
314
Fair enough, but you could equally well make a histogram and highlight/lowlight warnings based on the RAW, couldn't you?

Just because the CPU doesn't show the user that data on a graph doesn't mean the CPU doesn't have access to it, does it?
Correct, that's why Magic Lantern has a RAW histogram option: https://wiki.magiclantern.fm/camera_help#histogram

I've also read posts from people saying a very flat picture style will also get you closer to actual RAW. But as pointed out earlier in this thread, it screws up the picture in the EVF.
 
  • Like
Reactions: SwissFrank

neuroanatomist

I post too Much on Here!!
Jul 21, 2010
24,620
2,106
Correct, that's why Magic Lantern has a RAW histogram option: https://wiki.magiclantern.fm/camera_help#histogram

I've also read posts from people saying a very flat picture style will also get you closer to actual RAW. But as pointed out earlier in this thread, it screws up the picture in the EVF.
As I stated, Canon could implement such a feature, but I doubt they ever will. The number of users who would want that is likely insignificant.
 

SwissFrank

EOS RP
Dec 9, 2018
304
117
As I stated, Canon could implement such a feature, but I doubt they ever will. The number of users who would want that is likely insignificant.
I don't see that you ever said that, and if you did you wouldn't be correct.

I know you know a lot about photography and I don't want to get bad blood over an argument about what's technically feasible. So please don't take this personally, but just as a statement of fact.

Maybe I'm misunderstanding you, but you seem to be convinced that the camera can only have one histogram in it, and that it is the one it shows the user on request and also the one used to render the live preview. Actually, though, software could totally show you a RAW histogram while showing you a JPG rendering in the EVF. It could do the reverse as well. Or it could show you a JPG histo, a JPG-range EVF image, and also be using a RAW histo to control exposure.

Calculating two histograms from one image shouldn't take notably more time than calculating one. And once calculated, there's no reason at all the camera would then have to show it to the user, or use it to simulate exposure in the EVF. A histogram of a 20-stop scene (assuming a sensor with that DR comes along) in 1/10-stop buckets would be less than 1k of memory.
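As a sketch of what I mean (plain Python, not camera firmware; the bit depths and bucket counts are my assumptions):

```python
# One pass over the data can feed a raw histogram and a JPEG-style histogram
# side by side, and a 20-stop / 0.1-stop raw histogram is only 200 counters
# (~800 bytes).
import numpy as np

STOPS, PER_STOP = 20, 10
raw_hist = np.zeros(STOPS * PER_STOP, dtype=np.uint32)   # 200 x 4 bytes
jpg_hist = np.zeros(256, dtype=np.uint32)

def accumulate(raw_pixels, jpg_pixels):
    """raw_pixels: linear sensor values (black-level subtraction omitted here);
    jpg_pixels: 8-bit tone-mapped integer values for the same frame."""
    stops = np.log2(np.maximum(raw_pixels.astype(np.float64), 1.0))   # position in stops above 1
    bins  = np.clip((stops * PER_STOP).astype(int), 0, raw_hist.size - 1)
    np.add.at(raw_hist, bins, 1)            # raw histogram...
    np.add.at(jpg_hist, jpg_pixels, 1)      # ...and the jpg histogram, in the same pass
```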
 

SwissFrank

EOS RP
Dec 9, 2018
304
117
How is the camera going to examine the raw histogram of an exposed shot when it is calculating Tv and/or Av and/or ISO before the shot is taken?

Are you suggesting the camera should somehow miraculously adjust the analog amplification of the sensor based on the data it reads after the analog information has been amplified and converted by the ADC?
Gosh Michael, I didn't specify, but given a choice between the utterly impossible (what you outline) and the mundane and typical (using the scene as metered just before shooting--which on live view/EVF cameras is the full scene), I'm not sure why you assume I'm proposing the utterly impossible. Just to be clear as a bell on this, I'm talking about the mundane and typical: using the data captured up to the moment before the shutter is tripped and we go from live view to actually accumulating photons for the image.

So if I dial in my Tv, Av, and ISO in full manual exposure mode you want the camera to alter the analog amplification of the sensor so that the highlights are always just shy of saturation on every shot? Again, how will the camera adjust analog amplification of data it can't process until after it has already been amplified?
Actually I'm mainly talking about the camera being in full-auto, program, or Ap mode, and the shutter being under AE control. But forget I brought it up; I can tell you've got no wish to have a friendly discussion.
 

SwissFrank

EOS RP
Dec 9, 2018
304
117
Because if you do it for the highlights, you'd also need to do it for the shadows, which means you'd have to process the entire image extremely flat and it would look like crap on the rear LCD.
Why in heck would calculating a histogram of the raw image have any effect on the rear LCD or EVF? You seem to think the camera could only have one data structure in memory representing a histogram and that data structure must be used to scale the appearance on the screens. That would be incorrect. Sit this one out, Michael, you seem to have no interest, or no competence, or both.
 

Kit.

EOS 6D MK II
Apr 25, 2011
1,419
799
If I were making an AE system for a mirrorless, I'd use the fact that the sensor can see the image... you can actually see how bright the brightest stuff is, and how dark the darkest stuff is. If you're getting 0's and max values, then by default try to equalize the number of pixels falling off each end.
I don't think that such an equalization corresponds to any meaningful property of the resulting photograph, even if all that you shoot is calibration charts.
 

SwissFrank

EOS RP
Dec 9, 2018
304
117
Then write the code.
Actually, I did in my earlier comment but decided to delete it as anyone who's ever programmed already gets my point, and people who haven't wouldn't understand the code.

Still, it's a bit asinine to flippantly suggest I program the camera when not even Magic Lantern has yet gotten code running on an R. Are you saying the inability to actually implement a solution means there's no point in idly musing about how exposure should work? If you don't give a crap what your shadow noise is like, fair enough, but then what are you even doing in the conversation?

Personally, perfect shadow detail ain't real high on my wish list
Fair enough, but if scene DR < sensor DR capability, why not automatically ETTR to the extent allowed by keeping the shutter speed hand-holdable (if indeed the camera senses it's hand-held, and it uses reciprocal rule, corrected for IS, and potentially also adjusted by a user preference setting to go longer or shorter than recip would advise)? Is shadow detail so low on your list that even when you can optimize it you prefer not to?
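Concretely, the sort of logic I'm imagining; every name and parameter here is my own assumption, not anything Canon documents:

```python
# Auto-ETTR within hand-holdable limits: lengthen exposure by the scene's
# spare highlight headroom, capped by a reciprocal-rule shutter limit that
# is corrected for IS and a user preference offset.
def max_handheld_shutter_s(focal_length_mm, crop_factor=1.0,
                           is_stops=0.0, user_bias_stops=0.0):
    """Reciprocal rule, corrected for crop, IS, and a user preference offset."""
    base = 1.0 / (focal_length_mm * crop_factor)       # classic 1/FL starting point
    return base * (2.0 ** (is_stops + user_bias_stops))

def ettr_shutter_s(metered_shutter_s, headroom_stops, focal_length_mm, **kw):
    """Push exposure right by the spare headroom, but never past the hand-holdable limit."""
    wanted = metered_shutter_s * (2.0 ** headroom_stops)
    return min(wanted, max_handheld_shutter_s(focal_length_mm, **kw))

# e.g. 1/200s metered, 1.5 stops of spare highlight room, 50mm lens, 3 stops of IS:
print(ettr_shutter_s(1/200, 1.5, 50, is_stops=3.0))   # ~1/71s, within the hand-holdable limit
```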
 

Kit.

EOS 6D MK II
Apr 25, 2011
1,419
799
Fair enough, but if scene DR < sensor DR capability, why not automatically ETTR to the extent allowed by keeping the shutter speed hand-holdable (if indeed the camera senses it's hand-held, and it uses reciprocal rule, corrected for IS, and potentially also adjusted by a user preference setting to go longer or shorter than recip would advise)?
If scene DR < sensor DR capability, why not automatically ETTL to freeze the action?

The good thing is that once you get a camera that supports CCAPI, you will be able to write those 20 lines of code (although it will cost you the battery life).
 
  • Like
Reactions: SwissFrank

CanonFanBoy

EOS 5D SR
Jan 28, 2015
4,173
1,757
Irving, Texas
Still, it's a bit asinine to flippantly suggest I program the camera when not even Magic Lantern has yet gotten code running on an R. Are you saying the inability to actually implement a solution means there's no point in idly musing about how exposure should work? If you don't give a crap what your shadow noise is like, fair enough, but then what are you even doing in the conversation?
No, it is asinine for you to flippantly suggest that it would only take 20 lines of code that could be knocked out while waiting for a bus, when in fact, you actually have zero idea as to what it would take. Zero. While I do "give a crap" about shadow noise, I also know how to bracket and expose properly. If shadow noise is a problem with a photo I took, then I did something wrong.

Is shadow detail so low on your list that even when you can optimize it you prefer not to?
Absolutely not, but I'm not the one sitting around claiming 20 lines of code written at a bus stop is going to be the solution either. :rolleyes: Do us all a favor, write the 20 lines of code and tell us all how to install it. Otherwise, quit claiming knowledge, expertise, and insight that you don't really have. People aren't fooled as easily as you'd like them to be. You have absolutely no idea what it would take to do what you are pining for. You just want to act like you do.
 

neuroanatomist

I post too Much on Here!!
Jul 21, 2010
24,620
2,106
As I stated, Canon could implement such a feature, but I doubt they ever will. The number of users who would want that is likely insignificant.
I don't see that you ever said that, and if you did you wouldn't be correct.
Then you weren't reading. I even repeated it for you.

Who is ‘you’? I can’t. Canon could, if they choose to do so. I wouldn’t hold my breath on that one...RAW files have been around for a long time, histograms are still based on JPGs.
What are you arguing here? Of course Canon could provide a RAW histogram, as I mentioned several posts back. They could have implemented that feature at any time, as I also mentioned several posts back. But they haven’t...as I mentioned several posts back. Those are the facts. If your point is, Canon should give us a RAW histogram option, I’d certainly use one if they do. But I’m not going to hold my breath waiting for one, and I would not recommend that you do so, either (as...wait for it...I mentioned several posts back).
How is that not correct? Or did you just fixate on the 'number of users who would want that is likely insignificant' bit? That's also correct. The vast majority of ILC users never shoot RAW images.


I know you know a lot about photography and I don't want to get bad blood over an argument about what's technically feasible. So please don't take this personally, but just as a statement of fact.

Maybe I'm misunderstanding you, but you seem to be convinced that the camera can only have one histogram in it, and that it is the one it shows the user on request and also the one used to render the live preview. Actually, though, software could totally show you a RAW histogram while showing you a JPG rendering in the EVF. It could do the reverse as well. Or it could show you a JPG histo, a JPG-range EVF image, and also be using a RAW histo to control exposure.

Calculating two histograms from one image shouldn't take notably more time than calculating one. And once calculated, there's no reason at all the camera would then have to show it to the user, or use it to simulate exposure in the EVF. A histogram of a 20-stop scene (assuming a sensor with that DR comes along) in 1/10-stop buckets would be less than 1k of memory.
One last time, and I do mean that. CANON COULD IMPLEMENT A RAW HISTOGRAM. Is that clear enough for you? Please, go back and read the capitalized words one more time. Or the blue text above. Heck, read them all. Then do it again. Then once more, for good measure.

This is the second time you've argued that Canon can implement a RAW histogram after I already stated they could, if they wanted to. If they did, of course they would not eliminate the 'standard' (jpg-based) histogram, that's a rather silly strawman you've created there.

Honestly, the fact is you seem sadly confused on this whole issue, you keep perseverating on arguing that something is possible when we both agree that it's possible. The fact of the matter is that Canon has not chosen to do so, nor have they chosen to implement hundreds of other features for which a handful of people on this forum have expressed a desire. Feel free to re-state your repetitive argument that Canon can add a RAW histogram in a pointless attempt to convince me of something that I've stated repeatedly is true...I'm out.

 

Don Haines

Beware of cats with laser eyes!
Jun 4, 2012
8,192
1,773
Canada
No, it is asinine for you to flippantly suggest that it would only take 20 lines of code that could be knocked out while waiting for a bus, when in fact, you actually have zero idea as to what it would take. Zero. While I do "give a crap" about shadow noise, I also know how to bracket and expose properly. If shadow noise is a problem with a photo I took, then I did something wrong.


Absolutely not, but I'm not the one sitting around claiming 20 lines of code written at a bus stop is going to be the solution either. :rolleyes: Do us all a favor, write the 20 lines of code and tell us all how to install it. Otherwise, quit claiming knowledge, expertise, and insight that you don't really have. People aren't fooled as easily as you'd like them to be. You have absolutely no idea what it would take to do what you are pining for. You just want to act like you do.
I doubt that one could declare the variables in 20 lines of code......