I think writing to the card takes more time than in-camera processing; that's why in-camera downsampling may help increase the burst rate. Roughly speaking, say the internal in-memory RAW data has size S and takes time T to write. If we downsample and shrink it to S/2, it will take only T/2 to write. The in-camera downsampling itself should take much less than T/2 to process, so the total time for processing plus writing the downsampled data will be somewhere between T/2 and T. I'd guess it'll be closer to T/2.
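Here's a toy version of that back-of-the-envelope model in Python; the file sizes, card write speed, and processing overhead are all made-up assumptions, not measured values:

```python
# Toy model of sustained burst rate: each frame must be processed (optional
# downsampling) and then written to the card. All numbers are illustrative.

def sustained_fps(frame_mb, write_mb_s, process_s=0.0, shutter_fps=10.0):
    """Frames per second once the buffer is full: limited by whichever is
    slower, the shutter/readout or processing plus card write per frame."""
    per_frame_s = max(1.0 / shutter_fps, process_s + frame_mb / write_mb_s)
    return 1.0 / per_frame_s

full_raw_mb = 60.0               # assumed full-resolution RAW size (S)
half_raw_mb = full_raw_mb / 2    # downsampled RAW (S/2)
write_speed = 100.0              # assumed card write speed, MB/s
downsample_cost = 0.05           # assumed in-camera processing time, seconds

print(sustained_fps(full_raw_mb, write_speed))                   # ~1.7 fps
print(sustained_fps(half_raw_mb, write_speed, downsample_cost))  # ~2.9 fps
```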
If that were the case, then frame rates for higher-MP cameras would stay closer to those of lower-MP cameras until the buffer is full. That does not happen.
In fact, the opposite is closer to the truth. A 7D Mark II can run at 10 fps for almost 30 raw frames, but once it bogs down it is not much faster than a bogged-down 5Ds.
The 5Ds, on the other hand, goes only half as fast as the 7D Mark II before filling its buffer, then drops to about the same much slower rate.
I'm not an expert on this, but downsampling 4:1 means you get four doses of "read noise", and while those doses partly cancel each other, statistically speaking four of them on average will be twice as bad as one (independent noise sources add in quadrature, so √4 = 2). So while 4 small pixels could theoretically capture the exact same photons as 1 big one, and the downsampling (let us say) perfectly averages those out to the right photon count, you'll still have more noise.
You are correct about read noise. But Poisson-distributed ("shot") noise is just the opposite, since it is random from pixel to pixel: averaging 4 pixels into one will always decrease the relative amount of shot noise. There would also be no difference in terms of off-sensor dark current noise.
At low ISOs, read noise is more of a concern. At high ISOs, though, shot noise is what dominates.
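A quick simulation of that trade-off (just a sketch; the signal level and read noise below are assumed round numbers, not figures for any real sensor):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000          # simulated pixel sites
signal_e = 400.0       # assumed mean photon (electron) count per big pixel
read_noise_e = 3.0     # assumed read noise per readout, in electrons

# One big pixel: one dose of shot noise plus one dose of read noise.
big = rng.poisson(signal_e, n) + rng.normal(0, read_noise_e, n)

# Four small pixels covering the same area: each catches a quarter of the
# photons and pays its own read noise; then they are binned 4:1.
small = rng.poisson(signal_e / 4, (n, 4)) + rng.normal(0, read_noise_e, (n, 4))
binned = small.sum(axis=1)

# Shot noise behaves the same either way, but the four read-noise doses add
# in quadrature, so the binned result ends up slightly noisier.
print(big.std())     # ~ sqrt(400 + 3**2)     ≈ 20.2
print(binned.std())  # ~ sqrt(400 + 4 * 3**2) ≈ 20.9
```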
For both off-chip ADC sensors (Canon prior to 5D4) and on-chip ADC sensors (Nikon/Sony) the highest DR sensors are also the highest resolution ones. One can point out that Canon's 5Ds sensor isn't exactly the same generation as a 6D II, but with Nikon/Sony we see variable resolutions within the same generation.
For the record, larger pixels should result in higher DR not because of read noise but because of well capacity. It's odd that this is not the case right now.
Well capacity affects the highlight end of things. When most people speak of DR, what they really mean is pushing underexposed shadows. They couldn't care less about the highlights because they never get close to FWC if they limit exposure based on the JPEG preview "blinkies", which fire 1-2 stops or more below FWC. And if you don't expose to the right to exploit the higher full well capacity of larger pixels when shooting, that extra capacity makes no real difference: you're just pushing the shadows in post instead of giving them more exposure when you set your Tv and Av (unless the manufacturer has radically altered the "ISO rating" for the same amount of amplification).
That makes sense, if the old 1/FL rule was from the days of film or low-density FF sensors. A rule that worked for an 18 MP crop sensor won't necessarily work for 24 MP...
The 1/FL rule was for viewing an approximately 8.5X enlargement ratio from 36x24mm cropped to 30x24mm (for the aspect ratio) to 8x10" display size. Pixel peeping a 50MP sensor at 100% on a 24" HD monitor is like looking at a piece of a 120x80" enlargement! Enlarge a 35mm negative to 120x80 and you'll see a lot of blur you couldn't at 8x10".
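Rough arithmetic behind that comparison, with assumed monitor numbers (the exact "virtual print" size depends on which monitor width and resolution you assume):

```python
# Enlargement-ratio arithmetic for the 1/FL comparison. The monitor size and
# sensor pixel count are assumed values for illustration.

mm_per_inch = 25.4

# Classic case: 36x24mm frame cropped to 30x24mm, shown as an 8x10" print.
classic = (10 * mm_per_inch) / 30                      # ≈ 8.5x

# Pixel peeping: ~50MP full frame (~8700 px across 36mm) viewed 1:1 on a
# 1920-px-wide Full HD monitor with an assumed 24"-wide display area.
sensor_px_wide, monitor_px_wide, monitor_width_in = 8700, 1920, 24
virtual_print_in = sensor_px_wide / monitor_px_wide * monitor_width_in
peeping = virtual_print_in * mm_per_inch / 36          # ≈ 77x

print(round(classic, 1), round(virtual_print_in), round(peeping))
# 8.5   109   77  -- roughly an order of magnitude more enlargement
```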
MRAW has never provided those options in the past (i.e., on the 5Ds / 5Ds R, etc.), so why would they magically appear now?
It's still the reading of the sensor, and the initial processing of 75 MP, that is the problem.
There's zero evidence that Canon can or will do a higher-speed MRAW mode on a 70+ MP sensor.
BINGO!
Every single speed issue Canon has right now can be explained by slow sensor readout times. All of them.
Still-frame rates for high-res sensors.
Full frame 4K video.
AI Servo tracking between each frame for mirrorless.
It's all about sensor readout speed at Canon right now. Accept it or move on.
Here is a half-baked thought: maybe they aren’t actually saturating the sensor at native response (e.g., “base ISO”) before they hit the quantization limit (2^14 in most cases).
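One way to sketch that idea (the full-well capacity and conversion gains below are assumed round numbers, not specs for any particular camera):

```python
# Does the 14-bit ADC run out of codes before the photosites saturate?
# Full-well capacity and gain are assumed round numbers for illustration.

full_well_e = 60000        # assumed electrons at photosite saturation
adc_levels = 2 ** 14       # 14-bit quantization limit

for gain_e_per_dn in (5.0, 3.0):   # assumed electrons per digital number at base ISO
    dn_at_full_well = full_well_e / gain_e_per_dn
    clips_first = "ADC" if dn_at_full_well > adc_levels else "sensor"
    print(gain_e_per_dn, round(dn_at_full_well), clips_first)

# With ~5 e-/DN the whole well fits inside 16384 codes (the sensor saturates
# first); with ~3 e-/DN the ADC clips first and part of the well capacity at
# native response is never reached in the digital output.
```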
Or maybe those who are obsessed with DR think the sensor is already saturated when the JPEG preview blinkies start flashing?
But it's not a magnification. It's exactly what you said, a 1:1 view where 1 pixel from the image corresponds to 1 pixel on the screen. The "Comp" option shows the 5Ds R's image downsampled and the D810's image at 1:1.
So you don't have to magnify a 17.14µm² pixel more than a 23.81µm² pixel to see both at the same size? What kind of sorcery is that?
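For what it's worth, here's the magnification a 100% view actually implies, using the pixel areas quoted above and an assumed ~0.276 mm screen pixel pitch (a typical 24" Full HD monitor):

```python
import math

# Linear magnification implied by a 100% view: one image pixel fills one
# screen pixel. Pixel areas are from the post above; the screen pixel pitch
# is an assumed value for a typical 24" Full HD monitor.

screen_pitch_um = 0.276 * 1000          # ≈ 276 µm per screen pixel

for name, area_um2 in [("5Ds R", 17.14), ("D810", 23.81)]:
    sensor_pitch_um = math.sqrt(area_um2)
    print(name, round(screen_pitch_um / sensor_pitch_um, 1))
# 5Ds R ~66.7x, D810 ~56.6x -- the smaller pixels end up more magnified.
```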
So the EOS R was Canon's answer to the a7iii and we won't be seeing a REAL pro body? Just a budget turd and a 75 MP camera that's sure to cost over $4k and that no one is asking for or wants? The 5Ds sold next to nothing. What makes Canon think a mirrorless version of it will sell any better? Where is the 30-35MP pro, mirrorless version of the 5DIV and the 20-25MP pro, mirrorless version of the 1DxII? Canon is a joke.
Yep. Just like we didn't see the 1D X Mark II and 5D Mark IV a few months after the 5Ds/5Ds R.
I’m not sure they’d need to be linear. It would be enough if they were not matched to a specific sensor.
That would allow multiple cameras to use a similar circuit design and bill of materials, and would simplify the tuning process.
Let’s say the cameras can use their full well capacity. If so, at the base ISO setting, shouldn’t the 5Div (larger wells) take longer to overexpose than the 5Ds (smaller wells), all else being equal?
It doesn’t; they apply the speed ratings such that full exposure is roughly the same between models: for a given focal-plane exposure and the same sensor size, the same total photon count gets converted to charge regardless of pixel size.
No, because those larger wells are also collecting photons at a higher rate for the same light density per unit area. If you've got cylindrical rain buckets and one has twice the diameter of four others, all will still fill at the same rate in terms of inches per hour. That's because the large bucket has four times as much rain falling into it, since it has four times the surface area and four times the volume per inch of height. But it will take the rain from all four of the smaller buckets to equal the volume of water in the larger one.
The advantage of larger wells is that the randomness of photon arrival (Poisson distribution) is averaged out better the larger your sample size is.
In the rain bucket analogy, since water drops are not perfectly evenly distributed as they fall (just as photons are not), each of the four small buckets will hold a slightly different amount of water. That difference is what we call "shot noise". But when we dump the water from the four small buckets into another large bucket, most of those differences will average out. The large bucket that collected rain directly and the large bucket that was filled from the four small buckets will very likely differ from each other less than the four small buckets differ among themselves.
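The bucket version of that, in numbers (the drop counts are arbitrary; this is just the Poisson averaging argument run as a simulation):

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 100_000
mean_small = 250   # arbitrary average number of drops per small bucket

# Four small buckets each catch their own Poisson-distributed share of rain...
small = rng.poisson(mean_small, (trials, 4))
# ...while one large bucket with four times the opening catches it all at once.
large = rng.poisson(4 * mean_small, trials)

# Relative spread between individual small buckets...
print(small.std() / small.mean())            # ≈ 1/sqrt(250)  ≈ 0.063
# ...vs relative spread after pooling the four, and of the single large bucket.
pooled = small.sum(axis=1)
print(pooled.std() / pooled.mean())          # ≈ 1/sqrt(1000) ≈ 0.032
print(large.std() / large.mean())            # ≈ 1/sqrt(1000) ≈ 0.032
```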