> The 5Ds sold next to nothing.

Citation needed.

> Canon is a joke.

Feel free to go to a Sony forum.
> So the EOS R was Canon's answer to the a7iii and we won't be seeing a REAL pro body?

If you had wanted "a REAL pro body", you should have bought the 1DX II.
> So the EOS R was Canon's answer to the a7iii and we won't be seeing a REAL pro body? Just a budget turd and a 75 MP camera that's sure to cost over $4k that no one is asking for or wants? The 5Ds sold next to nothing. What makes Canon think a mirrorless version of it will sell any better? Where is the 30-35MP pro, mirrorless version of the 5DIV and the 20-25MP pro, mirrorless version of the 1DxII? Canon is a joke.

There won't be one "pro" body; there will be several high-end bodies.
But it's not a magnification. It's exactly what you said, a 1:1 view where 1 pixel from the image corresponds to 1 pixel on the screen. The "Comp" option shows the 5DSr's image downsampled and the D810's image at 1:1.
I've read that the ADCs aren't linear, in which case that's probably not it. But I couldn't say for certain.
> Fair enough, but you can equally well make a histogram and highlight/lowlight warnings based on the RAW, couldn't you?

Who is "you"? I can't. Canon could, if they chose to do so. I wouldn't hold my breath on that one... RAW files have been around for a long time; histograms are still based on JPGs.
Just because the CPU doesn't show the user that data on a graph doesn't mean the CPU doesn't have access to it, does it?
Mbell75 has 14 postings, every single one a rant against Canon...
P.S. When you use words like "turd" and "joke" in a post, you essentially remove yourself from reasoned discussion and invite attacks.
What makes you think it's not already done this way? How do you think it's done, if not this way?
> Mbell75 has 14 postings, every single one a rant against Canon.

Yet another reason to be civil and objective, isn't it?
> What makes you think it's not already done this way? How do you think it's done, if not this way?

As stated above, the review image/histogram/highlight warning are all based on the processed data (8-bit converted, most in-camera settings applied), not the RAW image/stream.
> Why do you think those have any importance to the camera's internal methodology? I think those are just features on the periphery built for you the photographer, not for the camera's internal processing. What I suggest might be less than 20 lines of code. You wouldn't say OMG, we're doing this whole JPG conversion and making a histogram for the user, so we have to use that histogram, even though it totally doesn't aid us in figuring out how to expose to maximize detail captured, and even though the tiny bit of extra code that would do the job exactly could be written while waiting for the bus.

What are you arguing here? Of course Canon could provide a RAW histogram, as I mentioned several posts back. They could have implemented that feature at any time, as I also mentioned several posts back. But they haven't... as I mentioned several posts back. Those are the facts. If your point is that Canon should give us a RAW histogram option, I'd certainly use one if they do. But I'm not going to hold my breath waiting for one, and I would not recommend that you do so, either (as... wait for it... I mentioned several posts back).
> Why do you think those have any importance to the camera's internal methodology? I think those are just features on the periphery built for you the photographer, not for the camera's internal processing. What I suggest might be less than 20 lines of code. You wouldn't say OMG, we're doing this whole JPG conversion and making a histogram for the user, so we have to use that histogram, even though it totally doesn't aid us in figuring out how to expose to maximize detail captured, and even though the tiny bit of extra code that would do the job exactly could be written while waiting for the bus.

Then write the code. You should be able to knock out those 20 lines in a few minutes. Let us know how it works when you are done. Harry could probably help you. Honestly, a tool like that would have been more helpful in the film era. Personally, perfect shadow detail ain't real high on my wish list. Knock yourself out. I didn't realize we had so many coders with so much knowledge around here.
> Then write the code. You should be able to knock out those 20 lines in a few minutes. Let us know how it works when you are done. Harry could probably help you. Honestly, a tool like that would be more helpful in the film era. Personally, perfect shadow detail ain't real high on my wish list. Knock yourself out. I didn't realize we had so many coders with so much knowledge around here.

BTW, a DIGIC processor does not run an interpreter; it requires code that has been compiled into machine language. Your "20 lines of code" becomes quite large at the machine level. Plus, you are going to need a huge amount of memory to hold the 70+ megaword array for that image, and since it requires real-time operation, you cannot interfere with resources and CPU cycles required for other processes.
It also takes a lot more computing power to analyze a 70+ megapixel RAW file for histograms and zebras than it does to analyze a 1 megapixel JPG, so not only would he have to write the code and make sure it doesn't interfere with other code, he would also have to somehow find more CPU cycles... a lot more!
There is always a reason why things are done the way that they are.
I think writing to the card takes more time than in-camera processing; that's why in-camera downsampling may help increase the burst rate. Roughly speaking, say the internal in-memory RAW data has size S and takes time T to write. If we downsample it to size S/2, the write now takes only T/2. The downsampling itself takes much less than T/2 to process, so in total, writing the downsampled data takes between T/2 and T. I'd guess closer to T/2.
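The arithmetic above can be written out with made-up numbers:

```python
# Hypothetical timing model for the argument above: writing data of
# size S takes T; downsampling to S/2 costs some processing time P,
# after which the write takes T/2. Total is P + T/2, which beats T
# whenever P < T/2. All numbers below are invented for illustration.

T = 1.00   # seconds to write the full-size RAW (hypothetical)
P = 0.15   # seconds to downsample in camera (hypothetical)

full_write = T
downsampled_write = P + T / 2

print(downsampled_write)               # 0.65
print(downsampled_write < full_write)  # True: between T/2 and T, closer to T/2
```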
I'm not an expert on this, but downsampling 4:1 means you get four doses of read noise, and while those partly cancel, statistically the four together average out to twice as bad as one. So while 4 small pixels could theoretically capture the exact same photons as 1 big one, and the downsampling (let us say) perfectly averages those out to the right photon count, you'll still have more read noise.
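A quick pure-Python simulation can sanity-check this claim (all numbers invented): independent read-noise doses add in quadrature, so summing four readouts carries roughly twice the read noise of a single readout.

```python
import math
import random

# Monte Carlo check of the read-noise argument above. Assumptions:
# each readout adds Gaussian read noise of `sigma` electrons; a 4:1
# downsample sums four small-pixel readouts; one big pixel collects
# the same photons with a single readout. Shot noise is ignored, as
# in the post's framing. Numbers are illustrative.

random.seed(1)
sigma = 3.0       # read noise per readout, electrons (hypothetical)
signal = 1000.0   # total photons collected either way
trials = 20000

def noise_of(readouts):
    """Std-dev of (measured - true) when summing `readouts` readouts."""
    errors = []
    for _ in range(trials):
        measured = signal + sum(random.gauss(0, sigma) for _ in range(readouts))
        errors.append(measured - signal)
    mean = sum(errors) / trials
    return math.sqrt(sum((e - mean) ** 2 for e in errors) / trials)

big = noise_of(1)     # one big pixel: about sigma
small4 = noise_of(4)  # four small pixels summed: about 2 * sigma
print(small4 / big)   # roughly 2, i.e. sqrt(4)
```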
For both off-chip ADC sensors (Canon prior to 5D4) and on-chip ADC sensors (Nikon/Sony) the highest DR sensors are also the highest resolution ones. One can point out that Canon's 5Ds sensor isn't exactly the same generation as a 6D II, but with Nikon/Sony we see variable resolutions within the same generation.
For the record, larger pixels should result in higher DR not because of read noise but because of well capacity. It's odd that this is not the case right now.
That makes sense, if the old 1/FL was from the days of film or low-density FF sensors. A rule that worked for an 18mpx crop sensor won't necessarily work for 24mpx...
MRAW has never provided those options in the past (i.e., on the 5Ds/5DsR, etc.), so why would they magically appear now?
It's still the readout of the sensor and the initial processing of 75MP that is the problem.
There's zero in the way of evidence that Canon can or will do a higher-speed MRAW version of a 70+ MP sensor.
Here is a half-baked thought: maybe they aren’t actually saturating the sensor at native response (e.g., “base ISO”) before they hit the quantization limit (2^14 in most cases).
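With hypothetical numbers, that thought looks like this: whether the well or the 14-bit ADC clips first depends on the conversion gain at base ISO.

```python
# Hypothetical numbers to illustrate the point above: whether the
# 14-bit ADC or the photodiode well saturates first depends on the
# conversion gain. All values are made up for illustration.

full_well = 80000        # electrons the well can hold (made up)
gain = 4.0               # electrons per DN at base ISO (made up)
adc_max = 2 ** 14 - 1    # 16383, top code of a 14-bit ADC

electrons_at_adc_clip = adc_max * gain   # 65532 electrons

# With these numbers the ADC clips at ~65.5k electrons, below the
# 80k-electron well, so the quantization limit is hit first:
print(electrons_at_adc_clip < full_well)  # True
```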
I’m not sure they’d need to be linear. It would be enough if they are not matched to a sensor.
That would allow multiple cameras to share similar circuit designs and bills of materials, and simplify the tuning process.
Let’s say the cameras can use their full well capacity. If so, at the base ISO setting, shouldn’t the 5Div (larger wells) take longer to overexpose than the 5Ds (smaller wells), all else being equal?
It doesn't; they apply the speed ratings such that full exposure is roughly the same between models, meaning the same photon count converts to charge for a given focal-plane exposure and the same sensor size.
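A toy example of that normalization (numbers invented): if well capacity scales with pixel area, the focal-plane exposure that saturates the sensor comes out the same regardless of pixel size, so neither camera "takes longer to overexpose".

```python
# Sketch of the normalization described above, with invented numbers:
# two sensors with different pixel sizes saturate at the same
# focal-plane exposure when well capacity scales with pixel area.

# Pixel A: big pixels, deep wells. Pixel B: small pixels, shallow wells.
area_a, well_a = 1.0, 80000   # relative pixel area, electrons (made up)
area_b, well_b = 0.5, 40000   # half the area, half the well (made up)

# Electrons collected scale with area, so the exposure at which each
# well fills is well capacity divided by area:
sat_exposure_a = well_a / area_a
sat_exposure_b = well_b / area_b

print(sat_exposure_a == sat_exposure_b)  # True: same exposure clips both
```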
I'm not sure there is a huge amount of value in a raw histogram. I don't think I would change my shooting style much. Using the raw would only affect extreme exposures; it wouldn't make much difference in the midtones, so ETTR (or ETTL, I suppose) is the likely application.
Since people who shoot raw by definition develop after the fact, a programmatically lighter and less data-intensive method might be an automated ETTR function: flag the hottest pixel in each channel after quantization and automagically back the exposure off a fraction. No need to read the whole file and map the distribution.
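A sketch of that idea, where the thresholds and the adjustment rule are made up for illustration:

```python
import math

# Sketch of the automated-ETTR idea above: look only at the hottest
# value in each channel and, if any channel is at or near clipping,
# suggest backing exposure off. The headroom threshold and the extra
# third-stop margin are hypothetical choices, not a real camera API.

ADC_MAX = 16383   # 14-bit full scale
HEADROOM = 0.98   # "near clipping" threshold (made up)

def ettr_adjustment(channel_maxima):
    """Return a suggested EV change given per-channel peak values.

    Negative means reduce exposure; 0.0 means leave it alone.
    """
    hottest = max(channel_maxima)
    if hottest >= HEADROOM * ADC_MAX:
        # Back off enough to bring the peak just under the threshold,
        # plus a third of a stop of safety margin.
        return -math.log2(hottest / (HEADROOM * ADC_MAX)) - 1 / 3
    return 0.0

print(ettr_adjustment([9000, 12000, 11000]))       # 0.0, nothing near clipping
print(ettr_adjustment([16383, 15000, 14000]) < 0)  # True, back exposure off
```

Only three per-channel maxima need to be tracked, which is the poster's point: no full histogram required.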