> Why?

Because the company that’s led the market for 20 years should be terribly concerned about competition from the company that’s lost 2/3 of its market share in the past few years?
> Actually you are wrong, the sensor does have more dynamic range than what C-Log3 offers, and it is easy to test: shoot RAW video, transform it to C-Log2, and you will see just by comparing it to C-Log3. Or you can compare C-Log3 with the photo files of the same camera.

That's really not how C-Log2 works. It's not a magical fix-all that gives your camera's sensor more dynamic range. It's a specific tool that high-end cameras with 16+ stops of dynamic range use to get the most out of their sensors. Just adding it to any camera doesn't magically make it better. Canon said on launch day of the R5 C that the reason it didn't have C-Log2 is that the sensor wasn't capable of it, so adding it wouldn't make any difference.
> Any steps that Canon takes to prevent overheating are a winner for me. I don't shoot video and probably never will, but I frequently work in hot tropical climates where the camera body can become almost too hot to hold comfortably. Hot cameras produce more noise and shorten sensor life. They probably also shorten the life of memory cards and increase the risk of data corruption.

I really do think that is the main reason Canon went so hard on overheat control on both the R5 and R6: to maintain image quality and also to prevent sensors from dying after a few hours of shooting in an overheated state. In the end, once they had enough data, they did expand the limit on the R5. The R6 didn't get as much love because of the R6 Mark II, which really is a beast of a camera. Having had the opportunity to use one for a while now and compare it with the R3, Canon definitely produced a great camera that can easily replace the R3 for many.
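On the "hot cameras produce more noise" point: sensor dark current roughly doubles for every few degrees Celsius of temperature rise, which is the usual rule-of-thumb explanation for extra shadow noise in a hot body. A minimal sketch, using an assumed 6 °C doubling interval and made-up baseline numbers (not measured Canon figures):

```python
# Dark current grows exponentially with sensor temperature. The 6 C doubling
# interval and the 1 e-/s baseline below are illustrative assumptions only.
def dark_current(base_e_per_s, temp_c, base_temp_c=25.0, doubling_c=6.0):
    return base_e_per_s * 2.0 ** ((temp_c - base_temp_c) / doubling_c)

# 18 C hotter than the baseline -> three doublings -> 8x the dark current
print(dark_current(1.0, 25.0), dark_current(1.0, 43.0))
```

So a body that climbs from warm to almost-too-hot-to-hold can plausibly be generating several times the dark-current noise, which is consistent with the overheat limits being about image quality and not only component safety.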
> While the AF is on-sensor, it is not image based. It uses the phase difference between 2 adjacent subpixels, which can be thrown off by things like UV filters. I would welcome a mode where the camera does use image data to fine-tune the last steps of acquiring focus; it would fix a number of Canon AF annoyances.

Thanks for your help.
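The phase-difference idea can be sketched in a few lines: the two subpixel "views" see the same scene detail displaced when the lens is defocused, and correlating the two signals recovers that displacement (its sign gives the direction to drive focus, its magnitude roughly how far). A toy 1-D illustration with invented pixel values; real DPAF processing is far more involved:

```python
# Toy 1-D phase detect: find the shift that best aligns the two subpixel
# signals by minimizing the mean absolute difference over trial shifts.
def best_shift(left, right, max_shift=5):
    def sad(s):  # mean absolute difference at trial shift s
        pairs = [(left[i], right[i + s]) for i in range(len(left))
                 if 0 <= i + s < len(right)]
        return sum(abs(a - b) for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=sad)

scene = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0, 0, 0]  # an edge/peak in the scene
left  = scene                                  # view through one side of the lens
right = scene[2:] + [0, 0]                     # same detail, displaced 2 px

print(best_shift(left, right))   # nonzero -> defocused, sign gives direction
print(best_shift(scene, scene))  # 0 -> in focus
```

Anything that distorts one view relative to the other (a filter, strong flare) shifts where this correlation lands, which is one hand-wavy way to see how the method can be thrown off even though it measures on the sensor plane.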
> Correct, it won’t magically give the sensor more dynamic range, but what it will do is utilize the sensor’s dynamic range.

Adding C-Log2 doesn't magically give the sensor in your camera more dynamic range. Sensors with 16+ stops of dynamic range get C-Log2 because it is designed to get the most out of those sensors. It's like asking whether putting Formula One tyres on my vehicle makes it a race car. No, it doesn't: those tyres are designed and calibrated to work on an entirely different machine. The tyres don't make the Formula One car what it is, but they do help get the best out of it. If that makes sense? I don't drive; I don't even know why I'm making car analogies.
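On what "utilizing the sensor's dynamic range" means in practice: a log transfer curve doesn't create dynamic range, it redistributes the available code values so deep shadows and highlights get far more of them than a linear encode would. A toy sketch with a generic log curve and invented constants (not Canon's actual C-Log2/C-Log3 math):

```python
import math

# Generic log-style encode: maps linear scene exposure (1.0 = clip) to a
# 10-bit code. The 14-stop span is an illustrative assumption, not a spec.
def log_encode(x, stops=14):
    floor = 2.0 ** -stops                     # darkest level given any codes
    frac = (math.log2(max(x, floor)) + stops) / stops  # 0..1 across the range
    return round(1023 * frac)

def linear_encode(x):
    return round(1023 * min(max(x, 0.0), 1.0))

# Codes spent on the darkest 7 of 14 stops (scene values below 2**-7):
log_dark = log_encode(2 ** -7) - log_encode(2 ** -14)
lin_dark = linear_encode(2 ** -7) - linear_encode(0.0)
print(log_dark, lin_dark)  # log spends about half its codes there, linear almost none
```

Both encodes clip at the same scene level, which is the point of the argument above: the curve changes where the precision goes, not how many stops the sensor can capture.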
> I don't see why there should be a mark 2, because how can the R5 get better? Like maybe perfect white balance lol.

Dual CF Express slots.
-Cody
Wedding photographer in Maryland
> You're right in that the computer analogy is better. So what you're saying is that you have a 3070 and, if you put 4090 drivers on it, it will somehow perform like the 4090. That's incorrect. Let's say you're correct at 14.6 stops; C-Log2 is for cameras with 16+. You might 'feel' it gives you more than C-Log3, and you might prefer the way it initially looks, as the gamma curve balances shadows and midtones differently. But that doesn't mean it's factually correct.

Correct, it won’t magically give the sensor more dynamic range, but what it will do is utilize the sensor’s dynamic range. The R5 is measured at 14.6 stops, yet the a7S III is at 13.9. Yet the a7S III has more dynamic range in video? Right: it's what the S-Log3 profile can do. And your analogy should really be the tune on the car's computer that makes it go faster: run a more conservative tune and you lose HP.
Actually you are wrong: the sensor does have more dynamic range than what C-Log3 offers, and it is easy to test. Shoot RAW video, transform it to C-Log2, and you will see just by comparing it to C-Log3; or you can compare C-Log3 with the photo files of the same camera.
There is more clipping when shooting C-Log3; in Gerald's video about the R5 C you can see it clearly, and some people explain it well in the comments.
Many tests have been done on this, and they show that C-Log2 would bring additional dynamic range to the file in the highlights.
I think Canon didn't go with it because there is a huge amount of noise when using C-Log2, especially in the case of the R5.
The R5 C has a much cleaner image and actually slightly better DR when shooting video, so there is definitely room to improve at least the image of the R5.
> The R5 C has a much cleaner image and better DR than the R5 because it has a different sensor and processor than the R5.

I’m not a video shooter, but when you make the claim that the R5 C has a different sensor than the R5, you pretty much trash your credibility across the board.
> Dual CF Express slots
> Digital hot shoe for the new flashes
> 14-bit electronic shutter
> Better rolling shutter

Dual CFe cards will generate more heat, but would allow dual recording for video. The cost versus UHS-II SD cards isn't very different. Unlikely, I think.
> The R5 C also has a different sensor and CPU. They're similar but not the same as the R5's, which is why the R5 C can do more.

Different sensor? I thought it was all about adding a fan...
> How did you get the arrows with a Nikon lens on an R5?

In AF menu 2, I set "Focus guide" to "On".
> Different sensor? I thought it was all about adding a fan...

And removing IBIS, and the separate cinema menus (with a laggy reboot to switch between them).
> Thanks for your help.
> I'm still a bit confused (I must be getting slower as I age).
> Because the phase-detect pixels are on-sensor, focus is evaluated on the same plane as image capture. How the light gets to the sensor is irrelevant -- it's just measured once it's there.
> How can phase-detect subpixels receive the same light on the same plane as the image sensor at capture time, conclude that the image is in focus (arrows align and turn green), but still capture a back-focused image?
> Why does focus peaking work accurately but the manual focus assist box/arrows do not?
> In my mind, focus peaking and the focus assist box are both visual aids to alert me when focus is detected as I turn the focus ring.
> How can they produce different results?
> (In case it wasn't obvious, I'm only talking about manual focusing. It makes sense why a lens would need electronic communication to/from the processor to know which direction to adjust focus and how far. I don't see how that is necessary for manual focusing, since the processor is communicating with the operator via visual cues and then detects the changes on-sensor as the focus ring is turned.)

I'm at the same point: I know that DPAF can get thrown off by things that don't impact contrast detect, but I don't know why. If someone has a link to an explanation, I'd appreciate it.
> Thanks for your help. I'm still a bit confused (I must be getting slower as I age). [...]

Focus peaking only suggests more or less sharp areas. But if you look closer (at the pixel level), you'll notice that the definition of "sharp" is quite vague, while the "triangle" method yields much higher on-the-spot precision.
> Focus peaking only suggests more or less sharp areas. But if you look closer (at the pixel level), you'll notice that the definition of "sharp" is quite vague, while the "triangle" method yields much higher on-the-spot precision.

I'd love to have that increased precision, but for some reason the triangles always indicate focus is achieved when the result is actually back-focused. It's a head-scratcher... Focus peaking can be helpful, but not for determining with precision which part of a flower, for instance, is in perfect focus. I'd rather use the "loupe" (magnify) function for critical work, like macros.
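The vagueness of "sharp" that peaking relies on can be illustrated directly: peaking highlights every position whose local contrast clears a threshold, so a whole band around best focus lights up rather than a single point. A toy 1-D sketch with invented numbers:

```python
# Toy focus peaking: mark every position whose local gradient exceeds a
# threshold. Numbers are illustrative, not from any real camera pipeline.
def peaking(row, threshold=3):
    grads = [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
    return [i for i, g in enumerate(grads) if g >= threshold]

# A "row" of pixel values with its sharpest transition near the middle:
row = [10, 10, 12, 16, 24, 40, 24, 16, 12, 10, 10]
print(peaking(row))  # a band of highlighted positions, not one sharpest point
```

Several positions clear the threshold at once, which matches the observation above: peaking tells you roughly where focus is, while a correlation-style check (the triangles) can point at one spot, for better or worse.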
> My understanding is that the loss of DR with ES typically amounts to about 1 stop. The loss of DR is due to a reduction in bit depth (12-bit mechanical, 10-bit electronic in most cases), and that is done purely to increase readout speed. AFAIK there is no reason why electronic shutter can't run at 12-bit and thereby retain the full DR; presumably it's dictated by sensor design and processor power. Anyone with greater knowledge, please contribute.

You are right that the reduced bit depth is a contributor to maximum readout speed.
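A rough way to see why dropping from 12-bit to 10-bit can cost about a stop rather than the naive two: quantization caps DR at roughly one stop per bit, but read noise usually sets the real floor, and the bit-depth cap only bites once it falls below the noise limit. A toy model with hypothetical sensor numbers (not measured R5 values, and ignoring how DR is actually standardized and measured):

```python
import math

# DR in stops as the smaller of two limits: quantization (levels available)
# and the noise floor (full-well capacity over read noise). Numbers below
# are invented for illustration.
def dr_stops(full_well_e, read_noise_e, bits):
    quant_limit = 2 ** bits                    # distinct recordable levels
    noise_limit = full_well_e / read_noise_e   # usable signal range
    return math.log2(min(quant_limit, noise_limit))

fw, rn = 50000.0, 25.0   # hypothetical sensor: noise floor near 11 stops
print(dr_stops(fw, rn, 12), dr_stops(fw, rn, 10))  # ~1 stop apart, not 2
```

With these assumed numbers the 12-bit mode is already noise-limited, so going to 10-bit only costs the difference between the noise limit and 10 stops, in the neighborhood of the "about 1 stop" quoted above.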
> You're mistaken. RAW is developed with C-Log2; that doesn't mean it shoots C-Log2. The R5 C has a much cleaner image and better DR than the R5 because it has a different sensor and processor than the R5. They're similar, but they are very different, hence the increased capabilities of the R5 C.

Shooting RAW allows you to use the maximum dynamic range of the sensor. In Canon's case that is from 12-bit RAW, since it does not use the full 14 bits for video, probably to allow faster readout speeds.
> Correct, it won’t magically give the sensor more dynamic range, but what it will do is utilize the sensor’s dynamic range. The R5 is measured at 14.6, yet the a7S III is 13.9. Yet the a7S III has more dynamic range? Right, it's what the S-Log3 profile can do. Your analogy should be compared to the tune on the computer that makes the car go faster: a more conservative tune and you lose HP.

There is one difference: Canon uses a 12-bit image for video while Sony uses a 14-bit image, and some of the differences come from that. It is also the reason most Sony bodies have much worse readout speeds and stronger rolling shutter, but produce more DR in video. The a7S III is the best of Sony's here because, with only a 12 MP sensor, it doesn't oversample from a higher resolution down to 4K; it is basically a 1:1 readout.
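The readout-speed point translates directly into rolling-shutter skew: rows are sampled at slightly different moments across the frame, so a moving subject is displaced between the top and bottom rows. A toy calculation with hypothetical readout times (reading 14 bits per sample takes longer than 12, all else equal; these are not measured figures for any specific body):

```python
# Horizontal skew in pixels for a subject crossing the frame while the
# sensor reads top row to bottom row. Purely illustrative numbers.
def skew_px(readout_ms, subject_px_per_s):
    return subject_px_per_s * (readout_ms / 1000.0)

fast, slow = 8.0, 16.0   # assumed full-frame readout times in milliseconds
speed = 1000.0           # subject moving 1000 px/s across the frame
print(skew_px(fast, speed), skew_px(slow, speed))  # doubling readout doubles skew
```

This is why a slower 14-bit readout tends to mean visibly stronger rolling shutter even though it buys precision per sample, and why the a7S III's low-resolution 1:1 readout sidesteps the problem.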