Canon to release major firmware update for the Canon EOS R5

I was under the impression S-Log3 is their highest dynamic range profile. S-Log2 is the equivalent of C-Log3.
That is true, but both S-Log2 and C-Log3 are capable of 14+ stops of dynamic range.
Just as driving a faster car will not change the speed limit, having a log curve with more dynamic range will not go beyond the limit of the sensor.
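To put a rough number on that limit, here is a back-of-the-envelope sketch in Python with made-up sensor figures (not measured R5 values): the engineering dynamic range of a sensor is roughly log2 of full-well capacity over read noise, and no log curve can encode stops the sensor never captured.

```python
# Back-of-the-envelope sketch of the "sensor sets the ceiling" point.
# The numbers are hypothetical, chosen only to illustrate the arithmetic.
from math import log2

full_well_electrons = 50_000   # hypothetical full-well capacity
read_noise_electrons = 3.0     # hypothetical read noise

sensor_dr_stops = log2(full_well_electrons / read_noise_electrons)
print(f"sensor dynamic range ~ {sensor_dr_stops:.1f} stops")   # ~14.0 stops
# A log curve can encode this range more efficiently into a video file,
# but it cannot add stops the sensor never recorded.
```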
 
Upvote 0
you clearly haven’t done any work for clients, as this is something no client has ever asked of me, nor of anyone I know in the industry. Clients don’t care about MP, they care about the final product. This is such a moot point
Actually, it was eosuser1234 who mentioned the demand from clients earlier in this thread (quote: “I have been waiting for more megapixels. Clients demanding more than the R5. Had to make the jump to the GFX 100s. I am now booked almost 8 months. Still have my RF glass, but... who knows.”). entoman asked a reasonable question about whether this ‘demand’ was logical, at which point someone else took him to task over the presumed intention behind his question (probably a misinterpretation of it). I don’t think entoman claimed to know what clients want; because someone said the demand came from clients, he simply asked whether that was 'hyped by the media'.
 
  • Like
Reactions: 2 users
Upvote 0
you clearly haven’t done any work for clients, as this is something no client has ever asked of me, nor of anyone I know in the industry. Clients don’t care about MP, they care about the final product. This is such a moot point
OK, as you ask, for context, I've had 3 books published which between them included nearly 2000 of my images, and the publishers specified minimum resolutions which to me didn't make sense, considering the reproduction size and ppi. I did my own calculations, and these were accepted by the publisher. The books were accordingly published and got excellent reviews, with the quality of the images and printing being heavily praised.

But getting back to the original question - I've read several times on forums that people claiming to be professional photographers *have* recounted cases where clients *have* demanded high resolution images. To state that "clients don't care about MP" is pure nonsense - it depends entirely on what type of client you are dealing with, what the end-purpose of the images will be, and on their own level of understanding of photographic and printing requirements.
 
  • Like
Reactions: 5 users
Upvote 0
I would bet you can’t even blind test the difference between a shot taken in 12 bit and one taken in 14 bit. Go ahead, try it. This concern for 14 bit only shows up on paper; in real-world results it's moot
Oh, I realize that; it doesn't mean I wouldn't want it if I could get it.
I was simply stating that it is something that would make the R5 II an improvement over the R5.
 
Upvote 0
If one could not, why would the manufacturer use a 14 bit ADC in the first place?
I want to say ‘bragging rights’, but the premise is a bit flawed. There are no 14-bit screens available on the mass market, and prints have even less dynamic range. So viewing 14-bit images/footage needs a tone map to make the extreme ends visible, which moves it into subjective-taste territory and people start saying things like ‘highlight roll-off’.
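As a toy illustration of what that tone map has to do (a minimal Python sketch, not anything a camera or raw converter actually uses): 14-bit data has 16,384 code values while an 8-bit display accepts 256, so some curve must decide what gets compressed, and that choice is exactly where taste comes in.

```python
# Toy illustration: squeeze 14-bit code values (0..16383) into the 8-bit
# range (0..255) of a typical display. The simple power curve is only a
# placeholder; real tone curves are far more elaborate.

def tone_map_14_to_8(code_value, gamma=2.2):
    linear = code_value / 16383.0                 # normalize to 0..1
    return round(255 * linear ** (1.0 / gamma))   # gamma-compress to 8 bits

for cv in (0, 64, 1024, 8192, 16383):
    print(f"14-bit {cv:5d} -> 8-bit {tone_map_14_to_8(cv):3d}")
```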
 
Upvote 0
If one could not, why would the manufacturer use a 14 bit ADC in the first place?
Because 14 must be better than 12?

This hearkens back to the DRone Wars of 10 years ago, when those following the Jedi Cannon did battle with the Sonith, who were seduced by the dark side of the Exmor. Now we're talking about the difference between 12 stops of DR and 10.5 stops of DR in different modes of the same camera, instead of the difference between 11 stops of DR and 9.5 stops of DR in different cameras, but the conclusion is no different. Having the extra 1.5 stops of DR can be of benefit in certain, limited situations. In most real-world shooting situations, the DR of a scene is either less than 9 stops, rendering the difference meaningless, or far more than 12 stops, meaning multiple exposures are needed to capture the full scene DR regardless.
 
Upvote 0
That’s unlikely to change, IMO. My solution is a custom file name with my two initials, an underscore, then a number that I manually increment every 10,000 shots, e.g., NN_0, NN_1, NN_2 … NN_10, etc.
When my filenames go over a certain limit (e.g. 6,000) I start them over at 0 and change the prefix (F02_, F03_, etc.) - almost exactly what you've done. When I'm happily taking thousands of photos at a time, that's about the only way I can avoid the counter wrapping around while I'm not paying attention and giving me duplicate filenames. I've never had to worry about this with other brands before (roll eyes!)
 
Upvote 0
When my filenames go over a certain limit (e.g. 6,000) I start them over at 0 and change the prefix (F02_, F03_, etc.) - almost exactly what you've done. When I'm happily taking thousands of photos at a time, that's about the only way I can avoid the counter wrapping around while I'm not paying attention and giving me duplicate filenames. I've never had to worry about this with other brands before (roll eyes!)
I use a script to rename pictures on import to ‘YYYYMMDD HHMM Camera lens IMG_xxxx’. I found out the hard way that the camera adjusts the counter based on the last file present on the card, so moving non-empty cards between different bodies will quickly lead to duplicate numbers.

This also makes it a bit easier to find pictures manually outside of Lightroom.
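For anyone wanting to try the same thing, here is a minimal sketch of that kind of import-rename script in Python. It uses the file's modification time as a stand-in for the EXIF capture time, and the card path and 'R5 RF24-105' label are made-up examples; a real version would pull the date, body and lens from the EXIF data.

```python
# Minimal sketch of an import-rename script along the lines described above.
# Assumptions: the source path, destination path and camera/lens label are
# placeholders, and file modification time stands in for EXIF capture time.
from datetime import datetime
from pathlib import Path

SOURCE = Path("/Volumes/EOS_DIGITAL/DCIM/100EOS5R")   # hypothetical card path
DEST = Path("~/Pictures/import").expanduser()
LABEL = "R5 RF24-105"                                  # would come from EXIF

DEST.mkdir(parents=True, exist_ok=True)
for src in sorted(SOURCE.glob("*.CR3")):
    stamp = datetime.fromtimestamp(src.stat().st_mtime).strftime("%Y%m%d %H%M")
    target = DEST / f"{stamp} {LABEL} {src.stem}{src.suffix}"
    if target.exists():
        print(f"skipping {src.name}: {target.name} already exists")
        continue
    target.write_bytes(src.read_bytes())   # copy rather than move, to be safe
    print(f"{src.name} -> {target.name}")
```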
 
Upvote 0
Yeah, IIRC the 8.3 file name format started with cuneiform writing on stone tablets.
Yes (ha!), some conventions are quirky and we have to live with them. But if they had just allowed us to choose the first letter (as required by this convention) and let the other 7 digits roll over (as I'm used to), then this wouldn't have been a problem. Some software programmer or manager made this decision thinking it was better, and others there chose to continue it despite the limitations.
 
Upvote 0
Whenever I put a card into a camera, I format it immediately. IMO, always best to start with a clean slate.
Same here, as soon as I've downloaded *and* backed up my images, I reformat the card in the camera. A couple of self-proclaimed "experts" on dpr challenged this practice and said it increases the chances of corrupting a card, but I've been doing this for at least 10 years without issues.

The only time I've had a card corrupt was with a brand new card (formatted in camera), but I'm pretty sure it was because the card itself had faulty memory - I filled half the card, but everything after that had a yellow triangle warning icon instead of an image... I dumped the card rather than risk using it again.
 
Last edited:
Upvote 0
Same here, as soon as I've downloaded *and* backed up my images, I reformat the card in the camera. A couple of self-proclaimed "experts" on dpr challenged this practice and said it increases the chances of corrupting a card, but I've been doing this for at least 10 years without issues.
*inhales* Actually…..

Formatting in the camera most likely reduces corruption, and on a lot of Canon bodies it also made writing to the card a lot faster: the camera's firmware didn't like how Windows formatted the cards and would fall back to a different, slower method for writing pictures.

The above is assuming you have a non-counterfeit SD card from a known brand.
 
Upvote 0
Same here, as soon as I've downloaded *and* backed up my images, I reformat the card in the camera.
I keep two sets of cards for each camera. When I remove them, I put in the old ones and format them. For the just-removed cards, I copy the RAW images to my Mac, and thus I have the unformatted cards as an extra backup. Sometimes I don't get around to processing the images right away, and in any case it takes a few days for images on my internal drive to propagate through my various backups (the first is the hourly Time Machine backup to my home NAS, but I also keep a pair of Time Machine backups on USB-C hard drives that I alternate, keeping one in my desk at the office and swapping them weekly for an off-site backup). Overkill, but at least I won't lose data!
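For those who like the "card stays untouched until the copy is proven good" approach, here is a small sketch of a checksum-verified import in Python; the paths are made-up examples and the SHA-256 check is just one reasonable way to confirm the copy matches the card.

```python
# Sketch of a checksum-verified copy from card to computer, in the spirit of
# keeping the unformatted card as a backup until the copy is known good.
# The card and archive paths are made-up examples.
import hashlib
import shutil
from pathlib import Path

def sha256_of(path, chunk=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest()

CARD = Path("/Volumes/EOS_DIGITAL/DCIM")
ARCHIVE = Path("~/Pictures/raw-archive").expanduser()
ARCHIVE.mkdir(parents=True, exist_ok=True)

for src in sorted(CARD.rglob("*.CR3")):
    dst = ARCHIVE / src.name
    shutil.copy2(src, dst)                  # copy with timestamps preserved
    if sha256_of(src) != sha256_of(dst):    # only trust the copy if it matches
        raise SystemExit(f"checksum mismatch for {src.name} - keep the card!")
    print(f"verified {src.name}")
```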
 
  • Wow
Reactions: 1 user
Upvote 0
I keep two sets of cards for each camera. When I remove them, I put in the old ones and format them. For the just-removed cards, I copy the RAW images to my Mac, and thus I have the unformatted cards as an extra backup. Sometimes I don't get around to processing the images right away, and in any case it takes a few days for images on my internal drive to propagate through my various backups (the first is the hourly Time Machine backup to my home NAS, but I also keep a pair of Time Machine backups on USB-C hard drives that I alternate, keeping one in my desk at the office and swapping them weekly for an off-site backup). Overkill, but at least I won't lose data!
Nothing wrong with overkill, it pays to be cautious with important shots - I shoot a duplicate set simultaneously to both cards in my R5. I back up all my photos daily via Time Machine to a pair of portable SSDs, and also back up everything to iCloud. If I leave the house empty for more than a couple of hours, I take one of the portable SSDs with me and put the other one in a fireproof safe! How's that for overkill?
 
  • Like
Reactions: 1 user
Upvote 0
Yes, firmware gets more complex with every new camera, and with literally thousands of customisation permutations available, it's impossible to test all of them and eliminate bugs before firmware is released. It takes time for users to report issues and for Canon to determine how rare they are, and whether they are caused by firmware bugs, user error, or sub-standard components. Inevitably bugs and conflicts will occur, causing potential freezes.

Ideally manufacturers would incorporate system error reporting into cameras and transmit the report via Bluetooth to a phone, from where it could be forwarded to the manufacturer. Unlikely to happen, though.
IMHO, there are only 2 reasons why errors occur frequently. The first is sloppy programming, the second is careless engineering design. I'm a retired arcade & console programmer and will give a few examples of what I have seen in my career:

Careless Programming: I started programming when personal computers & arcade machines first came out. I wrote assembly language from scratch to run everything (there was no prior code or operating system I had to build upon). I double-checked every section of code and data and single-stepped every instruction until I was sure it was correct, including testing the maximum limits of data instead of just the typical ranges. When I found a bug I could quickly track it down, since there was usually only a single bug in what I had just written. This took a lot of time, maybe three times as long as fellow programmers who quickly wrote code, ran it, assumed it was correct if they didn't initially see any problems, and then repeated that cycle, adding code over and over until the project was finished. When they got bugs (often fatal) they took forever to track them down (and they never found them all), because they had a house of cards with hundreds of bugs just waiting for infrequent combinations to occur. In the meantime, I got over a dozen arcade and early personal computer games produced without a single bug ever found in the field.

That was back in the day when you could do that. Later in my career there was "operating system" code you had to start with and build upon. I dreaded that, as it was never bug-free. Or maybe you had to take over a fellow programmer's code and build upon that. Once I had to track down a bug in another programmer's C++ code and found that they had redefined the "+" operator for a particular data type with the wrong assembly code. Now you'd run some code that didn't work, and yet your simple C++ code looked perfect - what a pain. Another time a truly brilliant programmer wrote a big piece of code on the PS3 which ran fine. I wrote another piece which ran fine. When that programmer ran my code and then ran his code afterwards, his code would crash. He told me that I had done something wrong. I checked it out and found the problem in his code. His code assumed the data was zeroed out before it ran. I never assume this is the case, as I don't trust it. So when my code first started, I'd set all data memory to "DEAD BEEF". If I ever loaded a value and found that pattern, I'd know I had probably forgotten to initialize my data. It turned out his code had uninitialized data, and when it fetched my DEAD BEEF data (instead of the assumed 0) it would crash.
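For readers who haven't met the poison-pattern trick, here is a tiny sketch of the idea transplanted into Python (the original would have been assembly/C on the console; the class and names here are invented purely for illustration): pre-fill storage with a recognizable value, and any read that still returns it exposes a slot that was never initialized.

```python
# Sketch of the "poison pattern" idea described above, in Python purely for
# illustration. The original technique filled raw data memory with 0xDEADBEEF;
# here a small wrapper class plays the same role and complains on a read of
# a slot that was never written.

POISON = 0xDEADBEEF

class PoisonedMemory:
    """Fixed-size array of 32-bit slots, pre-filled with a poison value."""

    def __init__(self, n_slots):
        self.slots = [POISON] * n_slots      # everything starts "uninitialized"

    def write(self, index, value):
        self.slots[index] = value & 0xFFFFFFFF

    def read(self, index):
        value = self.slots[index]
        if value == POISON:
            # In the original story this surfaced as a crash; here we just
            # make the missing initialization impossible to miss.
            raise RuntimeError(f"slot {index} read before being initialized")
        return value

mem = PoisonedMemory(4)
mem.write(0, 42)
print(mem.read(0))   # fine: 42
print(mem.read(1))   # raises: slot 1 was never written
```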

Careless Hardware: One of my arcade games was being manufactured and they found it was sometimes crashing. There were 2 processors running my code - one handled the main logic and the other drew to the screen (every single pixel was drawn by my code, one pixel at a time). I added additional "bad data breakpoints" so I could isolate the problem, and found that when data was being sent from one processor to the other it would occasionally drop a bit. I told the hardware designer and he thought it had to be a software bug. I had to set up a logic analyzer with almost a hundred wires to track down what happened and then showed the results to him, proving it was a hardware fault. He shrugged and said I had to fix it in software. I found that I had to re-read every byte of transferred data multiple times, and if the data changed (which it shouldn't), re-read it until it didn't. The glitch was so infrequent that it never occurred multiple times in a row, and with that the crashing went away. Talking with him afterwards, he told me that the entire board design was "slightly out of spec": they used parts that were not guaranteed to be fast enough for the clock speed they were running (to save money with slower & cheaper parts). Most parts would run appreciably faster than they were spec'd to, but a few wouldn't and would fail when overclocked. So the workers stuffing the boards with parts would just "swap different chips" in and out until the problem board started to work, and they'd ship it.
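The software workaround boils down to "read until two consecutive reads agree". Here is a small sketch of that idea in Python; read_fn is an invented stand-in for whatever reads a byte across the flaky inter-processor link, and the simulated glitch rate is arbitrary.

```python
# Sketch of the "re-read until two reads agree" workaround described above.
# read_fn is a made-up stand-in for reading one byte across the flaky link.
import random

def read_stable(read_fn, max_tries=10):
    """Re-read until two consecutive reads return the same value."""
    previous = read_fn()
    for _ in range(max_tries):
        current = read_fn()
        if current == previous:
            return current      # two matching reads in a row: trust the value
        previous = current      # mismatch: the link glitched, try again
    raise IOError("link never settled on a stable value")

# Simulated flaky link: ~10% of reads drop one random bit of the true value.
def flaky_read(true_value=0xA7):
    if random.random() < 0.1:
        return true_value & ~(1 << random.randrange(8))
    return true_value

print(hex(read_stable(flaky_read)))
```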

None of these problems were due to "more complexity". They were due to carelessness, the need to get product out quickly and cheaply, and a tolerance for that among the people doing the work, which is common human nature. I have no doubt that this is still the reason why almost all such failures occur.
 
Last edited:
  • Like
Reactions: 5 users
Upvote 0
IMHO, there are only 2 reasons why errors occur frequently. The first is sloppy programming, the second is careless engineering design. I'm a retired arcade & console programmer and will give a few examples of what I have seen in my career:

Careless Programming: I started programming when personal computers & arcade machines first came out. I wrote assembly language from scratch to run everything (there was no prior code or operating system I had to build upon). I double-checked every section of code and data and single-stepped every instruction until I was sure it was correct, including testing the maximum limits of data instead of just the typical ranges. When I found a bug I could quickly track it down, since there was usually only a single bug in what I had just written. This took a lot of time, maybe three times as long as fellow programmers who quickly wrote code, ran it, assumed it was correct if they didn't initially see any problems, and then repeated that cycle, adding code over and over until the project was finished. When they got bugs (often fatal) they took forever to track them down (and they never found them all), because they had a house of cards with hundreds of bugs just waiting for infrequent combinations to occur. In the meantime, I got over a dozen arcade and early personal computer games produced without a single bug ever found in the field.

That was back in the day when you could do that. Later in my career there was "operating system" code you had to start with and build upon. I dreaded that, as it was never bug-free. Or maybe you had to take over a fellow programmer's code and build upon that. Once I had to track down a bug in another programmer's C++ code and found that they had redefined the "+" operator for a particular data type with the wrong assembly code. Now you'd run some code that didn't work, and yet your simple C++ code looked perfect - what a pain. Another time a truly brilliant programmer wrote a big piece of code on the PS3 which ran fine. I wrote another piece which ran fine. When that programmer ran my code and then ran his code afterwards, his code would crash. He told me that I had done something wrong. I checked it out and found the problem in his code. His code assumed the data was zeroed out before it ran. I never assume this is the case, as I don't trust it. So when my code first started, I'd set all data memory to "DEAD BEEF". If I ever loaded a value and found that pattern, I'd know I had probably forgotten to initialize my data. It turned out his code had uninitialized data, and when it fetched my DEAD BEEF data (instead of the assumed 0) it would crash.

Careless Hardware: One of my arcade games was being manufactured and they found it was sometimes crashing. There were 2 processors running my code - one handled the main logic and the other drew to the screen (every single pixel was drawn by my code, one pixel at a time). I added additional "bad data breakpoints" so I could isolate the problem, and found that when data was being sent from one processor to the other it would occasionally drop a bit. I told the hardware designer and he thought it had to be a software bug. I had to set up a logic analyzer with almost a hundred wires to track down what happened and then showed the results to him, proving it was a hardware fault. He shrugged and said I had to fix it in software. I found that I had to re-read every byte of transferred data multiple times, and if the data changed (which it shouldn't), re-read it until it didn't. The glitch was so infrequent that it never occurred multiple times in a row, and with that the crashing went away. Talking with him afterwards, he told me that the entire board design was "slightly out of spec": they used parts that were not guaranteed to be fast enough for the clock speed they were running (to save money with slower & cheaper parts). Most parts would run appreciably faster than they were spec'd to, but a few wouldn't and would fail when overclocked. So the workers stuffing the boards with parts would just "swap different chips" in and out until the problem board started to work, and they'd ship it.

None of these problems were due to "more complexity". They were due to carelessness, the need to get product out quickly and cheaply, and a tolerance for that among the people doing the work, which is common human nature. I have no doubt that this is still the reason why almost all such failures occur.
Thanks for an interesting reply. I guess the main cause of carelessness is time pressure - all the manufacturers are in fierce competition and want their product out first, so everything from design to programming to assembly to final testing is rushed, and any issues have to be sorted out afterwards. I'm pretty sure that the complexity of the firmware is also an issue though - the more complex it is, the longer it takes to write and test the code, and when folks are under time pressure, they're more likely to rush and become careless.
 
Last edited:
  • Like
Reactions: 1 user
Upvote 0