Canon EOS R8 specifications

Right now I'm using an R5 and half a dozen RF lenses, but what incentive is there for me and others in a similar position to buy further products? I honestly can't see an R5 MkII, an R1 (or a future Nikon or Sony) offering anything additional that I actually want or need.

All companies periodically need to launch a new range of products that are perceived as radically different; otherwise customers have little incentive to buy further products from them. New technologies will result in major, unforeseen changes to the gear we use, especially with the inevitable advent of 3D photography/videography.

Based on what I've read about the greater potential of smaller formats, I think there's a good chance that when RF and Z become "stale" in a few years' time, they'll be superseded by smaller formats that are better suited to computational photography and have lighter, smaller equivalent lenses.

On the other hand, it's probably equally likely that cameras, as we currently know them, will be almost entirely replaced by smartphones or head-worn gear.

Really, it's just a case of how far into the future these things will happen.
The flow of camera development has been the other way thus far - larger formats becoming more common and relatively more affordable; even in phones, sensors have got slightly bigger over time. It would take some interesting marketing to reverse the narrative. Ultimately, the bigger the sensor, the more light you gather, and, limitations of bandwidth etc. notwithstanding, anything you can do on a small sensor you can do with a bigger one.
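To put rough numbers on that light-gathering point, here is a quick sketch using standard published sensor dimensions (the stop arithmetic is mine):

```python
import math

# Nominal sensor dimensions in mm (width, height)
sensors = {
    "Full frame": (36.0, 24.0),
    "APS-C (Canon)": (22.3, 14.9),
    "Micro Four Thirds": (17.3, 13.0),
}

# At equal f-number and shutter speed, total light gathered scales with area
ff_area = sensors["Full frame"][0] * sensors["Full frame"][1]
for name, (w, h) in sensors.items():
    area = w * h
    stops = math.log2(ff_area / area)
    print(f"{name}: {area:.0f} mm^2, {stops:.1f} stops behind full frame")
```

So, all else being equal, Micro Four Thirds starts roughly two stops behind full frame in total light gathered, which is the gap computational techniques would need to close.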

I won't make predictions about what comes next; I'd just observe that paradigm shifts are rare. For now, I am content with what we have (though unlike you I am not at the limit of current tech, as my budget is very constrained).

entoman

wildlife photography
May 8, 2015
UK
The flow of camera development has been the other way thus far - larger formats becoming more common and relatively more affordable; even in phones, sensors have got slightly bigger over time. It would take some interesting marketing to reverse the narrative. Ultimately, the bigger the sensor, the more light you gather, and, limitations of bandwidth etc. notwithstanding, anything you can do on a small sensor you can do with a bigger one.
Larger formats have become more common because they have become more affordable, and because up until now there has been a worthwhile increase in image quality compared to smaller formats. But there comes a time when the image quality has reached such a high level that any further improvements are unlikely to be visible in real world photography. There also comes a time when people realise that larger formats have limitations, and that those limitations can outweigh the benefits.

It may be the case that technological advances make it possible for large formats to produce frame rates as fast as is possible with small formats, and it may also be the case that computational techniques can eventually be applied as efficiently as with smaller formats, although that is a long way off.

But if small formats can produce results that are good enough to satisfy 99.9% of users (and I think ultimately that will be the case), no one is going to want to carry and use lenses that are twice as large and heavy as is necessary.

I'm content at the moment with my R5 gear, and as I'm already 73, I doubt I'll still be taking photographs in 10 years' time, but nevertheless I follow developments with great interest. My gut feeling is that full frame will lose favour as the benefits of smaller sensors gradually outweigh those of full frame. Nikon, Canon, Fujifilm and Sony users will revert to APS-C, but M43 is likely to be at the forefront of computational photography for the reasons I've noted previously, and in my opinion the future of photography is computational.

AlanF

Desperately seeking birds
CR Pro
Aug 16, 2012
Most birds flap their wings at frequencies below 10 Hz, so 100 Hz time sequences can still be useful. Hummingbirds are a special case with a special solution: they flap their wings at around 50 Hz, so by taking every 2nd 100 Hz frame you can stroboscopically "slow down" their wing movements.
It's not frequency per se that matters; it's the amount of movement that occurs during the exposure, which is a combination of frequency and amplitude: a high-frequency motion with low amplitude may be undetectable, whereas a low-frequency motion with high amplitude may move a long way during the exposure. The tips of birds' wings often show motion blur where the inner parts don't, even though the frequency of flapping is the same along the wing.
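That point can be put in numbers for simple harmonic motion x(t) = A·sin(2πft), whose peak speed is 2πfA, so the worst-case displacement during an exposure of t seconds is roughly 2πfA·t. A toy calculation with made-up amplitudes:

```python
import math

def peak_blur(amplitude_mm, freq_hz, exposure_s):
    """Worst-case displacement during the exposure for sinusoidal motion."""
    return 2 * math.pi * freq_hz * amplitude_mm * exposure_s

# Hypothetical 10 Hz wingbeat at 1/1000 s: large-amplitude tip vs small-amplitude root
tip = peak_blur(amplitude_mm=150, freq_hz=10, exposure_s=1 / 1000)   # ~9.4 mm of travel
root = peak_blur(amplitude_mm=10, freq_hz=10, exposure_s=1 / 1000)   # ~0.6 mm of travel
```

Same frequency, fifteen times the amplitude, fifteen times the blur - which matches the blurred-tips, sharp-body observation.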
The reference image is the single one whose geometry you want (it could even be chosen from the sequence after shooting). You then calculate optical flow from this image to every other image in the sequence, geometrically warp those images back against their flows, and merge the resulting image stack.
In theory you could do that, but you would be doing it on noisy images and would have to extract the detail obscured by noise in each frame to reconstruct the final image. That might be worthwhile if you were, say, solving the structure of a protein by cryo-electron microscopy, but for a photo a simple application of Topaz DeNoise AI would be far, far simpler and would suffice.
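For what it's worth, the align-then-merge idea being debated can be sketched in one dimension with NumPy: synthetic noisy "frames" of a shifted signal, an integer cross-correlation shift standing in for optical flow, then a plain average. Everything here (signal, shifts, noise level) is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D "scene" standing in for an image row
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
scene = np.sin(3 * x) + 0.5 * np.sin(7 * x)

# A burst of noisy frames, each circularly shifted by subject motion
shifts = [0, 20, -15, 30, -25, 10, 40, -35]
frames = [np.roll(scene, s) + rng.normal(0.0, 0.5, scene.size) for s in shifts]

def align_shift(ref, frame):
    """Integer shift that best aligns `frame` to `ref` (circular cross-correlation)."""
    corr = np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(frame))).real
    lag = int(np.argmax(corr))
    return lag if lag <= ref.size // 2 else lag - ref.size

ref = frames[0]  # the reference geometry, as in the quoted scheme
aligned = np.stack([np.roll(f, align_shift(ref, f)) for f in frames])
merged = aligned.mean(axis=0)      # motion-compensated merge
naive = np.mean(frames, axis=0)    # merge WITHOUT motion compensation

err_merged = float(np.std(merged - scene))
err_naive = float(np.std(naive - scene))
# Averaging N aligned frames cuts noise by roughly sqrt(N); the naive
# average instead smears the signal across the motion.
```

The same idea scales to 2D with per-pixel optical flow instead of a single global shift, which is where the hard cases (rotating heads, flapping wings) come in.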

entoman

wildlife photography
May 8, 2015
UK
for a photo a simple application of Topaz DeNoise AI would be far, far simpler and would suffice.
I use Topaz DeNoise AI. It's amazing how the noise is virtually eliminated, even at ISO 3200, while the sharpness and fine detail are actually enhanced. Even at ISO 6400 the results are impressive, although fine detail gets lost and the image can end up looking "plasticky".

Unfortunately, I've found the results inconsistent - it's prone to throwing up weird artefacts in the form of random large soft-edged rectangles. This precludes batch processing and makes it necessary to check each image very carefully, as an artefact can be overlooked when it occurs in a dark part of the image.

Consequently, I've been very reluctant to "upgrade" to the new Photo AI version. Comparison tests on dpreview show that DxO DeepPrime produces even better results, suppressing noise further and resolving greater detail. I shall carry on with Topaz DeNoise AI for a while, but I'd be interested to hear what users of DxO DeepPrime have to say about it, and whether they encounter any issues.

AlanF

Desperately seeking birds
CR Pro
Aug 16, 2012
I use Topaz DeNoise AI. It's amazing how the noise is virtually eliminated, even at ISO 3200, while the sharpness and fine detail are actually enhanced. Even at ISO 6400 the results are impressive, although fine detail gets lost and the image can end up looking "plasticky".

Unfortunately, I've found the results inconsistent - it's prone to throwing up weird artefacts in the form of random large soft-edged rectangles. This precludes batch processing and makes it necessary to check each image very carefully, as an artefact can be overlooked when it occurs in a dark part of the image.

Consequently, I've been very reluctant to "upgrade" to the new Photo AI version. Comparison tests on dpreview show that DxO DeepPrime produces even better results, suppressing noise further and resolving greater detail. I shall carry on with Topaz DeNoise AI for a while, but I'd be interested to hear what users of DxO DeepPrime have to say about it, and whether they encounter any issues.
My main weapon is DxO PL6, which is usually more than good enough, and I have used it with success on heavy crops at ISO 10,000. I've learned to use Topaz software, and you do have to be careful with it. It can easily oversharpen or give blotches of colour. When I use it with DxO, I usually export the images as unsharpened jpegs with the DxO lens sharpening turned off; Topaz gives fewer artefacts when you do this. I find DxO with lens sharpening on gives unnatural images from the 100-500mm + 2xTC, so I export those images unsharpened into Topaz for sharpening. DxO DeepPrime is a slight improvement over the former Prime, and @neuroanatomist has posted some comparisons. I thoroughly recommend it, but I do like Topaz in addition.

RMac

R6ii 5DSR 5Diii 7D M5 C300
So take an R6ii, drop the IBIS, a card slot, and some controls and ergo, and charge about a thousand less for it... Got it.
Thinking about this a bit more...
Honestly, I'm a bit surprised at the feature set being discussed. I would sort of expect a camera at this price point to have several of these features nerfed - no 180 fps, no oversampled 4k60 (I'd honestly expect no 4k60, oversampled or otherwise), and no 40 fps electronic-shutter photo burst. I guess kudos to Canon for including these features.

That said, I bet there will be several things missing/lower-spec'd that will make this camera less suitable for pro work that the R6ii/R6/R5 bodies are targeted for:
  1. Single card slot (as mentioned above)
  2. No built-in EVF (mentioned by several others)
  3. No rear thumb dial or joystick.
  4. Fastest shutter speed 1/4000
  5. Slower flash sync speed (maybe 1/180 - same as RP)
  6. Lower burst rate with mechanical shutter - maybe 5 or 8 fps.
  7. Smaller battery
  8. Minimal to no "weather sealing"
  9. No place for your pinky
  10. Maybe this one overheats in 4k60 considering the smaller body.
Things they could nerf that I bet don't get nerfed:
  1. Probably no artificial record time limit (since the R10 doesn't have one). If so, that would be great and a sign that Canon is done with that silliness.
  2. Probably has the new multi-function hot shoe (again, since the R10 did). Makes me wonder if the EVF-DC2 physically interfaces with the new multi-function hot shoe...
Finally, things that may get nerfed for no technical reason that would sort of show that Canon is still trying to protect/differentiate the value of the R6ii:
  1. High-Frequency Anti-Flicker
  2. Burst mode with pre-shooting
  3. Focus Bracketing
  4. A quick video/photo mode switch with retained settings in each respective mode.
  5. RAW output over external HDMI (maybe this doesn't even have an HDMI output).
Overall, if this ends up being the same sensor as the R6ii (which is a pretty good sensor only bested by the R5 and R3 among Canon cameras) in a package more like an M6/M6ii, then at $1500 it is quite a good value - compelling for more casual use. Also really interesting for more straight-up image capture - things like timelapse and for use in astrophotography (where it's helpful to have a less massive camera).

Note - everything I've said above is pure speculation. It's fun to guess and then see how wrong you are about it a few days later when the full specs come out.
Apr 25, 2011
It's not frequency per se that matters; it's the amount of movement that occurs during the exposure, which is a combination of frequency and amplitude: a high-frequency motion with low amplitude may be undetectable, whereas a low-frequency motion with high amplitude may move a long way during the exposure.
It looks like you are still thinking about an approach in which any possible motion compensation from optical flow is ignored.

In theory you could do that but you will be doing it on noisy images and have to extract the detail obscured by noise in each to reconstruct the final image. That might be worthwhile if you were, say, solving the structure of a protein by cryoelectron microscopy but for a photo a simple application of Topaz AI Denoise would be far, far simpler and suffice.
It's not a dichotomy. One can use both approaches. One can even merge both into a single neural network.

An obvious drawback of purely static "AI" denoising methods is that they are basically local texture libraries with an advanced (but not infallible) index. If the original texture is recognised incorrectly (because it is absent from the database, or because noise confuses the match), the output texture can be believable but wrong. So, the resulting images:
1. Should be used very cautiously in any scientific publication or guide.
2. Shall never be used for AI training, in order to prevent amplification of such mistakes.

AlanF

Desperately seeking birds
CR Pro
Aug 16, 2012
It looks like you are still thinking about an approach in which any possible motion compensation from optical flow is ignored.
No, I am not ignoring it; I just pointed out difficulties you hadn't considered that complicate the deconvolution procedure for removing noise from a photograph.
It's not a dichotomy. One can use both approaches. One can even merge both into a single neural network.

An obvious drawback of purely static "AI" denoising methods is that they are basically local texture libraries with an advanced (but not infallible) index. If the original texture is recognised incorrectly (because it is absent from the database, or because noise confuses the match), the output texture can be believable but wrong. So, the resulting images:
1. Should be used very cautiously in any scientific publication or guide.
2. Shall never be used for AI training, in order to prevent amplification of such mistakes.
I specifically stated that the complicated procedure you proposed without AI could be worthwhile for a scientific problem like solving a protein structure, but was overkill for removing noise from a simple photograph. As a matter of interest, machine learning trained on the entire protein database has proven very successful in predicting protein structures from amino acid sequences, as implemented in AlphaFold. Amusingly, I was asked to write a perspective on this for a scientific journal, and I used the AF of the R5 as an example of ML pattern recognition impacting my everyday experience.
Apr 25, 2011
No, I am not ignoring it; I just pointed out difficulties you hadn't considered that complicate the deconvolution procedure for removing noise from a photograph.
It's not a deconvolution, so these particular "difficulties" don't apply.

Any significant "difficulties" in this approach were already solved when inter-frame compression in digital video became a thing.
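As a toy illustration of that kind of motion compensation (hypothetical numbers; real codecs search 2D blocks, typically minimizing a sum of absolute differences):

```python
# Toy 1D block-matching motion search, the style of motion compensation
# used by inter-frame video codecs, minimizing sum of absolute differences (SAD).
def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def best_offset(ref_block, frame, start, search=4):
    # Try every offset in a small window around `start`; keep the best match
    n = len(ref_block)
    lo, hi = max(0, start - search), min(len(frame) - n, start + search)
    return min(range(lo, hi + 1), key=lambda off: sad(ref_block, frame[off:off + n]))

prev = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0, 0, 0]
curr = [0, 0, 0, 0, 1, 5, 9, 5, 1, 0, 0, 0]   # same feature, moved right by 2
block = prev[2:8]                              # feature block from the previous frame
motion = best_offset(block, curr, start=2) - 2  # estimated motion in samples
```

Once the motion vector is known, the codec (or a denoiser) warps the previous frame's block onto the current one instead of treating the two as unrelated.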

I specifically stated that the complicated procedure you proposed without AI could be worthwhile for a scientific problem like solving a protein structure, but was overkill for removing noise from a simple photograph.
Actually, it has already been used by Google and Apple for quite a while to remove noise from smartphone photos. It has its limitations, of course, but it's definitely not "overkill".

It is not useful for BiF yet, but at a rate of 100 fps it may well become so.

entoman

wildlife photography
May 8, 2015
UK
I am genuinely surprised a photo of this camera hasn't leaked yet.
You can bet your life that dpreview, Imaging Resource and at least 30 YouTubers have had the camera in their hands for a few days. They'll have taken photos with it, written initial reviews, produced videos, and photographed it. But they're all subject to NDAs, and they know that if any leaked photo is traced back to them, they'll get sued by Canon and never be trusted with one of its cameras again (even if they are merely suspected of a leak). Leaking photos is a very risky business. When "leaked" photos appear, you can be sure that they come direct from Canon, at a time when Canon wants them "leaked", and not before.

entoman

wildlife photography
May 8, 2015
UK
Actually, it has already been used by Google and Apple for quite a while to remove noise from smartphone photos. It has its limitations, of course, but it's definitely not "overkill".
Google, Apple, who next?

My bet is OM System will be the first to incorporate this technology into a "real" camera -

..... or just possibly Samsung, who have experience with conventional cameras and could re-emerge in the "real" camera market....

AlanF

Desperately seeking birds
CR Pro
Aug 16, 2012
Actually, it has already been used by Google and Apple for quite a while to remove noise from smartphone photos. It has its limitations, of course, but it's definitely not "overkill".

It is not useful for BiF yet, but at a rate of 100 fps it may well become so.
Do Google and Apple actually, on a slower time scale, merge a series of images containing the equivalent of, say, a bird's wing at different angles relative to its body, distorting each frame to one view with the wing in the same position and then using those frames to lower noise? For example, it could be a burst of head-and-shoulders portraits in which the person's head changes angle relative to the body.

entoman

wildlife photography
May 8, 2015
UK
Do Google and Apple actually, on a slower time scale, merge a series of images containing the equivalent of, say, a bird's wing at different angles relative to its body, distorting each frame to one view with the wing in the same position and then using those frames to lower noise? For example, it could be a burst of head-and-shoulders portraits in which the person's head changes angle relative to the body.
When I first started bird photography, the first thing I noticed was how rapidly and frequently birds turn their heads. It's *so* easy to miss the right moment, even when shooting bursts. Almost enough to convince me to get an R7, just for the pre-capture...
Apr 25, 2011
Do Google and Apple actually, on a slower time scale, merge a series of images containing the equivalent of, say, a bird's wing at different angles relative to its body, distorting each frame to one view with the wing in the same position and then using those frames to lower noise? For example, it could be a burst of head-and-shoulders portraits in which the person's head changes angle relative to the body.
Not on my corporate iPhone 11, at least. I can't check on a Google Pixel; the only one I had is broken at the moment.

Edit: or maybe the iPhone does do it, but the result is still much poorer than with a non-rotating head.

AlanF

Desperately seeking birds
CR Pro
Aug 16, 2012
When I first started bird photography, the first thing I noticed was how rapidly and frequently birds turn their heads. It's *so* easy to miss the right moment, even when shooting bursts. Almost enough to convince me to get an R7, just for the pre-capture...
It comes with experience! I used to manage well enough with the 5DSR in slow silent shutter mode. Now I am lazy with the R5 and R7 and pay for it by spending too much time deleting duplicates.