The Best and Worst of 2025

The point of the question is to lead to a discussion of what happens when we process RAW data and the choice of RAW converters. If you are unaware of your software doing correction, then how on earth could that make you a liar? (Lying is deliberately telling an untruth.)
I agree with this; there's no shame in making a mistake when we admit it. Digging our heels in and doubling down to avoid embarrassment is more shameful.
 
While optical correction is bending light, you don't disagree with the statement that optical correction doesn't stretch light. Arguing about bending light is a bit pointless because that's the whole point of a lens - to bend light such that it lands on the sensor.
The light that is digitally corrected to fill the corners when required still falls on the sensor.

Some lenses deal with this better than others. Some subjects are impacted by this more than others. Measuring CA is what a lot of lens test websites do when they shoot specific subjects to measure lpmm, etc. Your generalizations here are no better than mine.
The difference is that I’ve provided empirical evidence to support my points. Have you? Has anyone who claims that optical correction of geometric distortion is inherently superior to digital correction?

Let me make your day: I don't use distortion correction when processing images, I can't even remember when I last used CA correction.
So you shoot RAW, and you don’t use a lens profile in your RAW converter? I’m skeptical. Especially after your intentionally evasive reply to @AlanF.
 
The point of the question is to lead to a discussion of what happens when we process RAW data and the choice of RAW converters. If you are unaware of your software doing correction, then how on earth could that make you a liar? (Lying is deliberately telling an untruth.)

Good software that does raw conversion gives you the option of whether or not to apply correction based on known lens profiles or CA elimination. I leave those check boxes at the default software position: off. Maybe DPP turns it all on by default, IDK 'cause I don't use it. As I said previously, I don't use lens profiles/CA correction, and I said that knowing that the software I use doesn't have those things turned on for the images I process.
 
The light that is digitally corrected to fill the corners when required still falls on the sensor.

But the ability to pull apart some of the individual beams is lost.

To take it to an extreme, why even bother with a full frame lens if all we need to do is put an APS-C lens on the front of a full frame body and then stretch that image so that it "fills the picture"? After all, what's a few dark corners/boundaries between friends if digital correction is OK? Where's the cutoff point between too much stretching and acceptable stretching?

Canon's asking people using its equipment to take it on good faith that the dark corners from various lenses are acceptable. Or at least I say that because I haven't seen Canon say anything with authority on this subject, and I'm pretty sure if you had then you'd have quoted it by now.

The detail that gets lost in the squashed image (it doesn't fill the sensor, so I'm using "squash" to refer to it being made smaller) can't be made to reappear with some magic process. Even if you take into account the blur from the AA filter, there must be less refined data to work from in an image that's only 19.96 mm "high".

The difference is that I’ve provided empirical evidence to support my points. Have you? Has anyone who claims that optical correction of geometric distortion is inherently superior to digital correction?

You've eyeballed some images and made some claims that you're asking us to accept on no better grounds than faith.

I don't trust humans to be a good judge of the evidence because humans are unreliable and all too frequently plagued by biases.

So you shoot RAW, and you don’t use a lens profile in your RAW converter? I’m skeptical. Especially after your intentionally evasive reply to @AlanF.

Correct. Using a lens profile is not a requirement of using a raw converter, nor is using CA correction.

Faith is an interesting word to bring up in this discussion, because there is practically no verifiable analysis done on it, yet we're all expected to accept the new lay of the land as being OK. In summary, Canon's asking us all to take a huge leap of faith in it.
 
But the ability to pull apart some of the individual beams is lost.

To take it to an extreme, why even bother with a full frame lens if all we need to do is put an APS-C lens on the front of a full frame body and then stretch that image so that it "fills the picture"? After all, what's a few dark corners/boundaries between friends if digital correction is OK? Where's the cutoff point between too much stretching and acceptable stretching?

The detail that gets lost in the squashed image (it doesn't fill the sensor, so I'm using "squash" to refer to it being made smaller) can't be made to reappear with some magic process. Even if you take into account the blur from the AA filter, there must be less refined data to work from in an image that's only 19.96 mm "high".
This is how I look at it:
  • If the image circle of a lens covers the entire sensor, then there is no a priori pixel loss, but the lens that requires the most geometric correction will result in (slightly) lower image quality, since more pixels will be "stretched" / extrapolated.
  • If it doesn't, then there is an additional (small) loss of quality: an optically corrected lens may still require stretching, but the data used to create the corrected image is based on the full megapixel count of the sensor, while with a digitally corrected lens whose image circle does not cover the full sensor, the stretching will be done using less data (fewer pixels); therefore, more pixels have to be "created" with digitally corrected lenses.
This is based on my own reasoning that, essentially, the less data you interpolate and/or the more data you start from, the better (see the sketch below).
I do not have a scientific proof of this. It makes sense to me. But no one has given me reasons to reject my reasoning so far.
So I will continue to believe that optical corrections, all else being equal, are better IQ-wise, and obviously worse size- and weight-wise. Maybe marginally, but better. And therefore I will continue to have a slight preference for optically corrected lenses... the good ones at least ;)
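As a purely illustrative aside, the sketch below plays out that intuition numerically. It is not a simulation of any real lens, sensor, or RAW converter; the synthetic signal, the sample counts, and the ~8% reduction are made-up values chosen only to show that resampling onto the same output grid from fewer source points reproduces fine detail slightly less faithfully.

```python
# Toy sketch only: "more source pixels -> less interpolation error".
# All values below are illustrative, not measurements of any real lens or camera.
import numpy as np

def resample(samples, n_out):
    """Linearly interpolate the 1-D array `samples` onto a grid of n_out points."""
    x_in = np.linspace(0.0, 1.0, samples.size)
    x_out = np.linspace(0.0, 1.0, n_out)
    return np.interp(x_out, x_in, samples)

def detail(x):
    """Stand-in for fine image detail: a mix of two spatial 'frequencies'."""
    return np.sin(40 * x) + 0.3 * np.sin(130 * x)

n_out = 4000                               # common output grid (the "corrected" image)
truth = detail(np.linspace(0.0, 1.0, n_out))

for n_src in (1000, 920):                  # full capture vs. ~8% fewer source pixels
    captured = detail(np.linspace(0.0, 1.0, n_src))
    rms = np.sqrt(np.mean((resample(captured, n_out) - truth) ** 2))
    print(f"{n_src} source samples -> RMS resampling error {rms:.4f}")
```

The absolute numbers mean nothing on their own; the point is only the direction of the difference, which matches the "more pixels have to be created" argument above.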
You've eyeballed some images and made some claims that you're asking us to accept on no better grounds than faith.

I don't trust humans to be a good judge of the evidence because humans are unreliable and all too frequently plagued by biases.
@neuroanatomist has freely admitted that his evidence is empirical and therefore potentially imprecise. And it is entirely possible that the differences, while present (imho), are not meaningful enough to make a difference in real life shooting scenarios. But I do not believe that Neuro has an agenda here.
 
But the ability to pull apart some of the individual beams is lost.
I think your conception of optics is a bit idealistic tbh.
To take it to an extreme, why even bother with a full frame lens if all we need to do is put an APS-C lens on the front of a full frame body and then stretch that image so that it "fills the picture"? After all, what's a few dark corners/boundaries between friends if digital correction is OK? Where's the cutoff point between too much stretching and acceptable stretching?
Is your argument here that, because an extreme and somewhat contrived situation is unacceptable, every gradation between that and your ideal setup must also be rejected? If it is a continuum, why is zero the only acceptable position?
Canon's asking people using its equipment to take it on good faith that the dark corners from various lenses are acceptable.

I don't trust humans to be a good judge of the evidence because humans are unreliable and all too frequently plagued by biases.

Faith is an interesting word to bring up in this discussion, because there is practically no verifiable analysis done on it, yet we're all expected to accept the new lay of the land as being OK. In summary, Canon's asking us all to take a huge leap of faith in it.
Canon is producing novel lenses with new compromises that weren't possible before. You don't have to buy them. I suspect the alternative, especially in a much smaller market than 20 years ago, is that these lenses simply wouldn't exist. More choice is better, no?

As for faith/evidence, you clearly have an entrenched view but haven't presented anything to support it except high-minded principles (such as your comment on "separating beams of light" above). Neuro has asked for evidence. And somehow you are turning that into "he is blinded by faith in the new optics"?
 
But the ability to pull apart some of the individual beams is lost.
Lol. If you believe that's what is happening, your understanding of the technical aspects of optics is more flawed than I thought.

To take it to an extreme, why even bother with a full frame lens if all we need to do is put an APS-C lens on the front of a full frame body and then stretch that image so that it "fills the picture"? After all, what's a few dark corners/boundaries between friends if digital correction is OK? Where's the cutoff point between too much stretching and acceptable stretching?
I suppose the only reasonable cutoff point is whether you're happy with the resulting images. Since you don't use distortion correction and most lenses have at least some, I suspect you have a low bar for image quality by my standards. I know that straight lines are just that, and I want them to appear that way in my images. Eschewing distortion correction means straight lines in your images are curved; to me that is highly undesirable (and I tolerate it only when it's necessary for correction of volume anamorphosis, because I prioritize the appearance of faces at the edge of the frame over lines being straight).

The detail that gets lost in the squashed image (it doesn't fill the sensor, so I'm using "squash" to refer to it being made smaller) can't be made to reappear with some magic process. Even if you take into account the blur from the AA filter, there must be less refined data to work from in an image that's only 19.96 mm "high".
Only 19.96 mm 'high', as opposed to 21.64 mm, i.e. 8% shorter on the half-diagonal. With the 24-105/2.8 at 24mm, the black corners that need to be 'filled in' by 'stretching' are less than 0.05% of the image. On my R1, that's 11,400 pixels out of 24,000,000. If you want to lose sleep over that, be my guest.
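(Working through the numbers given there: 11,400 ÷ 24,000,000 ≈ 0.0475%, i.e. just under 0.05%; and (21.64 − 19.96) ÷ 21.64 ≈ 7.8%, i.e. roughly 8% shorter.)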
 
I suppose the only reasonable cutoff point is whether you're happy with the resulting images. Since you don't use distortion correction and most lenses have at least some, I suspect you have a low bar for image quality by my standards. I know that straight lines are just that, and I want them to appear that way in my images.

Find a scraggly old tree and take a photo of it. How many straight lines are in that? Put it in your raw image editor of choice, apply a lens profile, and compare the before and after. Sure, they're different, but does one or the other make or break the image?

Eschewing distortion correction means straight lines in your images are curved; to me that is highly undesirable (and I tolerate it only when it's necessary for correction of volume anamorphosis, because I prioritize the appearance of faces at the edge of the frame over lines being straight).

You're assuming I shoot straight lines. Sounds like a boring photo to me. I also don't put faces at the edge of the frame if I can help it.

Only 19.96 mm 'high', as opposed to 21.64 mm, i.e. 8% shorter on the half-diagonal. With the 24-105/2.8 at 24mm, the black corners that need to be 'filled in' by 'stretching' are less than 0.05% of the image. On my R1, that's 11,400 pixels out of 24,000,000. If you want to lose sleep over that, be my guest.

Back in this post:
I presented some calculations from Gemini about image coverage of the smaller circle on the sensor, and its answer was 98.5%. On a 45 MP sensor that's ~675,000 pixels (1.5%) that aren't usable. For the R1, 1.5% is 360,000. How'd you come up with 11,400 out of 24,000,000? Did Gemini get it wrong? It's not a trivial calculation to work out the area lit by the smaller image circle.

[Attached: math1.png, math2.png — screenshots of the Gemini calculation]
 
Find a scraggly old tree and take a photo of it. How many straight lines are in that? Put it in your raw image editor of choice, apply a lens profile, and compare the before and after. Sure, they're different, but does one or the other make or break the image?
I wouldn't think so, no. In your image of that scraggly old tree, do you believe that digital correction to fill in the corners would break the image, relative to optical correction?

You're assuming I shoot straight lines. Sounds like a boring photo to me. I also don't put faces at the edge of the frame if I can help it.
If all you shoot is landscapes, it likely doesn't matter either way. Straight lines are most often human-made, and if you're taking pictures of humans there are often straight lines in the scene.

I presented some calculations from Gemini about image coverage of the smaller circle on the sensor, and its answer was 98.5%. On a 45 MP sensor that's ~675,000 pixels (1.5%) that aren't usable. For the R1, 1.5% is 360,000. How'd you come up with 11,400 out of 24,000,000? Did Gemini get it wrong? It's not a trivial calculation to work out the area lit by the smaller image circle.
Perhaps I could have been clearer. Regarding the area of the image, I was referring to the RF 24-105/2.8 Z, as I stated in the post. The 19.96 mm image height value on which you based your calculation is not universal; it's the value in a patent for the wide end of one possible optical formula for a 50-150/2.8 lens (presumably not an L lens, and regardless, if such a lens is produced based on this patent it may not have that image height). Different lenses will have different image heights. As for how I arrived at my value, I empirically measured it in an uncorrected RAW image from the RF 24-105/2.8 Z at 24mm.

One other relevant bit of information is that the amount of mechanical vignetting depends on focus distance; it increases as the lens is focused closer. I presume that in their patents, Canon specifies the image height the same way focal length is specified, with focus at infinity. In the 24-105/2.8 Z image that I used, the lens was not focused at infinity but on a reasonably distant subject (~40 m). The mechanical vignetting is noticeably greater with a close subject, for example (these are uncorrected, high-ISO images):

[Attached: Corner Vignetting.png — uncorrected high-ISO corner examples]
 
Be fair. There are some humans even the worst AI couldn't stoop low enough to match. For calculations like these, Gemini, ChatGPT, etc. are very reliable.
It may depend on the topic. I've tried using them to help create charts for understanding data regarding controversial topics and they make the kind of mistakes only the worst politicians could make. Which goes back to your "some humans."
 
I presented some calculations from Gemini about image coverage of the smaller circle on the sensor, and its answer was 98.5%. ... Did Gemini get it wrong? It's not a trivial calculation to work out the area lit by the smaller image circle.
Incidentally, maybe Gemini did get it wrong. I took a pragmatic approach rather than a mathematical one, made a circle with a diameter of 39.32 units (19.66 x 2) and centered a 24 x 36 unit rectangle on it, then measured the area of the excluded portions of the rectangle vs the whole rectangle (pixel counts of a screenshot, but that would not affect a % measurement). It came out to ~2.07% of the FF sensor area, i.e. worse than Gemini calculated.

[Attached: Screenshot 2026-01-13 at 11.59.26 AM.png — the circle/rectangle drawing used for the measurement]

Still nothing to lose sleep over, IMO, much less prevent me from buying a lens requiring such correction.
 
It may depend on the topic. I've tried using them to help create charts for understanding data regarding controversial topics and they make the kind of mistakes only the worst politicians could make. Which goes back to your "some humans."
When it's routine calculations, they are very good. When there is properly documented online information, like, say, government websites on tax and law, they are fantastically useful. When there is sparse or conflicting information, they can be dreadful. Amusingly for me, I've done searches for camera info and got referred to my own threads on CR!
 
Incidentally, maybe Gemini did get it wrong. I took a pragmatic approach rather than a mathematical one, made a circle with a diameter of 39.32 units (19.66 x 2) and centered a 24 x 36 unit rectangle on it, then measured the area of the excluded portions of the rectangle vs the whole rectangle (pixel counts of a screenshot, but that would not affect a % measurement). It came out to ~2.07% of the FF sensor area, i.e. worse than Gemini calculated.


Still nothing to lose sleep over, IMO, much less prevent me from buying a lens requiring such correction.
I had checked it with ChatGPT, which gave ~1.4%. I wonder what is going on?
 
I had checked it with ChatGPT, which gave ~1.4%. I wonder what is going on?
Interesting. I assumed that PowerPoint could correctly draw shapes of the sizes I input, and that Photoshop could correctly count pixels with the measurement function.

I confirmed the former in PowerPoint using a 43.2 unit diameter circle in which the inscribed, centered 36x24 unit rectangle touched the circumference at the corners exactly as it should.
[Attached: Screenshot 2026-01-13 at 1.46.24 PM.png — PowerPoint check with a 43.2 unit diameter circle]

I confirmed the latter by using FIJI (ImageJ aka NIH Image), which I know with certainty can accurately count pixels and measure areas.
[Attached: Screenshot 2026-01-13 at 1.50.23 PM.png — FIJI area measurement]

2,343÷112,888 = 0.0208.
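For reference, the same area can also be worked out in closed form. The sketch below is purely illustrative: it assumes a 36 x 24 mm sensor centered in a circular image, and the two radii it tries are simply the image heights mentioned in this thread (19.66 mm and 19.96 mm), not official specifications.

```python
# Closed-form check of the dark-corner percentage for a 36 x 24 mm sensor centered
# in a circular image. Illustrative only; the radii are just the image heights
# mentioned in this thread, not official data.
import math

def dark_corner_fraction(radius, half_w=18.0, half_h=12.0):
    """Fraction of the sensor area left dark outside a centered circle of this radius."""
    # Valid only when the circle clips the corners but still covers both edge midpoints.
    assert half_w < radius < math.hypot(half_w, half_h)
    x1 = math.sqrt(radius**2 - half_h**2)        # where the circle crosses the long edge
    def arc_area(x):                             # antiderivative of sqrt(radius^2 - x^2)
        return 0.5 * (x * math.sqrt(radius**2 - x**2) + radius**2 * math.asin(x / radius))
    one_corner = half_h * (half_w - x1) - (arc_area(half_w) - arc_area(x1))
    return 4 * one_corner / (4 * half_w * half_h)

for r in (19.66, 19.96):
    print(f"image height {r} mm -> dark corners ~ {dark_corner_fraction(r):.2%}")
```

With those inputs it gives roughly 2.1% for a 19.66 mm image height and roughly 1.5% for 19.96 mm, which suggests the different percentages in this thread may simply reflect which image height went into each calculation.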
 
Incidentally, maybe Gemini did get it wrong. I took a pragmatic approach rather than a mathematical one, made a circle with a diameter of 39.32 units (19.66 x 2) and centered a 24 x 36 unit rectangle on it, then measured the area of the excluded portions of the rectangle vs the whole rectangle (pixel counts of a screenshot, but that would not affect a % measurement). It came out to ~2.07% of the FF sensor area, i.e. worse than Gemini calculated.


Still nothing to lose sleep over, IMO, much less prevent me from buying a lens requiring such correction.
I had ChatGPT make a similar drawing (to scale) and then calculate the percentage of the area of the dark corners; the result is 1.48%.

Edit: ChatGPT calculated the result using integral calculus, i.e. a different method than Gemini.
 