Opinion: Love it or Hate it, Digital Correction is here to Stay

I would go one step further...

With a digital camera, your entire image is created through a series of digital corrections. Your sensor does not record colors as you will see them; the data has to go through demosaicing, which is itself a digital correction. The tones from dark to light are not recorded on the sensor as you will see them after converting the RAW file; they need to go through a tonal correction algorithm. The same goes for white balance, and many cameras, perhaps most now, apply noise reduction during RAW conversion. Your RAW file is not a negative. Your converted image is a series of digital corrections. So why the big deal when it comes to lenses? Makes no sense.
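To illustrate the demosaicing point: every RGB pixel you see is already the product of interpolation, because each photosite records only one color. A toy bilinear demosaic might look like the sketch below (an illustration only, not any camera maker's actual pipeline):

```python
import numpy as np

def box3(a):
    """3x3 box sum with zero padding (plain NumPy, no SciPy needed)."""
    h, w = a.shape
    p = np.pad(a, 1)
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """Toy bilinear demosaic of an RGGB Bayer mosaic.

    Each photosite records one color; the other two channels at every
    pixel are interpolated from neighbouring photosites.
    """
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = (y % 2 == 0) & (x % 2 == 0)   # red at (even, even)
    b_mask = (y % 2 == 1) & (x % 2 == 1)   # blue at (odd, odd)
    g_mask = ~(r_mask | b_mask)            # green at the other two sites
    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        # Average the known samples of this color in each 3x3 window.
        num = box3(np.where(mask, raw, 0.0))
        den = box3(mask.astype(float))
        rgb[..., c] = num / np.maximum(den, 1e-9)
    return rgb
```

Two thirds of the color values in the result were never measured at all; they are interpolated, i.e. digitally corrected into existence.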
Except that they're not the same: most of what you list is a consequence of how Bayer sensors work. There is no choice there, apart perhaps from the NR applied in camera.
For lenses, there are or could be alternatives.
 
Does the VCM 50/1.4 require digital corrections to stretch image corners that do not cover the entire sensor, or just some geometry correction and lightening of the vignetting?
Yes, it relies on digital correction. Apparently, the distortion is not as extreme as with other VCM lenses.
Or what, should we do away with forums such as CR? :p
Noooo! Of course not! But maybe spending less time arguing over what feel like endlessly ongoing debates (digital corrections/dynamic range) and writing/reading more about what we all enjoy, look forward to or just want to share about photography. And, of course, guessing and debating new rumors and upcoming camera gear :)
 
Half the people commenting don't realise this thread is about distortion correction and image stretching, not vignetting; it looks like they just skipped Richard's piece and went straight to the comments section.

For some reason, a few users seem to think that EF lenses were some holy grail regarding vignetting, but they were not. Vignetting is nothing new; many EF lenses had lots of it, some even more than their RF replacements.

A few examples for vignetting, EF vs RF:

Does the VCM 50/1.4 require digital corrections to stretch image corners that do not cover the entire sensor, or just some geometry correction and lightening of the vignetting?
Yes, it relies on digital correction. Apparently, the distortion is not as extreme as with other VCM lenses.
No, not really. The 50mm VCM has similar levels of vignetting to the EF 50mm f/1.4, and less distortion than the RF 50mm f/1.2 L.
 
Manly man lenses like the Canon EF 11-24mm f/4, which had a manly 4.5% barrel distortion, or the even manlier Sigma 12-24mm f/4 Art with a massively manly 5.3% barrel distortion (the same as the Canon RF 14-35 that 'requires' correction, oh my!).

Meh. I'll stay here in the present, thanks.
The manliest man lens in my collection is a Zeiss 3.5/18mm Distagon - it has a so-called moustache distortion (but only a mild one) ...
 
Less distortion than the RF 50mm f/1.2 doesn't mean it doesn't have any. Which is exactly what I stated.
The RF 50mm f/1.2 L has 0.2% barrel distortion.
The RF 50mm f/1.4 L has a tiny amount of pincushion distortion, probably between 0.2 and 0.1% if measured.
It's a near-zero distortion lens, they both are.

So no, it does not rely on software corrections for distortion, and it has a traditional level of vignetting for an f/1.4 lens.

It’s a really nice piece of glass :)
 
I want to make clear my discussion pertains to in-camera lens correction for JPEG output. My attachment sample is for chromatic aberration, but my point also applies to vignetting and geometric distortion.

Canon lens designers, with their Computer Aided Design (CAD) software, know exactly how a theoretical lens design performs regarding various aberrations. The lens design process involves numerous compromises to get to a marketable product.

One important lens design consideration is how easy it is to manufacture. A follow-on from this is how consistent unit-to-unit performance is.

The in-camera lens correction software uses a 'model' of the lens to modify the internal RAW sensor data for JPEG engine output. Any 'deviation' of a particular lens being corrected from the model of that lens will result in a suboptimal corrected result.
This is Digital Lens Optimizer, which is different.

But DLO is pretty damned cool.
 
Yes, it relies on digital correction. Apparently, the distortion is not as extreme as with other VCM lenses.
But what I meant is whether the lens' image circle covers the full sensor or not before any kind of digital correction. Mandrake says it does, so it's different from the wide-angle lenses (primes, or zooms at their widest focal length) which do not.
Noooo! Of course not! But maybe spending less time arguing over what feel like endlessly ongoing debates (digital corrections/dynamic range) and writing/reading more about what we all enjoy, look forward to or just want to share about photography. And, of course, guessing and debating new rumors and upcoming camera gear :)
Debates? what debates? it's just me being right and misinformed people disagreeing with me :ROFLMAO:

Seriously though. Some debates do go on endlessly... but it takes 2 (or more) to tango... In any case, IMHO, some "spice" is needed for a forum to be successful: if everyone agreed with and high-fived everyone else 🥰 then it would become a bit boring pretty quickly. Heated debate is good (again, IMHO) as long as there are no personal attacks involved 😈
 
But what I meant is whether the lens' image circle covers the full sensor or not before any kind of digital correction. Mandrake says it does, so it's different from the wide-angle lenses (primes, or zooms at their widest focal length) which do not.

Debates? what debates? it's just me being right and misinformed people disagreeing with me :ROFLMAO:

Seriously though. Some debates do go on endlessly... but it takes 2 (or more) to tango... In any case, IMHO, some "spice" is needed for a forum to be successful: if everyone agreed with and high-fived everyone else 🥰 then it would become a bit boring pretty quickly. Heated debate is good (again, IMHO) as long as there are no personal attacks involved 😈
The only thing better than a heated debate is an off topic heated debate!
 
On the subject of digital correction, where a curved image is stretched back out to be straight again, it's worth remembering that in the movie industry, when shooting in a widescreen format, it was common practice to capture a compressed (and so distorted, 'squeezed') image in order to use the full width of (normally) 35mm film, and then distort it the other way ('desqueeze') to give the required widescreen frame. And the reason? Quality: using more of the film area gave better resolution, despite the image having to be significantly distorted in 'post processing' in order to be viewed.
Incidentally, the same thing is often done in digital filming.
So digital correction is nothing to get hot under the collar about, as long as it's not taken to the point where it is so severe that data has to be created after the event.
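The squeeze/desqueeze idea above can be sketched as a simple horizontal resample. This is a hypothetical toy (plain linear interpolation, 2-D grayscale only); real anamorphic workflows are of course more sophisticated:

```python
import numpy as np

def desqueeze(img, factor=2.0):
    """Stretch a horizontally squeezed 2-D frame back out by `factor`,
    using linear interpolation along each row (toy 2x anamorphic desqueeze)."""
    h, w = img.shape
    new_w = int(round(w * factor))
    # For every output column, find its sample position in the squeezed frame.
    xs = np.linspace(0, w - 1, new_w)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    t = xs - x0
    # Blend the two nearest source columns.
    return img[:, x0] * (1 - t) + img[:, x1] * t
```

The point is the same one being made about lens corrections: the output is a resampled blend of what was recorded, and using more of the capture area before the stretch is what buys the quality.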
 
So digital correction is nothing to get hot under the collar about, as long as it's not taken to the point where it is so severe that data has to be created after the event.
So I actually think we are already at that point, and most people are oblivious to it. It's been discussed to some extent in this thread, though without being explicitly stated.

Example:
If I have a 45MP sensor and the captured image is stretched to save a 45MP image without barrel distortion, then the camera has interpolated the extra detail to enlarge the image, through the 2D convolution/resampling algorithms that are commonplace in image signal processing. Is that a problem? Everyone will have to decide that on their own. I'm not a fan, because that math inherently softens the image relative to what the sensor recorded.
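For what it's worth, that stretching step can be sketched in a few lines. This is a toy radial model with a single made-up coefficient `k1`, not Canon's actual correction profile; the point is just that every corrected pixel is a weighted blend (bilinear interpolation) of source pixels, which is where the slight softening comes from:

```python
import numpy as np

def undistort(img, k1=0.05):
    """Toy barrel-distortion correction by resampling a 2-D image.

    Barrel distortion squeezes the corners inward at capture, so each
    corrected output pixel looks up a slightly smaller source radius,
    magnifying (and softening) the corners. Real profiles use more terms.
    """
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Normalised coordinates centred on the image; corner radius is ~1.
    cx, cy = (w - 1) / 2, (h - 1) / 2
    scale = np.hypot(cx, cy)
    xn, yn = (xx - cx) / scale, (yy - cy) / scale
    r2 = xn * xn + yn * yn
    # Where in the distorted capture does this corrected pixel come from?
    factor = 1 - k1 * r2
    sx = xn * factor * scale + cx
    sy = yn * factor * scale + cy
    # Bilinear sample, clamped at the borders: a weighted average of
    # the four surrounding source pixels.
    x0 = np.clip(np.floor(sx).astype(int), 0, w - 1)
    y0 = np.clip(np.floor(sy).astype(int), 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    tx, ty = np.clip(sx - x0, 0, 1), np.clip(sy - y0, 0, 1)
    top = img[y0, x0] * (1 - tx) + img[y0, x1] * tx
    bot = img[y1, x0] * (1 - tx) + img[y1, x1] * tx
    return top * (1 - ty) + bot * ty
```

Each output value is a blend of up to four recorded values, so fine detail in the stretched regions is averaged out a little; that is the softening being discussed.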
 
Example:
If I have a 45MP sensor and the captured image is stretched to save a 45MP image without barrel distortion, then the camera has interpolated the extra detail to enlarge the image, through the 2D convolution/resampling algorithms that are commonplace in image signal processing. Is that a problem? Everyone will have to decide that on their own. I'm not a fan, because that math inherently softens the image relative to what the sensor recorded.
As you say, it's a question of whether you'd prefer to live with the distortion or with the loss of sharpness. But it's important to realize that the issue is not new. As I pointed out earlier in this thread (with humorous intent), lenses that were 'optically corrected' for DSLR/film are still not perfect. Compared to the Canon RF 14-35/4L that was the subject of this thread, the Canon EF 11-24/4L has nearly as much barrel distortion and the Sigma 12-24/4 Art has essentially the same amount of barrel distortion. The difference is that with the RF lens, if you want the output to be the full MP of your sensor then you are required to correct the distortion.

Either way, the effects of both the distortion and the algorithm to correct it are most apparent in the extreme corners of the image. Personally, when I compose a shot that's typically not where I put important subjects.
 
I'll always err on the side of full sensor coverage because I don't think it's unreasonable for a person to want all of the pixels purchased utilized. Before anyone disagrees -- remember that a disagreement basically means that you're OK with purchasing but not using sensor pixels. Frankly, if I have to purchase four tires for a car but only get to use three then I don't care how well it drives -- I'm gonna be ticked about the dangling tire!

But I'm totally OK with Canon (or whomever) making the pixels that did capture a photo better in the final image. I use DLO shamelessly for a better final image, and I use image edits as well. Better is better. And the fact that Canon is baking a better-ing engine into their cameras can only be a good thing, especially as lenses ship with data that the better-ing engines can take advantage of. If Canon also allowed third-party lenses to include DLO-enabling data, that would be even more amazing and well worth the licensing hassle for third parties, at least from a consumer perspective.

I also admit that I simulate this bettering using my lenses anyhow, and always have. Bad night-time coma on the EF 24mm L II? I stop down to f/2, which gets me to a place that makes me happy. How is that different from Canon making coma look better via software? And if they let in more light along the way, then bonus.

So personal preferences for using all of my pixels the first time aside, the fact of the matter is Canon is producing stuff the majority of purchasers seem to like. I mean, if someone buys a multi-thousand dollar lens then one has to assume they like what they get. And the final product is ideally an image that makes the photographer smile or that pays the bills (which also probably elicits a smile).

Yet all of that stated, let us also not kid ourselves: Canon is imposing constraints upon itself, like the intent of a smaller lens, and so any compromise in lens design is self-imposed. I'm perfectly fine with a bigger lens if it does a full projection. I'm capable of lifting a little more iron. For mid- to top-tier lenses I'll pay for a solution that makes full use of the system. For budget lenses I'm OK with Canon cutting corners -- literally in this case. Regardless, however the gears turn, for several thousand dollars I expect an image within my talent and luck that makes me smile. And Canon has done a very good job of that over the years, long before RF.
 
So digital correction is nothing to get hot under the collar about, as long as it's not taken to the point where it is so severe that data has to be created after the event.
How do you define 'created'? As @screenshooter correctly states, distortion correction involves interpolation and that's still 'creating' data in my opinion. The color and intensity values assigned to interpolated pixels are not 'original data' (but as has been pointed out, color is interpolated anyway since each pixel has a spectrally restricted color mask in front of it). The difference is that interpolation is a mathematically straightforward way to create those values, compared to extrapolation or AI-based generation.

Part of the problem here is that some people don't understand the difference between interpolation and extrapolation/AI, and/or they read somewhere that 'corners are filled by AI' and believed it.
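That difference is easy to show with two lines of arithmetic (toy numbers, nothing camera-specific): an interpolated value can only blend values that were actually recorded, while an extrapolated or generated value can leave that range entirely.

```python
# Interpolation vs. extrapolation on two neighbouring samples.
a, b = 10.0, 20.0

# An interpolated value is a weighted blend of its neighbours and can
# never leave the range they span.
t = 0.75
interp = (1 - t) * a + t * b        # 17.5, always within [10, 20]

# An extrapolated (or generated) value continues past the last sample
# and can take values no pixel ever recorded.
extrap = b + (b - a)                # 30.0, outside anything captured
```

So stretched corners are constrained by the captured data around them, which is exactly why 'corners are filled by AI' is the wrong mental model for distortion correction.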
 
I'll always err on the side of full sensor coverage because I don't think it's unreasonable for a person to want all of the pixels purchased utilized. Before anyone disagrees -- remember that a disagreement basically means that you're OK with purchasing but not using sensor pixels. Frankly, if I have to purchase four tires for a car but only get to use three then I don't care how well it drives -- I'm gonna be ticked about the dangling tire!
Utilized for what? Even if you routinely make large prints (bigger than 16x24") from your images, it probably won't make a meaningful difference. Even then, your analogy is hyperbole. A car with three tires can't be driven, an image with a few less MP is perfectly fine for most use cases. A more accurate analogy would be if you bought those four tires and they had 8.8 mm of tread depth on each of them instead of the full 9 mm. Would you even know?

As an example, if I take an uncorrected image from my RF 24-105/2.8L Z at 24mm and crop the image to remove the black corners, then my 24 MP (6000x4000) image becomes a 22 MP (5754x3836) image. Say I then took a few steps back and shot the same scene with the lens zoomed in to 28mm where the corners are filled by the lens. Do you honestly believe that if I printed the cropped 22 MP image vs the full 24 MP image at 16x24" and hung them on a wall side by side, that you or anyone else could tell the difference? I highly doubt it. And if there's no objectively meaningful difference, then it only matters in your mind.
 
Utilized for what?
Capturing photos. Isn't that what the sensor is all about? 😏

Even if you routinely make large prints (bigger than 16x24") from your images, it probably won't make a meaningful difference.
I agree. I just paid for them, so I want them used. I mean, it's not like my sensor is oval or anything like that.

Even then, your analogy is hyperbole.
Of course it is! Hyperbole is a useful tool when it comes to writing and stressing a point.

Per the Oxford dictionary:
Noun: exaggerated statements or claims not meant to be taken literally.

A more accurate analogy would be if you bought those four tires and they had 8.8 mm of tread depth on each of them instead of the full 9 mm.
I disagree. A more accurate analogy would be if only the centre tread is used, which is a real world scenario and equivalent in terms of whether a full product sees use. And yeah, I'd be miffed about that too (including my incompetence of the moment for not setting or inflating the tire correctly).

As an example, if I take an uncorrected image from my RF 24-105/2.8L Z at 24mm and crop the image to remove the black corners, then my 24 MP (6000x4000) image becomes a 22 MP (5754x3836) image. Say I then took a few steps back and shot the same scene with the lens zoomed in to 28mm where the corners are filled by the lens. Do you honestly believe that if I printed the cropped 22 MP image vs the full 24 MP image at 16x24" and hung them on a wall side by side, that you or anyone else could tell the difference? I highly doubt it. And if there's no objectively meaningful difference, then it only matters in your mind.
I think that misses my point.
  1. I didn't complain about the potential losses from stretching, warping, etc. In fact, I said that I shamelessly use DLO and image editors. My point was I just prefer that all of the sensor be activated to provide pixels for the stretching etc.
  2. I acknowledged that in the end the final product is what matters most. "Better is better."

Obviously Canon's opinion is that no one is going to miss those extra pixels when the final image is produced. Canon is also pursuing more pixels in its sensors anyhow, perhaps a more tacit acknowledgement of the corner cutting than people realize.

In the end, hey -- if you're OK with not using your full sensor while you enjoy a sweet image then rest with a peaceful mind. 😊 I'm also resting with a peaceful mind, but in the context of the question implied by CR I'm simply stating I want to use everything in my system, not just 99%. But regardless of my wants, Canon is getting the job done -- so Bravo to Canon.
 
It's not going anywhere and, for what most people are paid to photograph professionally, it doesn't matter at all unless you don't like the result that the automatic correction produces. I'd prefer that RAW files continue to be imported without a lens profile applied and then I can choose whether or not I want to apply it, though I understand why applying it would be the default behavior.

As others have correctly pointed out, there are many decisions being made by the camera that are completely out of our control. Honestly, it was like that even back in the days of film: did you make the film stock yourself? Was every batch perfectly consistent? There have always been variables that are outside of our control and it seems like this may be another one of them.
 
How do you define 'created'? As @screenshooter correctly states, distortion correction involves interpolation and that's still 'creating' data in my opinion. The color and intensity values assigned to interpolated pixels are not 'original data' (but as has been pointed out, color is interpolated anyway since each pixel has a spectrally restricted color mask in front of it). The difference is that interpolation is a mathematically straightforward way to create those values, compared to extrapolation or AI-based generation.

Part of the problem here is that some people don't understand the difference between interpolation and extrapolation/AI, and/or they read somewhere that 'corners are filled by AI' and believed it.
Yes, a poor choice of words on my part. Interpolation is 'creating', but it's creating directly from data that has already been captured around it. I was referring to adding unrecorded data / AI generation rather than correcting distortion, and of course distortion correction has been an easily accessible post-processing option for at least fifteen years or so.
A case in point: a couple of years ago an acquaintance of mine was unfortunate enough to have a malignant tumour develop behind one of his eyes. His eye and part of his skull were removed, and the surgeons took a lump of his thigh and stitched it over the poor guy's face; they must have known he wasn't going to last long. You can imagine what it looked like. Anyway, his wife had taken a picture of him on her iPhone, and it had tried to generate the outline of an eye over the patch. Quite horrible, and the sort of computational photography that we don't want in proper cameras.
 
Capturing photos. Isn't that what the sensor is all about? 😏


I agree. I just paid for them, so I want them used. I mean, it's not like my sensor is oval or anything like that.


Of course it is! Hyperbole is a useful tool when it comes to writing and stressing a point.

Per the Oxford dictionary:
Noun: exaggerated statements or claims not meant to be taken literally.


I disagree. A more accurate analogy would be if only the centre tread is used, which is a real world scenario and equivalent in terms of whether a full product sees use. And yeah, I'd be miffed about that too (including my incompetence of the moment for not setting or inflating the tire correctly).

I think that misses my point.
  1. I didn't complain about the potential losses from stretching, warping, etc. In fact, I said that I shamelessly use DLO and image editors. My point was I just prefer that all of the sensor be activated to provide pixels for the stretching etc.
  2. I acknowledged that in the end the final product is what matters most. "Better is better."

Obviously Canon's opinion is that no one is going to miss those extra pixels when the final image is produced. Canon is also pursuing more pixels in its sensors anyhow, perhaps a more tacit acknowledgement of the corner cutting than people realize.

In the end, hey -- if you're OK with not using your full sensor while you enjoy a sweet image then rest with a peaceful mind. 😊 I'm also resting with a peaceful mind, but in the context of the question implied by CR I'm simply stating I want to use everything in my system, not just 99%. But regardless of my wants, Canon is getting the job done -- so Bravo to Canon.
All fair points, and choice is good.

The EF 11-24/4 uses the whole sensor, and the RF 10-20/4 needs correction to fill the corners. I've owned both and I use the 10-20/4 far more often than I used the 11-24/4 because it's trivial to include the former in the bag with several other lenses whereas including the latter (twice the weight and much larger) meant taking out another lens (usually one of my TS-Es).

Comparing the new 14/1.4 VCM with the Sigma 14/1.4 Art (also a mirrorless lens) yields a similar conclusion – the Sigma lens is twice the weight and much larger.


Choice is good, and I know which lenses I prefer.
 