Opinion: Love It or Hate It, Digital Correction Is Here to Stay

I have had landscape photos ruined because I didn't realize how much corner vignetting correction was taking place.
Were you not stopping down? I thought that was typical for landscape shooting?
That said, I hate the concept of digital correction, not because of where it is today but because of where it could go tomorrow. What is stopping them from making even smaller lenses with an APS-H image circle and then stretching/scaling the result back up to your full-frame resolution? Should we care if the image is ultimately cleaner, sharper, etc.? I would on principle... but if they took away the toggle to see the file without corrections, we would probably never know.
Slippery slope fallacy?
 
  • Like
Reactions: 1 user
Upvote 0
I see that there is an advantage with the digital correction for many applications, but there is also a disadvantage for other applications. It would be nice if we would have a larger selection of RF lenses so that each user would be able to select according to his needs (like in the good old 'EF-time').
Yes, let's go back to the good old days when men were men and lenses were lenses. When real men looked through real viewfinders. When sensors could be film and a manly man's lens knew it. Back then, we had optically corrected lenses that didn't need no stinkin' badges or digital correction. Manly man lenses like the Canon EF 11-24mm f/4 that had manly barrel distortion of 4.5%, or the even more manly Sigma 12-24mm f/4 Art with an even more massively manly 5.3% barrel distortion (the same as the Canon RF 14-35 that 'requires' correction, oh my!).

Lenses for MEN.png

Meh. I'll stay here in the present, thanks.
 
  • Haha
  • Like
Reactions: 6 users
Upvote 0
The main thing I don’t understand with digital corrections is how they can clearly make a physical crop of the original image and discard that data, yet still show 6000x4000 pixels in the final image.

I mean, conceptually I get that they’re stretched and interpolated. But not all of the original pixels are there. It’s a crop of what was captured.

I’m not saying it bothers me a lot. I just don’t understand it. But some of my favorite travel shots were taken with the much-maligned 24-240 (which, based on the comments at release, I think invented distortion and lens corrections).
 
Upvote 0
I'll happily take them. No one notices and the lenses are lighter.

The only time they have bothered me was when pushing some very low-light shots taken at ISO 10,000+ with my 28-70mm f/2; the noise turned into a mess of moiré rings. The solution? Just turn the corrections off.
 
  • Wow
Reactions: 1 user
Upvote 0
Were you not stopping down? I thought that was typical for landscape shooting?

....
And I thought that this 'digital correction' allows for better image quality, as most writers assume. And now I can only use these expensive 'digital correction' lenses after stopping down?? That doesn't make sense!
 
  • Like
Reactions: 1 user
Upvote 0
The main thing I don’t understand with digital corrections is how they can clearly make a physical crop of the original image and discard that data, yet still show 6000x4000 pixels in the final image.
Yes, it’s cropped…after the distortion correction stretches it out. Because of the 3:2 aspect ratio, correcting the distortion stretches the image more horizontally than vertically. In the case of a 24 MP sensor, it’s cropped down to 6000 pixels wide. In fact, if you don’t crop to the original aspect ratio, the resulting corrected image is wider than 6000 pixels.

Note that this is just how barrel distortion correction works, even on EF lenses (example here).

I mean, conceptually I get that they’re stretched and interpolated. But not all of the original pixels are there. It’s a crop of what was captured.
Yes, stretched and interpolated just enough to get to the full sensor height, then the sides are (optionally) cropped off down to the full sensor width.

The other point is that all of the original light is there, it just didn’t initially cover all of the available pixels of the sensor.
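As a rough numeric sketch of the stretch-then-crop sequence described above (the 5.3% figure and the uniform-stretch model are assumptions for illustration, not Canon's actual correction math):

```python
# Toy geometry of barrel-distortion correction on a 24 MP (6000x4000) frame.
# The 5.3% figure is assumed for illustration (similar to the RF 14-35 at 14mm),
# and a uniform horizontal stretch is a deliberate simplification.

SENSOR_W, SENSOR_H = 6000, 4000   # pixels, 3:2 aspect ratio
distortion = 0.053                # assumed 5.3% barrel distortion at the corners

# Correcting barrel distortion pushes the corners outward. The frame is scaled
# so the image fills the full sensor height, which leaves it wider than the
# sensor width.
scale = 1 + distortion
stretched_w = round(SENSOR_W * scale)   # wider than 6000 px after correction
stretched_h = SENSOR_H                  # height is the binding dimension

# Cropping back to the native 3:2 ratio discards the 'excess' on the sides.
final_w = min(stretched_w, SENSOR_W)
excess = stretched_w - final_w

print(f"stretched to ~{stretched_w} px wide, cropped back to {final_w} px "
      f"({excess} px of 'excess' image discarded from the sides)")
```

Under these assumptions the corrected frame is about 6318 pixels wide before roughly 318 pixels are trimmed from the sides to get back to 6000x4000.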
 
Upvote 0
Can you stack uncorrected subframes? Something like: process raw without the lens profile, then stack, then apply the corrections to the finished stack, if you want?
That is my workaround at the moment. And that is also the reason why I worry that a much stronger image correction might create additional problems when stacking.
 
  • Like
Reactions: 2 users
Upvote 0
And I thought that this 'digital correction' allows for better image quality, as most writers assume. And now I can only use these expensive 'digital correction' lenses after stopping down?? That doesn't make sense!
Yes, what you say makes no sense.

I’m not sure who is saying the image quality is better; there is a difference between better and not worse. In some cases it is better and in others it is worse. It depends on the lens. Either way, the difference is probably not enough to have a meaningful effect in most use cases.

No one is forcing you to use any particular aperture. As I pointed out, there are ‘optically corrected’ EF mount lenses with >5% distortion and/or >4 stops of vignetting. No one forces users of those lenses to only use them zoomed out or stopped down.

But…straw men are fun to stand up, right?
 
Upvote 0
Rejecting software correction also implies rejecting DxO, LR, etc. editing.
What matters is how good the resulting picture is, whether optically or electronically corrected. Period!
I would go one step further...

Your entire image with digital cameras is created with a series of digital corrections. Your sensor is not recording the colors as you will see them; they have to go through demosaicing, in other words a digital correction. The tones from dark to light are not recorded on the sensor as you will see them after converting the RAW image; they need to go through a tonal correction algorithm. Same with white balance, and many, perhaps most, now apply noise reduction in the RAW conversion. Your RAW file is not a negative. So your converted image is a series of digital corrections. So, why the big deal when it comes to lenses? Makes no sense.
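To make that concrete, here is a deliberately crude sketch of such a chain (nearest-neighbour demosaic, global white balance, a simple gamma curve) on an invented 4x4 RGGB mosaic. No camera maker's actual pipeline looks like this, but the stages are the same in kind:

```python
import numpy as np

# Toy "development" of a RAW frame: every value below is invented, and each
# step is a stand-in for a real, far more sophisticated algorithm.

raw = np.array([
    [200,  60, 180,  50],
    [ 70,  30,  65,  25],
    [190,  55, 185,  60],
    [ 60,  20,  75,  35],
], dtype=float) / 255.0            # normalized sensor values, RGGB mosaic

# 1) Demosaic: collapse each 2x2 RGGB tile into one RGB pixel.
r = raw[0::2, 0::2]
g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2
b = raw[1::2, 1::2]
rgb = np.stack([r, g, b], axis=-1)  # shape (2, 2, 3)

# 2) White balance: per-channel gains (invented numbers).
rgb = rgb * np.array([1.9, 1.0, 1.6])

# 3) Tone curve: a plain gamma, standing in for the real tonal correction.
rgb = np.clip(rgb, 0.0, 1.0) ** (1 / 2.2)

print(rgb.shape)  # a tiny "developed" RGB image from mosaiced data
```

The point being: the pixels you look at were already reshaped by software several times before any lens profile touched them.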
 
  • Like
Reactions: 1 user
Upvote 0
That's absolutely right. As you wrote, in film times optical correction was a necessity; lens manufacturers didn't have a choice.
Yet, when I see how good the VCM lenses have become, "despite" software correction, I wonder how long the optical vs. software debate can still go on...
I guess progress in computer-based design, new optical materials, and coatings will go hand in hand with deeper digital correction, and users will definitely profit from that progress.
 
  • Like
Reactions: 1 user
Upvote 0
The types of corrections being discussed here result from the image circle being smaller than the sensor. Notice how we're talking about lenses like the RF 14/1.4, the RF 14-35/4, etc. With telephoto lenses, that typically does not happen, so your proposals of 150-180mm lenses with a 'need for correction' are a red herring. They won't need it. Even a correction-requiring lens like the RF 24-105/2.8L Z covers the full image circle by 28mm; it's only at the very wide end that it needs digital correction to fill the corners.
The choice of my examples is not great; here you are right, long focal lengths are easier to correct. So let's think about an 18mm f/4.5: I would not accept the need for digital correction there. For a compact 15mm f/2, I would accept moderate digital correction.

About filling the image: this is, in my opinion, a consequence of correcting strong distortions, which is the primary goal of digital corrections.
By the way, the RF 24-105 Z lens has surprisingly strong pincushion distortion at the long end too, but that is a design choice of Canon, outsourcing distortion correction to the image processing... that's a zoom-specific compromise, I think.
 
Upvote 0
And I thought that this 'digital correction' allows for better image quality, as most writers assume. And now I can only use these expensive 'digital correction' lenses after stopping down?? That doesn't make sense!
I'm just curious how vignetting ruined landscape shots. Apart from the astro landscape niche discussed at length on other threads, my understanding is most landscape shots are taken stopped well down, regardless of the lens used, so vignetting shouldn't be an issue?
 
  • Like
Reactions: 1 user
Upvote 0
The choice of my examples is not great; here you are right, long focal lengths are easier to correct. So let's think about an 18mm f/4.5: I would not accept the need for digital correction there. For a compact 15mm f/2, I would accept moderate digital correction.
Maybe just skip the examples. What would even be the point of an 18/4.5 lens? But I get your point. I’d say it’s a factor for those who care about it. I suspect more people would choose the benefits of lenses designed to require digital correction, e.g., potentially smaller, lighter, and/or cheaper, over the debatable drawbacks. I use the word debatable because if an image has similar IQ after digital correction to what it would have from an optically corrected lens, the drawback is entirely mental.

About filling the image: this is in my opinion a consequence of correcting strong distortions which is the primary goal of digital corrections.
They go hand in hand. If a lens is designed with an image circle that doesn’t completely cover the sensor, then the designers are incorporating the need for correction to fill the corners into the design.

You can see that from the MTF charts that Canon publishes. For example, the RF 14/1.4 has an image height (i.e., image circle radius) of 18.68mm, which falls short of the 21.64mm image height of the FF sensor. The X-axis of an MTF chart is distance along the ‘diagonal’ (the radius from the center to the corner of the sensor), and if distortion correction were not incorporated into the MTF charts, the lines would stop before the right side of the plot (at 18.68mm in the case of the 14/1.4). But they don’t.
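The image-height arithmetic can be checked in a few lines (the linear "stretch factor" at the end is my own rough measure, not a Canon specification):

```python
import math

# Full-frame sensor: 36 x 24 mm. The MTF x-axis runs from center to corner,
# i.e. out to the sensor's half-diagonal ("image height").
half_diag = math.hypot(36 / 2, 24 / 2)   # ≈ 21.63 mm (often rounded to 21.64)

# Published image height of the RF 14/1.4, as quoted in the post above.
lens_image_height = 18.68                # mm

# If the optical image circle stops short of the corner, the corners must be
# filled by stretching during correction. A crude linear measure of that gap:
shortfall = half_diag - lens_image_height
stretch = half_diag / lens_image_height

print(f"sensor half-diagonal ≈ {half_diag:.2f} mm")
print(f"shortfall ≈ {shortfall:.2f} mm (rough corner stretch ≈ {stretch:.3f}x)")
```

So the optical image reaches only about 18.68 of the roughly 21.63 mm needed, and correction has to make up the remaining ~3 mm at the corners.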

1771540985784.png
 
Upvote 0
I would go one step further...

Your entire image with digital cameras is created with a series of digital corrections. Your sensor is not recording the colors as you will see them, they have to go through a demosaicing, in other words a digital correction. The tones from dark to light are not recorded on the sensor as you will see them after converting the RAW image, they need to go through a tonal correction algorithm. Same with White Balance, and many, perhaps most most now apply noise reduction in the RAW conversion. Your RAW file is not a negative. So your converted image is a,series of Digital Corrections. So, why the big deal when it comes to lenses? Makes no sense.
To take it one additional step further...
What about perspective, vignetting, exposure and cropping corrections applied in darkrooms in film era?
Was it a form of noble creativity because it was done by hand?
 
Upvote 0
I'm glad to see some quantitative data on corner stretching. So for the 14-35 at 14 mm, the amount of distortion is 5.3%. This results in a reduction of extreme corner resolution by 10.6%. I wonder if we can use a rule-of-thumb whereby the corner resolution decrease is about twice the distortion amount.

Yes, 10.6% may seem like a hefty penalty. But I think of it this way. I picture a Canon engineer working with modeling software. Suppose that a base design can be tweaked in two directions: one with heavy barrel distortion and a high corner resolution (say 4150 lp, for argument's sake); or one with low distortion and a much lower native resolution (say 3500 lp). Even after geometric correction, the lens with barrel distortion is still the clear winner.
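A quick back-of-envelope check of this hypothetical trade-off, using the invented corner figures from the paragraph above:

```python
# The 4150 lp and 3500 lp numbers are the made-up figures from the thought
# experiment above, and the "2x the distortion" penalty is the proposed
# rule of thumb, not a measured relationship.

distortion = 0.053                    # 5.3% barrel distortion
resolution_penalty = 2 * distortion   # rule of thumb: ~10.6% corner loss

distorted_design = 4150               # lp, high-distortion design, native corners
rectilinear_design = 3500             # lp, low-distortion design

after_correction = distorted_design * (1 - resolution_penalty)
print(f"corrected corners ≈ {after_correction:.0f} lp vs "
      f"{rectilinear_design} lp from the rectilinear design")
```

Even after paying the ~10.6% stretching penalty, the high-distortion design lands around 3710 lp, still comfortably ahead of the 3500 lp rectilinear alternative.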

There is one bit that causes me hesitation. The comparison is between corner resolutions. However, the uncorrected corner resolution was not meant to ever be used. It is meant to be cropped away. A fairer comparison would be the resolution of the same part of the image, which for the uncorrected version would lie a bit closer to the center of the image. A nitpick - yes. But possibly relevant.

Which leads me to the opticallimits reviews. Why are they still so obsessed with uncorrected images? Why are they measuring vignetting on uncorrected images? To me this is a bit like slapping an APS-C lens on a full-frame body and then complaining about corner resolution. The uncorrected corners simply aren't meant to be used. And if you do use them, it's at your own peril. This is off-label use.
 
  • Like
Reactions: 1 user
Upvote 0
I'm just curious how vignetting ruined landscape shots. Apart from the astro landscape niche discussed at length on other threads, my understanding is most landscapes shots are taken stopped well down - regardless of the lens used - so vignetting shouldn't be an issue?
The one coming to mind as most affected was a time-lapse video of a sunset I shot on a Z9 with a 15-35 f/4 S lens. The sky was bright orange and purple, and the corners on that lens get boosted 4-5 stops... the noise in the corners was wild.
 
  • Like
Reactions: 1 user
Upvote 0
But that’s where I get confused. It can’t be cropped down to 6000 on the horizontal because there are only 6000 pixels to start. So it has to be cropped to something less than 6000, then stretched, then have the stretched pixels split back so the file shows a full 6000.
The image is stretched first, and after that stretching it is 4000 pixels high (for the 24 MP sensor in your example) and wider than 6000 pixels.

Look at the area outlined by the red rectangle in the diagram below. Notice how in the uncorrected (left) image there is a region between that rectangle and the edge of the sensor area (magenta). After the distortion is corrected, that portion of the image above and left of the red rectangle is outside the 6000 x 4000 area.

1771548367684.png

Also note how in the uncorrected image the red rectangle is closer to the top of the frame than to the left edge. Thus, the red rectangle is moved more sideways than up when the image is ‘stretched’. That exemplifies the reason there is ‘excess’ image on the sides that must be cropped away to maintain the 3:2 ratio.

It’s also worth noting that when I say you can obtain an image wider than 6000 pixels, I am talking about processing in DxO. Under the hood, when Canon processes the image (in camera or in DPP), it is still wider and then cropped down to 6000 pixels, but there is no way to get that wider image out of the camera/software.
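The "more sideways than up" behaviour can be sketched with a toy radial model (all numbers invented for illustration; real correction profiles are polynomial, not a single coefficient):

```python
import math

# Under a simple radial model, correction pushes each point outward along its
# radius from the frame center. On a 3:2 frame, a point near the corner sits
# farther from center horizontally than vertically, so it shifts more
# sideways than up - hence the 'excess' image on the sides.

half_w, half_h = 3.0, 2.0     # 3:2 frame half-sizes, arbitrary units
x, y = 2.7, 1.5               # a point near the corner, measured from center
k = 0.053                     # assumed correction strength (~5.3% distortion)

r = math.hypot(x, y)          # radial distance from center
dx, dy = k * x, k * y         # outward shift, proportional to each component

print(f"r = {r:.2f}: shifts {dx:.3f} sideways vs {dy:.3f} up")
```

Because the horizontal component of the radius is larger, the horizontal shift always wins on a landscape-oriented 3:2 frame, which is why the sides, not the top and bottom, end up with image to crop away.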
 
Upvote 0