I completely agree with that
Why, I still ask? Three of you guys, who surely know what you're talking about, are convinced. What convinced you of this? Do you have a simple A/B comparison image? Do you have a technical explanation you can walk me through? I understand optics pretty well for someone outside the field, and general engineering principles as an aged engineer.
I have a slightly different take on the stretching based on the lenses I have: I have a 17-40L, which has very bad corners. The stretched corners on the non-L RF16mm have much more sharpness and detail at f/2.8 than the 17-40L has at any aperture.
I'd still like non-stretched corners for even better results, but for me, the RF16 is already a massive improvement over the lens I used previously. I'd likely feel differently if I hadn't used the 17-40L before.
At first glance your 17-40 vs. 16/2.8 observations could be seen to support my point, but to be honest I can't take a victory lap on that. It's a bit apples and orangutans once you reflect that one is a decade or whatever old, and a zoom, and on the old mount, while the 16/2.8 sits at a shockingly low price point, etc. etc. Your example is really good, and maybe the best possible under the circumstances, but I feel we can't quite bet the farm on it.
And specifically,
I'd still like non-stretched corners for even better results
again leads to the question: why do you believe non-stretched corners would produce better results?
----------
Here's my take. Lens development involves a dozen (or more!) trade-offs. You can get more aperture for less sharpness, more sharpness for worse size and cost and weight, better cost for worse out-of-focus highlight behavior or worse lateral chromatic aberration, and so on. If we can take any given image defect, such as poor rectilinearity, and fix it in software nearly perfectly, that's as close as you can get to a free lunch, no? We don't have to improve rectilinearity at the expense of size or color fringes or cost! We can get it without hurting anything else.
And that's just the first step. Once you realize how easily and accurately it can be fixed, you can actually use it as a toxic dumping ground.
This is the important thing: we can then trade practically everything else off at the expense of rectilinearity! Improve size at the expense of rectilinearity. Then improve sharpness at the expense of rectilinearity. Improve LCA at the expense of rectilinearity. Improve flare at the expense of rectilinearity. Suddenly the lens is halfway towards being a fisheye... and yet... we've improved perhaps every aspect of the lens at the expense of rectilinearity, and then made the resulting nightmarish distortion simply disappear in software.
The result is a $299 16/2.8 that's far sharper than the 14/2.8 of the '90s, which cost 10-15x more in real terms and weighed 4x as much (645g vs. 165g!).
So how is it that rectilinearity is so easy to fix?
Basically, we have to magnify (in this case) or shrink pixels when converting from what the sensor saw to what's actually output. This magnification is at worst something like 10% or so. It may look horrible, but it's not mathematically huge. If our source pixel overlaps a destination pixel once remapped, that destination pixel will be filled with the source pixel, and at most 10% of its area will crowd into neighboring pixels to the sides, and likewise vertically. (Could be 5% each way, could be 10% all one way.) So that lowers contrast by about 10%. But contrast of what? Only of 1-pixel-wide details that are in perfect focus, not running into the diffraction limit, not blurred by subject or camera motion even at the most pixel-peeping level, not hidden under a high-ISO noise floor...
and that are themselves perfectly aligned with a sensor pixel, not split between two or four. At roughly 8000 pixels (R5) across 36mm of full-frame sensor width, that's about 222 pixels per mm, or 111 line pairs per millimeter.
Again, changing the magnification in software will cut contrast by as much as 10%... at roughly 110 lp/mm. Canon doesn't even publish 100 lp/mm data. No one does. No testing site even tests it, because such details barely exist outside of test charts. Why doesn't anyone even try to test it? Because when such a 1-pixel-wide scene feature falls across four pixels, your contrast may be as low as 25% anyway, even with a perfect lens in perfect focus, no subject or camera movement, no noise, and so on.
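If you want to sanity-check that 10% figure, here is a rough sketch in Python. It models the resampling as simple area averaging ("pixel mixing"), which is only a stand-in for whatever kernel a real raw converter uses, and the 1.10 scale factor and pixel values are made up for illustration:

import numpy as np

def magnify_row(src, scale):
    # Area-average ("pixel mixing") resample of one row of pixels.
    # Each source pixel becomes a box `scale` output-pixels wide; each
    # output pixel averages whatever source boxes land on it.
    n_out = int(len(src) * scale)
    out = np.zeros(n_out)
    for k in range(n_out):
        lo, hi = k / scale, (k + 1) / scale   # source interval feeding output pixel k
        for i in range(int(lo), min(int(np.ceil(hi)), len(src))):
            out[k] += src[i] * (min(hi, i + 1) - max(lo, i))
        out[k] /= (hi - lo)
    return out

row = np.zeros(40)
row[10] = 255.0                       # a 1-px detail, aligned with the pixel grid
out = magnify_row(row, 1.10)          # stretch by 10%, like a corner being un-distorted
print(out[10:14].round(1))            # [0. 255. 25.5 0.]: peak intact, neighbour picks up ~10%

The detail itself survives at full value; the cost is a roughly 10% glow bleeding into the next pixel, which is the contrast loss I'm talking about.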
In addition to the magnification issue, we have a movement issue. Parts of the image will move 10, 20, or who knows, 100+ pixels between where they physically hit the sensor and where the software puts them in the output. It sounds like the damage should get worse and worse as the move grows... but it doesn't, because no pixel can ever land more than half a pixel (per axis) away from the center of some pixel in the output file. For instance, even if a sensor pixel's value is moved exactly 100 pixels away, it still lands 100% aligned with an output pixel. The worst any alignment can get is the sensor pixel being split evenly between four output pixels. That would lower contrast by 75%! Sounds bad, but again, what details are we talking about? Exactly as before: only 1-pixel-wide details that are in perfect focus, not running into the diffraction limit, not blurred by subject or camera motion even at the most pixel-peeping level, with no sensor noise, and with an otherwise perfect lens...
and that are themselves perfectly aligned with a single sensor pixel, not split between two or four.
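A quick way to convince yourself that the size of the move doesn't matter, only its fractional part, is to shift a 1-pixel detail by 100 pixels versus 100.5 pixels. This uses plain linear interpolation, again just as a stand-in model, not any converter's actual kernel:

import numpy as np

row = np.zeros(256)
row[20] = 255.0                          # a 1-px detail on a black row
x = np.arange(256, dtype=float)

for shift in (100.0, 100.5):
    out = np.interp(x - shift, x, row)   # move the whole row `shift` pixels to the right
    print(shift, out.max())              # 100.0 -> 255.0 (lossless), 100.5 -> 127.5

# The same half-pixel offset applied vertically as well halves it again:
# 255 * 0.5 * 0.5 ~= 64, i.e. the 25%-contrast worst case.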
If you're having trouble picturing this, imagine a perfect black background with one white square in the test scene that is exactly 1/8000 of the scene width, and a camera with an 8000-pixel-wide sensor. If the camera is aimed perfectly, all of that white square's photons land on one sensor pixel and we see values: 0 0 0 255 0 0 0, right? But turn the camera just 1/400th of a degree horizontally (on a 50mm lens, that's arctan((18mm/4000/2)/50mm), i.e. half a pixel's worth of aim) and the same vertically, and that white square now falls on four different pixels, each of which sees only 1/4 of its photons (a two-stop, or two-bit, reduction). We'll now see 0 0 64 64 0 0 along that row (and the same on the row below). Despite our perfect lens, perfect focus, perfectly noise-free sensor, perfectly motionless subject and camera... our MTF falls from 100% to 25% just by changing aim by 1/400th of a degree. More generally, if the target has squares 1/8000th of the frame width but our sensor has some other width, say 8192 pixels (R5), then no matter how we aim, SOME of the test squares will fall exactly between pixels and give only 25% MTF.
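Here is that thought experiment as a few lines of Python, just integrating the square's light over the sensor pixels it covers; no lens, no software, pure geometry (pixel counts and values are of course idealized):

import numpy as np

def sensor_row(square_left, n=8):
    # One row of an ideal sensor. A white square exactly one pixel-pitch wide
    # sits at [square_left, square_left + 1); each pixel collects whatever
    # fraction of the square falls on it.
    vals = np.zeros(n)
    for p in range(n):
        vals[p] = 255.0 * max(0.0, min(p + 1, square_left + 1) - max(p, square_left))
    return vals

print(sensor_row(3.0))   # aimed perfectly:   [0. 0. 0. 255. 0. 0. 0. 0.]
print(sensor_row(3.5))   # half a pixel off:  [0. 0. 0. 127.5 127.5 0. 0. 0.]
# Add the same half-pixel offset vertically and each of the four lit pixels
# ends up near 64, i.e. 25% of the aligned value.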
So now the crux of my thesis: the misalignment introduced by moving a pixel while correcting distortion, even a LOT of distortion, is never any greater than the misalignment in that perfect shot of a 1-pixel feature that is nonetheless misaimed by half a pixel.
At worst.
It cannot get worse than that.
It cannot get worse than having a perfect lens, perfect focus, noise-free sensor, zero diffraction, zero movement, and simply being off by half a pixel.
And that's not just good, or very good, or good enough for me. It's as good as you can hope for unless you're shooting only scenes whose features all align with your sensor's pixel grid.
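One last sketch to tie it together: apply a distortion-correction-style warp that drags corner pixels by about 14 pixels, bilinearly remap a frame full of isolated single-pixel "stars", and look at the dimmest surviving star. Everything here is synthetic and assumed for illustration: the warp formula, the star field, and the use of bilinear resampling (scipy's map_coordinates) as the remapping model; real converters differ in the details. Under those assumptions, the dimmest star never falls much below the same roughly 25% floor you get from simply misaiming by half a pixel, no matter how far anything moved:

import numpy as np
from scipy.ndimage import map_coordinates

H = W = 401
img = np.zeros((H, W))
rng = np.random.default_rng(0)
ys = rng.integers(30, H - 30, 150)
xs = rng.integers(30, W - 30, 150)
img[ys, xs] = 1.0                          # isolated 1-px "stars" on a black frame

# Distortion-correction-style warp: each output pixel samples the source at a
# slightly smaller radius, so the corners get stretched outward.
yy, xx = np.mgrid[0:H, 0:W].astype(float)
cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
r2 = ((yy - cy) ** 2 + (xx - cx) ** 2) / (cy ** 2 + cx ** 2)   # 0 at centre, 1 at corners
k = 0.05
src_y = cy + (yy - cy) * (1 - k * r2)
src_x = cx + (xx - cx) * (1 - k * r2)
print("max displacement (px):", float(np.hypot(src_y - yy, src_x - xx).max()))  # ~14

out = map_coordinates(img, [src_y, src_x], order=1)   # bilinear remap of the frame

# Peak brightness of each star after the remap, then the dimmest of them all.
peaks = []
for y0, x0 in zip(ys, xs):
    near = (np.abs(src_y - y0) < 1) & (np.abs(src_x - x0) < 1)
    peaks.append(out[near].max())
print(f"dimmest star after remap: {min(peaks):.2f}")   # stays around 0.25 or better

The warp moves things by a dozen-plus pixels, yet no star does worse than the half-pixel-misalignment case from the aiming example above.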
---------------
But again, I may be totally wrong, which is why I want to hear you guys out.
Do you have an argument or an image showing how the results are actually substantially worse than I'm arguing here?