The Best and Worst of 2025

Interesting. I assumed that PowerPoint could correctly draw shapes of the sizes I input, and that Photoshop could correctly count pixels with the measurement function.

I confirmed the former in PowerPoint using a 43.2 unit diameter circle in which the inscribed, centered 36x24 unit rectangle touched the circumference at the corners exactly as it should.
View attachment 227495

I confirmed the latter by using FIJI (ImageJ aka NIH Image), which I know with certainty can accurately count pixels and measure areas.
View attachment 227497

2,343 ÷ 112,888 = 0.0208, i.e. ~2.08% of the rectangle falls outside the circle.
Maybe the difference stems from your using square pixels of finite size rather than infinitesimal steps of size tending to zero. Your arcs, for example, will cross pixels, so part of a pixel is inside the image circle and part isn’t. Is such a pixel counted as in or out in your calculation?
 
  • Love
Reactions: 1 user
Upvote 0
I think your conception of optics is a bit idealistic tbh.

Possibly.

Is your argument here that because an extreme and somewhat contrived situation is unacceptable, every gradation between that and your ideal setup must also be rejected? If it is a continuum, why is zero the only acceptable position?

We're being asked to accept that 19.96 is ok. Why is that number ok and not some other number? Where is the limit on what's acceptable for stretching if lens manufacturers are going to require cameras/software to do that?

What's wrong with requesting/demanding that the information gained that led to that decision be provided?

Doesn't anyone else care about what Canon's doing here?

Are you all just sheeple here?

Is it all ok just because Canon does it? (Which falls back on the "Canon's #1 in the marketplace, therefore anything Canon does is automatically right" line - the most boring and intellectually bankrupt argument ever.)

If we can't ask such questions of Canon then people are being forced into a blind faith situation with Canon - one person's empirical tests make no difference there.

As for faith/evidence, you clearly have an entrenched view but haven't presented anything to support it except high-minded principles (such as your comment on "separating beams of light" above), whereas Neuro has asked for evidence. And somehow you are turning that into him being blinded by faith in the new optics?

Not principles, theory. And as I've alluded to (if not said), providing the test framework to actually validate Canon's position is very difficult and certainly beyond my ability - if not the ability of most (and Neuro's test does not qualify). I wish I could do the required testing, but I can't, and I doubt anyone who isn't Canon can, which just sucks.

As it stands, Neuro has a theory that it doesn't make any noticeable difference based on his eyeballing of images from different lenses. My theory is that because of what's being done, there should be a measurable difference in image quality when comparing stretched vs non-stretched. The proper resolution is to do scientific testing to establish the facts; however, the barrier to doing that is higher than either I or Neuro can overcome.
 
Upvote 0
Incidentally, maybe Gemini did get it wrong. I took a pragmatic approach rather than a mathematical one, made a circle with a diameter of 39.32 units (19.66 x 2) and centered a 24 x 36 unit rectangle on it, then measured the area of the excluded portions of the rectangle vs the whole rectangle (pixel counts of a screenshot, but that would not affect a % measurement). It came out to ~2.07% of the FF sensor area, i.e. worse than Gemini calculated.

"Always show your working" is what they say in exams - which is why I screen-grabbed the equations.
 
Upvote 0
I had ChatGPT make a similar drawing (to scale) and then calculate the percentage of the area of the dark corners; the result is 1.48%.

Edit: ChatGPT calculated the result using integral calculus, i.e. a different method from the one Gemini used.

Can you include the equations as screen grabs? There should be other methods, e.g. calculating the areas of the rectangles and then the areas of the circular segments.
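For what it's worth, that segment approach is easy to check numerically. Below is a minimal sketch (my own working, not Gemini's or ChatGPT's, and it assumes a 36 × 24 unit frame centred on the image circle): the dark-corner area is the rectangle minus its overlap with the circle, where the overlap is the circle minus the four segments that bulge past the rectangle's sides.

```python
from math import acos, pi, sqrt

def excluded_fraction(r, w=36.0, h=24.0):
    """Fraction of a w x h rectangle (centred on the circle) lying
    outside a circle of radius r, valid for h/2 < r < half-diagonal."""
    def segment(d):
        # Area of the circular segment cut off by a chord at distance d from the centre.
        return r * r * acos(d / r) - d * sqrt(r * r - d * d)

    # Overlap of circle and rectangle = circle minus the four segments that
    # poke out past the rectangle's sides; the rest of the rectangle is the
    # four dark corners.
    overlap = pi * r * r - 2 * segment(w / 2) - 2 * segment(h / 2)
    return (w * h - overlap) / (w * h)

print(excluded_fraction(19.96))  # ~0.0148, i.e. ~1.48% for a 19.96-unit image height
print(excluded_fraction(19.66))  # ~0.0209 for the 39.32-unit-diameter circle used earlier
```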
 
Upvote 0
Maybe the difference stems from your using square pixels of finite size rather than infinitesimal steps of size tending to zero. Your arcs, for example, will cross pixels, so part of a pixel is inside the image circle and part isn’t. Is such a pixel counted as in or out in your calculation?
Thanks, Alan – that is the correct explanation! The light of an image circle falling on a sensor would still be quantized into discrete pixels, but my ‘pixels’ were much larger than those on a sensor. When I repeated the previous steps with 10-fold larger shapes, which would minimize the effect of quantization error, my value came out to 1.46%, and I’ll take that as close enough to the AI-generated answers.
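To make the pixel-size point concrete, here's a rough sketch (mine, assuming r = 19.96 and a 36 × 24 frame) that redoes the count with square 'pixels' of different sizes and with two different in/out rules for pixels the arc crosses. Shrinking the pixels - which is effectively what enlarging the shapes does - drives the count toward the exact ~1.48%, and the more generous the in/out rule, the bigger the overshoot at a given pixel size.

```python
import numpy as np

def pixel_count_fraction(r, px, w=36.0, h=24.0, rule="center"):
    """Estimate the excluded fraction by counting square pixels of size px.
    rule='center'  : a pixel counts as outside if its centre is outside the circle
    rule='any_out' : a pixel counts as outside if any part of it is outside"""
    # Pixel-centre coordinates over one quadrant (the problem is symmetric).
    xs = np.arange(px / 2, w / 2, px)
    ys = np.arange(px / 2, h / 2, px)
    X, Y = np.meshgrid(xs, ys)
    if rule == "center":
        outside = X**2 + Y**2 > r**2
    else:
        # The pixel's outermost corner decides: if that corner is outside the
        # circle, some part of the pixel is outside.
        outside = (X + px / 2) ** 2 + (Y + px / 2) ** 2 > r**2
    # Equal-area pixels, so the quadrant fraction equals the full-frame fraction.
    return outside.mean()

for px in (1.0, 0.1, 0.01):
    print(px, pixel_count_fraction(19.96, px, rule="center"),
          pixel_count_fraction(19.96, px, rule="any_out"))
```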

Much appreciated!
 
  • Like
  • Love
Reactions: 1 users
Upvote 0
We're being asked to accept that 19.96 is ok. Why is that number ok and not some other number? Where is the limit on what's acceptable for stretching if lens manufacturers are going to require cameras/software to do that?

What's wrong with requesting/demanding that the information gained that led to that decision be provided?

Doesn't anyone else care about what Canon's doing here?

Are you all just sheeple here?

Is it all ok just because Canon does it? (Which falls back on the "Canon's #1 in the marketplace, therefore anything Canon does is automatically right" line - the most boring and intellectually bankrupt argument ever.)

If we can't ask such questions of Canon then people are being forced into a blind faith situation with Canon - one person's empirical tests make no difference there.



Not principles, theory. And as I've alluded to (if not said), providing the test framework to actually validate Canon's position is very difficult and certainly beyond my ability - if not the ability of most (and Neuro's test does not qualify). I wish I could do the required testing, but I can't, and I doubt anyone who isn't Canon can, which just sucks.

As it stands, Neuro has a theory that it doesn't make any noticeable difference based on his eyeballing of images from different lenses. My theory is that because of what's being done, there should be a measurable difference in image quality when comparing stretched vs non-stretched. The proper resolution is to do scientific testing to establish the facts; however, the barrier to doing that is higher than either I or Neuro can overcome.
For the lens in question with a 19.96 mm image height, we are discussing ~1.5% of the resulting image, and that ~1.5% is in the extreme corners of the image. If you want to get your proverbial panties in a twist over that, as you seem to be doing, go right ahead.

I suspect most people don’t care because it’s ~1.5% of the image and it’s the extreme corners of the image.

If something absolutely critical to your image is located in the highlighted area of the frame below, then I'd suggest you need to reframe your shot.

View attachment 227502

Even without the need for digital correction to fill those corners, that's where lenses perform at their worst.
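To put "extreme corners" into numbers, here's a quick back-of-the-envelope sketch (assuming the 19.96 unit image height and a 36 × 24 frame) of how far each dark corner actually reaches in from the frame corner along the two edges.

```python
from math import sqrt

r, half_w, half_h = 19.96, 18.0, 12.0  # assumed image-circle radius and half-frame sizes
along_long_edge = half_w - sqrt(r**2 - half_h**2)   # ~2.05 units along the 36-unit edge
along_short_edge = half_h - sqrt(r**2 - half_w**2)  # ~3.37 units along the 24-unit edge
print(round(along_long_edge, 2), round(along_short_edge, 2))
```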
 
Upvote 0
We're being asked to accept that 19.96 is ok. Why is that number ok and not some other number? Where is the limit on what's acceptable for stretching if lens manufacturers are going to require cameras/software to do that?
No, we're being presented with products and are able to buy them or not, based on a whole raft of factors.
What's wrong with requesting/demanding that the information gained that led to that decision be provided?
You can do it, but you can't possibly believe they will respond.
Are you all just sheeple here?
Do you think we are, just because we disagree with you? If so, don't expect civility henceforth.
If we can't ask such questions of Canon then people are being forced into a blind faith situation with Canon - one person's empirical tests make no difference there.
Ask away, but if you constantly reject all responses that don't agree with your preconceptions, and start to call people names on that basis, you'll get little more than contempt in future.
Not principles, theory. And as I've alluded to (if not said), providing the test framework to actually validate Canon's position is very difficult and certainly beyond my ability - if not the ability of most (and Neuro's test does not qualify). I wish I could do the required testing, but I can't, and I doubt anyone who isn't Canon can, which just sucks.

As it stands, Neuro has a theory that it doesn't make any noticeable difference based on his eyeballing of images from different lenses. My theory is that because of what's being done, there should be a measurable difference in image quality when comparing stretched vs non-stretched. The proper resolution is to do scientific testing to establish the facts; however, the barrier to doing that is higher than either I or Neuro can overcome.
To me it sounds like you have a bee in your bonnet, with little reason, and are having a minor tantrum and lashing out at people for daring to feel differently.

I don't really care how the lenses are designed, I care how much they cost, and what sort of images they produce. I suspect that is also how most consumers choose. Feel free to disdain that approach (I suspect you will).
 
  • Like
Reactions: 1 user
Upvote 0
If we can't ask such questions of Canon then people are being forced into a blind faith situation with Canon - one person's empirical tests make no difference there.

Not principles, theory. And as I've alluded to (if not said), providing the test framework to actually validate Canon's position is very difficult and certainly beyond my ability - if not the ability of most (and Neuro's test does not qualify). I wish I could do the required testing, but I can't, and I doubt anyone who isn't Canon can, which just sucks.

As it stands, Neuro has a theory that it doesn't make any noticeable difference based on his eyeballing of images from different lenses. My theory is that because of what's being done, there should be a measurable difference in image quality when comparing stretched vs non-stretched. The proper resolution is to do scientific testing to establish the facts; however, the barrier to doing that is higher than either I or Neuro can overcome.
I find this hilarious coming from someone who posts misinformation and, when subsequently challenged, states (emphasis mine):
I actually wasn't that concerned about being correct/right and more concerned with sharing something that I thought others might find interesting - which is how a lot of social media works. I think there have been enough others that found it informative to have been worthwhile. If someone wants to argue about whether something is right/wrong, fill your boots while I get a beer, sit back and watch some tiktok.

See: Post in thread 'Katharine Burr Blodgett: Inventor of non-reflective coatings for glass?'
https://www.canonrumors.com/forum/t...lective-coatings-for-glass.44862/post-1036024
 
Last edited:
  • Like
Reactions: 1 users
Upvote 0
I find this hilarious coming from someone who posts misinformation and, when subsequently challenged, states (emphasis mine):
I actually wasn't that concerned about being correct/right and more concerned with sharing something that I thought others might find interesting - which is how a lot of social media works. I think there have been enough others that found it informative to have been worthwhile. If someone wants to argue about whether something is right/wrong, fill your boots while I get a beer, sit back and watch some tiktok.

See: Post in thread 'Katharine Burr Blodgett: Inventor of non-reflective coatings for glass?'
https://www.canonrumors.com/forum/t...lective-coatings-for-glass.44862/post-1036024

I find it interesting that poeple still want to diminish those trying to promote the achievements of women. Personally, I'd never heard about her before but mysoginists gotta do their thing.
 
Upvote 0
No, we're being presented with products and are able to buy them or not, based on a whole raft of factors.
I agree, that's a good summary of the marketplace.
You can do it, but you can't possibly believe they will respond.
Correct.
Do you think we are, just because we disagree with you? If so, don't expect civility henceforth.
Not because you disagree with me, but because you don't care about the answer.
Ask away, but if you constantly reject all responses that don't agree with your preconceptions, and start to call people names on that basis, you'll get little more than contempt in future.
Your contempt came without the names.
To me it sounds like you have a bee in your bonnet, with little reason, and are having a minor tantrum and lashing out at people for daring to feel differently.
I'm concerned that we're effectively being lied to, increasingly, by Canon and other lens manufacturers about the lenses they produce, because the image circle of full-frame lenses no longer covers the sensor. If you're ok with that, then that's your business.
I don't really care how the lenses are designed, I care how much they cost, and what sort of images they produce. I suspect that is also how most consumers choose. Feel free to disdain that approach (I suspect you will).
Another way to say that: what you care about is the images that software produces from the data collected by your sensor, because the lens is no longer the final arbiter of the image you end up with. You care about the destination; I care about the journey.
 
Upvote 0
I suspect most people don’t care because it’s ~1.5% of the image and it’s the extreme corners of the image.

If something absolutely critical to your image is located in the highlighted area of the frame below, then I'd suggest you need to reframe your shot.

View attachment 227502

Even without the need for digital correction to fill those corners, that's where lenses perform at their worst.

Something that bothers me about this is: what's the algorithm used to fix it? Just push out the corners? Use a smaller rectangle and expand it?
 
Upvote 0
Something that bothers me about this is: what's the algorithm used to fix it? Just push out the corners? Use a smaller rectangle and expand it?
Nothing special, the same algorithms that are used to correct the geometric distortion present in pretty much all lenses. Canon’s DLO does it based on a mathematical model of the lens, reconstructing the position of the incoming light from the optical formula. 3rd-party RAW converter developers don’t have the lens model, so they take a picture of a grid pattern and use an algorithm to map the resulting image back to an orthogonal grid, then incorporate that warp into the lens profile.

The process isn’t just about filling the corners; it’s about correcting the distortion (barrel, pincushion, mustache, and complex combinations thereof).
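For anyone wondering what that warp actually looks like in code, here's a bare-bones sketch - emphatically not Canon's DLO, just a generic radial-polynomial remap of the kind a grid-calibrated profile boils down to. It works by inverse mapping: for every pixel of the corrected output, compute where it came from in the recorded frame and sample there. The function name and coefficients are invented for illustration.

```python
import numpy as np

def undistort(img, k1, k2=0.0):
    """Remap an image with a simple radial model: a corrected pixel at
    normalised radius r_u is sampled from the recorded image at
    r_d = r_u * (1 + k1*r_u**2 + k2*r_u**4). Negative k1 samples the corners
    from nearer the centre, i.e. it stretches the recorded corners outward
    (a barrel-type correction, which is also what fills in corners that fall
    outside the image circle). Nearest-neighbour sampling for brevity."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    norm = np.hypot(cx, cy)                      # normalise radii so the corner is ~1
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    xu, yu = (xx - cx) / norm, (yy - cy) / norm  # corrected (output) coordinates
    r2 = xu * xu + yu * yu
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    xs = np.clip(np.rint(xu * scale * norm + cx), 0, w - 1).astype(int)
    ys = np.clip(np.rint(yu * scale * norm + cy), 0, h - 1).astype(int)
    return img[ys, xs]

# Toy example: a grid target, then a corner-stretching correction.
target = np.zeros((240, 360))
target[::20, :] = 1.0
target[:, ::20] = 1.0
corrected = undistort(target, k1=-0.08)
```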
 
Upvote 0
I find it interesting that poeple still want to diminish those trying to promote the achievements of women. Personally, I'd never heard about her before but mysoginists gotta do their thing.
I mean this is just offensive. You're calling people who point out your factual errors mysoginists (sic) just because the subject they were forced to correct you on involves a woman's achievements? People in that thread consistently recognised Blodgett's achievements.

I mean, in that very thread you admitted "I was lazy and cut-n-pasted from something on facebook".

Why not just admit your mistakes when they are pointed out (we all make them) rather than this - frankly embarrassing - litany of denial, obfuscation and abuse?
 
  • Like
Reactions: 1 user
Upvote 0
I find it interesting that poeple still want to diminish those trying to promote the achievements of women. Personally, I'd never heard about her before but mysoginists gotta do their thing.
If you are trying to come across as intelligent by using “difficult” words, it helps:
  1. If you look up their meaning, so you know they have nothing to do with my post.
  2. If you spell them correctly.
 
Upvote 0