The Best and Worst of 2025

Interesting. I assumed that PowerPoint can correctly draw shapes of the sizes I input, and that Photoshop can correctly count pixels with the measurement function.

I confirmed the former in PowerPoint using a 43.2 unit diameter circle in which the inscribed, centered 36x24 unit rectangle touched the circumference at the corners exactly as it should.
[Attachment 227495: the PowerPoint circle with the inscribed 36x24 rectangle]

I confirmed the latter by using Fiji (ImageJ, formerly NIH Image), which I know with certainty can accurately count pixels and measure areas.
[Attachment 227497: the Fiji pixel-count measurement]

2,343 ÷ 112,888 = 0.0208, i.e. ~2.08%.
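(A quick cross-check of those two figures, using nothing beyond the numbers already quoted: the diagonal of the rectangle should equal the circle's diameter if the corners just touch the circumference, and the Fiji counts give the excluded fraction directly.)

```latex
% Diagonal of a 36 x 24 rectangle -- should match the ~43.2-unit circle above:
\sqrt{36^2 + 24^2} = \sqrt{1872} \approx 43.27
% Excluded-corner fraction from the Fiji pixel counts:
\frac{2{,}343}{112{,}888} \approx 0.0208 \;\; (\approx 2.08\%)
```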
Maybe the difference stems from your using square pixels of finite size rather than infinitesimal steps of size tending to zero. Your arcs, for example, will cross pixels, so part of a pixel is inside the image circle and part isn't. Is such a pixel counted as in or out in your calculation?
 
I think your conception of optics is a bit idealistic tbh.

Possibly.

Is your argument here that because an extreme and somewhat contrived situation is unacceptable, every gradation between that and your ideal setup must also be rejected? If it is a continuum, why is zero the only acceptable position?

We're being asked to accept that 19.96 mm is ok. Why is that number ok and not some other number? Where is the limit on what's acceptable for stretching if lens manufacturers are going to require cameras/software to do that?

What's wrong with requesting/demanding that the information gained that led to that decision be provided?

Doesn't anyone else care about what Canon's doing here?

Are you all just sheeple here?

Is it all ok just because Canon does it? (That rests on the "Canon's #1 in the marketplace, therefore anything Canon does is automatically right" line, which is the most boring and intellectually bankrupt argument ever.)

If we can't ask such questions of Canon then people are being forced into a blind-faith situation with Canon - one person's empirical tests make no difference there.

As for faith/evidence, you clearly have an entrenched view but haven't presented anything to support it except high-minded principles (such as your comment on "separating beams of light" above), while Neuro has asked for evidence. And somehow you are turning that into him being blinded by faith in the new optics?

Not principles, theory. And as I've alluded to (if not said), providing the test framework to actually validate Canon's position is very difficult and certainly beyond my ability - if not the ability of most (and Neuro's test does not qualify). I wish I could do the required testing, but I can't, and I doubt anyone who isn't Canon can, which just sucks.

As it stands, Neuro has a theory that it doesn't make any noticeable difference, based on his eyeballing of images from different lenses. My theory is that because of what's being done, there should be a measurable difference in image quality when comparing stretched vs non-stretched images. The proper resolution is to do scientific testing to establish the facts; however, the barrier to doing that is higher than either I or Neuro can manage.
 
Incidentally, maybe Gemini did get it wrong. I took a pragmatic approach rather than a mathematical one, made a circle with a diameter of 39.32 units (19.66 x 2) and centered a 24 x 36 unit rectangle on it, then measured the area of the excluded portions of the rectangle vs the whole rectangle (pixel counts of a screenshot, but that would not affect a % measurement). It came out to ~2.07% of the FF sensor area, i.e. worse than Gemini calculated.

Always show your working is what they say in exams - which is why I screen grabbed the equations.
 
I had ChatGPT make a similar drawing (to scale) and subsequently calculate the percentage of the area of the dark corners; the result is 1.48%.

Edit: ChatGPT calculated the result using integral calculus, i.e. a different method from Gemini's.

Can you include the equations as screen grabs? There should be other methods, e.g. calculating the areas of the rectangles and then the areas of the circular segments.
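For what it's worth, here is one such route worked by hand, a sketch of the direct-integration approach rather than necessarily what ChatGPT did. It assumes a 36 × 24 frame centred in a circle of radius 19.96, with half-width a = 18 and half-height b = 12 introduced here just for the illustration.

```latex
% Area of one clipped corner: the part of the quadrant rectangle
% 0 <= x <= a, 0 <= y <= b lying outside the circle x^2 + y^2 = R^2.
% The arc leaves the top edge (y = b) at x_1 = sqrt(R^2 - b^2).
A_{\text{corner}} = \int_{x_1}^{a} \left( b - \sqrt{R^2 - x^2} \right) dx,
\qquad x_1 = \sqrt{R^2 - b^2}.

% With a = 18, b = 12, R = 19.96: x_1 \approx 15.95, and
A_{\text{corner}} \approx 24.6 - 21.4 \approx 3.2,
\qquad
\frac{4\,A_{\text{corner}}}{36 \times 24} \approx \frac{12.8}{864} \approx 1.48\%.
```

That lands on the same ~1.48% as the integral-calculus answer quoted above.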
 
Maybe the difference stems from your using square pixels of finite size rather than infinitesimal steps of size tending to zero. Your arcs, for example, will cross pixels, so part of a pixel is inside the image circle and part isn't. Is such a pixel counted as in or out in your calculation?
Thanks, Alan – that is the correct explanation! The light of an image circle falling on a sensor would still be quantized into discrete pixels, but my ‘pixels’ were much larger than those on a sensor. When I repeated the previous steps with 10-fold larger shapes, which would minimize the effect of quantization error, my value came out to 1.46%, and I’ll take that as close enough to the AI-generated answers.

Much appreciated!
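For anyone who wants to replicate that scaling experiment numerically rather than by drawing, here is a rough sketch. The "classify each square pixel by its centre" rule and the resolutions chosen are my own assumptions, not how PowerPoint, Fiji, or the screenshots actually behave; the point is only that a coarse grid gives a noticeably different fraction than a fine one, and the estimate settles near the ~1.48% analytic value as the pixels shrink.

```python
# Sketch of the quantization effect described above: estimate the fraction
# of a 36 x 24 frame lying outside a circle of radius 19.96 by counting
# square 'pixels', classifying each by whether its centre is inside the
# circle (one possible in/out rule; real tools may decide differently).

def excluded_fraction(pixels_per_unit: int) -> float:
    R = 19.96
    a, b = 18.0, 12.0                    # half-width, half-height of the frame
    nx = int(2 * a * pixels_per_unit)    # pixels across the frame
    ny = int(2 * b * pixels_per_unit)    # pixels down the frame
    step = 1.0 / pixels_per_unit
    outside = 0
    for i in range(nx):
        x = -a + (i + 0.5) * step        # pixel-centre coordinates
        for j in range(ny):
            y = -b + (j + 0.5) * step
            if x * x + y * y > R * R:    # centre falls outside the image circle
                outside += 1
    return outside / (nx * ny)

for ppu in (1, 2, 10, 50):               # coarser vs finer 'pixels'
    print(ppu, f"{excluded_fraction(ppu):.4%}")
```

The coarse settings wander away from the true value because only a handful of boundary pixels straddle the arc; the fine settings converge toward roughly 1.48%.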
 
We're being asked to accept that 19.96 mm is ok. Why is that number ok and not some other number? Where is the limit on what's acceptable for stretching if lens manufacturers are going to require cameras/software to do that?

What's wrong with requesting/demanding that the information gained that led to that decision be provided?

Doesn't anyone else care about what Canon's doing here?

Are you all just sheeple here?

Is it all ok just because Canon does it? (That rests on the "Canon's #1 in the marketplace, therefore anything Canon does is automatically right" line, which is the most boring and intellectually bankrupt argument ever.)

If we can't ask such questions of Canon then people are being forced into a blind-faith situation with Canon - one person's empirical tests make no difference there.



Not principles, theory. And as I've alluded to (if not said), providing the test framework to actually validate Canon's position is very difficult and certainly beyond my ability - if not the ability of most (and Neuro's test does not qualify). I wish I could do the required testing, but I can't, and I doubt anyone who isn't Canon can, which just sucks.

As it stands, Neuro has a theory that it doesn't make any noticeable difference, based on his eyeballing of images from different lenses. My theory is that because of what's being done, there should be a measurable difference in image quality when comparing stretched vs non-stretched images. The proper resolution is to do scientific testing to establish the facts; however, the barrier to doing that is higher than either I or Neuro can manage.
For the lens in question with a 19.96 mm image height, we are discussing ~1.5% of the resulting image, and that ~1.5% is in the extreme corners of the image. If you want to get your proverbial panties in a twist over that, as you seem to be doing, go right ahead.

I suspect most people don’t care because it’s ~1.5% of the image and it’s the extreme corners of the image.

If something absolutely critical to your image is located in the highlighted area of the frame below, then I'd suggest you need to reframe your shot.

[Attached screenshot: full frame with the corner regions outside the image circle highlighted]

Even without the need for digital correction to fill those corners, that's where lenses perform their worst.
 
We're being asked to accept that 19.96 mm is ok. Why is that number ok and not some other number? Where is the limit on what's acceptable for stretching if lens manufacturers are going to require cameras/software to do that?
No, we're being presented with products and are able to buy them or not, based on a whole raft of factors.
What's wrong with requesting/demanding that the information gained that led to that decision be provided?
You can do it, but you can't possibly believe they will respond.
Are you all just sheeple here?
Do you think we are, just because we disagree with you? If so, don't expect civility henceforth.
If we can't ask such questions of Canon then people are being forced into a blind-faith situation with Canon - one person's empirical tests make no difference there.
Ask away, but if you constantly reject all responses that don't agree with your preconceptions, and start to call people names on that basis, you'll get little more than contempt in future.
Not principles, theory. And as I've alluded to (if not said), providing the test framework to actually validate Canon's position is very difficult and certainly beyond my ability - if not the ability of most (and Neuro's test does not qualify). I wish I could do the required testing, but I can't, and I doubt anyone who isn't Canon can, which just sucks.

As it stands, Neuro has a theory that it doesn't make any noticeable difference, based on his eyeballing of images from different lenses. My theory is that because of what's being done, there should be a measurable difference in image quality when comparing stretched vs non-stretched images. The proper resolution is to do scientific testing to establish the facts; however, the barrier to doing that is higher than either I or Neuro can manage.
To me it sounds like you have a bee in your bonnet, with little reason, and are having a minor tantrum and lashing out at people for daring to feel differently.

I don't really care how the lenses are designed, I care how much they cost, and what sort of images they produce. I suspect that is also how most consumers choose. Feel free to disdain that approach (I suspect you will).
 