IBIS and 100mp coming to an EOS R camera? [CR2]

Since half the pixels of a Bayer-masked sensor are filtered for green light, and the green range is what human eye/brain systems use to discriminate fine details, the effective resolution of a Bayer-masked sensor is NOT one-fourth the resolution of the sensor. It's much closer to one-half, as tested using alternating black and white lines of decreasing width.

I seem to recall someone saying they often sharpen the blue channel selectively to increase sharpness in their images, which led me to suspect that it is the blue information that provides the fine resolution.
Or am I misunderstanding something?
 
Except there are no optical super telephotos for MFT (or anything close), and there likely never will be: given the small image circle, it makes no sense to hang a gigantic lens on a tiny camera. Plus, I don't want an MFT-sized camera; I want a 5D-sized body to pair with such a lens.

Imagine how cool it would be with an optical 400mm or 600mm and being able to switch with a button press to crop mode, or go to a wider crop. A lens like a 400mm DO would have a whole new utility to it.

Then there's the upcoming Olympus OM-D E-M1X, which is the size of a 1D-series Canon, with 18 fps and 20 MP. Fit the native Oly 300mm and you have a compact super-tele system. They also offer TCs that inexpensively give you the reach of even longer lenses.
 
I seem to recall someone saying they often sharpen the blue channel selectively to increase sharpness in their images, which led me to suspect that it is the blue information that provides the fine resolution.
Or am I misunderstanding something?

Early digital camera files were often examined for blue-channel noise, which was a real issue as ISOs climbed.
That no longer seems to be a problem.
Sharpening that channel alone would make the image noisier, though noise can give the impression of increased sharpness. Selective channel sharpening and other sharpening exercises have long been part of the photo community's quest for the magic juju that somehow transforms images from average to excellent. Sharpening any channel will have varying effects depending on the content. OTOH, absent extreme sharpening in a channel, the effects will be largely invisible.
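(For anyone curious what channel-selective sharpening looks like in practice, here's a minimal sketch: an unsharp mask applied to the blue channel only. The file names and parameters are made-up examples.)

```python
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

# Load an RGB image as float in [0, 1]; "photo.jpg" is a placeholder.
img = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float64) / 255.0

# Unsharp mask on the blue channel only: add back a fraction of the
# difference between the channel and a Gaussian-blurred copy of it.
amount, radius = 0.8, 2.0
blue = img[..., 2]
img[..., 2] = np.clip(blue + amount * (blue - gaussian_filter(blue, sigma=radius)), 0, 1)

Image.fromarray(np.uint8(img * 255)).save("photo_blue_sharpened.jpg")
```

As the post notes, this mostly boosts edge contrast (and noise) in that channel rather than recovering real detail.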
 
Marginal utility explained

Greater than marginal improvements in dynamic range would likely require a new sensor technology and great improvements in computing power, power management, etc. I'll not hold my breath waiting for these technical leaps.

Further improvements in DR are limited by physics. 16-bit files can help, but we are already very close to the limits the natural world allows.
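(To put rough numbers on that physics argument: engineering dynamic range is bounded by full-well capacity over read noise, and the file's bit depth caps what can be encoded. A quick sketch; the sensor numbers are illustrative assumptions, not specs of any real camera.)

```python
import math

full_well_e = 80_000   # electrons at saturation (assumed)
read_noise_e = 1.5     # electrons RMS (assumed)

# Engineering DR in stops: log2(brightest recordable signal / noise floor).
dr_stops = math.log2(full_well_e / read_noise_e)
print(f"{dr_stops:.1f} stops")   # ~15.7 stops

# A linear 14-bit raw file can encode at most 14 stops, so 16-bit files
# add headroom; but photon shot noise (the square root of the signal)
# still limits how much of that range carries usable information.
```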
 

Michael Clark

I seem to recall someone saying they often sharpen the blue channel selectively to increase sharpness in their images, which led me to suspect that it is the blue information that provides the fine resolution.
Or am I misunderstanding something?

The human eye/brain vision system relies on wavelengths in the middle of the visible spectrum for perception of details. Those wavelengths are squarely within the band of wavelengths for which the green filter in Bayer-masked sensors is most efficient. Wavelengths near the blue end of the visible spectrum are the hardest for the human eye to resolve. That's why a green or red LED will look sharper across the room than a blue LED will. Our eyes are not able to focus blue light as well as they can focus green and, to a lesser extent, red light.

Speaking of which, the "red" cones in our retinas are actually centered on a wavelength that is more like "yellow-green" than red. There's a LOT of overlap between the M (green) and L (red) cones, and much less overlap between the M (green) and S (blue) cones. It's the difference between the M and L cone responses that produces the perception of red in our brains. Likewise, the "red" filters in most Bayer masks are closer to yellow than to red. Trichromatic vision does not require that the capture device and the display device be most responsive to the same three colors.
 
The human eye/brain vision system relies on wavelengths in the middle of the visible spectrum for perception of details. Those wavelengths are squarely within the band of wavelengths for which the green filter in Bayer-masked sensors is most efficient. Wavelengths near the blue end of the visible spectrum are the hardest for the human eye to resolve. That's why a green or red LED will look sharper across the room than a blue LED will. Our eyes are not able to focus blue light as well as they can focus green and, to a lesser extent, red light.

Speaking of which, the "red" cones in our retinas are actually centered on a wavelength that is more like "yellow-green" than red. There's a LOT of overlap between the M (green) and L (red) cones, and much less overlap between the M (green) and S (blue) cones. It's the difference between the M and L cone responses that produces the perception of red in our brains. Likewise, the "red" filters in most Bayer masks are closer to yellow than to red. Trichromatic vision does not require that the capture device and the display device be most responsive to the same three colors.
Thanks for that explanation!
 
You can take the marketing numbers literally: a 24-megapixel sensor has 12 million green photosites and 6 million each of red and blue. The same goes for screen resolutions: for Live View and the viewfinder, Canon at least quotes 2.1 million dots when it should say 0.7 million dots of each color (i.e. 0.7 megapixels, a.k.a. not even HD).

By the time you open a raw file in any editor, a step called debayering (demosaicing) will have been applied to it; for SOOC JPEGs this is done in camera. It interpolates information from the surrounding photosites at each single-colour photosite to estimate the two missing colour components, forming an actual pixel in the digital sense.
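(For the curious, a minimal sketch of the idea using plain bilinear interpolation on an RGGB mosaic; real converters use far more sophisticated algorithms, so treat this as an illustration only.)

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic):
    """Bilinear demosaic of an RGGB Bayer mosaic (H x W float array).

    Each photosite records one colour; the two missing components of
    every pixel are interpolated from neighbouring photosites. For a
    "24 MP" sensor that means 12 M green + 6 M red + 6 M blue samples
    become 24 M full RGB pixels.
    """
    h, w = mosaic.shape
    red = np.zeros((h, w), bool);  red[0::2, 0::2] = True
    blue = np.zeros((h, w), bool); blue[1::2, 1::2] = True
    green = ~(red | blue)          # green occupies half of all sites
    # Standard bilinear interpolation kernels for Bayer-pattern sampling.
    k_green = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    rgb = np.zeros((h, w, 3))
    for ch, mask, k in ((0, red, k_rb), (1, green, k_green), (2, blue, k_rb)):
        rgb[..., ch] = convolve(mosaic * mask, k, mode='mirror')
    return rgb
```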

Thanks. I didn't know that :oops:
 

Talys

Then there's the upcoming Olympus OM-D E-M1X, which is the size of a 1D-series Canon, with 18 fps and 20 MP. Fit the native Oly 300mm and you have a compact super-tele system. They also offer TCs that inexpensively give you the reach of even longer lenses.

I don't think you understand. MFT (or APS-C) gives you more reach by increasing pixel density, so you get the "equivalent" of a 600mm, for example. However, if you were to put an optical 600mm in front of the same pixel density, you'd have even more reach (like 900mm or more). In other words, a 100-megapixel full frame would offer us the benefit of a super-high-density sensor PLUS a long focal length.

Olympus will never make lenses like an optical 600/4 for MFT, because it makes no sense to. Those lenses would be exactly the same size as an EF (or RF) 600/4, and anyone buying one would just mount it on a bigger body with a bigger sensor. I mean, why not spend a little more for the bigger sensor, when its relative cost is very small compared to the $10,000 lens, and the relative weight/size of the body is immaterial next to the big lens? In other words, put behind the lens the largest sensor its image circle will cover.

It's totally fair to say that you're happy with a higher-density sensor and a shorter FL lens. But all I'm saying is, this lets us have the higher-density sensor with the longer FL lens for super-duper reach, or use an existing excellent shorter FL lens to achieve greater reach. Or, to have more flexibility with whichever lens happens to be mounted.

My concern is that either the deep crop or the reduced image is not as pleasing as, for example, a 1DX with a 600mm and an extender.
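(To put rough numbers on the "reach" argument above: the subject's image size scales with focal length, and the pixels it covers scale with linear pixel density. A back-of-the-envelope sketch, with made-up subject and approximate sensor numbers.)

```python
def pixels_on_subject(subject_m, distance_m, focal_mm, sensor_w_mm, sensor_px_w):
    """Approximate pixels a subject spans across the frame (thin-lens)."""
    image_mm = subject_m * focal_mm / distance_m   # size of subject on sensor
    return image_mm * sensor_px_w / sensor_w_mm

# A 0.15 m bird at 30 m, same angle of view in both setups:
print(pixels_on_subject(0.15, 30, 600, 36.0, 12288))  # 100 MP FF + 600mm: ~1024 px
print(pixels_on_subject(0.15, 30, 300, 17.3, 5184))   # 20 MP MFT + 300mm: ~450 px
```

Same framing, but the long lens on the dense full-frame sensor puts more than twice as many pixels on the bird, which is the point being made.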
 
You're probably mixing up MTF/resolution and 'information'. Again, sharpening may increase visual sharpness, but it never actually recovers anything. You always lose some detail, sacrificing it for 'sharpness'; best case it's imperceptible, but it's a loss, and it may affect further postprocessing.
You're applying binary thinking to a physical phenomenon which is not binary. An AA filter shifts the MTF curve slightly. In theory that should result in a loss of resolution (ability to distinguish line pairs) at the very extinction limit of the MTF curve. In practice you would be very hard pressed to observe the loss even while pixel peeping a resolution chart because contrast at extinction is so low. The difference you can actually see with your own eyes occurs at higher contrast levels. It's also a difference which can be mitigated through software.

Note that analysis software which takes a resolution chart image and spits out a single number does so based on a specific contrast point that is close to but above actual extinction. You can sharpen the AA filter image before feeding it to the software and get the same output resolution number. By the same token you can, to a point, sharpen either image to boost the output numbers. In other words: the software's output is an imperfect approximation of reality, not the gospel truth. And it couldn't be any other way since resolution is not a single number. It's a curve.
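(Here's a toy numerical version of that argument, using a Gaussian blur as a stand-in for the AA filter and a simple unsharp mask; all parameters are invented for illustration.)

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def contrast(s):
    """Michelson contrast of the central portion of a 1-D signal."""
    s = s[100:-100]
    return (s.max() - s.min()) / (s.max() + s.min())

x = np.arange(4096)
for f in (0.05, 0.15, 0.25, 0.35):                  # spatial frequency, cycles/pixel
    target = 0.5 + 0.5 * np.sin(2 * np.pi * f * x)  # test target, contrast 1.0
    aa = gaussian_filter1d(target, sigma=1.0)       # stand-in for an AA filter
    usm = aa + 0.9 * (aa - gaussian_filter1d(aa, sigma=1.5))  # unsharp mask
    print(f"{f:.2f}  AA: {contrast(aa):.2f}  sharpened: {contrast(usm):.2f}")
```

Contrast falls off gradually with frequency (a curve, not a cliff), and the sharpened version measures higher at every frequency shown, which is why a single chart-derived number can be pushed around by processing.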



Sharpening does not sacrifice image data (except perhaps in extreme cases or with poor algorithms) and can in fact allow recovery of some information.
 

Joules

You're probably mixing up MTF/resolution and 'information'. Again, sharpening may increase visual sharpness, but it never actually recovers anything. You always lose some detail, sacrificing it for 'sharpness'
Have you read my reply to you on the previous page? There are definitely sharpening methods which truly recover detail. I'm not denying that losing some may be a side effect of other methods, but saying that it is always the case seems wrong to me.
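(One concrete example of a method in that category is deconvolution, which inverts a known or estimated blur rather than merely boosting edge contrast. A minimal sketch with scikit-image; the Gaussian PSF here is an assumption standing in for a measured one.)

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage import data, restoration

image = data.camera() / 255.0                  # built-in sample image

# Simulate a known blur (lens + AA filter) with a small Gaussian PSF.
y, x = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x**2 + y**2) / (2 * 1.2**2))
psf /= psf.sum()
blurred = fftconvolve(image, psf, mode='same')

# Richardson-Lucy deconvolution iteratively estimates the scene that,
# convolved with the PSF, best explains the observed image.
restored = restoration.richardson_lucy(blurred, psf, num_iter=30)
```

Unlike an unsharp mask, this genuinely reassigns light toward where it came from, within limits set by noise and by how well the PSF is known.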
 
I don't think you understand. MFT (or APS-C) gives you more reach by increasing pixel density, so you get the "equivalent" of a 600mm, for example. However, if you were to put an optical 600mm in front of the same pixel density, you'd have even more reach (like 900mm or more). In other words, a 100-megapixel full frame would offer us the benefit of a super-high-density sensor PLUS a long focal length.

Olympus will never make lenses like an optical 600/4 for MFT, because it makes no sense to. Those lenses would be exactly the same size as an EF (or RF) 600/4, and anyone buying one would just mount it on a bigger body with a bigger sensor. I mean, why not spend a little more for the bigger sensor, when its relative cost is very small compared to the $10,000 lens, and the relative weight/size of the body is immaterial next to the big lens? In other words, put behind the lens the largest sensor its image circle will cover.

It's totally fair to say that you're happy with a higher-density sensor and a shorter FL lens. But all I'm saying is, this lets us have the higher-density sensor with the longer FL lens for super-duper reach, or use an existing excellent shorter FL lens to achieve greater reach. Or, to have more flexibility with whichever lens happens to be mounted.

My concern is that either the deep crop or the reduced image is not as pleasing as, for example, a 1DX with a 600mm and an extender.

Trust me, I understand.
I am responding to your original idea of a crop mode on a high-density sensor. The µ43 sensor is already that sensor, without the extraneous real estate that goes unused.
The Oly 300 does have the same AOV as the FF 600, so there's no need to make the 600. Yes, the pixel density is high; that is what you proposed in the first post.

To wit: "If so, a 100 megapixel mirrorless could be a wonderful tool. For example, I can imagine that it could have a crop mode that would turn it into a 25 megapixel camera, using only the center 1/4 of the image circle at very high pixel density, yet filling up the EVF."

Your proposal is exactly the model of the µ43 but with the added bulk and expense of FF optics.
The only "advantage" is that at FF one would now have 100 MP, a dubious advantage.
 
I would note that in the only comparison I've seen of Canon's in-lens stabilization vs other makers' in-body stabilization, the in-lens won. The tester was Tony Northrup, who is pretty good at these things and included a chart. I would also note that a lot of landscape work is done on a tripod, and every manufacturer I've seen says to turn IBIS off on a tripod. Last of all, a lot of motion blur is removable in Photoshop with the motion-blur tool.
 
Trust me, I understand.
I am responding to your original idea of a crop mode on a high-density sensor. The µ43 sensor is already that sensor, without the extraneous real estate that goes unused.
The Oly 300 does have the same AOV as the FF 600, so there's no need to make the 600. Yes, the pixel density is high; that is what you proposed in the first post.

To wit: "If so, a 100 megapixel mirrorless could be a wonderful tool. For example, I can imagine that it could have a crop mode that would turn it into a 25 megapixel camera, using only the center 1/4 of the image circle at very high pixel density, yet filling up the EVF."

Your proposal is exactly the model of the µ43 but with the added bulk and expense of FF optics.
The only "advantage" is that at FF one would now have 100 MP, a dubious advantage.
~~~~
Is there a metric or spec for the ratio of total pixel area to total sensor area? Should there be?
 


Talys

Trust me, I understand.
I am responding to your original idea of a crop mode on a high-density sensor. The µ43 sensor is already that sensor, without the extraneous real estate that goes unused.
The Oly 300 does have the same AOV as the FF 600, so there's no need to make the 600. Yes, the pixel density is high; that is what you proposed in the first post.

To wit: "If so, a 100 megapixel mirrorless could be a wonderful tool. For example, I can imagine that it could have a crop mode that would turn it into a 25 megapixel camera, using only the center 1/4 of the image circle at very high pixel density, yet filling up the EVF."

Your proposal is exactly the model of the µ43 but with the added bulk and expense of FF optics.
The only "advantage" is that at FF one would now have 100 MP, a dubious advantage.
No, sorry, I must not be making myself clear.

Currently, the most reach (pixels on a small, distant object) you can get without super-exotic lenses is 50 megapixels behind a 600mm at f/4, plus extenders. With APS-C or MFT you don't get more density than a 5DS R, so you don't get more reach. You also need to let in enough light for snappy autofocus, so you can't just stack extender on extender and end up with an f/11 or worse lens.

100 megapixels would clearly increase that reach (again, more pixels on the faraway small object). There will probably never be an MFT or APS-C sensor with the same pixel density as a 100-megapixel full frame AND super telephotos with as much focal length as what will be available for a 100-megapixel full frame.

Mirrorless would be nice because a crop mode in the EVF makes composition and manual focus much easier than taking the shot and then checking it afterwards.

The closest thing today that works like what I describe is the a7R III, but not many people would be happy with it paired with EF super telephotos (slow AF, unusable with TCs); plus, 42 megapixels really isn't much different from 30 when it comes to cropping (surprisingly).
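(For concreteness, the extender arithmetic behind that: a teleconverter multiplies focal length and f-number by the same factor.)

```python
def with_tc(focal_mm, f_number, tc):
    """Effective focal length and f-number after a teleconverter."""
    return focal_mm * tc, f_number * tc

print(with_tc(600, 4.0, 1.4))                  # (840.0, 5.6)  - still AF-friendly
print(with_tc(600, 4.0, 2.0))                  # (1200.0, 8.0)
print(with_tc(*with_tc(600, 4.0, 2.0), 1.4))   # stacked TCs: (1680.0, 11.2)
```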
 
Upvote 0

100

There will probably never be an MFT or APS-C sensor with the same pixel density as a 100-megapixel full frame AND super telephotos with as much focal length as what will be available for a 100-megapixel full frame.

1" sensors are at 20mp (~150mp translated to FF), so 30 or 40 megapixel MFT sensors are certainly possible unless MFT dies before we get there.
 

Architect1776

Stitching counts as adding another pizza. :)

Not discounting your need, and I'm sure you're more experienced at building high-res images than I am. However, I think there's an open question regarding how much additional benefit you get by going to smaller and smaller photocells on 135-format sensors. I'm not convinced that solution won't create more problems than it solves, but I freely admit that I don't know where the tipping point is. I'm guessing it's less than 100 MP, but I suppose we won't know until somebody puts an ultra-high-res sensor out there.

Have you seen the photos from the current 120 MP Canon sensor? They look absolutely incredible.
 