Canon EOS RP Specifications & Images

Foveon sensors sample RGB at every pixel site. Their color-detail reproduction is only slightly better than that of a Bayer sensor of the same physical resolution. By "slightly" I mean something you can pick up in some areas while pixel peeping, but which is meaningless in print. A 100 MP Bayer image will absolutely look better in print than a 25 MP Foveon image, assuming a print size large enough to tell the difference (since 25 MP can saturate many common print sizes).

The other problem with this idea is that the CFA is fixed, so you can't switch into a 25 MP mode where every pixel samples RGB. A pixel covered by a green filter still only sees green. (Though that is a gross simplification, per Michael Clark's posts. He is correct that the passbands of the three filters overlap.)

Foveon: Foveon suffers from the fact that different wavelengths penetrate to different depths in a silicon block. The deepest layer receives strongly attenuated light in its wavelength region. Hence you get different QEs (quantum efficiencies) for the different colors, which counteracts the great principle of evaluating three different wavelength (color) channels.
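To make the depth argument concrete, here is a minimal Beer-Lambert sketch in Python. The absorption lengths and layer depths below are rough, illustrative assumptions, not Foveon's actual stack dimensions:

```python
import numpy as np

# Beer-Lambert: the fraction of light absorbed between depths z1 and z2
# is exp(-z1/L) - exp(-z2/L), where L is the wavelength-dependent
# absorption length in silicon (blue is absorbed near the surface,
# red penetrates much deeper). All numbers are illustrative assumptions.
absorption_length_um = {"blue": 0.4, "green": 1.5, "red": 3.5}
layers_um = [(0.0, 0.2), (0.2, 0.6), (0.6, 2.0)]  # hypothetical 3-layer stack

for color, L in absorption_length_um.items():
    absorbed = [np.exp(-z1 / L) - np.exp(-z2 / L) for (z1, z2) in layers_um]
    print(color, [f"{a:.2f}" for a in absorbed])
# The deepest ("red") layer sees strongly attenuated light, so its
# effective QE differs from the upper layers' - exactly the imbalance
# described above.
```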

CFA issues: the CFA must NOT be changed to go from 100 MP Bayer image reconstruction to pixel binning of four R-G-G-B pixel quadruplets. NOT changing the CFA is exactly what lets you sample ALL COLOR CHANNELS for ONE FINAL IMAGE PIXEL. Trying to show that in ASCII graphics:

CAMERA in HIGH RESOLUTION MODE
100 MPixel Bayer sensor read out to a 100 MPixel image (= a lot of deriving non-measured per-pixel data from RAW):

R G                         RGB RGB
        = debayering =>
G B                         RGB RGB

CAMERA in HIGH COLOR QUALITY MODE + low light
100 MPixel Bayer sensor read out to a 25 MPixel image (= taking measured values from RAW):

R G
        = binning =>        RGB
G B
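A minimal sketch of the binning mode in Python (assuming an RGGB pattern and a NumPy array of raw values; averaging the two green sites is my assumption of one plausible binning rule):

```python
import numpy as np

def bin_rggb(raw):
    """Bin an RGGB Bayer mosaic into quarter-resolution full-color pixels:
    each 2x2 quad contributes one measured R, one measured B, and the
    average of its two measured G values - no values are interpolated."""
    r = raw[0::2, 0::2]                              # red sites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0    # two green sites per quad
    b = raw[1::2, 1::2]                              # blue sites
    return np.stack([r, g, b], axis=-1)              # shape (H/2, W/2, 3)

# A 4x4 mosaic becomes a 2x2 full-color image:
raw = np.arange(16, dtype=np.float64).reshape(4, 4)
print(bin_rggb(raw).shape)  # -> (2, 2, 3)
```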

Switching the mode in the field helps to reduce file size under certain circumstances while retaining the qualities needed in those cases.
 
Your understanding is based on the false assumption that only "green" light (or light between, say, 480-580 nanometers) gets p [...]

It's also based on the false assumption that our color reproduction systems use the same three colors as the colors of a Bayer filter array as primaries. [...]

(1) I know very well that I can see sodium light (yellow), and that it is essential that the green and red cones have Gaussian-like sensitivity spectra with overlap: yellow light triggers BOTH cone types, which together say "hey, this is neither green nor red, it is in between", and orange light means a higher response in the "red" cone than in the "green" cone.
So your assumption about what I assumed is false :)
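A toy calculation of that overlap (the Gaussian peaks and width below are illustrative stand-ins, not real cone data):

```python
import numpy as np

def cone_response(lam_nm, peak_nm, sigma_nm=35.0):
    """Gaussian-like cone sensitivity - illustrative, not measured data."""
    return np.exp(-((lam_nm - peak_nm) ** 2) / (2 * sigma_nm ** 2))

for lam in (550, 589, 620):  # green, sodium yellow, orange
    g = cone_response(lam, peak_nm=530)  # "green" (M) cone
    r = cone_response(lam, peak_nm=575)  # "red" (L) cone
    print(f"{lam} nm: green-cone={g:.2f}, red-cone={r:.2f}")
# At 589 nm BOTH cones fire; the red:green ratio is what encodes
# "yellow/orange" - without the spectral overlap this would be lost.
```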

(2) I never assumed that the R, G, and B sensitivity spectra are the same for eyes (they even differ between people) and for all our technical devices. I know that manufacturers have to choose from existing dyes, which must not only have the right spectral response but also show long-term stability and be manageable in the CFA production process. The same goes for monitors, where the spectra of the LEDs used may be adjustable only within narrow limits. ... And that is the reason why Samsung introduced quantum dots as light converters from blue to R and G: they are tunable.

Please see my post above, where I try to picture the advantage of having a 25 MPix mode on a 100 MPix camera where pixel binning allows for full-color pixels. You still have to apply some functions to get from the sensor's raw ADC values to the RGB values in the final image; for sure, that always has to be done.
 
Massive processing. Ansel was a master in the darkroom. But he also had skill. And patience. And determination. Ansel would wait hours or even days for that perfect moment of light and then enhance what he got in the darkroom. Now we have photographers who want 234.2 stops of DR so they can drive to a location in any conditions, take one image, and then spend an hour in Photoshop putting in all the bits that weren't there. Light included. Don't get me wrong, I love DR. I shoot wildlife, so for me more would be better, but I do get the feeling that those who harp on about it the most tend to want it to make up for shortcomings in the things that Ansel had in spades.

It may leave the impression that low DR improves patience and compositional skills.
I'm not sure which shortcomings exactly you can overcome with higher DR, apart from the inability to capture high-contrast scenes. I'm confident you can't overcome a lack of compositional skill or a lack of patience and effort. If you're willing to make fake skies or fake light, high or low DR won't stop you from doing that. And on the other hand, having high DR won't make you less skillful, just as low DR won't give you any additional skills.
 
In the good old days they were just limited technically compared to the digital era. However, they used very heavy postprocessing: dodging and burning, chemicals to increase contrast, etc. Some of the most famous landscapes were made with heavy processing.
Actually, postprocessing has nothing to do with the topic. It does not increase the DR of the sensor. It plays a different role: to compress the dynamic range of the captured image into a very shallow range of densities of the output medium (paper) without losing "natural" local contrast.
 

Aussie shooter

https://brettguyphotography.picfair.com/
It may leave the impression that low DR improves patience and compositional skills.
I'm not sure which shortcomings exactly you can overcome with higher DR, apart from the inability to capture high-contrast scenes. I'm confident you can't overcome a lack of compositional skill or a lack of patience and effort. If you're willing to make fake skies or fake light, high or low DR won't stop you from doing that. And on the other hand, having high DR won't make you less skillful, just as low DR won't give you any additional skills.

Having high DR will allow you to shoot in poor light and even the exposure out in post while retaining acceptable quality, far more than in days gone by. Then adding all the BS can be achieved more effectively. You no longer have to wait for the light. Now, like I said, DR is great, and the more the better for certain applications. But it has allowed less skilled and less dedicated photographers to get results. Again, that is fine, but like I said, those who are obsessed with it are likely the ones to abuse it. I guess it would be my opinion that the more the technology advances, the less photographers seem to understand what they are doing. Because they don't need to. Knowing and understanding light is not as important as it used to be. Composition, of course, is not affected, but that is about all.
 
Actually, postprocessing has nothing to do with the topic. It does not increase the DR of the sensor. It plays a different role: to compress the dynamic range of the captured image into a very shallow range of densities of the output medium (paper) without losing "natural" local contrast.

All true, however I don't think anybody claimed that postprocessing increases DR.
 
Postprocessing? I didn't. I was responding to this post https://www.canonrumors.com/forum/i...ecifications-images.36678/page-28#post-763821 and the statement that the good old photographers relied on skill, whereas nowadays they do photoshopping. Even then, nobody claimed that postprocessing increases DR.
The claim in that post, as I understand it, was that higher DR makes it easier to produce an image of something that never existed in the first place. Not a reproduction of what the eye can see in a natural setting, but a kind of photocollage, with collage techniques trying to mask deficiencies in photographic ("drawing with light") skills.

Burning and dodging were actually "prepress" (lab technician) skills, not photographic skills. You did not need to do burning and dodging for a slide projector.
 
Having high DR will allow you to shoot in poor light and even the exposure out in post while retaining acceptable quality, far more than in days gone by. Then adding all the BS can be achieved more effectively. You no longer have to wait for the light. Now, like I said, DR is great, and the more the better for certain applications. But it has allowed less skilled and less dedicated photographers to get results. Again, that is fine, but like I said, those who are obsessed with it are likely the ones to abuse it. I guess it would be my opinion that the more the technology advances, the less photographers seem to understand what they are doing. Because they don't need to. Knowing and understanding light is not as important as it used to be. Composition, of course, is not affected, but that is about all.

You don't generally compensate for bad light with postprocessing. It's either impossible or hugely time-consuming. Playing with shadows and highlights and/or the tone curve doesn't help. If there is faint but good light in the shadows, you'll be able to pull it out; if there's no light, you get nothing.
Maybe if your Photoshop-fu is beyond imagination and you can dodge and burn basically per pixel - but then it's the same as digital painting, and that requires high artistic skill.
 
The claim in that post was, as I understand it, that higher DR makes it easier to produce an image of what never existed in the first place. Not a reproduction of what an eye can see in the natural settings, but a kind of photocollage, with collage techniques trying to mask deficiencies in photographic ("drawing with light") skills.

Burning and dodging were actually "prepress" (lab technician) skills, not photographic skills. You did not need to do burning and dodging for a slide projector.

But again, even then it wasn't a claim that postprocessing increases DR. Vice versa: it's the higher DR that increases the room for post. I'd agree with that. But it doesn't compensate for the need for good composition and light.
 
About the DR and postprocessing posts:

The human eye has a DR of about 20 stops, including sensor adaptations (in the retina, according to light level), the iris, and the "sensor cell" DR.

If you look at a scene, the iris might regulate about 3 stops for an adult (6 ... 2 mm diameter, an area ratio of 9 ≈ 2^3.2) - younger people get more DR from iris diameter adaptation.

Changes between night and day vision might add another 3 stops, but this is a slow process in which the cells in the retina change their positions.

Finally, there is a DR of roughly 17 stops that we can see in a single scene. BW film has that DR, but all the usual media for displaying images currently lack it. That is ~131,000 levels of brightness (2^17 = 2^10 · 2^7 = 1024 · 128 = 131,072). To put that vast DR into e.g. 7-8 bits (128 ... 256 levels) on very good paper, or 10 bits on a good monitor, it is essential to postprocess the original image:
compress the DR of e.g. the 14 bits of a good sensor into 7-10 bits, globally or locally (dodging / burning), for the presentation medium - that's the idea of applying a tone curve locally or globally.
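A minimal sketch of such a global tone curve in Python (a simple power-law curve as a stand-in for whatever curve a raw converter actually applies; 14-bit linear input is assumed):

```python
import numpy as np

def tone_map(linear14, gamma=1 / 2.2):
    """Globally compress 14-bit linear sensor values (0..16383) into
    8-bit output (0..255) with a power-law tone curve: shadows get
    more output levels, highlights are compressed."""
    x = np.clip(linear14 / 16383.0, 0.0, 1.0)
    return np.round(255.0 * x ** gamma).astype(np.uint8)

print(tone_map(np.array([16, 1024, 16383])))  # -> [ 11  72 255]
```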

It is the same as 30 years ago in a (BW) darkroom, where we had paper gradations (1...5) from low contrast to high contrast and some other variables, like the exposure and the time the sheets spent in the developer bath. For local "corrections" you needed a mask made from paper or formed with your hands. And, very important: different paper types led to very different results - I liked the ORWO baryta paper most, which had great blacks (lots of silver) and fine transitions between the grays.

With Vantablack (the blackest black, which we would like in lens interiors and lens hoods; EDIT: only 99.965% absorption, leading to ~12 bits of DR on very white paper) as printer black (just kidding), or with OLED / micro-LED screens, a DR of 15 ... 20 bits seems possible, because it is just a matter of managing the electric current (not the voltage) through the LEDs, which is mostly proportional to their light output.
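The ~12-bit figure follows directly from the reflectance ratio; a quick check (assuming ~95% reflectance for very white paper):

```python
import math

white = 0.95      # assumed reflectance of very white paper
black = 0.00035   # Vantablack reflects 100% - 99.965% of incident light
ratio = white / black
print(f"{ratio:.0f}:1 contrast = {math.log2(ratio):.1f} stops")
# -> 2714:1 contrast = 11.4 stops (hence the ~12-bit figure above)
```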
 
Hi Yasko!

Sorry, but your arguments have already been disproved:

When you take a look at the recent Canon RF patents, you can see that even tele zooms can be made a little bit smaller, although this was not expected.
When you think about standard FL, WA, and UWA primes, we had a lot of discussions in this forum about how those lenses can be built smaller than their EF equivalents because of the shorter backfocus distance that is now possible.
Of course, if Canon decides on a more complex optical formula, or if they add macro functionality as on the RF 35, this could make them bigger again.

And if you compare the Canon RF 35mm f/1.8 IS Macro STM with the EF 35mm f/2 IS USM, you find this:
size (diameter x length):
74.4 x 62.8 mm vs. 77.9 x 62.6 mm

So the RF lens has the same length and a slightly smaller diameter, ALTHOUGH a macro feature was included AND the aperture got slightly wider.
So shallower DOF and lower ISO in the same package.

And as icing on the cake, the IQ / sharpness and CA have been improved over the already very good EF lens:
https://www.the-digital-picture.com...4&Sample=0&CameraComp=453&FLIComp=0&APIComp=0

So that would be my German rule of thumb.
Yes, they can make the lenses more compact, but not by an order of magnitude, as I read between your lines (just a quick interpretation, don't take it too seriously).
Look at the 28-70 f/2: it's a huge lens, and it makes compromises at the lower focal lengths compared to its 24-70 f/2.8 EF counterpart.

Where mirrorless FF shines is with compact primes. Anything else will more or less lead you to the compromise I described. (I am only talking about those people who want to be as compact as with smaller-sensor cameras. For me, FF is also about giving the sensor the best possible lens in front of it for the purpose. For compactness, that's just a prime.)
And if you compare it to the M system, this becomes more evident. Comparing APS-C EF DSLRs to a mirrorless full frame is of course valid, but e.g. an M5 with the 22 f/2 prime makes my point, too.
APS-C EF-M lenses will in general be smaller than their RF counterparts, as they do not need to cover the larger FF sensor.

Physics has not come off its hinges just because a new approach to photography rips out the mirror and brings the rear of the lens into closer proximity to the plane of focus.

I am, btw, not in any way saying that I don't like the R system. If I had the money to spend (and didn't already have a 6D Mark II), I would love to try it out and buy myself an EOS R :).
 
But again, even then it wasn't a claim that postprocessing increases DR. Vice versa: it's the higher DR that increases the room for post. I'd agree with that. But it doesn't compensate for the need for good composition and light.
I wonder... if I create a Photoshop plugin that automatically makes masks for daylight shadow areas, will it sell well?

About the DR and postprocessing posts:
The human eye has a DR of about 20 stops, including sensor adaptations (in the retina, according to light level), the iris, and the "sensor cell" DR.
If you look at a scene, the iris might regulate about 3 stops for an adult (6 ... 2 mm diameter) - younger people get more DR from iris diameter adaptation.
Changes between night and day vision might add another 3 stops, but this is a slow process in which the cells in the retina change their positions.
So there is a DR of roughly 17 stops that we can see in a single scene.
Have you actually tried it in practice?

My attempts tell me that I stop considering what I look at as a single scene long before my 5D2 loses the ability to capture it in one frame due to its DR limitations.
 
I wonder... if I create a Photoshop plugin that automatically makes masks for daylight shadow areas, will it sell well?

There are already plugins for creating luminosity masks; you can make masks for highlights, shadows, or even the midtones. Luminosity masking is a powerful tool, but by no means does it create non-existent light.
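For illustration, the core of a luminosity mask is just a per-pixel function of luminance. A minimal shadows-mask sketch (the Rec. 709 luma weights are standard; the thresholds are arbitrary choices):

```python
import numpy as np

def shadows_mask(rgb, lo=0.0, hi=0.33):
    """Soft mask selecting dark pixels: 1.0 at luminance <= lo,
    fading to 0.0 at luminance >= hi. rgb is float in 0..1."""
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    return np.clip((hi - lum) / (hi - lo), 0.0, 1.0)

# Brightening only the masked shadows still needs light that was
# actually captured there - the mask itself adds nothing.
```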
 
I am German, in fact I live in Hesse, the German state where Leitz/Leica is located, so I am allowed to write such a comment about the Leica M3 ;). I didn't say that the M3 was not a good camera; in fact, it offered the best rangefinder technology of its time when it hit the market. Such a bright and precise rangefinder was a revolution for 35mm cameras. But Leica needed a few more years to move to a design as clean as the Canon P already had in the late '50s - in fact, the M6 has it, and for me it is the most beautiful and ageless Leica ever made. But that's a matter of personal preference. A well-working M3 is of course a gem, no question.

That said, I have and use two Canon 7 rangefinders when I shoot film. They are not such beauties, but the Seven was the most capable rangefinder ever made for the old Leica M39 screw mount.
To me, no prettier camera was ever made than the M7 with the 50 Lux on it; now that is gorgeous. My second favorite is the black M4.
 
There are already plugins for creating luminosity masks; you can make masks for highlights, shadows, or even the midtones. Luminosity masking is a powerful tool, but by no means does it create non-existent light.
Luminosity masks are totally not what I am talking about. I am talking about recognizing the shadows (and creating masks for them) as objects in the picture, the way my team at work does with cars and pedestrians.
 

Maximilian

Yes, they can make the lenses more compact, but not by an order of magnitude, as I read between your lines (just a quick interpretation, don't take it too seriously).
Look at the 28-70 f/2: it's a huge lens, and it makes compromises at the lower focal lengths compared to its 24-70 f/2.8 EF counterpart.
I was not referring to such lenses. I know optical physics, and I know that it cannot be bent. ;)

Where mirrorless FF shines is with compact primes.
THIS!!! Especially if those are WA or UWA primes. And exactly this is what I was referring to in my original post that you've quoted,
as you could read here:
Now let's go for some native small primes :)

And then your reply got me confused.
Now I see that we are almost in line in our opinions and wishes.
But it sounded different in your first reply.
 