Canon Inc. Boss Wants to See More Innovation

angrykarl said:
That's precisely what I said: The same exposure means the same aperture, shutter speed and ISO.

But how do you reckon a camera with, e.g., crop factor 10 manages the same exposure as a full-frame camera when the latter receives a hundred times more light over the whole sensor area? Sure, both receive the same number of photons per 1mm², but crop cameras usually have smaller pixels, so each pixel receives less light. How come they have the same brightness as full-frame ones? The crop must digitally boost the signal to provide the same exposure at the same ISO. And boosting a signal clamps dynamic range and amplifies noise.

Now what happens when you double the ISO on any camera? It cannot magically catch more photons, so it digitally boosts the signal, which surprisingly clamps dynamic range and amplifies noise...

That's what I meant by (I agree not exactly ideal term) "internal ISO".

But I am no camera engineer, so if you understand the field better, I am all ears. ;)

Yes, you are completely wrong; there is no "internal ISO".

Think of a sensor as a car park and rain as the photons that make a picture. The asphalt of the car park gets wet as it rains; it doesn't matter how big or small an area of the car park you measure, the wetness of all areas is the same. Same with a big sensor and a small sensor, they all get the same photons for the same exposure. The difference in image quality is because the smaller sensor data has to be enlarged more for the same sized output; on a phone screen it makes little difference, in a big print viewed up close there is a massive difference. The smaller sensor got fewer total photons so has less to show; as a unit area it is as wet as any other area of the car park, but it took less water/photons to get it there than a bigger area.
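The arithmetic behind the car-park analogy is easy to sketch. (The flux number below is made up purely for illustration; the sensor dimensions are nominal FF and APS-C sizes.)

```python
# Same exposure = same photon flux per unit area, for any sensor size.
flux = 1000  # photons per mm^2 for this exposure (illustrative number)

full_frame_area = 36 * 24    # mm^2
aps_c_area = 22.5 * 15.0     # mm^2

# The "wetness" per unit area is identical on both sensors...
photons_per_mm2_ff = flux
photons_per_mm2_crop = flux

# ...but the total photons collected scale with sensor area.
print(flux * full_frame_area)  # 864000 total photons on full frame
print(flux * aps_c_area)       # 337500.0 total photons on APS-C (~2.6x less)
```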
 
privatebydesign said:
Same with a big sensor and a small sensor, they all get the same photons for the same exposure. The difference in image quality is because the smaller sensor data has to be enlarged more for the same sized output; on a phone screen it makes little difference, in a big print viewed up close there is a massive difference. The smaller sensor got fewer total photons so has less to show; as a unit area it is as wet as any other area of the car park, but it took less water/photons to get it there than a bigger area.

First you said they get the same numbers of photons, then you said the smaller area gets fewer photons. Which is it?

privatebydesign said:
The difference in image quality is because the smaller sensor data has to be enlarged more for the same sized output, on a phone screen it makes little difference, in a big print viewed up close there is a massive difference.

Enlarged? What enlargement do you mean?

An image consists of pixels, and pixels are buckets collecting rain in your analogy. When you have a big area covered by ten big buckets and a small area covered by ten smaller buckets, the buckets with the smaller diameter inevitably catch less rain in the same amount of time. Right? So the smaller sensor's pixels catch less light. How come they have the same brightness as the big sensor's pixels?
 
angrykarl said:
privatebydesign said:
The difference in image quality is because the smaller sensor data has to be enlarged more for the same sized output, on a phone screen it makes little difference, in a big print viewed up close there is a massive difference.

Enlarged? What enlargement do you mean?

An image consists of pixels, and pixels are buckets collecting rain in your analogy. When you have a big area covered by ten big buckets and a small area covered by ten smaller buckets, the buckets with the smaller diameter inevitably catch less rain in the same amount of time. Right? So the smaller sensor's pixels catch less light. How come they have the same brightness as the big sensor's pixels?

Pixel size is irrelevant and a distraction. It is the total area covered by those pixels (buckets) that defines the amount of light you have captured: if you take the 5D3 with 22MP and the 5DSR with 50MP, give them the same lens at the same aperture with the same exposure time and the same ISO, they both capture the same amount of light and will have the same brightness on the computer screen.
Each sensor has a certain amount of thermal noise and, assuming the same technology in each body, having more photosites will increase the amount of noise, and that is what raises the noise floor. But you can pretty much overcome that by downsampling the 5DSR image to the pixel count of the 5D3.
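That downsampling point can be illustrated with a quick simulation. (The signal level and noise figure are arbitrary; this sketches the statistics only, not any real sensor.)

```python
import random

random.seed(42)

# Simulate a uniform grey patch on a high-res sensor: true signal 100,
# with random per-pixel noise of standard deviation ~10.
n = 40000  # "high-res" pixels in the patch
hi_res = [100 + random.gauss(0, 10) for _ in range(n)]

# Downsample 4:1 by averaging blocks of four pixels
# (e.g. ~50MP down to the ~12MP ballpark, schematically).
lo_res = [sum(hi_res[i:i + 4]) / 4 for i in range(0, n, 4)]

def noise(pixels):
    mean = sum(pixels) / len(pixels)
    return (sum((p - mean) ** 2 for p in pixels) / len(pixels)) ** 0.5

print(noise(hi_res))  # ~10 per pixel before downsampling
print(noise(lo_res))  # ~5: averaging 4 pixels halves the noise (sqrt(4))
```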
 
Antono Refa said:
goldenhusky said:
Sure there are some caveats with Sony but with dual card slots and decent lens line up Sony is an attractive option for even professional photographers except for wildlife, sports and astro photography.

Which Canon cameras that pros would consider buying lack dual card slots? At most the xxD line, if any.

goldenhusky said:
Canon is losing its user base slowly

IIRC, sales figures have shown otherwise.

goldenhusky said:
and sure enough if they do not come up with a camera that shoots 4K with a decent codec in 2018, I am sure they will lose more market share.

4K is niche.

Is not.
 
Mikehit said:
Pixel size is irrelevant and a distraction. It is the total area covered by those pixels (buckets) that defines the amount of light you have captured: if you take the 5D3 with 22MP and the 5DSR with 50MP, give them the same lens at the same aperture with the same exposure time and the same ISO, they both capture the same amount of light and will have the same brightness on the computer screen.
Each sensor has a certain amount of thermal noise and, assuming the same technology in each body, having more photosites will increase the amount of noise, and that is what raises the noise floor. But you can pretty much overcome that by downsampling the 5DSR image to the pixel count of the 5D3.

You're comparing 5D3 and 5DSR.

5D3 has 22.3 million pixels, each 6.25 micrometers
5DSR has 50.6 million pixels, each 4.14 micrometers

5D3 total pixel area = 6.25 * 22.3M = 139.4M micrometers.
5DSR total pixel area = 4.14 * 50.6M = 209.4M micrometers.

5DSR has 2.27x the resolution, but only 1.5x smaller pixels. Clearly they are not the same technology.

You said pixel size is irrelevant and the amount of light on two fullframe sensors is the same. But I was asking how can crop and fullframe sensors have the same brightness if the crop covers smaller area and therefore there must be less light to collect?
 
angrykarl said:
Mikehit said:
Pixel size is irrelevant and a distraction. It is the total area covered by those pixels (buckets) that defines the amount of light you have captured: if you take the 5D3 with 22MP and the 5DSR with 50MP, give them the same lens at the same aperture with the same exposure time and the same ISO, they both capture the same amount of light and will have the same brightness on the computer screen.
Each sensor has a certain amount of thermal noise and, assuming the same technology in each body, having more photosites will increase the amount of noise, and that is what raises the noise floor. But you can pretty much overcome that by downsampling the 5DSR image to the pixel count of the 5D3.

You're comparing 5D3 and 5DSR.

5D3 has 22.3 million pixels, each 6.25 micrometers
5DSR has 50.6 million pixels, each 4.14 micrometers

5D3 total pixel area = 6.25 * 22.3M = 139.4M micrometers.
5DSR total pixel area = 4.14 * 50.6M = 209.4M micrometers.

5DSR has 2.27x the resolution, but only 1.5x smaller pixels. Clearly they are not the same technology.

You said pixel size is irrelevant and the amount of light on two fullframe sensors is the same. But I was asking how can crop and fullframe sensors have the same brightness if the crop covers smaller area and therefore there must be less light to collect?

I was pointing out that pixel size is irrelevant to your question and that it is a distraction yet you have referred to it every time.

I was asking how can crop and fullframe sensors have the same brightness if the crop covers smaller area and therefore there must be less light to collect?
That is the actual question, and has no reference to pixel size.

This article gives a good technical explanation of ISO and as you will see, it is based on the amount of light needed to saturate the photosites.
https://photography.tutsplus.com/articles/what-is-iso-a-technical-exploration--photo-11963

This is not dependent on sensor size.

So you ask that if the APS-C captures less light than the FF sensor, why is it just as bright as the FF image?
Take a FF image and crop it. Does the reduced image size darken? That is what you are effectively doing with the APS-C sensor.
 
angrykarl said:
privatebydesign said:
Same with a big sensor and a small sensor, they all get the same photons for the same exposure. The difference in image quality is because the smaller sensor data has to be enlarged more for the same sized output; on a phone screen it makes little difference, in a big print viewed up close there is a massive difference. The smaller sensor got fewer total photons so has less to show; as a unit area it is as wet as any other area of the car park, but it took less water/photons to get it there than a bigger area.

First you said they get the same numbers of photons, then you said the smaller area gets fewer photons. Which is it?

privatebydesign said:
The difference in image quality is because the smaller sensor data has to be enlarged more for the same sized output, on a phone screen it makes little difference, in a big print viewed up close there is a massive difference.

Enlarged? What enlargement do you mean?

An image consists of pixels, and pixels are buckets collecting rain in your analogy. When you have a big area covered by ten big buckets and a small area covered by ten smaller buckets, the buckets with the smaller diameter inevitably catch less rain in the same amount of time. Right? So the smaller sensor's pixels catch less light. How come they have the same brightness as the big sensor's pixels?

You get the same number of photons per unit area, in the carpark analogy all the asphalt is as wet in any specific area irrespective of how large or small that area is.

If it rains at 1" an hour for an hour there is an inch of rain on every bit of asphalt. It doesn't matter if the bit you sample is the size of a postage stamp or a basketball court, they both have an inch of water on them. Ergo in our analogy they are the same brightness because they received the same amount of photons per unit area.

If you were to paint the carpark it wouldn't matter what sized area you looked at to determine its color. Imagine the rain is paint, now imagine the paint is photons. Because the carpark is wet/painted/received photons at an even rate per unit area any one section is as bright as any other.

Enlarged: when you look at an image on your screen or in print, it has been enlarged from the original sensor capture. If you use a 17" screen, the enlargement from a 135 format sensor is a factor of 10, for 100 times the area; from a crop camera it is 15, for 225 times the area; from a phone it is 70, for 4900 times the area. Now if you painted your car park with one coat of paint, when you looked close enough there would be misses and imperfections (digital noise). It is obvious that if you enlarge those imperfections by 10 you often won't see them; if you enlarge them by 4900 you will see them much more often.

Noise in the image works the same. In any given same-sized reproduction, in print or on screen, the phone image's imperfections are 4900 times more obvious than in a FF capture. That is why phones can never equal the IQ of crop cameras, why crop cameras can never equal the IQ of FF sensors, and why FF sensors can never equal the IQ of even larger sensors. Obviously, especially in good light, any sensor size works well enough for most people most of the time.

When talking about same-sized comparisons across sensor sizes, individual pixel size is irrelevant; we are comparing the whole image and the output of all the pixels combined, so 1 pixel is the same as 4 pixels at 1/4 the size.
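The enlargement factors quoted above are easy to reproduce: linear enlargement is display diagonal over sensor diagonal, and area enlargement is its square. (The display size and the sensor diagonals below are approximate round numbers close to the ones in the post; the phone figure assumes a small ~6mm-diagonal sensor.)

```python
# Linear enlargement = display diagonal / sensor diagonal;
# area enlargement is the square of that.
display_diag_mm = 17 * 25.4          # 17" screen ~ 431.8 mm diagonal

sensor_diag_mm = {
    "full frame": 43.3,   # 36x24 mm
    "APS-C": 27.0,        # ~22.5x15 mm
    "phone": 6.2,         # small phone sensor (approximate)
}

for name, diag in sensor_diag_mm.items():
    linear = display_diag_mm / diag
    # full frame ~10x linear / ~100x area; phone ~70x linear / ~4900x area
    print(f"{name}: {linear:.0f}x linear, {linear ** 2:.0f}x area")
```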
 
Mikehit said:
I was pointing out that pixel size is irrelevant to your question and that it is a distraction yet you have referred to it every time.

You're dismissing the effect of pixel size and proving it by comparing the sensors of the 5D3 and 5DSR, which I've shown is not a valid comparison: the 5DSR has a bigger total collecting area. I am keen to hear better reasons why pixel size is irrelevant.

To quote from your link:

Thus slow films like ISO 25, 50 and 100 have very fine grains to reduce the amount of light hitting them, useful for capturing fine detail. Conversely, very fast films like ISO 1600 and 3200 have relatively huge grains for the maximum possible chance of capturing photons

In analogue film, slow ISO means small particles that capture light and high ISO means big particles. These particles are similar to pixels. So I assume smaller pixels = less light. Why would physics work differently for analogue and digital?

Mikehit said:
So you ask that if the APS-C captures less light than the FF sensor, why is it just as bright as the FF image?
Take a FF image and crop it. Does the reduced image size darken? That is what you are effectively doing with the APS-C sensor.

You're comparing FF and APS-C with the same pixel size. But APS-C usually has smaller pixels to offer more resolution. You'll hate me for bringing up pixel size again, I know... :)
 
angrykarl said:
Not true. You would use exactly the same shutter speed and ISO when shooting FF vs crop with both f/2. The difference is that crop ISO 100 is like FF ISO 250 in terms of noise and maybe dynamic range. Not exposure as the crop sensor camera internally compensates it with hidden internal "ISO".

There is no 'hidden internal ISO'.


angrykarl said:
Enlarged? What enlargening do you mean?

The enlargement comes from the way we (appropriately) compare images, which is at the same output size. Say you capture 'the same exposure' on crop and FF – 1/100 s, f/2, ISO 800. Light per unit area of the sensor is the same. The full frame sensor gathers more total light. Now, you compare those images on a 20" retina cinema display. The FF image was enlarged from 36x24mm to that display size, the crop image was enlarged from 22x15mm. The smaller the sensor, the more you enlarge it when comparing output. The images will appear to have the same brightness (because that's how your computer handles the enlargement), but the crop image will appear to have more noise (similar to ISO 2000 on the FF sensor). But if you view the FF image on that 20" cinema display, and the crop image next to it on a 13" retina laptop, the relative enlargement is the same, and the crop image will not appear any noisier (but as I said, that's not how we generally compare things).
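Incidentally, the "similar to ISO 2000" figure falls straight out of the crop factor: for the same exposure, the crop sensor gathers crop-factor-squared less total light, so its noise resembles FF at that many times the ISO. A quick sketch, assuming Canon's 1.6x crop factor:

```python
def equivalent_iso(iso, crop_factor):
    """Noise-equivalent full-frame ISO for a crop-sensor exposure.

    The crop sensor gathers crop_factor**2 less total light for the
    same exposure, so its noise resembles FF at that many times the ISO.
    """
    return iso * crop_factor ** 2

print(round(equivalent_iso(800, 1.6)))  # 2048 -> roughly the 'ISO 2000' above
print(round(equivalent_iso(100, 1.6)))  # 256 -> cf. 'crop ISO 100 is like FF ISO 250'
```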


angrykarl said:
You said pixel size is irrelevant and the amount of light on two fullframe sensors is the same. But I was asking how can crop and fullframe sensors have the same brightness if the crop covers smaller area and therefore there must be less light to collect?

The brightness issue is something of a red herring because of digital imaging. When a computer doubles the size of an image (one pixel becomes four), it simply applies the luminance value of the original pixel to all four of the new pixels. In other words, the computer is in effect brightening the image (i.e., it's not happening at the sensor level in the camera... there is no hidden internal ISO).
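That "one pixel becomes four" step can be sketched in a few lines (a minimal nearest-neighbour upscale, no image library):

```python
def upscale_2x(img):
    """Nearest-neighbour 2x upscale: each pixel becomes a 2x2 block
    with the same luminance value (no amplification of the signal)."""
    out = []
    for row in img:
        doubled = [p for p in row for _ in (0, 1)]  # duplicate horizontally
        out.append(doubled)
        out.append(list(doubled))                   # duplicate vertically
    return out

small = [[100, 50],
         [25, 200]]
big = upscale_2x(small)

# Per-pixel brightness is unchanged -- the image looks just as bright...
print(big[0])   # [100, 100, 50, 50]
# ...even though no extra light was ever captured.
```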

A better way to conceptualize this might be thinking back to the film era. Say you captured the same exposure (1/100 s, f/2, ISO 100) on a 35mm negative and an APS-C negative (and for our example, you've cropped that negative from its original APS format with a pair of scissors). Now you put both negatives in an enlarger and make 4x6 prints. To fill that 4x6 print, you'll have to move the enlarger head further from the paper with the smaller negative (that's the 'more enlargement with a smaller sensor' part of the analogy). The greater enlargement will enhance the visibility in the print of the film grains in the negative (that's the 'more image noise with a smaller sensor' part of the analogy). But with the smaller negative, since it's further from the paper, you'll also need to either increase the brightness of the bulb or expose the paper for longer, because light intensity decreases as the inverse square of distance (that's the 'adding brightness for display output' part of the analogy). To drive the final nail in this analogy's coffin, the film camera isn't somehow adding light or amplification to the APS negative on the roll (that's the 'there is no hidden internal ISO' conclusion to the analogy).
 
neuroanatomist said:
The enlargement comes from the way we (appropriately) compare images, which is at the same output size. Say you capture 'the same exposure' on crop and FF – 1/100 s, f/2, ISO 800. Light per unit area of the sensor is the same. The full frame sensor gathers more total light. Now, you compare those images on a 20" retina cinema display. The FF image was enlarged from 36x24mm to that display size, the crop image was enlarged from 22x15mm. The smaller the sensor, the more you enlarge it when comparing output. The images will appear to have the same brightness (because that's how your computer handles the enlargement), but the crop image will appear to have more noise (similar to ISO 2000 on the FF sensor). But if you view the FF image on that 20" cinema display, and the crop image next to it on a 13" retina laptop, the relative enlargement is the same, and the crop image will not appear any noisier (but as I said, that's not how we generally compare things).

So if I had one crop and one FF sensor with the same resolution and compared their RAWs at 100%, the pixels would be equally enlarged on the screen and they would have the same brightness and noise? :eek:

Such a situation would look like this when we take equally sized parts of both sensors (assuming crop factor 4).



You have sensor A with one pixel and sensor B with four pixels. Both sensors cover the same area. A collects 100 photons per pixel, B collects 25 photons per pixel... How come the pixels from B have the same brightness, even though they collected less light?

I was suggesting that in such a situation sensor B would amplify the signal 4x and therefore have more noise. Otherwise, why are there no 50MP m43 cameras with the noise level of the 5DSR?
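The shot-noise side of this can be simulated directly. With Poisson-distributed photon arrivals, a 100-photon pixel has SNR of about sqrt(100) = 10, while a 25-photon pixel gained up 4x matches the brightness but keeps its SNR of about 5. (A sketch with made-up numbers; only shot noise is modelled, no read noise.)

```python
import math
import random

random.seed(1)

def poisson(lam):
    # Knuth's simple Poisson sampler -- fine for illustration.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

trials = 20000
sensor_a = [poisson(100) for _ in range(trials)]     # big pixel: ~100 photons
sensor_b = [4 * poisson(25) for _ in range(trials)]  # small pixel with 4x gain

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Both end up at the same average brightness (~100)...
print(mean(sensor_a), mean(sensor_b))
# ...but the gained-up small pixel carries twice the shot noise:
print(std(sensor_a))   # ~10 (sqrt(100))
print(std(sensor_b))   # ~20 (4 * sqrt(25))
```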

neuroanatomist said:
angrykarl said:
You said pixel size is irrelevant and the amount of light on two fullframe sensors is the same. But I was asking how can crop and fullframe sensors have the same brightness if the crop covers smaller area and therefore there must be less light to collect?

A better way to conceptualize this might be thinking back to the film era. Say you captured the same exposure (1/100 s, f/2, ISO 100) on a 35mm negative and an APS-C negative (and for our example, you've cropped that negative from its original APS format with a pair of scissors). Now you put both negatives in an enlarger and make 4x6 prints. To fill that 4x6 print, you'll have to move the enlarger head further from the paper with the smaller negative (that's the 'more enlargement with a smaller sensor' part of the analogy). The greater enlargement will enhance the visibility in the print of the film grains in the negative (that's the 'more image noise with a smaller sensor' part of the analogy). But with the smaller negative, since it's further from the paper, you'll also need to either increase the brightness of the bulb or expose the paper for longer, because light intensity decreases as the inverse square of distance (that's the 'adding brightness for display output' part of the analogy). To drive the final nail in this analogy's coffin, the film camera isn't somehow adding light or amplification to the APS negative on the roll (that's the 'there is no hidden internal ISO' conclusion to the analogy).

You're comparing films/sensors with the same size of grain/pixels. But in reality crop digital cameras have much smaller pixels, so in your analogy the APS-C negative would have to be a slower ISO (= smaller grain) than the 35mm one. Now would the images be equally exposed?
 

Jul 28, 2015
3,368
570
angrykarl said:
You're comparing films/sensors with the same size of grain/pixels. But in reality crop digital cameras have much smaller pixels, so in your analogy the APS-C negative would have to be a slower ISO (= smaller grain) than the 35mm one. Now would the images be equally exposed?

You are confusing the issue by bringing in the number of pixels. Pixel number is to do with resolution, not light gathering - about 10 years ago sensor manufacturers developed gapless sensors, where the microlens picks up light outside the boundary of the actual pixel sensor. So to go back to the rainy day in the car park analogy:
Cover a 6x4 m area with appropriately sized buckets.
Cover a 3x2 m area with the same number of smaller buckets.
The larger buckets have bigger gaps between them so will miss more rain.
Now put appropriately sized square funnels into each bucket so that there are no gaps (the gapless sensor). The amount of rain captured is now totally dependent on the area covered.

And in that respect my comparison of the 5D3 and 5DSR was totally valid: same sensor area, different numbers of pixels giving 100% sensor coverage (and, incidentally, pretty much the same technology, giving the same amount of noise if processed appropriately): and you have not 'shown' it to be invalid. You simply stated it wasn't, based on your prior presumption that pixel count matters.
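A quick way to check this with the pitch figures quoted earlier: the light-collecting area is the pixel area (pitch squared) times the pixel count, and on that arithmetic the two sensors come out essentially identical. (A sketch, using the approximate pitches from the earlier post.)

```python
# pixel count, pixel pitch in micrometres (approximate published figures)
mp_5d3, pitch_5d3 = 22.3e6, 6.25
mp_5dsr, pitch_5dsr = 50.6e6, 4.14

# Total collecting area = count * pitch^2 (area), not count * pitch (length).
area_5d3 = mp_5d3 * pitch_5d3 ** 2     # ~8.71e8 um^2 (~871 mm^2, i.e. ~36x24mm)
area_5dsr = mp_5dsr * pitch_5dsr ** 2  # ~8.67e8 um^2

# ~871 vs ~867 million um^2: the same full-frame area, to within rounding.
print(area_5d3 / 1e6, area_5dsr / 1e6)
```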


Now, Neuro was quite correct. The APS-C collects less light, and the fact that they appear the same brightness on the computer screen comes from how the computer renders the image, not anything in the camera.

I believe what you are proposing is along the lines of the manufacturers adding a program into the APS-C that says 'I am an APS-C camera. So to maintain standards across the product range, I cannot have the same calculations used in a FF body, so to keep the exposure time the same I need to add more gain than the ISO 400 implies to give the same output brightness.' In other words, the body would be using (for example) ISO 500 and calling it ISO 400 in the metadata.

But think about this: what is the reference model for that calculation? When the sensor is used in 'FF' body? medium format? 5x4?
'FF' has become a misleading term because it implies that 35mm is some sort of magic reference point. It isn't. And by your reckoning each body format would need its own internal 'correction' according to sensor size so as to give the same 'apparent' ISO output.
 
It’s not correct to call it a hidden internal ISO, however there is more gain applied with smaller sensors for a given ISO speed.

Even if you review two photos at sensor dimensions, i.e. no enlargement (the digital equivalent of a contact print), your noise performance will be better with the larger sensor, given the same exposure time and t-stop with a format-appropriate image circle (focal plane exposure) and the same composition. Why? You sampled the scene with more photons, thus generating more charge and requiring less amplification when digitized to a standardized output signal. This indeed happens in camera. That’s the entire point of ISO 12232: “The equations used in this International Standard have been chosen to create a link between electronic and conventional silver-halide-based photographic systems. Using a particular ISO speed value as the exposure index on a DSC should result in the same camera exposure settings, and resulting focal plane exposures, as would be obtained using the same exposure index on a film camera or other photographic exposure meter.”
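As an illustration of that last point (purely schematic numbers, assuming sensors of equal pixel count and equal quantum efficiency): for the same focal-plane exposure, the charge per photosite scales with its area, so the gain needed to reach the same standardized output scales roughly with the crop factor squared.

```python
# Relative gain needed at a given ISO setting to reach the same output
# level, taking full frame as the 1.0 reference (schematic, equal-MP sensors).
crop_factors = {"full frame": 1.0, "APS-C": 1.6, "Micro 4/3": 2.0, "1-inch": 2.7}

for name, crop in crop_factors.items():
    gain = crop ** 2   # photosite area is crop^2 smaller -> crop^2 more gain
    print(f"{name}: ~{gain:.2f}x relative gain")
```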
 
I think part of the confusion here might be from the ISO adjustments made at wide apertures.
Interestingly, DXO took down their "F-stop Blues" article, which is ironic when that was one of the most useful things they've ever published.
At least we have a decent summary on the forum: https://www.canonrumors.com/forum/index.php?topic=31598.0
 
neuroanatomist said:
@angrykarl – comparing pictures is not the same thing as comparing pixels. You're trying to do both at the same time, and confusing yourself in the process.

Sure, so let's compare only pixels! :)

Mikehit said:
You are confusing the issue by bringing in number of pixels. Pixel number is to do with resolution not light gathering - about 10 years ago sensor manufacturers developed gapless sensors where the microlens picks up light outside the boundary of the actual pixel sensor. So to go back to the rainy day in the car park analogy:
cover a 6x4m area with appropriately sized buckets.
Cover a 3x2 meter area with the same number of smaller buckets.
The larger buckets have bigger gaps between them so will miss more rain.
Now put appropriately sized square shaped funnels into each bucket so that there are no gaps (the gapless sensor). The amount of rain captured is now totally dependent on the area covered.

I never disputed that the same amount of light is captured from the same area. I am simply saying that distributing the same amount of anything into more buckets means there is less in each.

Mikehit said:
And in that respect my comparison of the 5D3 and 5DSR was totally valid: same sensor area, different numbers of pixels giving 100% sensor coverage (and, incidentally, pretty much the same technology, giving the same amount of noise if processed appropriately): and you have not 'shown' it to be invalid. You simply stated it wasn't, based on your prior presumption that pixel count matters.

5D3 has max native ISO 25600, 5DSR only 6400... I wonder why that is.

Mikehit said:
Now, Neuro was quite correct. The APS-C collects less light, and the fact that they appear the same brightness on the computer screen comes from how the computer renders the image, not anything in the camera.

So you finally agree that APS-C collects less light per pixel? And according to you the pixels are less bright, but when viewing the whole picture the downsampled pixels are brighter? What about the histogram -- which shows the distribution of bright and dark pixels -- does that also magically change when I zoom the picture? Or do you think the histograms from these FF and APS-C images differ? And what if I create a large uniform gray picture in Photoshop and downsample it, would I also get a whiter picture?
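For what it's worth, the gray-picture thought experiment is easy to run in code rather than Photoshop: a 2x2 averaging downsample of a uniform gray image leaves every pixel value exactly where it was (a minimal sketch):

```python
def box_downsample_2x(img):
    """Average each 2x2 block of pixels into one output pixel."""
    out = []
    for r in range(0, len(img), 2):
        row = []
        for c in range(0, len(img[0]), 2):
            total = img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]
            row.append(total / 4)
        out.append(row)
    return out

gray = [[128] * 4 for _ in range(4)]   # uniform mid-gray, 4x4
print(box_downsample_2x(gray))         # [[128.0, 128.0], [128.0, 128.0]] -- no brighter
```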

Mikehit said:
I believe what you are proposing is along the lines of the manufacturers adding a program into the APS-C that says 'I am an APS-C camera. So to maintain standards across the product range, I cannot have the same calculations used in a FF body so to keep the exposure time the same I need to add more gain than the ISO 400 implies to give the same output brightness." In other words, the body would be using (for example) ISO500 and calling it ISO400 on the metadata.

And would that be a problem? You want an exactly exposed image at ISO 400, the same as from other cameras. That's the point of ISO. How the camera handles it is just a technical detail.

3kramd5 said:
It’s not correct to call it a hidden internal ISO, however there is more gain applied with smaller sensors for a given ISO speed.

You're right, the term is misleading, I won't be using it again. My fault.

3kramd5 said:
Even if you review two photos at sensor dimensions, i.e. no enlargement (the digital equivalent of a contact print), your noise performance will be better with the larger sensor, given the same exposure time and t-stop with a format-appropriate image circle (focal plane exposure) and the same composition. Why? You sampled the scene with more photons, thus generating more charge and requiring less amplification when digitized to a standardized output signal. This indeed happens in camera. That’s the entire point of ISO 12232: “The equations used in this International Standard have been chosen to create a link between electronic and conventional silver-halide-based photographic systems. Using a particular ISO speed value as the exposure index on a DSC should result in the same camera exposure settings, and resulting focal plane exposures, as would be obtained using the same exposure index on a film camera or other photographic exposure meter.”

Exactly.
 
3kramd5 said:
Even if you review two photos at sensor dimensions, i.e. no enlargement (the digital equivalent of a contact print), your noise performance will be better with the larger sensor, given the same exposure time and t-stop with a format-appropriate image circle (focal plane exposure) and the same composition. Why? You sampled the scene with more photons, thus generating more charge and requiring less amplification when digitized to a standardized output signal.

How do differently-sized images constitute a standardized output?
 
neuroanatomist said:
3kramd5 said:
Even if you review two photos at sensor dimensions, i.e. no enlargement (the digital equivalent of a contact print), your noise performance will be better with the larger sensor, given the same exposure time and t-stop with a format-appropriate image circle (focal plane exposure) and the same composition. Why? You sampled the scene with more photons, thus generating more charge and requiring less amplification when digitized to a standardized output signal.

How do differently-sized images constitute a standardized output?

The image is not differently sized. You've got the same shutter speed, t-stop, composition, resolution, and equivalent focal length, just different sensor sizes. The resulting images are equally sized.

He wrote "standardized output signal". The signal is the electric charge from the photons collected in one photosite, which results in one pixel's brightness. The raw collected signal is smaller, but according to ISO the brightness must be equal, therefore there must be amplification.
 