How (and why) does sensor size change DOF?

neuroanatomist
Metaphorically speaking, it appears that many people are trapped within the circle of confusion when it comes to discussions of DoF.

1. The formulas used to calculate DoF all contain CoC as a variable.
2. CoC is dependent on the observer's visual acuity, viewing distance, and output size.

Therefore,

3. DoF is dependent on the observer's visual acuity, viewing distance, and output size.

It's really that simple.

As for Ecka's argument about one pixel, the typically assumed values for CoC and the practical range of CoC values for other print sizes and viewing distances are much larger than a single pixel, so spatial quantization is not an issue.
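
To see point 3 in actual numbers, here is a minimal Python sketch of the standard DoF formulas; the 50mm / f/2.8 / 3 m scenario and both CoC values are purely illustrative assumptions:

```python
def dof_limits(f, n, s, coc):
    """Near/far limits of acceptable sharpness. All lengths in mm.

    f   : focal length
    n   : f-number
    s   : focus distance
    coc : circle of confusion -- the viewer-dependent input
    """
    h = f * f / (n * coc) + f  # hyperfocal distance
    near = s * (h - f) / (h + s - 2 * f)
    far = s * (h - f) / (h - s) if s < h else float("inf")
    return near, far

# Same lens, same subject, same focus distance; only the CoC (i.e. the
# assumed output size, viewing distance, and visual acuity) changes:
for coc in (0.030, 0.010):  # 0.030 mm: usual FF value; 0.010 mm: big print viewed closely
    near, far = dof_limits(50, 2.8, 3000, coc)
    print(f"CoC {coc:.3f} mm -> DoF {near / 1000:.2f} m to {far / 1000:.2f} m")
```

Same capture, two different DoF figures; the only thing that changed is the assumed viewing condition.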
 

ecka

Wild said:
ecka said:
So here we go. Once the optical image hits the sensor, it's no longer optics, it's information. The sensor cannot capture the infinitely thin plane of focus made of sharp points. Instead, it captures everything between the two distances where "a circle" has the size of a pixel or smaller, so everything in that range is equally sharp, because there can't be anything sharper than a pixel. At that level, enlarging the image isn't going to decrease the DoF, only soften it, because there is no hidden information.
What do we call THAT THING? The sharpest area between the two distances where "a circle" meets the pixel?

Hey Ecka, I was in a similar state of skepticism here for a while, so I just want to help get you to where I am now using some common sense.

Forget all the technical stuff about aperture and sensor size for a second, and let's just think about this from a simpler perspective. Imagine we just took a picture of a friend. You look at the picture later at 100% on your 30" computer monitor and realize that you accidentally focused on their nose, so their eyes are slightly out of focus. Wouldn't you agree that the plane of focus is on the nose, but doesn't extend to the eyes? Yes, of course.

Now let's imagine the friend we took the photo of wants to post it to Facebook, so you post it and it becomes their new profile photo. The whole time you're thinking, man, that photo wasn't even in focus, but then you go to their Facebook page and, voilà, nobody can even tell whether their eyes are in focus because the photo's so small. Basically, the photo looks great. Now I think we can agree that, as far as we can tell, their profile picture is in focus. Which would mean that either:

A. You uploaded the wrong picture
B. Magic
C. Our DOF changed because we're now looking at a much smaller picture

And just in case you're not sure: it's obviously B. ;D

Thank you for trying, but I think the answer is F.

F. You are trying to answer a question I didn't ask.
F. "It doesn't matter" - is not an answer.
F. You don't need a DSLR for shooting thumbnails.
F. Now I know who uses the 720x480 small JPEG shooting format :).
F. Perhaps my English is so bad that nobody can understand what I'm saying :). Here's a picture:
[attached image: omgwtf2013.jpg]

What is A-B?
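
To put Wild's thumbnail example into rough numbers: the CoC on the sensor is whatever blur the eye tolerates in the output, divided by the enlargement factor. A minimal sketch, assuming full-frame capture, a fixed close viewing distance, and the usual ~0.2 mm figure for what the eye resolves there (all round numbers):

```python
SENSOR_WIDTH = 36.0    # mm, full frame
EYE_BLUR_LIMIT = 0.2   # mm of blur the eye tolerates at close viewing (assumption)

def coc_on_sensor(output_width_mm):
    """Sensor-side CoC that still looks sharp at a given output width."""
    return EYE_BLUR_LIMIT / (output_width_mm / SENSOR_WIDTH)

print(coc_on_sensor(660))  # fills a ~26"-wide display: ~0.011 mm -> missed eyes show
print(coc_on_sensor(40))   # avatar-sized: ~0.18 mm -> the same eyes look sharp
```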
 

ecka

neuroanatomist said:
Metaphorically speaking, it appears that many people are trapped within the circle of confusion when it comes to discussions of DoF.

1. The formulas used to calculate DoF all contain CoC as a variable.
2. CoC is dependent on the observer's visual acuity, viewing distance, and output size.

Therefore,

3. DoF is dependent on the observer's visual acuity, viewing distance, and output size.

It's really that simple.

As for Ecka's argument about one pixel, the typically assumed values for CoC and the practical range of CoC values for other print sizes and viewing distances are much larger than a single pixel, so spatial quantization is not an issue.

I'm not asking about the illusion of sharpness; I'm asking about the information the camera can capture in the sharpest area.
 
neuroanatomist
ecka said:
I'm not asking about the illusion of sharpness; I'm asking about the information the camera can capture in the sharpest area.

Of course that's related to pixel size (though the sensor captures less spatial information than the pixel size suggests, due to AA filter effects, lens aberrations, etc.).

The question is, how do you define the 'sharpest area'?
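
One rough way to quantify the "less than the pixel size suggests" point is to combine the independent blur sources in quadrature. This is a back-of-the-envelope model, not rigorous optics, and every number below is an assumed ballpark value:

```python
import math

# Approximate blur contributions, in micrometres (all assumed values):
pixel = 6.4                      # pixel pitch
aa_filter = 6.4                  # AA filter spreads light by roughly one pixel
aberrations = 3.0                # a decent lens, stopped down a little
diffraction = 2.44 * 0.55 * 2.8  # Airy disc diameter at f/2.8, 550 nm light

# Root-sum-square of the independent blur sources:
total = math.hypot(pixel, aa_filter, aberrations, diffraction)
print(f"effective blur ~{total:.1f} um vs. a {pixel} um pixel")  # ~10 um
```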
 

ecka

neuroanatomist said:
ecka said:
I'm not asking about the illusion of sharpness; I'm asking about the information the camera can capture in the sharpest area.

Of course that's related to pixel size (though the sensor captures less spatial information than the pixel size suggests, due to AA filter effects, lens aberrations, etc.).

The question is, how do you define the 'sharpest area'?

Not all sensors have AA filters, and not all are based on Bayer filter technology.
The sharpest area carries the most information about reality, compared to the rest of the image.
 
neuroanatomist
ecka said:
neuroanatomist said:
ecka said:
I'm not asking about the illusion of sharpness; I'm asking about the information the camera can capture in the sharpest area.

Of course that's related to pixel size (though the sensor captures less spatial information than the pixel size suggests, due to AA filter effects, lens aberrations, etc.).

The question is, how do you define the 'sharpest area'?

Not all sensors have AA filters, and not all are based on Bayer filter technology.
The sharpest area carries the most information about reality, compared to the rest of the image.

True, but your definition is something of a tautology. In terms of the real world being captured by the image sensor as sampled by the lens, how do you define the 'sharpest area'? Specifically, does that area have 'depth' relative to the sensor? Pixel size represents the smallest quantifiable unit of XY resolution. What about Z-axis resolution? After all, the latter is what this thread is about...
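
For what it's worth, the Z-extent of that "one pixel or smaller" zone can be computed with the usual DoF formulas by simply plugging the pixel pitch in as the CoC. A quick sketch with assumed numbers (40mm f/2.8 focused at 1 m, ~6.4 µm pixels):

```python
def pixel_limited_zone(f, n, s, pitch):
    """DoF with the CoC set to one pixel pitch. All lengths in mm."""
    h = f * f / (n * pitch) + f  # 'hyperfocal' distance for a one-pixel CoC
    near = s * (h - f) / (h + s - 2 * f)
    far = s * (h - f) / (h - s) if s < h else float("inf")
    return near, far

near, far = pixel_limited_zone(40, 2.8, 1000, 0.0064)
print(f"'sharper than a pixel' zone: {near:.0f} mm to {far:.0f} mm")  # ~989 to ~1011
```

So even the strictest possible criterion yields a zone with real depth, roughly 2 cm here, rather than an infinitely thin plane.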
 

ecka

neuroanatomist said:
ecka said:
neuroanatomist said:
ecka said:
I'm not asking about the illusion of sharpness; I'm asking about the information the camera can capture in the sharpest area.

Of course that's related to pixel size (though the sensor captures less spatial information than the pixel size suggests, due to AA filter effects, lens aberrations, etc.).

The question is, how do you define the 'sharpest area'?

Not all sensors have AA filters, and not all are based on Bayer filter technology.
The sharpest area carries the most information about reality, compared to the rest of the image.

True, but your definition is something of a tautology. In terms of the real world being captured by the image sensor as sampled by the lens, how do you define the 'sharpest area'? Specifically, does that area have 'depth' relative to the sensor? Pixel size represents the smallest quantifiable unit of XY resolution. What about Z-axis resolution? After all, the latter is what this thread is about...

I suggest you read all my posts in this thread; it might help you understand my position (if you haven't already).
Z-axis resolution? Where did that come from? The only thing the sensor gathers is light, used to determine a color for each pixel. Everything else is just information manipulation. If you really don't understand how to define "the sharpest area", then you should study the principles of CDAF; it's all there.
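
Since CDAF keeps coming up: contrast-detect AF never measures depth at all; it just maximizes a contrast figure over a focus sweep. A toy sketch of the principle, using a squared-gradient metric (one common choice; real firmware differs):

```python
import numpy as np

def contrast_score(img):
    """Squared-gradient sharpness metric, the heart of contrast-detect AF."""
    gy, gx = np.gradient(img.astype(float))
    return float((gx ** 2 + gy ** 2).sum())

def best_focus(frames):
    """Index of the frame with peak contrast in a focus sweep."""
    return int(np.argmax([contrast_score(f) for f in frames]))

# Toy sweep: the middle 'frame' has the strongest edges, so it wins.
base = np.outer(np.hanning(64), np.hanning(64))
print(best_focus([0.2 * base, 1.0 * base, 0.5 * base]))  # -> 1
```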
 
neuroanatomist said:
Therefore,

3. DoF is dependent on the observer's visual acuity, viewing distance, and output size.

And, I would add, on how bad the observer's OCD is.

As for Ecka's argument about one pixel, the typically assumed values for CoC and the practical range of CoC values for other print sizes and viewing distances are much larger than a single pixel, so spatial quantization is not an issue.

Exactly. He is pushing the discussion into extreme territory, where the usual (and reasonable) assumptions used to calculate DOF do not apply. If we really want to go there, "the plane of focus" is actually a bit thicker than a plane. If he's ever tried to MA an f/1.2 lens, he'll understand what I mean. There is no single measure of sharpness, and there is a whole range where the image appears kinda focused, but not quite, in different ways.
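
The "thicker than a plane" point has a standard expression, by the way: the sensor-side depth of focus is roughly 2 x f-number x CoC (ignoring a small magnification term), which is why microadjusting (MA) a fast lens is so fiddly. A sketch under the usual 0.030 mm CoC assumption:

```python
def depth_of_focus_um(n, coc_um=30.0):
    """Approximate sensor-side focus tolerance: 2 * N * CoC, in micrometres."""
    return 2 * n * coc_um

for n in (1.2, 2.8, 8.0):
    print(f"f/{n}: ~{depth_of_focus_um(n):.0f} um of tolerance at the sensor")
```

At f/1.2 the tolerance band is only ~72 µm, which is why the image "appears kinda focused but not quite" over a whole range during MA.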
 
neuroanatomist
ecka said:
I suggest you read all my posts in this thread; it might help you understand my position (if you haven't already).
Z-axis resolution? Where did that come from? The only thing the sensor gathers is light, used to determine a color for each pixel. Everything else is just information manipulation. If you really don't understand how to define "the sharpest area", then you should study the principles of CDAF; it's all there.

I did read them, and it appears that you don't really understand what DoF is, or at least such understanding isn't coming across in your posts. For example, "...CoC is about perception. DoF is not, it is about information...", and several iterations thereof. That seems to sum up your argument, but that statement is fundamentally wrong. You cannot determine DoF without a CoC value, either arbitrarily chosen or empirically determined. DoF is based on CoC and other factors, so if you believe that you can determine DoF without CoC, you don't understand what DoF means.

As for 'the sharpest area', that's the focal plane, the plane in space at which the lens is focused. It's a plane, with effectively no depth (although practically, it has some - just as real lenses aren't the infinitely thin lenses we pretend they are for optical calculations). Everything in front of and behind that plane is less sharp, progressively more so at increasing distance along the optical axis. Whether or not regions outside the focal plane appear sharp...that's DoF, and it is affected by several factors, including CoC.
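
To make "progressively less sharp" concrete, here is a thin-lens sketch of the blur-circle diameter as a subject moves off the plane of focus; the 50mm f/2.8 focused at 3 m numbers are illustrative assumptions:

```python
def blur_circle(f, n, focus_dist, obj_dist):
    """Geometric blur-circle diameter on the sensor (thin lens, all mm)."""
    def image(d):
        return 1.0 / (1.0 / f - 1.0 / d)  # thin-lens image distance

    sensor_plane = image(focus_dist)      # where the sensor sits
    return (f / n) * abs(image(obj_dist) - sensor_plane) / image(obj_dist)

# Zero exactly at 3 m, growing smoothly on either side; being 'within the
# DoF' just means the blur is still smaller than whatever CoC you chose.
for d in (2500, 2800, 3000, 3200, 3500):
    print(f"{d} mm: blur {blur_circle(50, 2.8, 3000, d):.4f} mm")
```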
 

ecka

neuroanatomist said:
ecka said:
I suggest you read all my posts in this thread; it might help you understand my position (if you haven't already).
Z-axis resolution? Where did that come from? The only thing the sensor gathers is light, used to determine a color for each pixel. Everything else is just information manipulation. If you really don't understand how to define "the sharpest area", then you should study the principles of CDAF; it's all there.

I did read them, and it appears that you don't really understand what DoF is, or at least such understanding isn't coming across in your posts. For example, "...CoC is about perception. DoF is not, it is about information...", and several iterations thereof. That seems to sum up your argument, but that statement is fundamentally wrong. You cannot determine DoF without a CoC value, either arbitrarily chosen or empirically determined. DoF is based on CoC and other factors, so if you believe that you can determine DoF without CoC, you don't understand what DoF means.

As for 'the sharpest area', that's the focal plane, the plane in space at which the lens is focused. It's a plane, with effectively no depth (although practically, it has some - just as real lenses aren't the infinitely thin lenses we pretend they are for optical calculations). Everything in front of and behind that plane is less sharp, progressively more so at increasing distance along the optical axis. Whether or not regions outside the focal plane appear sharp...that's DoF, and it is affected by several factors, including CoC.

I understand the physics, and if I don't quote books and articles or post links for others to go read something, it doesn't mean that I don't understand a thing. I'm using my own head, because this is science, not a religion. Science provides tools, but you cannot use the same one for everything.
Does the CoC theory work for upscaling images? - No.
Are imaging sensors rendering DoF in the way you described - "a plane, with effectively no depth" where "everything in front of and behind that plane is less sharp"? - No.
"Whether or not regions outside the focal plane appear sharp...that's DoF" - That applies to your eyes, not the original image. Think about it. It's like photographing a photograph.
End of the discussion ;).
 
neuroanatomist
ecka said:
"Whether or not regions outside the focal plane appear sharp...that's DoF" - That applies to your eyes, not the original image.

Yes, it still applies to the original image. Tell me...how do you calculate your (incorrect) concept of the "DoF" of 'the original image'? I'd like to see the math behind that, if you could share it. Also, what do you even call THAT THING - because it's not the DoF, by definition.

Regardless, while a tree falling in the forest with no one around to hear it does, in fact, make a sound, a photograph without someone to look at it is a completely meaningless collection of 0s and 1s, or an equally meaningless collection of developed silver halide grains in an emulsion. The moment someone views it, all of my points about CoC apply...and that, for all purposes relevant to photographs, is the real end of the discussion.
 

ecka

neuroanatomist said:
ecka said:
"Whether or not regions outside the focal plane appear sharp...that's DoF" - That applies to your eyes, not the original image.

Yes, it still applies to the original image. Tell me...how do you calculate your (incorrect) concept of the "DoF" of 'the original image'? I'd like to see the math behind that, if you could share it. Also, what do you even call THAT THING - because it's not the DoF, by definition.

Regardless, while a tree falling in the forest with no one around to hear it does, in fact, make a sound, a photograph without someone to look at it is a completely meaningless collection of 0s and 1s, or an equally meaningless collection of developed silver halide grains in an emulsion. The moment someone views it, all of my points about CoC apply...and that, for all purposes relevant to photographs, is the real end of the discussion.

Imagine that you are a cyborg and you see the world through cameras instead of eyes. One camera has the 8 MP APS-C sensor from a 20D and the other has the 21 MP FF sensor from a 5D2 (yes, it's weird; you must be made in China, or something). Both have 40mm f/2.8 lenses. You see everything in the clearest detail, down to a single pixel, all of them, all the time. The FF camera has a 60% wider FoV, but with both lenses focused at the same distance you would see that they both produce the same DoF.
This is a very simplified concept (no need to tell me that), because I don't want to waste any more time on this, but it is real and correct. That's how your camera sees it and renders the DoF. The question is not "How do my eyes deal with DoF?". It is "How does the camera do it?". It may not be useful for thumbnails and snapshots, but there is a need for it in photography with extremely shallow DoF and a lot of cropping, like macro (where you can't bring it back if it's OOF).
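
Incidentally, the pairing isn't arbitrary: the 20D (22.5 mm sensor width, 3504 px) and the 5D2 (36.0 mm, 5616 px) both work out to roughly a 6.4 µm pixel pitch, so a strictly per-pixel criterion really does give near-identical numbers for both. A sketch using the standard formulas with CoC = pixel pitch (the lens and distance are assumed values):

```python
def per_pixel_dof(f, n, s, pitch):
    """DoF span (mm) with the CoC set to one pixel pitch; all lengths in mm."""
    h = f * f / (n * pitch) + f
    near = s * (h - f) / (h + s - 2 * f)
    far = s * (h - f) / (h - s) if s < h else float("inf")
    return far - near

# 40mm f/2.8 focused at 1 m; pitches from sensor width / horizontal pixels:
for name, pitch in (("20D", 22.5 / 3504), ("5D2", 36.0 / 5616)):
    print(f"{name}: pitch {pitch * 1000:.2f} um, "
          f"per-pixel DoF {per_pixel_dof(40, 2.8, 1000, pitch):.1f} mm")
```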
 
My 2c... Choose your lens for DOF. Choose your camera for framing.

Now, for the revolutionary paradigm shift: you only ever need one lens. You just have to carry a Pentax Q for getting in close, headshots, etc., a Nikon V1, a Micro 4/3 camera, an APS-C camera, a FF camera, a medium format camera, and a large format camera for the wide-angle work. You'll also need a few adapters. I'd suggest a 100mm lens. DOF would be equivalent for all cameras, but you'd have a nice 16-550mm FF equivalence.
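
The arithmetic behind the joke, for anyone curious: equivalent focal length is just actual focal length times crop factor. A sketch with rough diagonal crop factors (ballpark values; the 16mm end of the quoted range implies an even larger sheet-film format than 4x5):

```python
# Rough diagonal crop factors relative to 35mm full frame (ballpark values):
formats = [
    ("Pentax Q", 5.5), ("Nikon V1 (CX)", 2.7), ("Micro 4/3", 2.0),
    ("APS-C", 1.6), ("Full frame", 1.0), ("645 medium format", 0.79),
    ("4x5 large format", 0.29),
]
for name, crop in formats:
    print(f"100mm on {name:17s} -> ~{100 * crop:.0f}mm FF-equivalent")
```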
 
neuroanatomist
ecka said:
Imagine that you are a cyborg and you see the world through cameras instead of eyes. One camera has the 8 MP APS-C sensor from a 20D and the other has the 21 MP FF sensor from a 5D2 (yes, it's weird; you must be made in China, or something). Both have 40mm f/2.8 lenses. You see everything in the clearest detail, down to a single pixel, all of them, all the time. The FF camera has a 60% wider FoV, but with both lenses focused at the same distance you would see that they both produce the same DoF.
This is a very simplified concept (no need to tell me that), because I don't want to waste any more time on this, but it is real and correct. That's how your camera sees it and renders the DoF. The question is not "How do my eyes deal with DoF?". It is "How does the camera do it?". It may not be useful for thumbnails and snapshots, but there is a need for it in photography with extremely shallow DoF and a lot of cropping, like macro (where you can't bring it back if it's OOF).

Seriously? A cyborg? Sure, a shot taken with the same lens at the same distance and aperture on two different sensor formats will have the same DoF. While true (and something quite obvious that doesn't require a preposterous story to illustrate), it's irrelevant. They yield the same DoF, but what is that DoF? Imagine you have been preserved as a plastinated cadaver for the 24th-and-a-half Century Body Worlds exhibit, and I, the cyborg, am staring at your outstretched hand. Are your wide, strangely lifelike eyes rendered sharply enough to appear unblurred?

The camera doesn't render DoF. DoF is based on 'acceptable sharpness' and only exists when an image is viewed, and thus viewing conditions like output size and viewing distance matter.

Sorry, but it's now obvious beyond doubt that you just don't understand the concept of DoF (and apparently haven't bothered to even read the definition of the term), so this really is a waste of time.
 