

Messages - ecka

316
I'm not asking about the illusion of sharpness, I'm asking about the information that the camera can capture in the sharpest area.

Of course that's related to pixel size (but less spatial information than the pixel size suggests, due to AA filter effects, lens aberrations, etc.).

The question is, how do you define the 'sharpest area'?

Not all sensors have AA filters, not all are based on Bayer filter technology.
The sharpest area carries the highest amount of information about reality, compared to the rest of the image.

True, but your definition is something of a tautology.  In terms of the real world being captured by the image sensor as sampled by the lens, how do you define 'sharpest area'?  Specifically, does that area have 'depth' relative to the sensor?  Pixel size represents the least quantifiable unit of XY resolution. What about Z-axis resolution?  After all, the latter is what this thread is about...

I suggest you read all my posts in this thread, it might help you understand my position (if you haven't already).
Z-axis resolution? Where did that come from? The only thing the sensor gathers is light, to determine a color for each pixel. Everything else is just information manipulation. If you really don't understand how to define "the sharpest area", then you should study the principles of CDAF (contrast-detect autofocus), it's all there.
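For readers unfamiliar with CDAF: contrast-detect autofocus rates focus by a local contrast score and moves the lens until that score peaks, which is one concrete way to define "the sharpest area". A minimal sketch of one such score follows; the function name, the metric (sum of squared neighbour differences) and the toy data are illustrative assumptions, not any particular camera's algorithm.

```python
import numpy as np

def contrast_score(gray):
    """Crude CDAF-style focus metric: sum of squared differences between
    neighbouring pixels. Sharper regions have stronger local contrast, so the
    lens position (or image region) with the highest score is treated as
    'the sharpest'."""
    gx = np.diff(gray.astype(float), axis=1)   # horizontal differences
    gy = np.diff(gray.astype(float), axis=0)   # vertical differences
    return float((gx ** 2).sum() + (gy ** 2).sum())

# Toy comparison: a high-contrast patch vs. a featureless (defocused) one.
sharp = np.tile([[0, 255], [255, 0]], (8, 8))
blurred = np.full_like(sharp, 128)
print(contrast_score(sharp) > contrast_score(blurred))   # True
```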

317
I'm not asking about the illusion of sharpness, I'm asking about the information that the camera can capture in the sharpest area.

Of course that's related to pixel size (but less spatial information than the pixel size suggests, due to AA filter effects, lens aberrations, etc.).

The question is, how do you define the 'sharpest area'?

Not all sensors have AA filters, not all are based on Bayer filter technology.
The sharpest area carries the highest amount of information about reality, compared to the rest of the image.

318
Metaphorically speaking, it appears that many people are trapped within the circle of confusion, when it comes to discussions of DoF.

1. The formulas used to calculate DoF all contain CoC as a variable.
2. CoC is dependent on the observer's visual acuity, viewing distance, and output size.

Therefore,

3. DoF is dependent on the observer's visual acuity, viewing distance, and output size.

It's really that simple.

As for Ecka's argument about one pixel, the typically assumed values for CoC, and the practical range of CoC values for other print sizes and viewing distances, are much larger than a single pixel, so spatial quantization is not an issue.
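To make the syllogism above concrete, here is a minimal sketch of the standard thin-lens DoF formulas with the CoC as an explicit input; the function name and the example numbers are mine, not from the thread. Holding the lens, aperture and subject distance fixed and changing only the assumed CoC (which encodes print size, viewing distance and visual acuity) changes the computed DoF.

```python
def dof_limits(focal_mm, f_number, subject_mm, coc_mm):
    """Near/far limits of acceptable sharpness (standard thin-lens approximation)."""
    H = focal_mm ** 2 / (f_number * coc_mm) + focal_mm       # hyperfocal distance
    near = subject_mm * (H - focal_mm) / (H + subject_mm - 2 * focal_mm)
    if subject_mm >= H:
        far = float("inf")                                   # focused at/past hyperfocal
    else:
        far = subject_mm * (H - focal_mm) / (H - subject_mm)
    return near, far

# Same lens, same distance; only the CoC assumption changes.
# 0.030 mm is the commonly used full-frame value (roughly an 8x10" print at ~1 ft);
# 0.010 mm might suit a much larger print viewed from the same distance.
for coc in (0.030, 0.010):
    near, far = dof_limits(focal_mm=100, f_number=7.1, subject_mm=3000, coc_mm=coc)
    print(f"CoC {coc} mm -> DoF about {(far - near) / 1000:.2f} m")
```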

I'm not asking about the illusion of sharpness, I'm asking about the information that the camera can capture in the sharpest area.

319
So here we go. Once that optical physics hits the sensor it's no longer optics, it's information. The sensor cannot capture the infinitely thin plane of focus made of sharp points. Instead, it captures everything between the two distances where "a circle" is the size of a pixel or smaller, so everything in that range is equally sharp, because there can't be anything sharper than a pixel. At that level, enlarging the image isn't going to decrease the DoF, only soften it, because there is no hidden information.
What do we call THAT THING? The sharpest area between the two distances where "a circle" meets the pixel?

Hey Ecka, I was in a similar state of skepticism here for a while so I just want to help get you where I am now using some common sense.

Forget all the technical stuff about aperture and sensor size for a second and let's just think about this from a simpler perspective. Imagine we just took a picture of a friend. You look at the picture later at 100% on your 30" computer monitor and realize that you accidentally focused on their nose, so their eyes are slightly out of focus. Wouldn't you agree that the plane of focus is on the nose, but doesn't extend to the eyes? Yes of course.

Now let's imagine the friend we took the photo of wants to post that photo to Facebook, and so you post it and it becomes their new profile photo. The whole time you're thinking, man, that photo wasn't even in focus, but then you go to their Facebook page and, voila, nobody can even tell whether their eyes are in focus because the photo's so small. Basically, the photo looks great. Now I think we can agree that as far as we can tell, their profile picture is in focus. Which would mean that either:

A. You uploaded the wrong picture
B. Magic
C. Our DOF changed because we're now looking at a much smaller picture

And just in case you're not sure. It's obviously B.   ;D

Thank you for trying, but I think the answer is F.

F. You are trying to answer a question I didn't ask.
F. "It doesn't matter" - is not an answer.
F. You don't need a DSLR for shooting thumbnails.
F. Now I know who uses the 720x480 small JPEG shooting format :).
F. Perhaps my English is so bad that nobody can understand what I'm saying :). Here's a picture:

What is A-B?

320
DOF is subjective?  Hmm.  If my DOF is 8 feet in a photo, that is, 8 real-life feet out in the field, how in the world does that ever change after I take the photo??  8 feet is 8 feet, isn't it?

Actually, I wouldn't even need to take the photo.  The DOF is still 8 feet.  :)

Are you suggesting that by being subjective, it could be 8 feet, or 6 feet, or 10 feet, or 7.23838383 feet?  How silly.

It seems you think that based on your equipment, there's a 'slice' of the photo that's in perfect focus, say 3.8 feet in front of where you focused, and 4.2 feet behind it, then WHAM like magic at 4.3 feet behind the focal plane, everything gets blurry.  That's not how it works.

Light from the plane of focus (which is best approximated by a plane in the geometric sense - 2D and infinitely thin) is focused on the image sensor (we're ignoring field curvature, of course).  Everything outside that plane, even a few millimeters, is blurry...and the further from the focal plane, the blurrier it gets. That's optical physics.  Whether it looks blurry to you depends on viewing size and distance and your visual acuity.

Tell me - how do you know your hypothetical shot has that 'real' 8 foot DoF?  Did you use a DoF calculator?  That calculator determines the 8 foot DoF based on an assumed specific print size and viewing distance (commonly 8x10" viewed at 1 foot).  Change those assumptions and you change the calculated DoF.
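One common convention behind those calculator defaults (there are others) derives the sensor-level CoC from viewing distance, an assumed print resolution of about 5 lp/mm at 25 cm, and the enlargement from sensor to print. A rough sketch with illustrative numbers of my own; only the proportionality matters here: a bigger print at the same distance means a smaller allowed CoC (and so less DoF), and stepping back scales it up again.

```python
def assumed_coc_mm(viewing_distance_cm, enlargement, print_lpmm=5):
    """One common convention for the CoC used in DoF tables:
    CoC (mm) = viewing distance (cm) / (print resolution (lp/mm) * enlargement * 25)."""
    return viewing_distance_cm / (print_lpmm * enlargement * 25)

print(assumed_coc_mm(viewing_distance_cm=25, enlargement=8))    # ~0.025 mm, near the usual 0.030
print(assumed_coc_mm(viewing_distance_cm=25, enlargement=32))   # 4x bigger print, same distance -> 1/4 the CoC
print(assumed_coc_mm(viewing_distance_cm=100, enlargement=32))  # step back 4x and it scales back up
```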

So here we go. Once that optical physics hits the sensor it's no longer optics, it's information. The sensor cannot capture the infinitely thin plane of focus made of sharp points. Instead, it captures everything between the two distances where "a circle" is the size of a pixel or smaller, so everything in that range is equally sharp, because there can't be anything sharper than a pixel. At that level, enlarging the image isn't going to decrease the DoF, only soften it, because there is no hidden information.
What do we call THAT THING? The sharpest area between the two distances where "a circle" meets the pixel?
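One way to put a number on "THAT THING" is to run the same thin-lens DoF formulas with the CoC pinned to one pixel pitch. A sketch under that assumption (it ignores the AA filter, demosaicing and diffraction); the 6.5 µm pitch and the lens settings are illustrative, not taken from the thread.

```python
def pixel_level_dof_mm(focal_mm, f_number, subject_mm, pixel_pitch_mm):
    """Depth over which the geometric blur circle stays within one pixel:
    the usual near/far DoF limits evaluated with CoC = pixel pitch."""
    c = pixel_pitch_mm
    H = focal_mm ** 2 / (f_number * c) + focal_mm             # hyperfocal for a one-pixel CoC
    near = subject_mm * (H - focal_mm) / (H + subject_mm - 2 * focal_mm)
    far = subject_mm * (H - focal_mm) / (H - subject_mm) if subject_mm < H else float("inf")
    return far - near

# 100mm lens at f/7.1, subject at 3 m, 6.5 um pixels -> roughly 80 mm of 'pixel-sharp' depth.
print(pixel_level_dof_mm(100, 7.1, 3000, 0.0065))
```

This is a sensor-level criterion, which is exactly where it parts ways with the viewing-based definition the rest of the thread uses.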

321
... I thought that DoF is the thickness of the sharp focus plane your camera can capture (what is it called then?)...
... the thickness in reality, not in the picture.

...There is no such thing as an 'objective' DoF...

Do we need a new definition here? Because I'm pretty sure that the OP was asking about THAT THING, not the CoC.

The plane of focus has no depth. Imagine it as a sheet of the thinnest paper, only much thinner. Everything in front of, and behind, that sheet of paper is less sharp than whatever is on the sheet of paper. Because of limitations to our eyesight, something very close to the paper might look in focus, but it isn't. At some point, as you move towards the paper, things become more obviously out of focus - you have now surpassed your DoF/CoC criteria - but step back and you again can't see the differences, because your eyesight can't resolve them.

I agree that the focus plane of an optical image projection is thinner than it looks. That's what the CoC thing is all about. However, the sensor resolution is limited and it has its smallest possible dot size, which is a pixel and which is a constant for a given camera.

Maintain a reproduction size to viewing distance ratio such that the CoC (the point at which you can't see the difference between a point and a circle) still holds, and you can go as big, or as small, as you'd like.

No, you cannot do that. It could only apply to a camera with an infinite number of pixels. If a pixel is too big, then it becomes a square. If it's too small, then it disappears.
In all my statements I assumed that both FF and APS-C sensors had the same pixel pitch. Otherwise, even same-format cameras (same sensor size) with different megapixel counts (like 12 vs 36) should have different DoF/CoC characteristics.
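A quick back-of-the-envelope check of that 12 vs 36 MP point, assuming a 3:2, 36 x 24 mm sensor (approximate figures):

```python
# Approximate pixel pitch for a 3:2 full-frame (36 x 24 mm) sensor.
# For a 3:2 aspect ratio, pixels across the width = sqrt(1.5 * total pixel count).
for mp in (12, 36):
    width_px = (1.5 * mp * 1e6) ** 0.5
    pitch_um = 36_000 / width_px          # sensor width in micrometres / pixels across
    print(f"{mp} MP full frame: ~{pitch_um:.1f} um pixel pitch")
# ~8.5 um at 12 MP vs ~4.9 um at 36 MP, so a one-pixel CoC criterion does give
# same-format bodies with different pixel counts different 'pixel-level' zones.
```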

322
... I thought that DoF is the thickness of the sharp focus plane your camera can capture (what is it called then?)...
... the thickness in reality, not in the picture.

...There is no such thing as an 'objective' DoF...

Do we need a new definition here? Because I'm pretty sure that the OP was asking about THAT THING, not the CoC.

323
Also here, http://www.josephjamesphotography.com/equivalence/ for a very detailed insight into sensor sizes and their interaction with focal length, DoF, aperture and ISO. Yes, even ISO has a crop factor!
Privatebydesign, thanks for the great link. Lots of fun explanations in there. However, I'm still not sure how print size and viewing distance affect DOF. Is there another explanation somewhere, or maybe some examples?

Btw, I've tried just resizing some of my photos on my monitor and seeing if they seem to have more/less DOF and can't really tell a difference...maybe I'm doing it wrong.

Here is an example. I took this image for an artist's show; it was printed at 46"x31". As a 700px web image, most would agree the zip, picture left, by her right cheek, is within acceptable focus - at f/7.1 with a 100mm lens it is well within a DoF calculator's range. The second image is what that zip looks like when I printed it at 46" and viewed it from the same distance. Clearly it is not now in acceptable focus. The only thing that has changed is the subject magnification. We have increased the CoC to such an extent that it no longer holds true; we can clearly differentiate between a point and a circle. To bring it back into acceptable focus, all we need to do is increase our viewing distance - step back from your monitor, across the room, and the zip will become sharp again.

Cool isn't it?  :)
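To attach rough numbers to that example (the exact crop, subject distance and screen size aren't given, so these are illustrative assumptions): a 46"-wide print from a 36 mm-wide frame is about a 32x enlargement, versus roughly 5x for a 700 px image on a ~96 dpi monitor, so at the same viewing distance the print demands a CoC several times smaller.

```python
# Illustrative comparison of the two presentations of the same 36 mm-wide frame.
SENSOR_WIDTH_MM = 36.0
PRINT_WIDTH_MM = 46 * 25.4            # 46" wide print
WEB_WIDTH_MM = 700 / 96 * 25.4        # 700 px at an assumed ~96 dpi on screen

for label, width_mm in (('46" print', PRINT_WIDTH_MM), ('700 px web image', WEB_WIDTH_MM)):
    enlargement = width_mm / SENSOR_WIDTH_MM
    coc_mm = 25 / (5 * enlargement * 25)   # same 5 lp/mm at 25 cm convention as earlier
    print(f"{label}: ~{enlargement:.0f}x enlargement, CoC ~{coc_mm:.3f} mm")
```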

This is only an illusion of sharpness. The truth is what really matters (the information). Looking at the print from far away only proves that human vision is very limited. Up close you can see all the information captured by your camera, both the sharp and the blurry parts. So, sharpness = information. From a distance you see much, much less information, despite the fact that it looks sharper. That kind of sharpness ≠ information. This trick is about the CoC of your eyes; DoF has nothing to do with it.

324
EOS Bodies - For Stills / Re: EOS-M sharper than 6D?
« on: July 18, 2013, 01:14:05 PM »
Another possible reason - cheap UV filters.

He used the same lens for both cameras.  I sort of think he would have mentioned putting on a cheap UV filter for the 6D shots and taking it off for the EOS M shots, don't you?  ;)

Yes, I thought the same thing. However, nobody here had mentioned this possibility, so I did :).

325
EOS Bodies - For Stills / Re: EOS-M sharper than 6D?
« on: July 18, 2013, 12:55:15 PM »
Did you use something weird to clean your 6D sensor?

That's a good thought - maybe something dried and left a film (sorry) over the sensor?

That, or even damaging it.
Another possible reason - cheap UV filters.

326
EOS Bodies - For Stills / Re: EOS-M sharper than 6D?
« on: July 18, 2013, 12:40:56 PM »
Did you use something weird to clean your 6D sensor?

327
Shrinking the picture simply makes its details imperceptible to you.

If I may, can I suggest that this one sentence sums up some of the disagreement in this thread.   DoF is, in fact, a concept that is rooted in human visual perception.  DoF is defined as the distance in front of and behind the plane of focus that appears in focus to a human being.  The calculation requires assumptions regarding human visual acuity, print size, and viewing distance.

I believe, others can correct me if I'm wrong, it is also implicitly assumed that the print size and resolution are such that the individual pixels in the print are too small for the viewer to see at the assumed print size and viewing distance.  If the pixels are visible then the entire image would not appear sharp.  That is why sensor resolution does not appear in the calculation.

So yes,  print size matters and yes, if you print small enough the entire image would "magically" appear sharp.  "Appear" is the operative word in that statement but it is relevant because "appears sharp" is fundamental to the concept of DoF.  If you also shrunk yourself down, your visual acuity would likely also change so in fact DoF would be the same.

And it is a concept.  It is a defined value based on some reasonable assumptions.  DoF is not something that exists independent of human vision and is not a strictly defined measurement like mass, distance, size, etc.

If you're looking for a physically defined parameter, it exists.  That is focus distance: the distance from the image plane that is precisely in focus (in practical terms, maximally in focus, because there is no perfect focus).  And there is only one distance that is maximally in focus... every plane in front of and behind the focus plane is less focused.  If human visual acuity were infinite and the resolution of a print were infinite, you would be able to see the tiniest difference in sharpness.  But that's not the case; more than just the exact plane of focus appears sharp, and we can define the depth in the image that appears sharp... i.e. Depth of Field.
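The "physically defined parameter" mentioned above follows directly from the thin-lens relation: for a lens of focal length $f$ focused so the sensor sits at image distance $v$, only one object distance $u$ satisfies

$$\frac{1}{u} + \frac{1}{v} = \frac{1}{f},$$

so exactly one object plane is rendered maximally sharp, and points at any other distance form blur circles that grow with the focus error.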

I don't know. I disagree that when I photograph a ruler, which clearly shows that the DoF is, let's say, ~15mm, I must let the shrunken size decide that the DoF is actually half a meter, or that I was shooting at hyperfocal and I'm crazy :).
I thought that DoF is the thickness of the sharp focus plane your camera can capture (what is it called then?). Turns out I was wrong, it is whatever anyone wants it to be, and if it can't be, then just get a better printer :D.
I say, if you have to shrink your images to make everything look sharp, then you are using the wrong camera format.
DoF area is sharp, but sharpness ≠ DoF

328
No. In reality there is nothing that is a 0. Zero is not a thing, zero is just a tool in mathematics.

You are wrong about DOF but I will let others argue about it. But as a mathematician, I strongly object to the statement that zero is not a thing, and that there is nothing at zero. How many 200-400 lenses do you own?

Quote
Every point has dimensions and it can be represented as an image of at least 1 pixel,

What if you are shooting film?

I don't use superteles.
Film has its minimum dot size that can be captured.

329
I'm glad that you are enjoying the discussion. It wasn't my intent to offend anyone.
I didn't confirm the nonsense. Shrinking the picture simply makes its details imperceptible to you. If you can't see bacteria, it doesn't mean there are none. In the case of a magic shrinking machine, the details are made smaller, so you can still see them by shrinking yourself or maybe using a microscope. However, when you shrink the image on your screen or print a thumbnail, you are just losing the information. Just like, for a half-blind person, all your images can look equally "sharp" or equally "blurry". In fact, for him, sharp and blurry look the same. CoC is about perception. DoF is not; it is about information, same as photography. Once the light of an optical image hits the sensor, it is gone; all that's left is the information gathered by the electronics. If you shoot a picture that has nothing in focus, it doesn't matter to what resolution you downsize it, no new information will appear (except false information). You can manipulate the image in any way you want, but relative to reality the DoF won't change a bit. If photography is just a form of art for you and perception is the only thing that matters, then perhaps you are not even trying to understand what I'm talking about.

Well, by your standard, in reality there is nothing really 'in focus'. The focus 'plane' is a hypothetical thing that has zero thickness. Also, on the 'true' focus plane every light point has a diameter of 0. Anything in front of, or behind, this zero-thickness hypothetical plane is deemed out of focus because it has a CoC > absolute 0.

The sensor sees something as in focus not because it is in focus, but simply because the CoC is smaller than the sensor's pixel can distinguish. So what are you saying? That the image the sensor captures is the real world? It is not.

If the above assumption is correct, then take an example: if I shoot a photo with a 320x240 pixel FF sensor, what is my DoF? Even if my lens gives a blurry mess, I would still get a 320x240 photo that is sharp at the pixel level. Does this represent 'reality'?

The thing is, reality is far weirder than you can ever imagine. We are on a photography forum, so yes, photography is just a form of art for me and perception is the only thing that matters. I learn from my output photos and prints so I can control my equipment to get the result I want.

Then we leave the underlying physical, electrical or philosophical discussions for someone else or somewhere else.


No. In reality there is nothing that is a 0. Zero is not a thing, zero is just a tool in mathematics. Every point has dimensions and it can be represented as an image of at least 1 pixel. When you are viewing a ~18MP image on a ~2MP screen, 1 dot (color) on the screen represents a group of 9 pixels of the image. The sensor does not capture the real world. The projection of an optical image onto the sensor is limited by all kinds of information manipulation by the lens (diffraction, aberrations, vignetting, coma, color tint, distortion, flares and CoC). If the 9 combined pixels carry enough information to represent 1 real-world dot, then it will be sharp. If not, then it will be blurry (or noise). At 1:1 (100%) it is similar, but with much more false color and noise. If you shrank the blur into oblivion and got some kind of real-world information out of it, then it only means that you've destroyed all the rest, and all that blurriness carried only this little information.

The sensor and electronics "see" nothing in focus, just the color and contrast of neighboring pixels. A 320x240 pixel FF sensor cannot mimic human vision. There are artificial eye implants that allow blind people to see the world in just a few hundred pixels and, trust me, it's nothing like the real thing. It's a blurry mess and they can only see a letter or a digit in close-up.
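A tiny illustration of the "9 sensor pixels per screen dot" point (the array values and the plain 3x3 averaging are my own simplification; real resampling filters are more sophisticated): after block-averaging, a pixel-level checkerboard and a uniformly grey patch become indistinguishable, i.e. the fine detail is discarded rather than hidden.

```python
import numpy as np

def shrink_3x(img):
    """Average each 3x3 block into one output value (crude 9-pixels-to-1-dot downsample)."""
    return img.reshape(img.shape[0] // 3, 3, img.shape[1] // 3, 3).mean(axis=(1, 3))

sharp = np.tile([[255, 0, 255], [0, 255, 0], [255, 0, 255]], (2, 2))  # pixel-level detail
flat = np.full((6, 6), 142)                                           # no detail at all
print(shrink_3x(sharp))   # every 3x3 block averages to ~141.7 ...
print(shrink_3x(flat))    # ... which is essentially identical to the flat patch
```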

330
Prepared to shrink yourself 100 times and tell me that I am mad, LOL! I'm laughing and crying at the same time! ;D :'( ;D :'(

Even you have confirmed that shrinking a picture increases DoF and that this does have a real-world implication, BUT

Shrink a picture 100 times? Yes! Shrink a human 100 times? OMGWTFBBQChickenWings!

In the end, isn't photography all about perception?  ::)

I'm glad that you are enjoying the discussion. It wasn't my intent to offend anyone.
I didn't confirm the nonsense. Shrinking the picture simply makes its details imperceptible to you. If you can't see bacteria, it doesn't mean there are none. In the case of a magic shrinking machine, the details are made smaller, so you can still see them by shrinking yourself or maybe using a microscope. However, when you shrink the image on your screen or print a thumbnail, you are just losing the information. Just like, for a half-blind person, all your images can look equally "sharp" or equally "blurry". In fact, for him, sharp and blurry look the same. CoC is about perception. DoF is not; it is about information, same as photography. Once the light of an optical image hits the sensor, it is gone; all that's left is the information gathered by the electronics. If you shoot a picture that has nothing in focus, it doesn't matter to what resolution you downsize it, no new information will appear (except false information). You can manipulate the image in any way you want, but relative to reality the DoF won't change a bit. If photography is just a form of art for you and perception is the only thing that matters, then perhaps you are not even trying to understand what I'm talking about.
