


Messages - ecka

241
Lenses / Re: Sigma 24-70 f/2 OS HSM Coming? [CR1]
« on: July 30, 2013, 03:27:41 AM »
Sigma 24-70 f/2 OS HSM - great! It could be THE lens for videographers. Now let's think about it realistically:
very BIG, very HEAVY, not weather-sealed (most likely);
much more expensive than the 24-70/2.8L II, could be $3k+ (just look at the 120-300/2.8).

242
Software & Accessories / Re: Importing images method...???
« on: July 30, 2013, 12:12:29 AM »
Copy images, never move.  Make sure your copy is good, and then make sure you have a backup of that location.  Only then, format the card to make it squeaky clean.

+1
After a trip I copy-and-paste the contents of many cards and keep the originals until I am done with all the PP and have copied/backed up the final images. Only then will I delete or reformat the card.

+1
+NEVER touch the contacts on SD cards, because static electricity may corrupt the data.
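
For what it's worth, here is a minimal Python sketch of that copy-and-verify workflow (the paths at the bottom are purely hypothetical; point it at your own card and archive folders). It copies every file, compares checksums, and only reports success when every copy matches, so you know it is safe to back up and, only then, format the card:

import hashlib
import shutil
from pathlib import Path

def sha256(path, chunk_size=1024 * 1024):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def copy_and_verify(card_dir, archive_dir):
    """Copy (never move) every file from the card, verifying each copy by checksum."""
    card_dir, archive_dir = Path(card_dir), Path(archive_dir)
    for src in sorted(p for p in card_dir.rglob("*") if p.is_file()):
        dst = archive_dir / src.relative_to(card_dir)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)                 # copy, never move
        if sha256(src) != sha256(dst):         # verify before trusting the copy
            raise RuntimeError("Checksum mismatch: %s" % src)
    print("All copies verified - safe to back up, and only then reformat the card.")

# Hypothetical locations:
# copy_and_verify("/Volumes/EOS_DIGITAL/DCIM", "/Photos/2013-07-import")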

243
Software & Accessories / Re: Importing images method...???
« on: July 29, 2013, 03:46:38 PM »
Transcend USB 3.0 Super Speed Multi-Card Reader (SD/SDHC/SDXC/MS/CF)
TS-RDF8K

Importing images directly from the camera over a USB cable has a higher chance of corrupting your images, especially if the camera is connected through a hub along with other USB devices. A USB 3.0 card reader is also much faster.

244
I've been using the Sigma 150/2.8 HSM since 2009. It is my favorite lens and I can't say anything bad about it. For the price/size/weight, it has no competition in the 150-180mm f/2.8 macro range. The new one may be a little better and adds stabilization, but it is also bigger, heavier and more expensive. It is an excellent outdoor portrait and tele lens too, and the bokeh is very nice.

245
6D Sample Images / Re: Anything shot with a 6D
« on: July 24, 2013, 12:10:24 PM »
Thanks, guys. Actually, it was 1/200sec, ISO 100, f/5.6, using shorty-40 at MFD.
I have more :)


IMG_1105 by ecka84, on Flickr


IMG_1062 by ecka84, on Flickr


IMG_1087 by ecka84, on Flickr

246
6D Sample Images / Re: Anything shot with a 6D
« on: July 24, 2013, 02:23:44 AM »

IMG_1083 by ecka84, on Flickr

247
"Whether or not regions outside the focal plane appear sharp...that's DoF" - That applies to your eyes, not the original image.

Yes, it still applies to the original image. Tell me...how do you calculate your (incorrect) concept of the "DoF" of 'the original image'?  I'd like to see the math behind that, if you could share it.  Also, what do you even call THAT THING - because it's not the DoF, by definition.

Regardless, while a tree falling in the forest with no one around to hear it does, in fact, make a sound, a photograph without someone to look at it is a completely meaningless collection of 0's and 1's or an equally meaningless collection of developed silver halide grains in an emulsion. The moment someone views it, all of my points about CoC apply...and that, for all purposes relevant to photographs, is the real end of the discussion.

Imagine that you are a cyborg and you see the world through cameras instead of eyes. One camera has the 8MP APS-C sensor from a 20D and the other has the 21MP FF sensor from a 5D2 (yes, it's weird, you must be made in China, or something). Both have 40mm f/2.8 lenses. You see everything in the clearest detail down to a single pixel, all of them, all the time. The FF camera has a 60% wider FoV, but with both lenses focused at the same distance you would see that they both produce the same DoF.
This is a very simplified concept (no need to tell me that), because I don't want to waste any more time on this, but it is real and correct. That's how your camera sees it and renders the DoF. The question is not "How do my eyes deal with DoF?", it is "How does the camera do it?". It may not be useful for thumbnails and snapshots, but there is a need for it in photography with extremely shallow DoF and a lot of cropping, like macro (where you can't bring detail back if it's out of focus).
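
To put rough numbers on that thought experiment: take the standard thin-lens DoF approximations and use one pixel pitch as the CoC instead of a viewing-based value. The ~6.4 micron pitch below is an assumption (it happens to be roughly right for both the 20D and the 5D2), and the 40mm / f/2.8 / 1 m figures are just example values; with the same lens focused at the same distance, the per-pixel zone comes out the same for both sensors:

def dof_limits(f_mm, N, coc_mm, s_mm):
    """Near/far limits of acceptable sharpness (standard thin-lens approximations)."""
    H = f_mm ** 2 / (N * coc_mm) + f_mm                      # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

pixel_pitch_mm = 0.0064   # ~6.4 um, roughly the pitch of both the 20D and the 5D2
for body in ("8MP APS-C (20D)", "21MP FF (5D2)"):
    near, far = dof_limits(f_mm=40, N=2.8, coc_mm=pixel_pitch_mm, s_mm=1000)
    print("%s: pixel-level zone of ~%.0f mm at 1 m" % (body, far - near))
# Same lens, same focus distance, same pitch -> the same ~21 mm zone for both;
# only the framing (FoV) differs.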

248
I suggest you read all my posts in this thread; it might help you understand my position (if you haven't already).
Z-axis resolution? Where did that come from? The only thing the sensor gathers is the light needed to determine a color for each pixel. Everything else is just information manipulation. If you really don't understand how to define "the sharpest area", then you should study the principles of CDAF (contrast-detect AF); it's all there.

I did read them, and it appears that you don't really understand what DoF is, or at least such understanding isn't coming across in your posts.  For example, "...CoC is about perception. DoF is not, it is about information...", and several iterations thereof.  That seems to sum up your argument, but that statement is fundamentally wrong. You cannot determine DoF without a CoC value, either arbitrarily chosen or empirically determined. DoF is based on CoC and other factors, so if you believe that you can determine DoF without CoC, you don't understand what DoF means.

As for 'the sharpest area', that's the focal plane, the plane in space at which the lens is focused. It's a plane, with effectively no depth (although practically, it has some - just as real lenses aren't the infinitely thin lenses we pretend they are for optical calculations).  Everything in front and behind that plane is less sharp, progressively more so at increasing distance along the optical axis. Whether or not regions outside the focal plane appear sharp...that's DoF, and it is affected by several factors, including CoC.

I understand the physics, and if I don't quote books and articles or post links for others to go read something, it doesn't mean that I don't understand a thing. I'm using my own head, because this is science, not religion. Science provides tools, but you cannot use the same one for everything.
Does the CoC theory work for upscaling images? - No.
Do imaging sensors render DoF in the way you described - "a plane, with effectively no depth" where "everything in front and behind that plane is less sharp"? - No.
"Whether or not regions outside the focal plane appear sharp...that's DoF" - That applies to your eyes, not the original image. Think about it. It's like photographing a photograph.
End of the discussion ;).

249
I'm not asking about the illusion of sharpness, I'm asking about the information that the camera can capture in the sharpest area.

Of course that's related to pixel size (but less spatial information than the pixel size suggests, due to AA filter effects, lens aberrations, etc.).

The question is, how do you define the 'sharpest area'?

Not all sensors have AA filters, not all are based on Bayer filter technology.
The sharpest area carries the highest amount of information about reality, compared to the rest of the image.

True, but your definition is something of a tautology.  In terms of the real world being captured by the image sensor as sampled by the lens, how do you define 'sharpest area'?  Specifically, does that area have 'depth' relative to the sensor?  Pixel size represents the smallest quantifiable unit of XY resolution. What about Z-axis resolution?  After all, the latter is what this thread is about...

I suggest you read all my posts in this thread; it might help you understand my position (if you haven't already).
Z-axis resolution? Where did that come from? The only thing the sensor gathers is the light needed to determine a color for each pixel. Everything else is just information manipulation. If you really don't understand how to define "the sharpest area", then you should study the principles of CDAF (contrast-detect AF); it's all there.
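
For anyone not familiar with it, contrast-detect AF just sweeps the focus and keeps the position where a contrast metric of the image peaks; that peak is one practical, measurable definition of "the sharpest area". A minimal numpy sketch, under the assumption that you already have a set of grayscale frames taken at different focus positions (the frames and positions are hypothetical):

import numpy as np

def contrast_metric(image):
    """Sum of squared brightness differences between neighbouring pixels.
    More resolved fine detail -> higher value -> sharper frame."""
    img = image.astype(float)
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return float((gx ** 2).sum() + (gy ** 2).sum())

def sharpest_focus_position(frames_by_position):
    """Given {focus_position: 2-D grayscale frame}, return the position whose
    frame has the highest contrast - i.e. where the image is sharpest."""
    return max(frames_by_position, key=lambda pos: contrast_metric(frames_by_position[pos]))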

250
I'm not asking about the illusion of sharpness, I'm asking about the information that the camera can capture in the sharpest area.

Of course that's related to pixel size (but less spatial information than the pixel size suggests, due to AA filter effects, lens aberrations, etc.).

The question is, how do you define the 'sharpest area'?

Not all sensors have AA filters, not all are based on Bayer filter technology.
The sharpest area carries the highest amount of information about reality, compared to the rest of the image.

251
Metaphorically speaking, it appears that many people are trapped within the circle of confusion, when it comes to discussions of DoF.

1. The formulas used to calculate DoF all contain CoC as a variable.
2. CoC is dependent on the observer's visual acuity, viewing distance, and output size.

Therefore,

3. DoF is dependent on the observer's visual acuity, viewing distance, and output size.

It's really that simple.

As for Ecka's argument about one pixel, the typically assumed values for CoC, and the practical range of CoC values for other print sizes and viewing distances, are much larger than a single pixel, so spatial quantization is not an issue.
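
To put example numbers behind points 1-3 (a hedged sketch; the 50mm / f/2.8 / 3 m values and the ~6.4 micron pixel are only illustrative): keep the camera settings fixed and change nothing but the CoC, which is effectively what a bigger print or a closer viewing distance does, and the calculated DoF changes with it. It also shows how much larger the conventional CoC is than a single pixel:

def total_dof(f_mm, N, coc_mm, s_mm):
    """Total depth of field from the standard thin-lens approximations."""
    H = f_mm ** 2 / (N * coc_mm) + f_mm                      # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return far - near

# Same shot (50mm, f/2.8, subject at 3 m); only the CoC assumption changes.
for label, coc_mm in [("8x10 print at ~1 ft (conventional FF CoC)", 0.030),
                      ("twice the enlargement or half the viewing distance", 0.015),
                      ("a single ~6.4 um pixel", 0.0064)]:
    print("CoC %.4f mm (%s): DoF ~%.0f mm" % (coc_mm, label, total_dof(50, 2.8, coc_mm, 3000)))
# Roughly 600 mm, 300 mm and 130 mm respectively - same photo, different calculated DoF.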

I'm not asking about the illusion of sharpness, I'm asking about the information that the camera can capture in the sharpest area.

252
So here we go. Once that optical physics hits the sensor it's no longer optical, it's information. The sensor cannot capture the infinitely thin plane of focus made of sharp points. Instead, it captures everything between the two distances where "a circle" is the size of a pixel or smaller, so everything in that range is equally sharp, because there can't be anything sharper than a pixel. At that level, enlarging the image isn't going to decrease the DoF, only soften it, because there is no hidden information.
What do we call THAT THING? The sharpest area between the two distances where "a circle" meets the pixel?

Hey Ecka, I was in a similar state of skepticism here for a while so I just want to help get you where I am now using some common sense.

Forget all the technical stuff about aperture and sensor size for a second and let's just think about this from a simpler perspective. Imagine we just took a picture of a friend. You look at the picture later at 100% on your 30" computer monitor and realize that you accidentally focused on their nose, so their eyes are slightly out of focus. Wouldn't you agree that the plane of focus is on the nose, but doesn't extend to the eyes? Yes of course.

Now let's imagine the friend we took the photo of wants to post that photo to Facebook, so you post it and it becomes their new profile photo. The whole time you're thinking, "man, that photo wasn't even in focus", but then you go onto their Facebook page and, voila, nobody can even tell whether their eyes are in focus because the photo's so small. Basically, the photo looks great. Now I think we can agree that, as far as we can tell, their profile picture is in focus. Which would mean that either:

A. You uploaded the wrong picture
B. Magic
C. Our DOF changed because we're now looking at a much smaller picture

And just in case you're not sure: it's obviously B.   ;D
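
The arithmetic behind option C is simple enough to sketch (the pixel counts below are hypothetical; swap in your own camera resolution and thumbnail size): when the photo is downsized, the blur disc from the missed focus shrinks along with it, and once it fits inside roughly one pixel of the small image nobody can tell the eyes were soft.

def blur_after_resize(blur_px, original_width_px, new_width_px):
    """Diameter of the blur disc, measured in pixels of the resized image."""
    return blur_px * new_width_px / original_width_px

# A 30-pixel blur circle on a 5760-pixel-wide original (eyes slightly missed)...
print(blur_after_resize(30, 5760, 5760))   # 30.0 px at 100% view - clearly soft
print(blur_after_resize(30, 5760, 180))    # ~0.9 px in a 180 px profile thumbnail - looks sharp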

Thank you for trying, but I think the answer is F.

F. You are trying to answer a question I didn't ask.
F. "It doesn't matter" - is not an answer.
F. You don't need a DSLR for shooting thumbnails.
F. Now I know who uses the 720x480 small JPEG shooting format :).
F. Perhaps my English is so bad that nobody can understand what I'm saying :). Here's a picture:

What is A-B?

253
DOF is subjective?  Hmm.  If my DOF is 8 feet in a photo, that is, 8 real-life feet out in the field, how in the world does that ever change after I take the photo??  8 feet is 8 feet, isn't it?

Actually, I wouldn't even need to take the photo.  The DOF is still 8 feet.  :)

Are you suggesting that by being subjective, it could be 8 feet, or 6 feet, or 10 feet, or 7.23838383 feet?  How silly.

It seems you think that based on your equipment, there's a 'slice' of the photo that's in perfect focus, say 3.8 feet in front of where you focused, and 4.2 feet behind it, then WHAM like magic at 4.3 feet behind the focal plane, everything gets blurry.  That's not how it works.

Light from the plane of focus (which is best approximated by a plane in the geometric sense - 2D and infinitely thin) is focused on the image sensor (we're ignoring field curvature, of course).  Everything outside that plane, even a few millimeters, is blurry...and the further from the focal plane, the blurrier it gets. That's optical physics.  Whether it looks blurry to you depends on viewing size and distance and your visual acuity.

Tell me - how do you know your hypothetical shot has that 'real' 8-foot DoF?  Did you use a DoF calculator?  That calculator determines the 8-foot DoF based on an assumed specific print size and viewing distance (commonly an 8x10" print viewed at 1 foot).  Change those assumptions and you change the calculated DoF.
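
For reference, here is roughly where that conventional CoC figure comes from (a hedged sketch: the 0.2 mm value is a common rule of thumb for the smallest blur a typical eye notices on a print viewed from 25 cm, and the print/sensor dimensions are just the usual example numbers):

import math

def coc_on_sensor(print_w_mm, print_h_mm, sensor_w_mm, sensor_h_mm,
                  viewing_distance_mm, eye_blur_at_250mm=0.2):
    """Largest blur on the sensor that still looks like a point on the print."""
    acceptable_on_print = eye_blur_at_250mm * viewing_distance_mm / 250.0
    enlargement = math.hypot(print_w_mm, print_h_mm) / math.hypot(sensor_w_mm, sensor_h_mm)
    return acceptable_on_print / enlargement

# 8x10" print (203 x 254 mm) from a full-frame (24 x 36 mm) shot, viewed at ~1 foot (305 mm):
print(round(coc_on_sensor(203, 254, 24, 36, 305), 3))   # ~0.032 mm, close to the usual 0.030 mm
# Print bigger or step closer and this number shrinks, and the calculated DoF shrinks with it.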

So here we go. Once that optical physics hits the sensor it's no longer optical, it's information. The sensor cannot capture the infinitely thin plane of focus made of sharp points. Instead, it captures everything between the two distances where "a circle" is the size of a pixel or smaller, so everything in that range is equally sharp, because there can't be anything sharper than a pixel. At that level, enlarging the image isn't going to decrease the DoF, only soften it, because there is no hidden information.
What do we call THAT THING? The sharpest area between the two distances where "a circle" meets the pixel?

254
... I thought that DoF is the thickness of the sharp focus plane your camera can capture (what is it called then?)...
... the thickness in reality, not in the picture.

...There is no such thing as an 'objective' DoF...

Do we need a new definition here? Because I'm pretty sure the OP was asking about THAT THING, not the CoC.

The plane of focus has no depth. Imagine it as a sheet of the thinnest paper, only much thinner. Everything in front of, and behind, that sheet of paper is less sharp than whatever is on the sheet of paper. Because of limitations of our eyesight, something very close to the paper might look in focus, but it isn't. At some point, as you move toward the paper, things become more obviously out of focus; you have now surpassed your DoF/CoC criterion. But step back and you again can't see the differences, because your eyesight can't resolve them.

I agree that the focus plane of an optical image projection is thinner than it looks. That's what the CoC thing is all about. However, the sensor's resolution is limited and it has its smallest possible dot size, which is a pixel, and which is a constant for a given camera.

Maintain a reproduction size and viewing distance ratio such that the CoC (the point at which you can't see the difference between a point and a circle) stays below what your eye can resolve, and you can go as big, or as small, as you'd like.

No, you cannot do that. It could only apply to a camera with an infinite number of pixels. If a pixel is too big, then it becomes a square. If it's too small, then it disappears.
In all my statements I assumed that both the FF and APS-C sensors had the same pixel pitch. Otherwise, even same-format cameras (same sensor size) with different megapixel counts (like 12 vs 36) would have different DoF/CoC characteristics.

255
... I thought that DoF is the thickness of the sharp focus plane your camera can capture (what is it called then?)...
... the thickness in reality, not in the picture.

...There is no such thing as an 'objective' DoF...

Do we need a new definition here? Because I'm pretty sure the OP was asking about THAT THING, not the CoC.
