March 03, 2015, 10:17:13 PM

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - TrumpetPower!

Pages: 1 2 3 [4] 5 6 ... 64
Technical Support / Re: Grey card and spot metering
« on: April 22, 2013, 12:23:24 PM »
The histogram is built from the entire image. Unless the gray card fills 100% of the screen, you should expect to see all sorts of other stuff represented there.

Indeed, if you're filling only a linear 50% of the frame with the gray card, the area it covers is just 25% -- much less than half....

But, if you're using a gray card as you describe, the histogram is pretty irrelevant unless you're watching for clipping of especially bright highlights -- in which case, the meter reading off the gray card becomes irrelevant.

You might also consider a handheld light meter instead of a gray card. Logically, they function the same way, but the gray card has disadvantages -- not the least of which is that even the low levels of gloss on a gray card can wreak havoc with your camera's meter readings. Try it -- step outside on a sunny day, stand in a single spot with the Sun at your back, hold the card at arm's length, and rotate the card. See how much its brightness changes, even to your eye, depending on whether or not it's pointing near or away from the Sun! Now, which of those angles is the "right" one for setting exposure?

A meter doesn't have those problems, due to its design. Stand in the place of your subject (or as close as you can get), aim the dome at the camera, press the button, and you've got your exposure.



Oh, and those defending their Rebel-grade 6D's as being superior to the 7D ...  :P

Oh. I see.

You think that the 7D has superior image quality to the 6D.

Well, enjoy that fantasy world you've built for yourself. But do be careful at zebra crossings....



So, it seems, yet again, a bit of clarification is in order with respect to our latest Canon / Nikon flame fest.

First, all that white balance does is set linear multipliers for the individual channels. In order for a physical object in the real world to appear to have a neutral color under all light conditions, it must reflect all wavelengths of light equally. Most objects don't, which is why they have color. But a few objects, including some common and inexpensive ones, do. PTFE / Teflon does; get a roll of thread tape, and anything that appears to be a different color from it isn't white. (If it looks brighter, it's got fluorescent whitening agents added to it -- very common with papers and fabrics.) Tyvek (a common material for un-tearable envelopes) also shares this property. Polystyrene / styrofoam does, as well, but it's not quite as bright as the other two.
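Since white balance is nothing more than a linear multiplier per channel, the whole operation fits in a few lines of Python. The gain values here are made up for illustration, not taken from any real camera:

```python
# White balance is just one linear multiplier per channel.
# These gains are illustrative, not values from any real camera.
WB_GAINS = (2.0, 1.0, 1.5)  # R, G, B

def apply_white_balance(rgb, gains=WB_GAINS):
    """Scale each channel linearly, clip to the valid [0, 1] range, and
    round away floating-point dust for readability."""
    return tuple(round(min(c * g, 1.0), 6) for c, g in zip(rgb, gains))

# A neutral gray patch as the sensor recorded it under warm light:
patch = (0.30, 0.60, 0.40)
print(apply_white_balance(patch))  # -> (0.6, 0.6, 0.6): the channels equalize
```

Note that the multipliers are applied to the linear sensor data, before any gamma curve.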

The spectral distribution of the light itself varies, and almost always even varies within the scene itself. There's much more blue in outdoor shadows than in direct sunlight; to understand why that's the case, look at the sky. As a result, a white object (such as a piece of PTFE) will result in an image recorded on your camera with a much higher ratio of blue to red and green when photographed in the shade than when photographed in direct sunlight.

Your eyes and brain, however, are wired to automatically re-interpret those color shifts on the fly, and the general perception is that the objects are the same color regardless of the actual light you're viewing them in. But, with a bit of practice, you can learn to see the differences in color from different light sources. And, for artistic effect, there's a lot to be said for slightly skewing your white balance in the direction of the actual light of the scene -- but that's another matter.

Most light sources are black-body radiators, and the color of the light emitted by a black-body radiator is very predictable and associated with a temperature. Heat something to 8,500°F (well past the boiling point of everything that comes to mind as I type) and it'll produce a glow very similar to sunlight. Heat it to a mere 4,000°F, which is what happens to the tungsten filament in an incandescent bulb, and you get the much redder color we know so well from indoor lighting.

When you set your camera's white balance to a color temperature, it uses a built-in lookup table (or whatever) to know that an object heated to that temperature will radiate light of such-and-such a distribution of colors and that, if the camera multiplies the three channels by these factors, they'll render neutral objects with equal RGB values. The catch, though, is that there aren't any perfect black-body radiators; though many objects are very close, all will have various bumps and dips in their spectral distributions.
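A minimal sketch of such a lookup table in Python. The preset temperatures are realistic ballparks, but the gain values are invented for illustration, and a real camera's table is far denser:

```python
# Hypothetical color-temperature presets: Kelvin -> (R, G, B) multipliers.
# The gains are invented; only the temperatures are realistic ballparks.
WB_TABLE = {
    3200: (1.2, 1.0, 2.1),  # tungsten: red-heavy light, so boost blue
    5200: (2.0, 1.0, 1.5),  # daylight
    7000: (2.4, 1.0, 1.2),  # shade: blue-heavy light, so boost red instead
}

def gains_for_temperature(kelvin):
    """Linearly interpolate R/G/B gains between the two nearest presets,
    clamping at the ends of the table."""
    temps = sorted(WB_TABLE)
    if kelvin <= temps[0]:
        return WB_TABLE[temps[0]]
    if kelvin >= temps[-1]:
        return WB_TABLE[temps[-1]]
    for lo, hi in zip(temps, temps[1:]):
        if lo <= kelvin <= hi:
            t = (kelvin - lo) / (hi - lo)
            return tuple(a + t * (b - a)
                         for a, b in zip(WB_TABLE[lo], WB_TABLE[hi]))

# Halfway between the tungsten and daylight presets:
print(tuple(round(g, 3) for g in gains_for_temperature(4200)))  # (1.6, 1.0, 1.8)
```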

But not all light sources are actually black-body radiators. Sodium vapor street lamps, for example, work by a completely different mechanism and only produce a single frequency of yellow light. Fluorescent lights work on a very similar principle, except they produce more frequencies of light -- but, again, generally in a pretty spiky distribution.

Your camera again has some pre-set values for some common light sources that again tells the camera to set a particular combination of linear multipliers that will result in a neutral object being rendered with equal RGB values. The catch here is that there's even more variation with non-black-body light sources.

That's where the manual white balance comes in. The idea is to take a picture of something that actually does have a flat reflective spectral response, and the camera calculates from that picture what multipliers are necessary to render it with equal RGB values. This gets you the closest of common methods, but it again has a catch: most objects that people use for white balance aren't very good candidates. That QP card that Mikael so loves to flog is a great example. For one, it's way too dark, meaning that the sample that the camera measures is going to have to average out the noise in the signal. That's especially a problem at higher ISO settings. But, worse, I'll bet you lunch that it doesn't have anywhere near as flat a spectral response as a $0.01 styrofoam coffee cup. Those things actually make far superior white balance targets to anything you can buy for less than a thousand bucks. And I mean that in absolute terms, too -- not just price / performance.
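The arithmetic behind that manual white balance is simple: measure a known-neutral patch, then solve for the multipliers that equalize the three channels. A sketch in Python, with made-up sensor readings:

```python
def wb_gains_from_patch(patch_rgb):
    """Given the average R, G, B of a known-neutral target, return the
    linear multipliers that equalize the channels (green normalized to 1)."""
    r, g, b = patch_rgb
    return (g / r, 1.0, g / b)

# Averaged sensor values over a styrofoam-cup patch shot under warm light
# (made-up numbers for illustration). Averaging many pixels is what beats
# down the noise -- and the brighter the patch, the cleaner the average.
measured = (0.50, 0.40, 0.20)
print(tuple(round(x, 3) for x in wb_gains_from_patch(measured)))  # (0.8, 1.0, 2.0)
```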

In the real world, though a coffee cup can get you very, very close to a perfect white balance, the only way to actually get it truly perfect is through ICC profiling in a process much too involved for me to discuss here. But the idea is to shoot not just a single target with a single color that's hopefully white, but rather to shoot a chart with a great many colors and to let special software calculate the various distributions of everything to figure out the actual value. Think of the difference between LensAlign and FoCal for an analogous comparison.

So, it's very reasonable to expect minor differences between white balancing algorithms from different cameras, especially considering the differences between sensor designs and what-not. But it's not reasonable to expect those algorithms to differ by more than a minor amount, and Nikon cameras are notorious for royally screwing up white balance in exactly the way the original poster has discovered. I'd go so far as to suggest that the cameras are unacceptable as shipped, though the problems should vanish in an ICC managed workflow.



An 18MP sensor can record finer details than an 8MP sensor (of similar size), therefore the 18MP image can be enlarged more whilst still retaining more detail than the 8MP image. So, "enlarging" to a fixed size is a flawed normalisation method. Besides, DoF is a characteristic of lenses, not sensors ... so again normalising different sensor sizes through enlargement to a fixed size produces flawed results.

There is a great deal of misunderstanding in that paragraph. Too much for me to feel like trying to help you understand. Suffice it to state that you're conflating a great many variables with reckless abandon.

I'll just leave the matter (for now) by noting that this is all stuff covered in elementary photography (and physics / optics) textbooks and suggest that you take a trip to your local library (or bookstore) to get up to speed.



Why should a photographer care more about comparing sensors than comparing camera systems? In what situation is the type of comparison you insist on making ever relevant to a photographer using a camera to create photographs?

Then why are members of the CanonRumors forum complaining about the "same" 18MP sensor in the latest cameras, or discussing if the upcoming 7D2 will have a 21MP sensor?

A) Ask those who're complaining; and
ii) I thought we were comparing two different formats -- APS-C v 135 -- not different resolutions of the same format?

Somebody here is very, very confused, and I don't think that person is me....


To come out with a meaningful figure across sensor sizes you have to standardise output. The "standard" DOF calculator assumes an 8"x10" print viewed at 12" and average eyesight. If you magnify a sensor's output more to get to that 8x10, then you have to use a smaller CoC value, because you are enlarging it more.

Oh, I understand. It's totally wrong and completely skews the results, but I do understand.

If you do, then nobody else does.

Let's try again.

The general idea in photography is that you will choose a particular location to achieve the perspective you wish, and then use your equipment to make an exposure that will optimize the quality of a print of a certain size.

If I read you right, that's completely bass-ackwards, and you expect the photographers to change perspective, composition, and now print size just so that they don't unfairly stack the deck against your favorite piece of equipment.

Perhaps you could take a step back and explain to the rest of us what, exactly, it is that you think you're comparing and why?




Out of context quote.

Let's not be silly!

So then enlighten us, oh not-silly one.

Why should a photographer care more about comparing sensors than comparing camera systems? In what situation is the type of comparison you insist on making ever relevant to a photographer using a camera to create photographs?


By trying to keep the framing and/or DoF the same, you are changing lenses. But the objective is not to compare lenses, but sensors.

Strange. I thought we were photographers trying to compare camera systems.

Or do your clients consider it reasonable for you to hand them portraits with the head chopped off at the mouth and the body chopped off at the navel, because that's what happens when you switch to a crop format camera?



Now let's say this new sensor has noise performance as good as the 5D Mark II.

Not a chance.



First: an APS-C sensor is smaller than a FF sensor, and thus a lower amount of light hits the total sensor surface. However, the framing is also different, and the image projected onto the APS-C sensor has exactly the same scale as on the FF sensor, because that is defined by the lens, not the sensor. Consequently, the amount of light per unit of surface is the same on both sensors, and the noise performance cannot depend on the sensor size.



The framing is the same only if you multiply the focal length by the ratio of the diagonals of the two sensors.

And the amount of light collected by the sensors is only the same if you perform a similar modification of the apertures.

To make the math simpler, let's start with a 100mm f/2 lens mounted to a 135 format ("full frame") camera. And we'll try to figure out what we need to get an equivalent image with a 4/3 camera, which has close enough to a 2x "crop factor" as makes no difference.

If you just mount the 100mm lens to the 4/3 camera, you'll only get the inner quarter of the image that you would have gotten on the 135 camera. On the 135 camera, you can get the exact same image by cropping out all but the inner quarter.

To match the field of view, we need to use a lens of half the focal length -- a 50mm lens. But what aperture?

The 100mm f/2 lens has a 100 / 2 = 50mm aperture. All the light headed to the lens that falls within that 50mm diameter circle makes its way to the sensor. To gather the same amount of light, we need a 50mm lens with a 50mm aperture. That means that our 4/3 camera needs a (50 / 50 = 1) 50mm f/1 lens to gather as much light as a 135 format camera with a 100mm f/2 lens. In both cases, both cameras are capturing all the light that falls onto a 50mm circle.
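The equivalence arithmetic above fits in a few lines. This is just a restatement of the worked example, with illustrative names:

```python
def equivalent_lens(focal_mm, f_number, crop_factor):
    """Translate a lens on the larger format into its equivalent on the
    smaller one: same field of view and the same physical aperture
    diameter, hence the same total light and comparable depth of field."""
    eq_focal = focal_mm / crop_factor
    aperture_mm = focal_mm / f_number     # physical aperture diameter
    eq_f_number = eq_focal / aperture_mm  # same diameter, shorter focal length
    return (eq_focal, eq_f_number)

# The example from the text: 100mm f/2 on 135 format, 2x-crop 4/3 camera.
print(equivalent_lens(100, 2.0, 2.0))  # -> (50.0, 1.0): a 50mm f/1 lens
```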

Because this is all simple geometry, it so happens that the depth of field, background blur, and the rest are also comparable.

But...even if everything else is equal, the larger format still retains a number of advantages, mostly because the image on the sensor doesn't need to be magnified as many times in absolute terms to print size. Imagine making a contact print with 8" x 10" film and comparing it with an 8" x 10" enlargement even from 4" x 5" film to understand why.



I have both the 135 F2 and the 200 F2, and with the 200 and IS you can easily shoot 1/30 or 1/50 all night long. If Sigma can do this, it would be like a mini 200mm F2 (and probably a lot lighter).

Actually, it'd be a mini 200mm f/2.8.

135 / 1.8 = 75mm
200 / 2.8 = 71mm
200 / 2.0 = 100mm

A mini 200mm f/2 would be a 135 f/1.4. And that would not at all be small, lightweight, cheap, or discreet. Imagine the bastard love child of an 85 L and a 200 f/2.
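The quantity behind all of these figures is the physical aperture diameter: focal length divided by f-number. A quick check in Python reproduces the numbers above:

```python
def aperture_diameter_mm(focal_mm, f_number):
    """Physical (entrance pupil) diameter: focal length over f-number."""
    return focal_mm / f_number

for focal, f in [(135, 1.8), (200, 2.8), (200, 2.0), (135, 1.4)]:
    print(f"{focal}mm f/{f}: {aperture_diameter_mm(focal, f):.0f}mm aperture")
```

The 135mm f/1.4's roughly 96mm pupil is what would make it the "mini 200mm f/2."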

But this rumored lens, if it becomes real, would still be quite impressive.


Pop quiz. Which gathers more light: a 1DX with a Canon 50mm f/1.0 L mounted to it, or the Hubble with its distinctively unimpressive f/24 aperture? (57.6m focal length / 2.4m aperture) Which is going to generate the images with the least noise?

You are comparing lenses, not cameras. A 50mm f/1.0 lens mounted on a 300D "gathers" the same amount of light as when it is mounted on a 1DX.

The lens may well gather the exact same amount of light regardless of what, if anything, is attached to the rear mount.

But that's completely irrelevant when what actually is attached to the rear mount throws away half of the light that the lens gathers.

Or are you somehow under the mistaken impression that every APS-C camera has an invisible Metabones Speed Booster attached to it? You do know that the "crop" part of "crop sensor camera" means that the borders get cropped away and the light that the lens would have projected onto them gets absorbed by the black interior of the camera, don't you?



What does HCB stand for?

The man who defined the photographic style you're emulating, whether consciously or otherwise: Henri Cartier-Bresson.



Size as in physical dimensions, not MP count.  ... better high ISO performance.

Again the sensor is used as the only criterion. And if that be, the 6D is better than the 1DX ... on paper. What happens when you look at the whole camera, as a unit?

When you do so, you discover that the 6D has significantly better image quality in all circumstances than any APS-C camera, and dramatically better image quality (and focussing performance) in low light situations. It's not designed as a speed demon, but, aside from that, it stomps all over all the APS-C cameras. Only the 5DIII and 1DX are better.

... it's the ability of that physically larger sensor to collect more total light than an APS-C sensor that leads to the better high ISO performance.

Wrong. It's the larger photo-sites of the sensor. If you should make an 8MP APS-C sensor using the same technology as that of the sensor in the 1DX/5D3/6D, then you'll get the same "better" high-ISO performance.

And, yet, you're wrong. Couldn't be more wrong.

Pop quiz. Which gathers more light: a 1DX with a Canon 50mm f/1.0 L mounted to it, or the Hubble with its distinctively unimpressive f/24 aperture? (57.6m focal length / 2.4m aperture) Which is going to generate the images with the least noise? (For the sake of this discussion, assume the 1DX is strapped to the wrist of an astronaut servicing the Hubble -- just to get out of the way any pointless side discussions about atmospheric interference.)
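For anyone who wants to check the quiz arithmetic: total light gathered scales with the area of the entrance pupil, and the pupils here are 50mm (the 50mm f/1.0) versus 2.4m (the Hubble). A sketch:

```python
import math

def pupil_area_cm2(aperture_diameter_mm):
    """Area of the entrance pupil -- the quantity that fixes total light gathered."""
    radius_cm = aperture_diameter_mm / 10 / 2
    return math.pi * radius_cm ** 2

canon = pupil_area_cm2(50 / 1.0)      # 50mm f/1.0: a 50mm pupil
hubble = pupil_area_cm2(57600 / 24)   # f/24 at 57.6m focal length: a 2.4m pupil
print(round(hubble / canon))  # -> 2304: the Hubble gathers over 2,300x the light
```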



Yes, all nice. But i am not worried about the need of such a lens. More about the possibility to even build it! (For a price someone can pay)

A 135mm f/1.8 lens has the same size physical aperture as a 200mm f/2.8 lens. I'm pretty sure this lens would be cheaper than Sigma's $1,300 70-200mm f/2.8 OS, seeing how it's a much simpler design. I'd also guess that it'd be cheaper than Canon's $1,000 135mm f/2 L, because that tends to be how Sigma rolls. I'd personally guess somewhere in the $800 range.


