
Show Posts



Messages - jrista

1021
Quote
Sony's OWN SITE says the Sensor output is 14-bit. The sensor is an Exmor. Exmor uses CP-ADC ON-DIE. The last output of the sensor is FROM the ADC.

Therefore...the A7s IS 14-BIT!
...

Yup. As you say, Sony shows they've got a 14-bit sensor output delivering 15.3 stops of DR. Interesting.

I can see you and neuro and a whole host of other web folks getting ready to point out how it is impossible. When DxO measures it and says they've got 15.3 stops of DR, DxO will be telling lies and faking it; then someone will go out and do real-world tests that also corroborate it, and you'll all still be saying that it is a lie.

I want to see what they've got to show before I denounce them. After all, they're already getting more than 14 stops of DR (DxO says 14.8?) with the Nikon D800, so why should another .5 stop of DR be unreasonable? It would be interesting to see how a real-world test of the DR of those cameras turned out, kind of aligned with what the Zacuto(?) folks did with various cameras for video.

Obviously it isn't a linear conversion (even now) but I'm kind of curious to know what's going on.

A curve of some kind is pretty obvious, whether it is a gamma curve or something else ...

Well, it's no surprise that you buy into DXO's bull. There are two values on DXO's site for DR. One is a measure, as in something actually MEASURED from a REAL RAW file. The other is an EXTRAPOLATION. It isn't even a real extrapolation, it is just a number spit out by a simple mathematical formula...they don't actually even do what they say they are doing.

The first of these is Screen DR. Screen DR is the ONLY actual "measure" of dynamic range that DXO does. It is the SINGLE and SOLE value for DR that is actually based on the actual RAW data. In the case of the D800....do you know what Screen DR is? (My guess is not.)

The other of these is Print DR. Print DR is supposedly the dynamic range "taken" from a downsampled image. The image size is an 8x12 "print", or so DXO's charts say. As it actually happens to be, and this is even according to DXO themselves...Print DR is not a measure at all. It isn't a measurement taken from an actually downsampled image. You know what it is? It is an extremely simple MATHEMATICAL EXTRAPOLATION based on...what? Oh, yup...the only actual TRUE MEASURE of dynamic range that DXO has: Screen DR. Print DR is simply the formula Print DR = Screen DR + log2(sqrt(N/N0)), where N is the actual image size in pixels and N0 is the supposed downsampled print size. The formula is rigged to guarantee that "Print DR" is higher than Screen DR...not even equal to, always higher. And, as it so happens, potentially 100% unrelated to reality, since it is not actually measured.
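Just to show how that normalization plays out, here's a quick Python sketch of the formula (the 8.6 MP reference size is my assumption based on the 8x12" print figure; DXO's exact constant may differ, and the numbers below are purely illustrative):

Code:
import math

def print_dr(screen_dr, n_pixels, n_ref=8_600_000):
    """DXO-style 'Print DR': Screen DR plus the theoretical gain
    from downsampling n_pixels down to an n_ref 'print' size."""
    return screen_dr + math.log2(math.sqrt(n_pixels / n_ref))

# Example: a 36 MP sensor that measures 13.2 stops of Screen DR
print(round(print_dr(13.2, 36_000_000), 2))  # ~14.23 stops -- never lower than Screen DR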

DXO doesn't even have the GUTS to ACTUALLY downsample real images and actually measure the dynamic range from those downsampled images. They just run a mathematical formula against Screen DR and ASSUME that the dynamic range of an image, IF they had downsampled it, would be the same as what that mathematical value says it should be.

Print DR is about as bogus as "camera measurement 'science'" can possibly get. It's a joke. It's a lie. It's bullshit. The D800 does not have 14.4 stops of DR, as DXO's Print DR would indicate. The Screen DR measure of the D800? Oh, yeah...it's LESS than 14 stops, as one would expect with a 14-bit output. It's 13.2 stops, over ONE FULL STOP less than Print DR. The D600? Says Print DR 14.2, but Screen DR is 13.4. D610? Print DR 14.36, but Screen DR 13.55. D5300? Print DR 13.8, but Screen DR 13. A7? Print DR 14, but Screen DR 13.2. A7s? Print DR 14 but Screen DR 13. NOT ONE SINGLE SENSOR with a 14-bit ADC output has EVER actually MEASURED more than 14 stops of dynamic range. That's because it's impossible for a 14-bit ADC to put out enough information to allow for more than 14 stops of dynamic range. There simply isn't enough room in the bit space to contain enough information to allow for more than 14 stops...not even 0.1 more stops. Every stop is a doubling. Just as every bit is a doubling. Bits and stops, in this context, are interchangeable terms. In the first bit you have two values. With the second bit, your "dynamic range" of number space doubles...you now have FOUR values. Third bit, eight values. Fourth bit, sixteen values. Fifth bit, thirty-two values. To begin using numeric space beyond what the 14th bit allows, which would be necessary to start using up some of the 15th stop of dynamic range, you need at least 15 bits of information. It's theoretically, technologically, and logically impossible for any camera that uses a 14-bit ADC to have more than 14 stops of dynamic range.
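To make the doubling argument concrete, here's a trivial illustration in Python (nothing camera-specific, just the arithmetic):

Code:
# Each additional bit doubles the number of representable levels,
# exactly as each additional stop doubles the luminance range.
for bits in range(1, 15):
    print(f"{bits:2d} bits -> {2 ** bits:6d} levels")
# 14 bits tops out at 16384 levels, so a linear 14-bit encoding cannot
# span more than 14 doublings (stops) between its smallest and largest codes.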

Here is another fact about dynamic range. Dynamic range, as most photographers think about it these days, is the number of stops of editing latitude you have. While it also has connotations regarding the amount of noise in an image, the biggest thing that photographers think about when it comes to dynamic range is: How many stops can I lift this image? We get editing latitude by editing RAW images. RAW. Not downsampled TIFFs or JPEGs or any other format. RAW images. How do we edit RAW images? Well...as RAW images. There IS NO DOWNSAMPLING when we edit a RAW image. Even if there were...who says that we are all going to downsample our images to an 8x12" print size (3600x2400 pixels, or 8.6mp)? We edit RAW images at full size. It's the only possible way to edit a RAW image...otherwise, it simply wouldn't be RAW, it would be the output of downsampling a RAW to a smaller file size...which probably means TIFF. Have you ever tried to push the exposure of a TIFF image around the same way you push a RAW file around? You don't get even remotely close to the kind of shadow lifting or highlight recovery capabilities editing a TIFF as you do a RAW. Not even remotely close. And the editing latitude of JPEG? HAH! Don't even make me say it.

Therefore, the ONLY valid measure of dynamic range is the DIRECT measure, the measure from a RAW file itself, at original size, in the exact same form that photographers are going to be editing themselves. Screen DR is the sole valid measure of dynamic range from DXO. Print DR is 100% bogus, misleading, fake.

It doesn't matter what Sony does in their BIONZ X chip. The sensor output is 14-bit RAW. The only thing their BIONZ chip can do is...the same thing YOU do. They can lift shadows, and compress highlights. They can shift exposure around and reduce noise by applying detail-softening noise reduction algorithms. But then, well, then you don't actually have a RAW file anymore. You have a camera-modified file. With Sony's propensity for using a lossy compression algorithm in their RAWs, you don't even get full 14-bit precision data per pixel, and that fact has shown up in many cases when people go to edit their Sony RAWs in post. The compression artifacts can be extreme. I find it simply pathetic that Sony, with all this horsepower under their thumb, would completely undermine it all by storing their RAW images in a lossy compressed format. It completely invalidates the power of their sensors, and speaks to the fact that Sony is probably just as schizophrenic internally as Nikon is. That will lead to inconsistent products and product lines, poor product cohesion, lackluster design for OTHER aspects of their cameras beyond the sensor, etc. We're already seeing many of these problems with Sony cameras. Their sensors may be good, but how Sony themselves are using their sensors is crap.

1022
My comments are based on my (imperfect) memory of science podcasts and other science journalism I've encountered in the last few years.  If you have contradictory info I'd love to see a reference.

Regarding eye-witness accounts...the reason they are unreliable is people are unobservant. There are some individuals who are exceptionally observant, and can recall a scene, such as a crime, in extensive detail.

My understanding is that new research has shown this to be wrong.  There are a few "savant" types who have very precise/correct memory function, but for "neurotypical" (i.e. "normal") people, this is not so.

I'm not saying everyone can be a savant. I'm saying everyone can learn how to WORK their memory to improve it. I did it...I used to have the same old poor memory that everyone had, I forgot stuff all the time, couldn't remember accurately. By thinking about, exercising, and processing sensory input more actively, I can intentionally bring up other memories that I want associated to the new ones I'm creating. Purposely recalling memories in certain ways and reviewing after creating them has helped me strengthen those memories, improving my ability to accurately recall the original event, be it sight, sound, smell, touch, taste or all of the above.

Whatever current research shows, memory is NOT simply some passive process we have absolutely no control over. It's also an active process that we CAN control, and we can improve our memory if we choose to...either only of specific events of importance, or we can train ourselves to process input in a certain way such that most input is more adequately remembered and strongly associated.

Quote
I believe the brain only fills in information if it isn't readily accessible. I do believe that for the most part, when we see something, the entirety of what we see is recorded.

Again, my understanding is that recent research shows that the adage "seeing is believing" has it backwards: it should be "believing is seeing."  The brain does not record raw image info at all, but constructs a reality that incorporates visual data with existing beliefs and expectations.  It's that highly-processed "reality" that's recorded.  As an example, back in 2004 there was that video tape of a purported Ivory-Billed Woodpecker.  Subsequent analysis showed that it was almost certainly the rather common pileated woodpecker.  The "eyewitnesses," however, recall seeing detail that would clearly distinguish it as an IBW.  Even if it was a pileated, those witnesses may truthfully and genuinely believe they saw those distinguishing characteristics.

I don't think any of that contradicts the notion that our brains store much or most of everything that goes into them. I don't deny that our beliefs and desires can color HOW we remember...as they could control what we recall. Remember, memory is often about association. If the guy watching the woodpecker was vividly remembering an IBW at the time (would have been an amazing find, for sure! I really hope they aren't extinct, but... :'(), that wouldn't necessarily change the new memories being created, but it could overpower the new memories with the associations to old memories of IBW. Upon recall...you aren't just recalling the new memories, but things associated with them as well. What you finally "remember" could certainly be colored by your desires, causing someone to misremember. Good memory is not necessarily good recall, and it certainly doesn't overpower an individual's desires for something to be true. All that gets into a level of complexity about how our brains work that goes well beyond any courses on the subject I've ever taken.

BTW, I am not talking about savants who have perfect memory. Eidetic memories, or whatever you want to call them, are a different thing than what I'm talking about. Eidetic memories are automatic; it's more how those individuals' brains work, maybe a higher and more cohesive level of processing than normal individuals have. That doesn't change the fact that you CAN actively work with your memory to improve it, considerably. I'm not as good at it these days as I used to be...severe chronic insomnia has stolen a lot of abilities like that from me, but when I was younger, I used to have an exceptional memory. I remembered small details about everything because I was always working and reviewing the information going in. Before I took that class, my memory was pretty average; after, and still largely since, it's been better than average to truly excellent.

That has to do with memory creation itself, though...it doesn't mean my memories can't be colored by prior experiences or desires. I think it lessens the chance of improper recall, but it's still possible to overpower a new memory with associations to old ones, and over time, what is recalled may not be 100% accurate (again, not talking about eidetic memories here, still just normal memory.) There have been cases of obsessive-compulsive individuals having particularly exceptional memory, on the level of supposed eidetics and in some respects better. For the very, very rare individual, memories become their obsession, and because it's an obsession, every memory is fully explored, strengthened and associated to a degree well beyond normal. Recall is very fast, and the details can be very vivid. It isn't just image-based either; all sensory input can be remembered this way (sounds, smells, etc.) With such strong associations and synaptic strengthening, such an individual's memories are effectively permanent as well. The difference would be that the obsessive-compulsive chooses what memories to obsess over...so their recall isn't necessarily as complete as an eidetic's (whose memory for imagery is more automatic.)

1023
Let's turn this around.  Can you provide a reference showing the a7S has a 16-bit ADC?

Nope. The specs for the a7s only quote the bit depth for the image files, not the ADC.

Quote
Since they are claiming >15 stops of DR

Note that at present the claim for 15.3 stops of DR comes from a 3rd party ... even I'm dubious on that. I'll wait and see what Sony says and more importantly, what DxO can measure.

Quote
it's certainly in their best interest to not make much of the fact that they're using a 14-bit ADC which cannot deliver the actual DR they claim, meaning they're merely cooking the RAW file to include fabricated data.

Or maybe they're using a spreading function (i.e. applying a curve to the sensor feed) rather than doing a linear conversion?

Did you completely miss the post where I linked directly to Sony's site that SHOWS the sensor output (which CONTAINS the ADC) is 14-bit? How convenient...that you read my first post, and just magically didn't happen to see my second post. Sony's OWN SITE says the Sensor output is 14-bit. The sensor is an Exmor. Exmor uses CP-ADC ON-DIE. The last output of the sensor is FROM the ADC.

Therefore...the A7s IS 14-BIT! I love how you DEMAND I PROVE things to you, then simply ignore the FACTS when I smack you upside the face with them.

There is absolutely ZERO question about it. The facts are the facts. The A7s is still "just" an Exmor, and Exmors use 14-bit ADCs. Here, I'll smack you upside the face with them again:

From the horse's mouth: http://discover.store.sony.com/sony-technology-services-apps-NFC/tech_imaging.html#BIONZ

Quote
16-bit image processing and 14-bit RAW output
16-bit image processing and 14-bit RAW output help preserve maximum detail and produce images of the highest quality with rich tonal gradations. The 14-bit RAW (Sony ARW) format ensures optimal quality for later image adjustment (via Image Data Converter or other software).



1024
If Sony had come out with a sensor with on-die 16-bit ADCs, that would have been far, far bigger news than the fact that it can do ISO 409k. No one really cares about ISO 409k. The noise levels at that ISO are a simple matter of physics when it comes to stills.

When it comes to the A7s video performance, their DSP, BIONZ X, is the bigger news, since it's doing a significant amount of processing on the RAW signal to reduce noise at ultra high ISO settings. The BIONZ X image processor does 16-bit IMAGE PROCESSING, however the sensor output is 14-bit, and the output of the image processing is ALSO 14-bit. There is a page on Sony's site somewhere that describes this; as soon as I find it, I'll link it.

The BIONZ X processor is the same basic thing as Canon's DIGIC and Nikon's EXPEED. It's the in-camera DSP. Canon's DIGIC 6 has a lot of similar capabilities to Sony's BIONZ X. They both do advanced noise reduction for very clean high ISO JPEG and video output. They both do high quality detail enhancement as well. I don't believe Canon's DIGIC 6 does 16-bit processing; it's still 14-bit as far as I know. The use of 16-bit processing can help maintain precision throughout the processing pipeline, however since the sensor output is 14-bit, you can never actually increase the quality of the information you start with. That would be like saying that when you upscale an image in Photoshop, you "extracted" more detail out of the original image. No, you don't extract detail when you upscale...you FABRICATE more information when you upscale.

Same deal with BIONZ X...during processing, having a higher bit depth reduces the impact of errors (especially if any of that processing is floating point), however it cannot create information that you didn't have to start with. That is evident in the fact that Sony is still outputting a 14-bit RAW image, instead of a 16-bit RAW image, from their BIONZ X processor.

UPDATE:

From the horse's mouth: http://discover.store.sony.com/sony-technology-services-apps-NFC/tech_imaging.html#BIONZ

Quote
16-bit image processing and 14-bit RAW output
16-bit image processing and 14-bit RAW output help preserve maximum detail and produce images of the highest quality with rich tonal gradations. The 14-bit RAW (Sony ARW) format ensures optimal quality for later image adjustment (via Image Data Converter or other software).



Higher precision processing, but still 14-bit RAW. The fact that the raw sensor output is 14-bit means that the dynamic range of the system cannot exceed 14 bits. The use of 16 bits during processing increases the working space, so when Sony generates a JPEG or video, it can lift shadows and compress highlights with more precision and less error. I suspect their "15.3 stops of dynamic range" is really referring to the useful working space within the 16-bit processing space of BIONZ X. The simple fact of the matter, though, is that when it comes to RAW...it's RAW. Your dynamic range is limited by the bit depth of the ADC. Since the ADC is still 14-bit, and ADC occurs on the CMOS image sensor PRIOR to processing by BIONZ X, any processing Sony does in-camera can do no more, really, than what you could do with Lightroom yourself.
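As a rough illustration (purely hypothetical numbers, not Sony's actual pipeline), promoting 14-bit data into a wider working space before a shadow lift reduces rounding error, but it doesn't add any new scene information:

Code:
import numpy as np

# Hypothetical sketch: 14-bit sensor codes promoted to float for processing.
raw14 = np.array([1, 16, 1024, 16383], dtype=np.uint16)   # 14-bit input codes
work = raw14.astype(np.float32) / 16383.0                 # wider working space
lifted = np.clip(work * 2.0 ** 2, 0.0, 1.0)               # +2 EV shadow lift
out14 = np.round(lifted * 16383).astype(np.uint16)        # back to 14-bit RAW
print(out14)  # [4, 64, 4096, 16383] -- values shift, the 14-bit ceiling doesn't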

1025
...

I'm still waiting for your reply to me asking for a reference (you know, a URL) to something that supports your claim of the Sony a7s only having a 14-bit ADC ...

B&H Photo's product page, under specifications:

http://www.bhphotovideo.com/c/product/1044728-REG/sony_ilce7s_b_alpha_a7s_mirrorless_digital.html

It took me about 3 seconds to find that. I just searched for "Sony A7s Bit Depth", and that was one of the first five links (the rest, for some reason, were all about the A77...)

It states:


1026
Even though you don't think you're moving your eyes when looking at clouds and tree bark, you more than likely are, just a small amount, and at hard to notice speeds. You're so used to this process that it's not something you can be sure that you would notice, since you normally don't. Your brain combines all these part images together to give you what you perceive as "the world." It is also very possible that you aren't truly seeing the bark. You've seen it many times before, even if you don't realize it. Your brain will often fill in information that it isn't actually "seeing" at that very moment because it either knows that it is there, or it believes that it is there. This is one of the reasons why eye witness accounts of sudden and traumatic crimes can be notoriously inaccurate. As an example, a person may honestly believe that a mugger has a gun in his one hand that is down at his side near his (the robber's) pocket, but in reality what is there is a dark pattern on his jacket pocket, either from the jacket's colour/style, or from a shadow. The witness isn't lying, he was just so afraid for his/her life that he/she imagined that there was a gun there.

You're talking about a different kind of eye movement, but you are correct, our eyes are always adjusting.

Regarding eye-witness accounts...the reason they are unreliable is people are unobservant. There are some individuals who are exceptionally observant, and can recall a scene, such as a crime, in extensive detail. But how observant an individual is, like many human things, falls onto a bell curve. The vast majority of people are barely observant of anything not directly happening to them, and even in the case of things happening to them, they still aren't particularly clearly aware of all the details. (Especially this day and age...the age of endless chaos, immeasurably hectic schedules, and the ubiquity of distractions...such as our phones.)

I believe the brain only fills in information if it isn't readily accessible. I do believe that for the most part, when we see something, the entirety of what we see is recorded. HOW it's recorded is what affects our memories. For someone who is attuned to their senses, they are evaluating and reevaluating the information passing into their memories more intensely than the average individual. The interesting thing about memory is that it isn't just the act of exercising it that strengthens it...it's the act of evaluating and associating memories that TRULY strengthens them. Observant individuals are more likely to be processing the information their senses pick up in a more complex manner, evaluating the information as it comes in, associating that new information with old information, and creating a lot more associations between memories. A lot of what gets associated may be accidental...but in a person who has a highly structured memory, with a strong and diverse set of associations between memories, one single observation can create a powerful memory that is associated to dozens of other powerful memories. The more associations you have, the more of your total memory gets accessed by the brain, and therefore strengthened and enhanced, compared to when you have fewer associations.

I actually took a course on memory when I first started college some decade and a half ago. The original intent of the course was to help students study, and remember what they study. The course delved into the mechanics and biology of sensory input processing and memory. Memory is a multi-stage process. Not just short term and long term, but immediate term (the things you remember super vividly because they just happened, but this memory fades quickly, in seconds), short term, mid term, and long term (and even that is probably not really accurate, it's still mostly a generalization). Immediate term memory is an interesting thing...you only have a few "slots" for this kind of memory. Maybe 9-12 at most. As information goes in, old information in this part of your memory must go out. Ever have a situation where you were thinking about something, were distracted for just a moment, but the thing you were thinking about before is just....gone? You can't, for the life of you, remember what it was you were thinking about? That's the loss of an immediate term memory. It's not necessarily really gone...amazingly, our brains remember pretty much everything that goes into them, it's just that all the memories we create are not ASSOCIATED to other things that help us remember them. It's just that the thing you were thinking about was pushed out of your immediate-mode memory by that distraction. It filled up your immediate mode memory slots.

Your brain, to some degree, will automatically move information from one stage to the next, however without active intervention on your part, how those memories are created and what they are associated with may not be ideal, and may not be conducive to your ability to remember it later on. The most critical stage for you to take an active role in creating memories is when new information enters your immediate term memory. You have to actively think about it, actively associate it with useful other memories. Associations of like kind, and associations of context, can greatly improve your ability to recall a memory from longer term modes of memory storage. The more associations you create when creating new memories, the stronger those memories will be, and the longer they are likely to last (assuming they continue to be exercised). The strongest memories are those we exercise the most, and which have the strongest associations to other strong memories. Some of the weakest memories we have are formed when we're simply not in a state of mind to take any control over the creation of our memories...such as, say, when a thug walks in and puts a gun in someone's face. Fear and fight-or-flight responses, kicked into gear by a strong surge of hormones, can completely mess with our ability to actively think about what's going on, and instead...we react (largely out of 100% pure self-preservation, in which case if we do form memories in such situations...they are unlikely to be about anyone but ourselves, and if they are about something else, they aren't likely to be very reliable memories.)

It was one of the best courses I ever took. Combined with the fact that I'm hypersensitive to most sensory input (particularly sound, but sight and touch as well...smell not so much, but I had some problems with my nose years ago), the knowledge of how to actively work sensory input and properly use my memory has been one of the most beneficial things to come out of my time in college.

If you WANT to have a rich memory, it's really a matter of using it correctly. It's an active process as much as a passive one, if you choose to be an observant individual. If not...well, your recallable memory will be more like swiss cheese than a finely crafted neural network of memories and associations, and yes...the brain will try to fill in the gaps. Interestingly, when your brain fills in the gaps, it isn't working with nothing. As I mentioned before, we remember most of what goes in...it's just that most of the information is randomly dumped, unassociated or weakly associated to more random things. The brain knows that the information is there, it just doesn't have a good record of where the knowledge is. I don't think that really has to do with the way our eyes function...it has to do with how the brain processes and stores incoming information. Our eyes are simply a source of information, not the tool that's processing and storing that information.

Probably the only real, reliable way to conduct an experiment like looking at a very dark and a very bright thing at the same time and knowing that you didn't look at each separately would be to have a special camera/s closely monitoring your head, eyeballs, and pupils for any movement. It would also have to be an artificial or set up scene so that there was some symbol or something in the dark area that you would have to be able to identify without any movement. I honestly don't think "not moving" at all is possible without medical intervention such as somehow disabling a person's ability to move at all; body, neck, head, even eyeballs.

Not exactly a fun experiment. :)

You do move your eyes, a little. You can't really do the experiment without moving back and forth. The point is not to move your eyes a lot. If you look in one direction at a tree, then another direction at the cloud, you're not actually working within the same "scene". In my case, I was crouched down, the tree in front of me, the cloud just to the right behind it. Your whole field of view takes in the entire scene at once. You can move your eyes just enough to center your 2° foveal spot on either the shadows or the cloud highlights, but don't move around, don't move your head, don't look in a different direction, as that would mess up the requirements of the test. So long as "the scene" doesn't change...so long as the whole original scene you pick stays within your field of view, and all you do is change what you point your foveal spot at, you'll be fine.

To be clear, a LOT is going on every second that you do this test. Your eyes are sucking in a ton of information in tiny fractions of a second, and shipping it to the visual cortex. Your visual cortex is processing all that information to increase resolution, color fidelity, dynamic range, etc. So long as you pick the right test scene, one which has brightly lit clouds and a deep shaded area, you should be fine. In my experience, when using my in-camera meter, the difference between the slowest and fastest shutter speed is about 16 stops or so. I wouldn't say that such a scene actually had a full 24 stops in it...that would be pretty extreme (that would be another eight DOUBLINGS of the kind of tonal range I am currently talking about...so probably not a chance). But I do believe it is more dynamic range than any camera I've ever used was capable of handling in a single frame.

1027
Here is a little test, for anyone who is interested. This is how my eyes work...maybe it isn't the same for everyone else. On a fairly bright day, with some clouds in the sky, find a scene where you can see the clouds, as well as the deep shadows underneath a tree. Pine trees are ideal. In my case, I can see the bark of the tree and the dried pine needles under the tree very well, while simultaneously being able to see detail in the clouds.

Could you post a picture of this scene?  I'm having difficulty imagining how I can simultaneously (without moving my eyes) see into the dark depths of a stand of trees, while simultaneously seeing clouds.  The closest I can imagine is a brightly lit flower nearer to me than a stand of trees, but both along the same line-of-sight.

You move your eyes, just not a lot. The point is the scene should generally be static...you shouldn't be looking in one direction for the shadows, then turning around 180 degrees for the highlights. The point is that, while our eyeballs themselves, our retinas and the neurochemical process that resolves a "frame", may only be capable of 5-6 stops of dynamic range, our "vision", the biochemical process in our brains that gives us sight, is working with FAR more information than what our eyes at any given moment process. It's got hundreds if not thousands of "frames" that it's processing, one after the other, in a kind of circular buffer. It's gathering up far more color and contrast information from all the frames in total than each frame has in and of itself. The microscopic but constant movements are what give us our high resolution vision...it's like superresolution, so we get HDR and superresolution at the same time, all in the span of a second or two.

My point is that if I look at the deep shadows under a large tree, then a moment later flick my eyes to a cloud, then a moment later flick back, I can see detail in both. There is no delay, there is no adjustment period. My visual perception is that I can see detail in extremely bright highlights SIMULTANEOUSLY with seeing detail in extremely dark shaded areas. My "vision" is better than what my eyeballs themselves are capable of (which, really, last I checked, was only about 5-6 stops of dynamic range, and actually less color fidelity and resolution than what we actually "see" in our brains.) Our brains are doing a degree of processing that far outpaces any specific "device". Our vision is the combination of a device and a high powered HDR/superresolution crunching computer that does this amazing thing all at once.

1028
Landscape / Re: Deep Sky Astrophotography
« on: May 31, 2014, 05:05:47 PM »
Soulless, here are three samples of the same single light frame. Note that the first two have been downsampled by a factor of 6.5x, which has the effect of SIGNIFICANTLY reducing noise. I've included a 1:1 crop to show how much noise there is in one single frame. It is the noise levels that are the primary reason why you really have to take 50, 80, 100 frames and stack them...it's the only way to reduce noise to manageable levels with a DSLR.

With cooled CCD cameras like Bradbury's KAF-16803, you have significantly less dark current noise due to the sensor being cooled some 50°C below ambient, and less read noise. You don't need to stack as many subs to get a good result with a dedicated CCD, however you DO still need to stack.

Original Out-of-Camera Frame (blue due to optical light pollution filter):


Same frame color-corrected and stretched:


100% crop from frame to show noise:


The last sample here, as you can see, has a completely unacceptable level of noise. Noise drops by the square root of the number of frames stacked. So, to reduce the noise by a factor of two, you need to stack four subs. However, there is a LOT of noise in a single frame, and a 2x reduction in noise isn't remotely close to enough. To get a 3x reduction, you need nine frames...to get a 4x reduction, you need 16 frames...to get a 5x reduction in noise, you need at least 25 frames. If you are using a thermally regulated CCD, 25 frames might be getting to the point where noise is low enough to be acceptable..."MIGHT BE GETTING TO".

For a DSLR, 25 frames is never enough (even when the outside nighttime temps are around 0°C). At 50 frames, you reduce noise by about 7x. In my experience and opinion, for a DSLR like the 7D at spring and fall nighttime temperatures, 50 frames is the MINIMUM. During summer nighttime temps, at least 81 frames, but 100 (a full 10x reduction in noise) is preferable. I effectively need to double my exposures to reduce the noise in my North America Nebula image to a level I would deem acceptable and aesthetically pleasing.
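For anyone who wants to play with the numbers, here's a quick Python sketch of that square-root relationship (frame counts only; a real stack also depends on dark current, read noise, sky glow, etc.):

Code:
import math

def frames_needed(noise_reduction_factor):
    # Stacked noise falls as sqrt(N), so cutting noise by a factor k
    # takes roughly k squared light frames.
    return math.ceil(noise_reduction_factor ** 2)

for k in (2, 3, 5, 7, 10):
    print(f"{k}x lower noise -> {frames_needed(k)} frames")
# 2x -> 4, 3x -> 9, 5x -> 25, 7x -> 49, 10x -> 100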

1029
Clever wording Jrista. Since that conversation we had in (January? February?) Today I actually happened to have the afternoon off and knowing where this conversation was headed actually read the relevant chapters in "Principles of Neural Science" (absolutely fantastic book, not difficult to read at all).
The only direct reference they make to dynamic range is 10 stops.

Here is a little test, for anyone who is interested. This is how my eyes work...maybe it isn't the same for everyone else. On a fairly bright day, with some clouds in the sky, find a scene where you can see the clouds, as well as the deep shadows underneath a tree. Pine trees are ideal. In my case, I can see the bark of the tree and the dried pine needles under the tree very well, while simultaneously being able to see detail in the clouds.

Make sure you bring a camera along. Meter the deepest shadows under the tree using aperture priority mode, and set the ISO to 100. Then meter the brightest part of the clouds. Compute the difference via the shutter speed (which should be the only setting that changes as you meter.) In my experience, that is a dynamic range of 16-17 stops at least, if not more. My eyes have had no trouble seeing bright white cloud detail simultaneously with seeing detail in the depths of the shadows under a large pine tree. I do mean simultaneously...you want to stand back far enough that you can see both generally within the central region of your eye, and be able to scan both the shadows and the highlights of the scene without having to move your eyes much. The sun should be behind you somewhere, otherwise you're looking at significantly more dynamic range and your eyes WON'T be able to handle it.
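If you want to put a number on it, the arithmetic is just the base-2 log of the ratio of the two metered shutter speeds. A quick Python sketch (the readings below are made up purely for illustration):

Code:
import math

def scene_stops(shadow_shutter_s, highlight_shutter_s):
    # Same ISO and aperture for both readings, so each doubling of the
    # metered shutter speed corresponds to one stop of scene brightness.
    return math.log2(shadow_shutter_s / highlight_shutter_s)

# Hypothetical readings: 2 s metered in the deep shade, 1/8000 s on the clouds
print(round(scene_stops(2.0, 1 / 8000), 1))  # ~14.0 stops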

Whatever 9VIII's books may say, this is a real world test. Compare what your eyes SEE with what your camera meters. You'll be surprised how much dynamic range there is in such a simple scene, and the fact that your eyes can pick it all up in a moment...well, to me, that means our vision is certainly more than 10 stops of dynamic range "at once", and more than even a D800. The thing about a neuroscience book is that, whatever it may say, it can only be an estimate. They cannot actually measure the dynamic range of human vision, and at best they can only measure the basic neural response of the human EYE, which is not the same thing as vision. The eye is the biological device that supports vision, but vision is more than the eye.

1030

Exactly. Where did this come from? Talk about seeing the past through rose-tinted spectacles!

So much rubbish about what the 'eye can see'. That's like saying your camera lens gives better DR than another. The brain sees, so you could say we see in a form of HDR, or multiple exposures, as someone pointed out, rather than one single exposure.

If you produced a picture with the same contrast as we 'see' it would be very flat and unappealing. Even old artists added contrast in their paintings, often giving very dark, heavy shadows.

The Sony sensor is very good, but if you expose the Canon optimally the difference is generally academic in the vast majority of circumstances. However, if you have no understanding of exposure, the Exmor is better.


I disagree with you, heartily! :)
when you look at a scene, you tend to look around the scene and the rapid adjustments your eye makes allow you to see and interpret a wide natural DR.
If you don't map that effect in a large print, at least to some extent, then it's like staring at the brightest part and not really seeing the detail in the darker areas.  So if your eyeballs don't move, go ahead and shoot and print that way.
I produce images for people with articulated eyeballs. :)

This is correct in one sense. The eye is constantly processing, and has a refresh rate of at least 500 frames per second in normal lighting levels (under low light levels, it can be considerably slower, and under very bright levels it can be quite a bit faster.) That high refresh rate, more so than the movement of the eye, is what's responsible for our high moment-DR. We can see a lot more than 14 stops of DR in any given second, but that's because our brains have effectively HDR-tonemapped ~500 individual frames. :P

When it comes to print, you're not entirely correct. I've done plenty of printing. You have to be VERY careful when tweaking shadows to lift enough detail that they don't look completely blocked, but not lift so much that you lose the contrast. The amazing thing about our vision is that while we see a huge dynamic range, what we see is still RICH with contrast. When it comes to photography, when we lift shadows, we're compressing the original dynamic range of the image into a LOWER contrast outcome. With a D800, while technically you do have the ability to lift to your heart's content, doing so is not necessarily the best thing if your goal is to reproduce what your eyes saw. It's a balancing act between lifting the shadows enough to bring out some detail, but not so much that you wash out the contrast.

Canon cameras certainly do have more banding noise. However, just because they have banding noise does not mean you have to print it. After lifting, you can run your print copies through one of a number of denoising tools these days that have debanding features. I use Topaz DeNoise 5 and Nik Dfine 2 myself. Both can do wonders when it comes to removing banding. Topaz DeNoise 5 in particular is a Canon user's best friend, as its debanding is second to none, and it has a dynamic range recovery feature. You can also easily use your standard Photoshop masking layers to protect highlight and midtone regions of your images and only deband/denoise the shadows, and avoid softening higher frequency detail in regions that don't need any noise reduction at all.

This is a little bit of extra work, however you CAN recover a LOT of dynamic range from Canon RAW images. They use a bias offset, rather than changing the black point in-camera. As such, even though Canon's read noise floor is higher at low ISO than Nikon or Sony cameras, there are still a couple stops of recoverable detail interwoven WITHIN that noise. Once you deband...it's all there. You can easily get another stop and a half with debanding, and if you're more meticulous and properly use masking, you can gain at least two stops. That largely negates the DR advantage that Nikon and Sony cameras have. You won't have quite the same degree of spatial resolution in the shadows as an Exmor-based camera, but our eyes don't pick up high frequency details all that well in the shadows like that anyway, so at least personally, I haven't found it to be a significant issue.

There are benefits to having more DR in camera. Not the least of which is a simplified workflow...you don't have to bother with debanding, and you have better spatial resolution in the shadows. That said, if you ignore Canon's downstream noise contributors, their SENSORS are still actually quite good...the fact that you can reduce the read noise and recover another stop or two of usable image detail means their sensors are just as capable as their competitors. Their problem is really eliminating the downstream noise contributors. The secondary amplifier, the ADC, and even the simple act of shipping an analog signal across an electronic bus. Canon can solve most of those problems by moving to an on-die CP-ADC sensor design, similar to Exmor. They have the technology to do just that as well...they have a CP-ADC patent. They also have a number of other patents that can reduce dark current, adjust the frequency of readout to produce lower noise images (at a slower frame rate, say) or support higher frame rates (for action photography). Canon has the patents to build a better, lower noise, high dynamic range camera. It's really just a question of whether they will put those patents to work soon, or later...or maybe even not at all. (I'm pretty sure they have had the CP-ADC patent at least since they released the 120mp 9.5fps APS-H prototype sensor...which was years ago now.)

1031
..the eye can see much more than 24 stops across the entire range of light adaptation and pupil diameters.  It depends on your age and diet and such, but 30 is doable (but only 14 at a time, in good light, as I said).

Good explanation why Exmors are good enough (I'm happy with mine) and Canon is suffering inadequacy anxiety.  ;D

When the organic sensor from Fuji-Matsushita sees the light of day, it'll also likely be able to see in the shadows of dark holes at the same time.

EDIT - ADD LINK TO 15.3 STOPS OF DR ON SONY A7s
www.sonyalpharumors.com/sony-adds-silent-mode-and-15-3-stops-in-raw-via-fw-upgrade-on-the-new-sony-a7s/

Uhmmmm... isn't that still a 14b camera?...

even downsampled to 8MP, DxO-style... (what's the noise math again?)

It is a 14-bit ADC. I think this has to do with their in-camera image processing (which is kind of what the A7s is all about, and the reason it has such clean ultra-high ISO video). It's shifting the exposure around, lifting shadows and compressing highlights. I'm guessing that's where they get the "15.3 stops DR". It wouldn't be "sensor output RAW", though...the output from the sensor is 14-bit, so it would have to be limited to 14 stops of DR AT MOST (and there is always some overhead, some noise, so it would have to be LESS than 14 stops, i.e. 13.something.)

1032
Landscape / Re: Deep Sky Astrophotography
« on: May 29, 2014, 08:25:17 PM »
Thanks, guys! :)

@Kahuna, I bet the sky out there is AMAZING! I'm quite envious. I barely remember dark skies as a kid, when LP was much less than it is today, and when we lived pretty far out of town. But I wasn't as observant of the details back then. I really don't even remember what the summer sky milky way looks like under a truly dark sky.

Even if you don't have a camera, you still have eyeballs and a brain! Remember those nights! :)

These are amazing.  I'm jealous.  Gonna try my hand in a big way tonight with a large telescope (60 cm - professionally guided, Mt. Wilson Observatory). We were planning on shooting planets with the 1DX and deep space with the 60a.  Any extra advice would be helpful.  (I've read a lot over the last few weeks, but always need to learn more)

Thanks!

As for advice, that is probably best left for another thread. Start one, PM me the link, and I'll offer the best bits of advice I have.

1033
Landscape / Re: Deep Sky Astrophotography
« on: May 29, 2014, 12:47:56 AM »
Living up in the Northwest, our availability of clear weather is limited to the summer, and then we have a lot of light contamination from Spokane, starting about 10 miles South of us.  We are in the country, as far as the neighborhood, but not away from the city light.
 
I've been up in Northern British Columbia, 100 miles from anything but tiny villages, and it's truly amazing what you can see on a clear night.  Astrophotography would be a great hobby up there.

Light pollution doesn't have to be a problem these days. I actually shot this only a few miles from Denver, CO. The trick is using a light pollution filter. They don't work as well for galaxies (which are mostly stars, so broad band emissions), but for nebulae (which are narrow band emissions), they work wonders. I use the Astronomik CLS, which is one of the better ones for blocking pollutant bands.

All of my images were shot under light polluted skies using the Astronomik filter. I'm under a yellow zone that, depending on the atmospheric particulates, often turns into an orange zone (I generally judge by whether I can see the Milky Way or not...if I can faintly see it, then my LP conditions are more yellow-zone; if not, then orange zone). Either way, with an LP filter, you can image under heavily light polluted skies. I know many people who image under white zones.

I agree, though, it's amazing what you can see under dark skies. There is one spot in the northwestern corner of Colorado that is 100% free of LP of any kind. I want to get up there sometime and see what it's like. You can very clearly see the Milky Way, so clearly that all the dust lanes show up to the naked eye, and all the larger Messier objects (like Andromeda, Triangulum, etc.) are also visible to the naked eye.

1034
EOS Bodies / Re: New Full Frame Camera in Testing? [CR1]
« on: May 28, 2014, 07:54:39 PM »
The color accuracy of the 1Dx actually isn't all that great.  It's nothing compared to the 1Ds Mark III.

Yeah, that would be expected, given the weaker CFA relative to the 1Ds III. I wonder what Canon is doing to remedy that issue...

1035
Landscape / Re: Deep Sky Astrophotography
« on: May 28, 2014, 05:18:41 PM »
We finally had a couple of clear nights the last two nights here in Colorado. These are the first since the lunar eclipse some five weeks ago now. Gave me the opportunity to image part of the North America nebula in Cygnus.



Equipment:
- Canon EOS 7D (unmodded)
- Canon EF 600mm f/4 L II (image)
- Orion ST80 (guider) + SSAG

Integration (49 subs (3h 40m)):
- 52x270s (4m30s) (95% integrated)
- 67 Darks (divided into three groups, temp matching lights, ~15-20 darks per group)
- 100 Biases
- 30 Flats
