October 21, 2014, 01:43:25 AM

Show Posts


Messages - jrista

Even though you don't think you're moving your eyes when looking at clouds and tree bark, you more than likely are, just a small amount, and at speeds that are hard to notice. You're so used to this process that it's not something you can be sure you would notice, since you normally don't. Your brain combines all these partial images together to give you what you perceive as "the world." It is also very possible that you aren't truly seeing the bark. You've seen it many times before, even if you don't realize it. Your brain will often fill in information that it isn't actually "seeing" at that very moment, either because it knows it is there or because it believes it is there. This is one of the reasons why eyewitness accounts of sudden and traumatic crimes can be notoriously inaccurate. As an example, a person may honestly believe that a mugger had a gun in the hand down at his side near his (the robber's) pocket, when in reality what was there was a dark pattern on his jacket pocket, either from the jacket's colour/style or from a shadow. The witness isn't lying; he or she was just so afraid for his or her life that he or she imagined a gun was there.

You're talking about a different kind of eye movement, but you are correct: our eyes are always adjusting.

Regarding eyewitness accounts...the reason they are unreliable is that people are unobservant. There are some individuals who are exceptionally observant and can recall a scene, such as a crime, in extensive detail. But how observant an individual is, like many human traits, falls onto a bell curve. The vast majority of people are barely observant of anything not directly happening to them, and even in the case of things that do happen to them, they still aren't particularly aware of all the details. (Especially in this day and age...the age of endless chaos, immeasurably hectic schedules, and the ubiquity of distractions...such as our phones.)

I believe the brain only fills in information if it isn't readily accessible. I do believe that for the most part, when we see something, the entirety of what we see is recorded. HOW it's recorded is what affects our memories. Someone who is attuned to their senses is evaluating and reevaluating the information passing into their memories more intensely than the average individual. The interesting thing about memory is that it isn't just the act of exercising it that strengthens it...it's the act of evaluating and associating memories that TRULY strengthens them. Observant individuals are more likely to process the information their senses pick up in a more complex manner, evaluating the information as it comes in, associating that new information with old information, and creating a lot more associations between memories. A lot of what gets associated may be accidental...but in a person who has a highly structured memory, with a strong and diverse set of associations between memories, one single observation can create a powerful memory that is associated with dozens of other powerful memories. The more associations you have, the more of your entire memory the brain will access, and therefore strengthen and enhance, compared to when you have fewer associations.

I actually took a course on memory when I first started college, some decade and a half ago. The original intent of the course was to help students study and remember what they study. The course delved into the mechanics and biology of sensory input processing and memory. Memory is a multi-stage process. Not just short term and long term, but immediate term (the things you remember super vividly because they just happened, though this memory fades within seconds), short term, mid term, and long term (and even that is probably not really accurate; it's still mostly a generalization). Immediate term memory is an interesting thing...you only have a few "slots" for this kind of memory, maybe 9-12 at most. As new information goes in, old information in this part of your memory must go out. Ever have a situation where you were thinking about something, were distracted for just a moment, and the thing you were thinking about before is just...gone? You can't, for the life of you, remember what it was you were thinking about? That's the loss of an immediate term memory. It's not necessarily really gone...amazingly, our brains remember pretty much everything that goes into them; it's just that not all the memories we create are ASSOCIATED with other things that help us recall them. The thing you were thinking about was simply pushed out of your immediate term memory when the distraction filled up those slots.

Your brain, to some degree, will automatically move information from one stage to the next. However, without active intervention on your part, how those memories are created and what they are associated with may not be ideal, and may not be conducive to your ability to remember them later on. The most critical stage for you to take an active role in creating memories is when new information enters your immediate term memory. You have to actively think about it, and actively associate it with useful other memories. Associations of like kind, and associations of context, can greatly improve your ability to recall a memory from longer term modes of memory storage. The more associations you create when forming new memories, the stronger those memories will be, and the longer they are likely to last (assuming they continue to be exercised). The strongest memories are those we exercise the most, and which have the strongest associations to other strong memories. Some of the weakest memories form when we're simply not in a state of mind to take any control over their creation...such as, say, when a thug walks in and puts a gun in someone's face. Fear and fight-or-flight responses, kicked into gear by a strong surge of hormones, can completely disrupt our ability to actively think about what's going on; instead, we react, largely out of pure self-preservation. If we do form memories in such situations, they are unlikely to be about anyone but ourselves, and if they are about something else, they aren't likely to be very reliable memories.

It was one of the best courses I ever took. Combined with the fact that I'm hypersensitive to most sensory input (particularly sound, but sight and touch as well...smell not so much, but I had some problems with my nose years ago), the knowledge of how to actively work sensory input and properly use my memory has been one of the most beneficial things to come out of my time in college.

If you WANT to have a rich memory, it's really a matter of using it correctly. It's an active process as much as a passive one, if you choose to be an observant individual. If not...well, your recallable memory will be more like Swiss cheese than a finely crafted neural network of memories and associations, and yes...the brain will try to fill in the gaps. Interestingly, when your brain fills in the gaps, it isn't working with nothing. As I mentioned before, we remember most of what goes in...it's just that most of the information is randomly dumped, unassociated or weakly associated with more random things. The brain knows that the information is there; it just doesn't have a good record of where the knowledge is. I don't think that really has to do with the way our eyes function...it has to do with how the brain processes and stores incoming information. Our eyes are simply a source of information, not the tool that's processing and storing that information.

Probably the only real, reliable way to conduct an experiment like looking at a very dark and a very bright thing at the same time, and knowing that you didn't look at each separately, would be to have a special camera (or cameras) closely monitoring your head, eyeballs, and pupils for any movement. It would also have to be an artificial or set-up scene, so that there was some symbol or marker in the dark area that you would have to identify without any movement. I honestly don't think "not moving" at all is possible without medical intervention, such as somehow disabling a person's ability to move at all: body, neck, head, even eyeballs.

Not exactly a fun experiment. :)

You do move your eyes, a little. You can't really do the experiment without moving back and forth. The point is not to move your eyes a lot. If you look in one direction at a tree, then another direction at the cloud, you're not actually working within the same "scene". In my case, I was crouched down, the tree in front of me, the cloud just to the right behind it. Your whole field of view takes in the entire scene at once. You can move your eyes just enough to center your 2° foveal spot on either the shadows or the cloud highlights, but don't move around, don't move your head, don't look in a different direction, as that would break the requirements of the test. So long as "the scene" doesn't change...so long as the whole original scene you picked stays within your field of view, and all you do is change what you point your foveal spot at, you'll be fine.

To be clear, a LOT is going on every second that you do this test. Your eyes are sucking in a ton of information in tiny fractions of a second, and shipping it to the visual cortex. Your visual cortex is processing all that information to increase resolution, color fidelity, dynamic range, etc. So long as you pick the right test scene, one which has brightly lit clouds and a deep shaded area, you should be fine. In my experience, when using my in-camera meter, the difference between the slowest and fastest shutter speed is about 16 stops or so. I wouldn't say that such a scene actually had a full 24 stops in it...that would be pretty extreme (that would be another eight DOUBLINGS of the kind of tonal range I am currently talking about...so probably not a chance). But I do believe it is more dynamic range than any camera I've ever used was capable of handling in a single frame.

Here is a little test, for anyone who is interested. This is how my eyes work...maybe it isn't the same for everyone else. On a fairly bright day, with some clouds in the sky, find a scene where you can see the clouds, as well as the deep shadows underneath a tree. Pine trees are ideal. In my case, I can see the bark of the tree and the dried pine needles under the tree very well, while simultaneously being able to see detail in the clouds.

Could you post a picture of this scene?  I'm having difficulty imagining how I can simultaneously (without moving my eyes) see into the dark depths of a stand of trees, while simultaneously seeing clouds.  The closest I can imagine is a brightly lit flower nearer to me than a stand of trees, but both along the same line-of-sight.

You move your eyes, just not a lot. The point is that the scene should generally be static...you shouldn't be looking in one direction for the shadows, then turning around 180 degrees for the highlights. The point is that, while our eyeballs themselves, our retinas and the neurochemical process that resolves a "frame", may only be capable of 5-6 stops of dynamic range, our "vision", the biochemical process in our brains that gives us sight, is working with FAR more information than what our eyes process at any given moment. It's got hundreds if not thousands of "frames" that it's processing, one after the other, in a kind of circular buffer. It's gathering up far more color and contrast information from all the frames in total than any one frame has in and of itself. The microscopic but constant movements are what give us our high resolution vision...it's like superresolution, so we get HDR and superresolution at the same time, all in the span of a second or two.

My point is that if I look at the deep shadows under a large tree, then a moment later flick my eyes to a cloud, then a moment later flick back, I can see detail in both. There is no delay, there is no adjustment period. My visual perception is that I can see details in extremely bright highlights SIMULTANEOUSLY with details in extremely dark shadows. My "vision" is better than what my eyeballs themselves are capable of (which, really, last I checked, was only about 5-6 stops of dynamic range, and actually less color fidelity and resolution than what we actually "see" in our brains). Our brains are doing a degree of processing that far outpaces any specific "device". Our vision is the combination of a device and a high-powered HDR/superresolution-crunching computer that does this amazing thing all at once.

Landscape / Re: Deep Sky Astrophotography
« on: May 31, 2014, 05:05:47 PM »
Soulless, here are three samples of the same single light frame. Note that the first two have been downsampled by a factor of 6.5x, which has the effect of SIGNIFICANTLY reducing noise. I've included a 1:1 crop to show how much noise there is in one single frame. It is the noise levels that are the primary reason why you really have to take 50, 80, 100 frames and stack them...it's the only way to reduce noise to manageable levels with a DSLR.

With Cooled CCD cameras like Bradbury's KAF-16803, you have significantly less dark current noise due to the sensor being cooled by some -50°C relative to ambient, and less read noise. You don't need to stack as many subs to get a good result with a dedicated CCD, however you DO still need to stack.

Original Out-of-Camera Frame (blue due to optical light pollution filter):

Same frame color-corrected and stretched:

100% crop from frame to show noise:

The last sample here, as you can see, has a completely unacceptable level of noise. The amount of noise drops as the square root of the number of frames stacked. So, to reduce the noise by a factor of two, you need to stack four subs. However, there is a LOT of noise in a single frame; a 2x reduction in noise isn't remotely enough. To get a 3x reduction, you need nine frames...to get a 4x reduction, you need 16 frames...to get a 5x reduction in noise, you need at least 25 frames. If you are using a thermally regulated CCD, 25 frames might be getting to the point where noise is low enough to be acceptable..."MIGHT BE GETTING TO".

For a DSLR, 25 frames is never enough (even when the outside nighttime temps are around 0°C). At 50 frames, you reduce noise by 7x. In my experience and opinion, for a DSLR like the 7D at spring and fall nighttime temperatures, 50 frames is the MINIMUM. During summer nighttime temps, at least 81 frames, but 100 (a full 10x reduction in noise) is preferable. I effectively need to double my exposures to reduce the noise in my North America nebula to a level I would deem acceptable and aesthetically pleasing.
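The square-root relationship above is easy to sketch. Here is a hypothetical helper (the function names are mine, not from any stacking tool) that reproduces the frame counts quoted above, assuming purely random noise that averages out as the square root of the number of stacked subs:

```python
import math

def frames_for_reduction(factor):
    """Subs to stack so stacked noise ~= single-frame noise / factor.
    Averaging N frames reduces random noise by sqrt(N), so we need
    N = factor**2 frames."""
    return math.ceil(factor ** 2)

def noise_reduction(frames):
    """Noise-reduction factor achieved by stacking `frames` subs."""
    return math.sqrt(frames)

# The counts quoted above:
for factor in (2, 3, 4, 5):
    print(f"{factor}x reduction needs {frames_for_reduction(factor)} subs")
print(round(noise_reduction(50), 1))   # ~7.1x from 50 subs
print(round(noise_reduction(100), 1))  # 10.0x from 100 subs
```

This is why going from "acceptable" to "good" is so expensive: each extra stop of noise reduction quadruples the imaging time.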

Clever wording, Jrista. Since that conversation we had in (January? February?)...today I actually happened to have the afternoon off and, knowing where this conversation was headed, actually read the relevant chapters in "Principles of Neural Science" (absolutely fantastic book, not difficult to read at all).
The only direct reference they make to dynamic range is 10 stops.

Here is a little test, for anyone who is interested. This is how my eyes work...maybe it isn't the same for everyone else. On a fairly bright day, with some clouds in the sky, find a scene where you can see the clouds, as well as the deep shadows underneath a tree. Pine trees are ideal. In my case, I can see the bark of the tree and the dried pine needles under the tree very well, while simultaneously being able to see detail in the clouds.

Make sure you bring a camera along. Meter the deepest shadows under the tree using aperture priority mode, and set the ISO to 100. Then meter the brightest part of the clouds. Compute the difference via the shutter speed (which should be the only setting that changes as you meter). In my experience, that is a dynamic range of 16-17 stops at least, if not more. My eyes have had no trouble seeing bright white cloud detail simultaneously with detail in the depths of the shadows under a large pine tree. I do mean simultaneously...you want to stand back far enough that you can see both generally within the central region of your eye, and be able to scan both the shadows and the highlights of the scene without having to move your eyes much. The sun should be behind you somewhere; otherwise you're looking at significantly more dynamic range, and your eyes WON'T be able to handle it.
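The arithmetic behind that metering comparison: each halving of the shutter speed is one stop, so the scene's range in stops is the base-2 log of the ratio of the two metered exposure times. A small sketch with made-up meter readings (the 8 s and 1/8000 s values are hypothetical, chosen only to land near the 16-stop figure):

```python
import math

def stops_between(slow_shutter_s, fast_shutter_s):
    """Dynamic range in stops between two metered exposures taken at
    the same aperture and ISO; each stop doubles the exposure time."""
    return math.log2(slow_shutter_s / fast_shutter_s)

# Hypothetical readings: 8 s metered on the deep shade, 1/8000 s on the clouds
print(round(stops_between(8, 1 / 8000), 1))  # ~16.0 stops
```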

Whatever 9VIII's books may say, this is a real world test. Compare what your eyes SEE with what your camera meters. You'll be surprised how much dynamic range there is in such a simple scene, and by the fact that your eyes can pick it all up in a moment...well, to me, that means our vision is certainly more than 10 stops of dynamic range "at once", and more than even a D800. The thing about a neuroscience book is that, whatever it may say, it can only be a guess. They cannot actually measure the dynamic range of human vision; at best they can only measure basic neural response in the human EYE, which is not the same thing as vision. The eye is the biological device that supports vision, but vision is more than the eye.


Exactly. Where did this come from? Talk about seeing the past through rose-tinted spectacles!

So much rubbish about what the 'eye can see'. That's like saying your camera lens gives better DR than another. The brain sees, so you could say we see in a form of HDR, or multiple exposures, as someone pointed out, rather than one single exposure.

If you produced a picture with the same contrast as we 'see' it would be very flat and unappealing. Even old artists added contrast in their paintings, often giving very dark, heavy shadows.

The Sony sensor is very good, but if you expose the Canon optimally the difference is generally academic in the vast majority of circumstances. However, if you have no understanding of exposure, the Exmor is better.

I disagree with you, heartily! :)
When you look at a scene, you tend to look around it, and the rapid adjustments your eye makes allow you to see and interpret a wide natural DR.
If you don't map that effect into a large print, at least to some extent, then it's like staring at the brightest part and not really seeing the detail in the darker areas.  So if your eyeballs don't move, go ahead and shoot and print that way.
I produce images for people with articulated eyeballs. :)

This is correct in one sense. The eye is constantly processing, and has a refresh rate of at least 500 frames per second in normal lighting levels (under low light levels, it can be considerably slower, and under very bright levels it can be quite a bit faster.) That high refresh rate, more so than the movement of the eye, is what's responsible for our high moment-DR. We can see a lot more than 14 stops of DR in any given second, but that's because our brains have effectively HDR-tonemapped ~500 individual frames. :P

When it comes to print, you're not entirely correct. I've done plenty of printing. You have to be VERY careful when tweaking shadows to lift enough detail that they don't look completely blocked, but not lift so much that you lose the contrast. The amazing thing about our vision is that while we see a huge dynamic range, what we see is still RICH with contrast. In photography, when we lift shadows, we're compressing the original dynamic range of the image into a LOWER contrast outcome. With a D800, while technically you do have the ability to lift to your heart's content, doing so is not necessarily the best thing if your goal is to reproduce what your eyes saw. It's a balancing act between lifting the shadows enough to bring out some detail, but not so much that you wash out the contrast.

Canon cameras certainly do have more banding noise. However, just because they have banding noise does not mean you have to print it. After lifting, you can run your print copies through one of a number of denoising tools these days that have debanding features. I use Topaz DeNoise 5 and Nik Dfine 2 myself. Both can do wonders when it comes to removing banding. Topaz DeNoise 5 in particular is a Canon user's best friend, as its debanding is second to none, and it has a dynamic range recovery feature. You can also easily use standard Photoshop masking layers to protect the highlight and midtone regions of your images and only deband/denoise the shadows, avoiding any softening of higher frequency detail in regions that don't need noise reduction at all.

This is a little bit of extra work; however, you CAN recover a LOT of dynamic range from Canon RAW images. They use a bias offset, rather than changing the black point in-camera. As such, even though Canon's read noise floor is higher at low ISO than Nikon or Sony cameras, there are still a couple of stops of recoverable detail interwoven WITHIN that noise. Once you deband...it's all there. You can easily get another stop and a half with debanding, and if you're more meticulous and properly use masking, you can gain at least two stops. That largely negates the DR advantage that Nikon and Sony cameras have. You won't have quite the same degree of spatial resolution in the shadows as an Exmor-based camera, but our eyes don't pick up high frequency detail in shadows all that well anyway, so at least personally, I haven't found it to be a significant issue.

There are benefits to having more DR in camera. Not the least of which is a simplified workflow...you don't have to bother with debanding, and you have better spatial resolution in the shadows. That said, if you ignore Canon's downstream noise contributors, their SENSORS are still actually quite good...the fact that you can reduce the read noise and recover another stop or two of usable image detail means their sensors are just as capable as their competitors. Their problem is really eliminating the downstream noise contributors. The secondary amplifier, the ADC, and even the simple act of shipping an analog signal across an electronic bus. Canon can solve most of those problems by moving to an on-die CP-ADC sensor design, similar to Exmor. They have the technology to do just that as well...they have a CP-ADC patent. They also have a number of other patents that can reduce dark current, adjust the frequency of readout to produce lower noise images (at a slower frame rate, say) or support higher frame rates (for action photography). Canon has the patents to build a better, lower noise, high dynamic range camera. It's really just a question of whether they will put those patents to work soon, or later...or maybe even not at all. (I'm pretty sure they have had the CP-ADC patent at least since they released the 120mp 9.5fps APS-H prototype sensor...which was years ago now.)

..the eye can see much more than 24 stops across the entire range of light adaptation and pupil diameters.  It depends on your age and diet and such, but 30 is doable (but only 14 at a time, in good light, as I said).

Good explanation why Exmors are good enough (I'm happy with mine) and Canon is suffering inadequacy anxiety.  ;D

When the organic sensor from Fuji-Matsushita sees the light of day, it'll also likely be able to see in the shadows of dark holes at the same time.


Uhmmmm... isn't that still a 14b camera?...

even downsampled to 8MP, DxO-style... (what's the noise math again?)

It is a 14-bit ADC. I think this has to do with their in-camera image processing (which is kind of what the A7s is all about, and the reason it has such clean ultra-high ISO video). It's shifting the exposure around, lifting shadows and compressing highlights. I'm guessing that's where they get the "15.3 stops DR". It wouldn't be "sensor output RAW", though...the output from the sensor is 14-bit, so it would have to be limited to 14 stops of DR AT MOST (and there is always some overhead, some noise, so it would have to be LESS than 14 stops, i.e. 13.something.)
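That ceiling follows directly from the bit depth: a linear 14-bit RAW value can span at most log2(2^14) = 14 stops between full scale and one count, and any read noise above one count pulls the usable range below that. A sketch of the "engineering dynamic range" calculation (the 3-count read-noise figure is a made-up illustration, not a measured value for any real camera):

```python
import math

def max_stops(bit_depth):
    """Hard ceiling on stops a linear ADC of this bit depth can encode."""
    return math.log2(2 ** bit_depth)  # equals bit_depth exactly

def engineering_dr_stops(full_scale_counts, read_noise_counts):
    """DR = log2(full scale / noise floor); read noise above 1 count
    pulls the usable DR below the bit-depth ceiling."""
    return math.log2(full_scale_counts / read_noise_counts)

print(max_stops(14))                                 # 14.0
print(round(engineering_dr_stops(2**14 - 1, 3), 1))  # ~12.4 stops
```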

Landscape / Re: Deep Sky Astrophotography
« on: May 29, 2014, 08:25:17 PM »
Thanks, guys! :)

@Kahuna, I bet the sky out there is AMAZING! I'm quite envious. I barely remember dark skies as a kid, when LP was much less than it is today, and when we lived pretty far out of town. But I wasn't as observant of the details back then. I really don't even remember what the summer sky milky way looks like under a truly dark sky.

Even if you don't have a camera, you still have eyeballs and a brain! Remember those nights! :)

These are amazing.  I'm jealous.  Gonna try my hand in a big way tonight with a large telescope (60 cm, professionally guided, Mt. Wilson Observatory). We were planning on shooting planets with the 1DX and deep space with the 60Da.  Any extra advice would be helpful.  (I've read a lot over the last few weeks, but always need to learn more.)


As for advice, that is probably best left for another thread. Start one, PM me the link, and I'll offer the best bits of advice I have.

Landscape / Re: Deep Sky Astrophotography
« on: May 29, 2014, 12:47:56 AM »
Living up in the Northwest, our availability of clear weather is limited to the summer, and then we have a lot of light contamination from Spokane, starting about 10 miles south of us.  We are in the country as far as the neighborhood goes, but not away from the city light.
I've been up in northern British Columbia, 100 miles from anything but tiny villages, and it's truly amazing what you can see on a clear night.  Astrophotography would be a great hobby up there.

Light pollution doesn't have to be a problem these days. I actually shot this only a few miles from Denver, CO. The trick is using a light pollution filter. They don't work as well for galaxies (which are mostly stars, and thus broadband emitters), but for nebulae (which are narrow band emitters), they work wonders. I use the Astronomik CLS, which is one of the better ones for blocking pollutant bands.

All of my images were shot under light polluted skies using the Astronomik filter. I'm under a yellow zone that, depending on the atmospheric particulates, often turns into an orange zone (I generally judge by whether I can see the milky way or not...if I can faintly see it, then my LP conditions are more yellow-zone; if not, then orange zone). Either way, with an LP filter, you can image under heavily light polluted skies. I know many people who image under white zones.

I agree, though; it's amazing what you can see under dark skies. There is one spot in the northwestern corner of Colorado that is 100% free of LP of any kind. I want to get up there sometime and see what it's like. You can see the milky way so clearly that all the dust lanes show up to the naked eye, and all the larger Messier objects (like Andromeda, Triangulum, etc.) are also visible to the naked eye.

EOS Bodies / Re: New Full Frame Camera in Testing? [CR1]
« on: May 28, 2014, 07:54:39 PM »
The color accuracy of the 1Dx actually isn't all that great.  It's nothing compared to the 1Ds Mark III.

Yeah, that would be expected, given the weaker CFA relative to the 1Ds III. I wonder what Canon is doing to remedy that issue...

Landscape / Re: Deep Sky Astrophotography
« on: May 28, 2014, 05:18:41 PM »
We finally had a couple of clear nights the last two nights here in Colorado. These are the first since the lunar eclipse some five weeks ago now. Gave me the opportunity to image part of the North America nebula in Cygnus.

- Canon EOS 7D (unmodded)
- Canon EF 600mm f/4 L II (image)
- Orion ST80 (guider) + SSAG

Integration (49 subs (3h 40m)):
- 52x270s (4m30s) (95% integrated)
- 67 Darks (divided into three groups, temp matching lights, ~15-20 darks per group)
- 100 Biases
- 30 Flats

EOS Bodies - For Stills / Re: 7d2 IQ thoughts.
« on: May 28, 2014, 12:12:37 AM »
It will sell because it is an all purpose imaging unit. In the case of a new 7D, the properties of a crop sensor that are attractive for still photographers in certain applications are just as attractive to videographers taking video instead of stills. A video centric 7D will be more attractive to sport and wildlife videographers than a 5D would, for the exact same reasons as stills.

In the modern era a camera needs to be able to perform both types of imaging well to really succeed as a general purpose imaging device (which is how the average owner would use it).

The concepts of consumer/prosumer cameras being dedicated still or video cameras is an outdated idea that properly belongs in the past.

I've never made any point about consumer/prosumer cameras being dedicated still or video cameras.

None of this changes the point I was making. I was debating the points made by Pallette about the reasons why Canon might add or enhance the video features of the 7D II. His points were based on the notion that Canon made some kind of mistake with the 5D III, and that they would correct that mistake with the 7D II. It's a false notion. Canon will fix problems in the 5D III with the 5D IV. Those who might have passed up the 5D III for video won't be buying a 7D II as an alternative...they are most likely going to want a full frame sensor for the cinematic quality it offers when using EF lenses, which means the only camera in which Canon can "fix" any presumed problems with the 5D III's video is the 5D IV.

That video will somehow make the 7D II sell like hotcakes is another mistaken notion. Video is certainly an endemic feature of DSLRs and mirrorless cameras now, but it isn't the primary reason DSLRs like the 7D II sell. It isn't even the primary reason cameras like the 5D III and 5D II sell or sold. For every person doing cinematography with a 5D II, there were a dozen doing landscapes, and at least a dozen more doing weddings. That doesn't count all the other photographers using the 5D II for other STILL photography purposes. All of that, for each and every individual who actually bought the 5D II for the purpose of doing video...EXTRA sales beyond the buyers who primarily intended to use its secondary feature set. The number of photographers using DSLRs for still photography completely swamps the number of photographers or cinematographers using them for video.

Any failures of the 5D III, at least any failures that have a significant impact on the bottom line for sales numbers, primarily have to do with the core functionality and core technology: the sensor, the AF unit, ergonomics. Canon MIGHT have "lost" a few customers here and there because the 5D III, which DOES have many improvements for video over the 5D II, might not have the specific video feature they want (i.e. RAW HDMI out). Most of the reasons video people might have skipped the 5D III have also been addressed by Magic Lantern, so most of those points are moot these days anyway. I don't doubt that Canon will be improving the 7D II. I highly doubt those improvements will have a particularly significant impact on the number of units Canon will sell, given its predecessor's primary use cases.

EOS Bodies - For Stills / Re: 7d2 IQ thoughts.
« on: May 27, 2014, 06:13:18 PM »

Speculation. As much as people like to use DSLRs for video, video is still the secondary purpose of this kind of camera. I don't think Canon is focusing solely on improving the video capabilities of the 7D II...especially because it's an APS-C camera. Due to its cropped sensor, it is simply incapable of the same kind of thin-DOF cinematic look and feel that the 5D II became famous for. I don't think the 7D II will be a particularly popular video DSLR. It might be somewhat popular, especially if it has some enhanced video features, but it isn't going to be the cinematic DSLR powerhouse that gave so many movies and TV shows reason to use it for professional prime time/big screen productions.

Which is the reason why the GH3 and GH4 were total failures in sales  ::)

No one would ever buy a camera like that....oh, wait....they do....how can that be? Very weird, there must be something wrong with those customers.

You're missing my point. I'm not saying 7D IIs won't sell for video. I'm saying that won't be the primary reason they sell...not by a very long shot.

The success of the GH3 and GH4 has not been shown to be due solely to their video features. They are also good CAMERAS. The notion that DSLRs sell best because they have video features is an ASSUMPTION. It is not backed up by any data. Sure, some will sell because of video, because people who want a DSLR for video purposes will buy them, but that doesn't change the fact that the majority of sales are to PHOTOGRAPHERS buying them for PHOTOGRAPHY. That's the case for pretty much every DSLR or mirrorless camera with video features...they are still cameras, designed for still photography, and pretty much any camera model you bring up will sell significantly more for photography purposes.

As far as the 7D II being a better seller for video than the 5D III, I don't think so. The larger sensor in the 5D III is extremely appealing for that cinematic look and feel. I'm not saying no 7D IIs will sell for video purposes, but I don't think that video features will be the primary reason the 7D II sells. I still think the 7D II will sell primarily because action photographers, particularly bird and wildlife photographers, want a camera with high resolution, lots of reach, and a damn fast frame rate (and that doesn't cost a mint and a half to buy).

EOS Bodies - For Stills / Re: 7d2 IQ thoughts.
« on: May 27, 2014, 01:20:31 PM »
Every time you split a photodiode, each resulting smaller photodiode is less sensitive to light...it has a smaller area.

When you split a photodiode in two, you get LESS than half the light in each half. There is an amount of wasted real estate around the edges of a cell. To illustrate with a simple example, let's say the manufacturing process has a resolution of 1 unit and a pixel is 10x10 units square. You have a waste area of 1 unit around the outside of the photodiode, so you end up with an 8x8 photodiode and 64% of the surface area used to gather light. By splitting the photodiode, you end up with two 3x8 photodiodes, or 48% of the surface area used to collect light.

Yes, you can use microlenses to counter this, but even perfection (which can never be achieved) would only get you back to even with the single photodiode.

Aye, this is very true. There are spatial losses. Based on Canon's patents, though, I don't think the waste is quite as significant as your basic example with the units. It's probably more on the order of each pixel being 100x100 units square, losing maybe 5 units around the edges due to wiring and other amp/readout transistors. Then there may be a 1-unit gap between the two halves of the split photodiode. So there are losses, but not quite as extreme as your 10x10 example.
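To put rough numbers on it, here's a quick sketch of that fill-factor arithmetic. The unit sizes are the illustrative ones from above, not actual Canon process figures:

```python
# Fill-factor arithmetic for a square pixel with a dead border around the
# photodiode and a dead gap between split halves. Units are illustrative.

def fill_factor(pixel, border, gap, splits=1):
    """Fraction of the pixel's area that actually collects light."""
    active = pixel - 2 * border                        # side length inside the border
    diode_w = (active - gap * (splits - 1)) / splits   # width of each sub-diode
    return (diode_w * splits) * active / (pixel * pixel)

print(fill_factor(10, 1, 0, splits=1))   # single 8x8 diode  -> 0.64
print(fill_factor(10, 1, 2, splits=2))   # two 3x8 halves    -> 0.48
print(fill_factor(100, 5, 1, splits=2))  # 100-unit pixel, 1-unit split gap -> 0.801
```

With the larger pixel, the split only costs about 1% of the light-gathering area instead of 16%, which is why the penalty depends so heavily on the process geometry.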

Regardless, there is still no way to construe DPAF, or some hypothetical future QPAF, as a magic bullet for increasing either the readout performance or the dynamic range of Canon sensors. ;P That's a myth that just won't die, it seems. It's like the horse that was beaten, and is now undead. It just keeps coming back for more brains... O_o

EOS Bodies - For Stills / Re: 7d2 IQ thoughts.
« on: May 27, 2014, 05:53:45 AM »
The bad timing of the release of the 5D III was actually caused by the bad timing of the release of the 1D X. It was delayed after Nikon introduced the new D4. Canon managed to use the extra time to adjust the sensor tech to match the Nikon performance.

Highly unlikely. It takes YEARS to design a sensor. Canon did not even have a year between the initial announcement of the 1D X and its actual release to photographers during the Olympics. The major changes between announcement and release had to do with the AF system, not the sensor.

Canon did not adjust the sensor technology to match Nikon's performance. Canon had finalized the design of the sensor, and was probably well into mass producing it, by the time they announced the product. There is no chance they reengineered it after that point...not in time for release.

That means Canon released a highly competitive sensor out the gate WITHOUT the need to reengineer it to "match" the capabilities of the competition.

The Nikon D800 forced Canon to accelerate the process of perfecting and releasing the 5D III before they were actually ready to launch their next sensor.

Again, false. This is a 100% pure fabrication.

The 5D III was in the same boat as the 1D X. It takes a good six years to engineer, debug, and release the kind of technology found in cameras like the 5D III and 1D X. By the time these cameras' releases rolled around, it was WAY past any point when Canon could have made any significant changes to their sensor technology.

The big problem was that we all (including Canon) predicted and expected the 5D III to be the best video-filming DSLR camera ever. With the heritage from the 5D II, the demand for better image quality in the filming department kind of forced the developers to go for a sensor with less moire. Exactly how this is done is something I haven't read or heard about anywhere. But I suggest the inside software had to be designed to deal with much softer images from the sensor and apply radical up-sharpening. This would explain why the low ISO performance is worse than expected. Readers here will surely share their opinion on this. Please add comments.

Again, false. The 5D III is a sharper camera than its predecessor. Its AA filter is slightly weaker than the 5D II's. Canon binned the pixels to produce video, which is where some of the "softening" came from, but binning concurrently reduced noise. Tradeoffs.

My point is that I feel Canon does not want to make the same mistake again. They will release the next tech when they are certain their 4K video standard is on par with what the other companies will be able to deliver in the years to come. And they will have to make the sensor output sharp and noise-free for stills as well. Expect the 7D II to be 20 megapixel with 4K video at 60p. That would be a well-balanced step forward at this moment, I think.

Speculation. As much as people like to use DSLRs for video, video is still the secondary purpose of this kind of camera. I don't think Canon is focusing solely on improving the video capabilities of the 7D II...especially because it's an APS-C camera. Due to its cropped sensor, it is simply incapable of the thin-DOF cinematic look and feel that the 5D II became famous for. I don't think the 7D II will be a particularly popular video DSLR. It might be somewhat popular, especially if it has some enhanced video features, but it isn't going to be the cinematic DSLR powerhouse that gave so many movies and TV shows reason to use the 5D II for professional prime time/big screen productions.

The new sensor has to be able to read out a huge amount of data or pre-process it on chip before entering the processor.

Assuming it hits at around 20-24mp, it actually won't need to read out much more than the 5D III. I've already demonstrated mathematically on multiple occasions that the DIGIC5+ chips in the 1D X are more than capable of handling 10fps @ 24mp 14-bit.
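For reference, here is the kind of back-of-envelope math I mean, with nominal megapixel counts, 14-bit samples, and readout overhead ignored:

```python
# Raw sensor readout bandwidth in Gbit/s, ignoring line/frame overhead.
def readout_gbps(megapixels, bits_per_sample, fps):
    return megapixels * 1e6 * bits_per_sample * fps / 1e9

print(readout_gbps(24, 14, 10))  # hypothetical 24mp @ 10fps -> 3.36 Gbit/s
print(readout_gbps(18, 14, 12))  # 1D X-class 18mp @ 12fps   -> ~3.02 Gbit/s
```

In other words, a 24mp sensor at 10fps demands only slightly more raw bandwidth than what the 1D X's dual DIGIC 5+ pipeline already moves at 18mp and 12fps.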

I predict the suggested quad pixel tech will be used in a way no one has talked about here. This tech allows not only for fast live AF, but also for reducing the sensor noise by using the well-known multi-exposure technique. Instead of taking four separate images and sandwiching them together for lower visible noise, Canon will be able to make one exposure with four separate channel reads of the same pixel. This makes it possible to get much better ISO performance. The potential for reducing and minimizing artifacts is huge, I would say.

Again, speculation. This is not a proven fact. It is a regurgitated assumption that people all over the net are spewing. There is no magic about the DPAF technology (which, BTW, is DUAL pixels, not quad pixels...all the patents and other evidence about the 70D clearly indicate the photodiode is split once, into two halves. The next refinement changes the sensitivities of each half. There is no quad pixel AF patent from Canon as of yet.) The photodiodes are split UNDER the color filters.

Again, I've demonstrated mathematically on multiple occasions that dual-ISO reads of split photodiodes produce a net-zero result...you neither really gain nor lose anything. Dual-ISO with half-pixels is not the same as dual-ISO with Magic Lantern, which utilizes FULL pixels and takes advantage of Canon's off-sensor, downstream secondary amplifier to do its magic. Dual ISO with half pixels means you're working with half as much light as what ML is working with now, which effectively nullifies any benefit you might have otherwise gained. Assuming Canon DOES eventually come out with QPAF, each sub-photodiode would only receive 1/4 of the light for the whole pixel. Same deal...dual ISO with such a setup results in a net-zero outcome...you cannot use less light to create a better result, no matter what ISO settings you're using.
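A shot-noise-only sketch of why the half-pixel idea nets out to zero (the photon count is illustrative, and read noise and amplifier differences are ignored here):

```python
import math

photons = 40000                        # light collected by the whole pixel
snr_full = math.sqrt(photons)          # one full-pixel read, shot-noise limited
snr_half = math.sqrt(photons / 2)      # each half-pixel read sees half the light

# Binning/summing the two halves just restores the full pixel's signal:
snr_binned = math.sqrt(photons / 2 + photons / 2)

print(snr_full)    # 200.0
print(snr_half)    # ~141.4 (each half read is noisier)
print(snr_binned)  # 200.0 -> back to even with the full pixel, no net gain
```

Each half-read starts from a worse SNR, and recombining only gets you back to where a single full-sized photodiode already was.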
And not only can you compare differences between four reads of the same pixel. You can compare the adjacent pixel reads, or all pixels on the sensor, and identify noise introduced by the power supply much more easily. Four separate reads of a single pixel allow you to step into the zero time domain, where the processor will have the optimum working space for computing errors in signal transfer.

Again, incorrect. It is not four reads of the same pixel. It is four reads of 1/4 of the pixel! It is four reads that each capture 1/4 of the light (or, as the actual facts would have it, since it's DUAL pixel technology, two reads at 1/2 the light each). You cannot read a single half or quarter of a split photodiode and assume it is the same as reading the whole pixel. That's WHY Canon bins the two photodiode halves in DPAF sensors when doing an image read (vs. an AF read)...because otherwise, they are just reading smaller pixels with less light. There is no magic here, no special capabilities. Smaller photodiodes are smaller photodiodes...they have less charge capacity, less total surface area for light to strike.

Four separate reads also mean more time to read out the sensor. It's more information, like going from a 20mp sensor to an 80mp sensor. I don't see how that allows any optimization of any kind...it's exactly the opposite. It's a factor of four increase in "pixels" to read, meaning at least that much more processing power would be required...more, really, if you factor in overhead.

It will be a matter of computing power to take full advantage of the quad pixel tech, and I guess this is why we are waiting for Canon to present the next generation of DSLR sensors. If they get it right, I think we will see images and video with much less noise and improved color fidelity.

Assuming Canon ever creates a quad pixel sensor, yes, they will need significantly faster processors. Good thing they only do reads of each separate photodiode for AF purposes, and use hardware binning built into the sensor itself for image reads. That means they are still only reading out 20-24mp worth of "pixels", regardless of how many photodiodes there may be on the sensor.

Another question is whether Canon would prefer to introduce the next generation of sensor I suggest on the 7D II or not. I suppose a demand for higher frame rates on this model makes things more complicated. The possibilities are just as overwhelming as the challenges. Canon will most likely make sure they use the new sensor tech to the full extent before releasing it.

This is my guess. What do you think?

I think you've made a lot of wild guesses, assumptions, and crazy speculative leaps. You make the assumption that Canon has QPAF technology; they do not. (Based on current patent filings, no one does...some competitors are finally developing their own DPAF-like patents. Canon's own patents subsequent to DPAF, some only a few months old, still indicate DUAL photodiodes, not quad. The changes have to do with sensitivity alone, and those sensitivity changes relate purely to AF technology; the image readout technology is still exactly the same...binned.)

I'm really not sure why everyone thinks that Canon's DPAF tech is actually QPAF tech, or why everyone thinks that this dual PHOTODIODE technology is somehow going to mean better dynamic range. I keep debating these mistaken points...they just don't seem to die. Every time you split a photodiode, each resulting smaller photodiode is less sensitive to light...it has a smaller area. Concurrently, splitting increases the number of photodiodes that need to be read. There is no way to construe less light and more photodiodes as some kind of magical optimization that suddenly gives Canon a performance edge, a dynamic range edge, or a noise management edge.

There are only two things that affect REAL sensitivity as far as sensor design goes (three if you factor in downstream readout logic): total sensor area and quantum efficiency. If you do throw in downstream read logic, then read noise also plays a role, but in Canon sensors the readout logic is primarily off-die, so it isn't actually a function of the sensor. Increase sensor area, increase sensitivity. Increase quantum efficiency, increase sensitivity. You can split photodiodes to your heart's content...so long as they are contained within the same total sensor area, splitting them really doesn't do jack to improve anything. A given amount of light is a given amount of light. Nothing done after you've gathered that light is going to change the original amount. Pixel size is largely irrelevant until you are reach-limited. Only in reach-limited situations does pixel size matter; however, have no illusions...smaller pixels mean more noise and less dynamic range per pixel. Always. The benefit of smaller pixels in reach-limited scenarios is resolution, not better overall IQ.
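To illustrate the "a given amount of light is a given amount of light" point with hypothetical numbers:

```python
# The sensor's total photon catch depends on its area (and QE), not on how
# that area is tiled into pixels. Illustrative flux value, not real data.
photons_per_mm2 = 1e6
sensor_area_mm2 = 36 * 24            # full-frame: 864 mm^2
total = photons_per_mm2 * sensor_area_mm2

for pixels in (20e6, 80e6):
    per_pixel = total / pixels       # smaller pixels each catch less light...
    recombined = per_pixel * pixels  # ...but the sensor-wide total is unchanged
    print(f"{pixels:.0f} px -> {per_pixel:.1f} photons/px, total {recombined:.3e}")
```

Quadrupling the pixel count quarters the light per pixel while leaving the sensor-wide signal exactly where it was.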

EOS Bodies - For Stills / Re: 7d2 IQ thoughts.
« on: May 26, 2014, 09:55:11 PM »
So yeah, when everyone has a 4K monitor on their desks, can you imagine the level of pixel peeping that will go on?

Actually, since the pixels in a 4K screen are about 1/4 the area of the pixels in a 1080p screen of the same size, and are that much harder to see, pixel peeping will actually be much more difficult to do. Not only that, the increase in density should improve sharpness on-screen, so pixel peepers should be seeing better results...and might finally stop bitching. :P
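The arithmetic, for what it's worth (same physical screen size assumed, UHD resolution used for "4K"):

```python
# A 3840x2160 panel has twice the pixels of 1920x1080 in each dimension,
# so at the same screen size each pixel covers 1/4 the area.
uhd = (3840, 2160)
fhd = (1920, 1080)
area_ratio = (fhd[0] * fhd[1]) / (uhd[0] * uhd[1])
print(area_ratio)  # 0.25 -> each 4K pixel is a quarter the area of a 1080p pixel
```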
