September 22, 2014, 06:53:32 PM

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - jrista

Pages: 1 ... 86 87 [88] 89 90 ... 306
EOS Bodies / Re: New Sensor Technology Coming From Canon? [CR1]
« on: April 29, 2014, 12:42:26 PM »
I agree. I've never understood people's fascination with using a FF sensor in an M-mount camera. It always seemed more reasonable to just shorten the flange-sensor distance and use the EF mount. Just getting rid of the mirror box will allow significant size and weight reductions. I don't see how the M-mount would make that much of a difference.

EF (and EF-S) lenses are designed with a 44mm flange focal distance.  If Canon makes a FF mirrorless with that same flange focal distance, they'll use the same mount.  If they make one with a shorter flange focal distance (it's 18mm for EF-M lenses, for example), they'll make a new mount for the same reason they designed the system so EF-S lenses don't mount on FF bodies - to avoid confusion and unexpected results.  They might try squeezing the FF mount into the EF-M size, so that the new FF-mirrorless lenses could be used directly on EOS M or other APS-C mirrorless, in the same way that EF lenses can be used on APS-C dSLRs.  In particular, if the whole ecosystem does shift to mirrorless, longer lenses don't really benefit from a smaller image circle, so having a mount compatible with larger and smaller sensors makes sense.

Totally agree.

EOS Bodies / Re: New Sensor Technology Coming From Canon? [CR1]
« on: April 29, 2014, 11:47:24 AM »
Before people get too excited, they might want to re-read this CR-1 rumor.

It is focused on improvements in manufacturing technology to increase yields and reduce costs. Aside from a glancing mention of "Foveon Like" technology (whatever that is supposed to mean), this is all about reducing costs of production, not about any change in the performance of sensors.

That's not to say it isn't important or beneficial to consumers, just that the benefits are more likely to come in some combination of lower costs and better margins.

Well, I think you can read into "increase yields and reduce costs" a bit.

The only way to really increase yields, especially with a decade-old fabrication process that is highly likely to already be as refined as it can get, is to increase wafer size. If Canon was using 200mm wafers for their existing process, then it seems logical they are moving to 300mm wafers. If they are moving to 300mm wafers, then that means they are either sharing fab time with their small sensor fabs, or have built new fabs. If they have built new fabs, then it also seems likely that they have moved to a smaller process, 180nm? 90nm?
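To put rough numbers on the wafer-size argument, here is the standard die-per-wafer estimate. The formula's edge-loss term and the die size are textbook approximations, not Canon's actual figures:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic die-per-wafer estimate: gross dies minus an edge-loss term."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# A full-frame sensor die is roughly 36 x 24 mm = 864 mm^2.
die = 36 * 24
for d in (200, 300):
    print(f"{d}mm wafer: ~{dies_per_wafer(d, die)} full-frame dies")
```

The 300mm wafer yields nearly three times as many full-frame dies as a 200mm wafer, which is why the move matters so much for a die that large.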

Even if Canon does not employ a layered foveon-like technology, a move to larger wafers and a smaller process would be huge. It's the thing Canon needs to be able to move more technology onto the sensor die, move to a CP-ADC design (which they have a patent for), etc.

EOS Bodies / Re: New Sensor Technology Coming From Canon? [CR1]
« on: April 29, 2014, 11:42:50 AM »
Well there you go! The reason the 7D2 has been delayed so long is that it will be a full frame mirrorless dual pixel quad pixel foveon big megapixel camera with a 1DX build in an EOS-M package.... that will shoot at ISO 819,200 and take 8K video.....

And support frame rates up to 100fps stills and 100,000fps video. :P

early adopter tax ;)

Indeed. :P

I like the lightfield concept. They have increased the "megarays" by four fold with the new design...I'm curious how that will affect the results.

Third Party Manufacturers / Re: New Samyang Lenses Go On Sale Tomorrow
« on: April 28, 2014, 01:42:49 PM »
Hmm...a 7.5mm lens is intriguing to me. I like wide field, but I don't necessarily always want a fisheye for the really ultra wide stuff. I wonder if that lens will have an EF mount (it says Cine, but Canon's cine bodies still use EF...)

EOS Bodies / Re: dual pixel tech going forward
« on: April 27, 2014, 03:16:21 PM »
Before I forget: I found an interesting patent about a dual photodiode per pixel architecture which is used to increase the DR to 120 dB, called "Dynamic-Range Widening in a CMOS Image Sensor Through Exposure Control Over a Dual-Photodiode Pixel". They have a pixel split into an L-shaped photodiode with 75% of the area and the 25% area which completes the L-shape to a square (might not be available at the moment due to a web site update):

Aye, I read about that. There are a few other patents for similar technology as well. They all use a different exposure time for the luminance pixels, though, and the way they achieve that is to extend the exposure time for the luminance pixels across frames. The majority of these sensors are used in video applications, which is the primary reason they can employ the technique. They can expose luma for two frames, and blend that single luma value into the color for both. (I cannot actually access the patent article you linked without an account, and they require an institutional email to sign up, however based on the abstract it sounds pretty much the same as other patents that use exposure control for DR improvement.)
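As a rough sketch of how any split-photodiode DR scheme combines its two readings, here is a generic high/low-sensitivity combine. The full-well value and sensitivity ratio are illustrative assumptions, not the patent's actual parameters:

```python
def combine_split_pixel(big: float, small: float,
                        full_well: float = 50_000.0,
                        sensitivity_ratio: float = 8.0) -> float:
    """Combine high- and low-sensitivity photodiode readings (in e-).

    While the sensitive diode is below saturation its reading wins
    (best SNR); once it clips, the slow diode's reading is scaled back
    up by the sensitivity ratio (8x here, ~3 stops) to extend DR.
    """
    if big < full_well:
        return big
    return small * sensitivity_ratio

# Bright scene: the big diode clips at 50k e-, the small one reads 30k e-.
print(combine_split_pixel(50_000.0, 30_000.0))  # -> 240000.0
```

The key point is that the extra DR comes entirely from the highlight side: shadows still depend on the sensitive diode's read noise.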

EOS Bodies / Re: dual pixel tech going forward
« on: April 27, 2014, 01:34:18 PM »

Every sensor design requires light to penetrate silicon to reach the photodiode.

Thanks for your extensive explanations, but I disagree on some important details.

Your last sentence is truly correct - you need to reach the pn-junction of the photodiode, which is "inside" the die structure.
But after checking a lot of images on the web I came to the following conclusion:

1 micron of silicon would (according to page 2) reduce the amount of light at 500 nm to 0.36^3 = 0.05  or 5 % - a sensor with 1 micron silicon between front and photodiode structure would be orthochromatic (red sensitive).
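The attenuation arithmetic above is just Beer-Lambert absorption. Taking the quoted 0.36-per-third-micron transmission at face value (I have not verified that figure against silicon absorption data):

```python
import math

# If each ~1/3 um slab of silicon transmits 36% at 500 nm, then by
# Beer-Lambert (T = exp(-alpha * d)) one full micron transmits 0.36^3:
alpha_per_um = -3.0 * math.log(0.36)   # effective absorption coefficient
t_1um = math.exp(-alpha_per_um * 1.0)
print(round(t_1um, 3))  # ~0.047, i.e. about 5% of the light survives
```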

Therefore the space between semiconductor chip surface and photodiode is filled by oxides. If silicon is the base material the oxide is usually silicon dioxide which is the same as quartz and highly transparent. I have tried to depict that in the sketch "Simplified Imaging Sensor Design" attached here (transistors, x-/y-readout channels are omitted).

Indeed. I did mention that it was a silicon-based compound, not pure silicon: "Regardless of the actual material in the well, which is usually some silicon-based compound, the photodiode is always at the bottom. "

I agree, though: SiO2 is usually the material used for layers deposited above the substrate, or the material is silicon dioxide based, but not always. It depends on the design and size of the pixel. As most small form factor designs have moved to BSI, I don't think there are many lightpipe designs out there; however, in sensors with pixels around 2µm and smaller, the channel to the photodiode is lined with SiN, then filled with a highly refractive material. One paper (a very interesting read, if you're interested) mentioned two other compounds used: Silecs XC400L and Silecs XC800, which are organosiloxane based materials (partially organic silicates, so still generally SiO2 based, but the point is to make them refractive enough to bend light from highly oblique angles from the microlens down a deep, narrow channel to the photodiode).

I have another paper bookmarked somewhere that covered different lightpipe materials, but with BSI having effectively taken over for small form factor sensors, I don't think it much matters.

Regarding photodiode sensitivity: You can surely reduce the sensitivity of the photodiode in a system by
(1) using a filter
(2) initiating a current that discharges the photodiode permanently
(3) stopping integration during exposure independently
For (1) think about a tiny LCD window in front of the second photodiode of one color pixel: blackening the LCD has the same effect as a gray filter (e.g. ND3). Both photodiodes read the same pixel at different sensitivity. The unchanged photodiode has full sensitivity, the filtered photodiode has 3 EV lower sensitivity. The LCD should be closed during exposure but is left open for DPAF.
For (2) think of a transistor for the second photodiode of a pixel which acts as a variable resistor between something like 1000 MOhms and 100 kOhms - photodiode 1 of the pixel integrates the charge fast, photodiode 2 of the pixel integrates the charge more slowly because some charge is withdrawn by the transistor acting as a discharge resistor.
For (3) you need a transistor too and stop integration after e.g. 10% of the exposure time before the full well capacity is reached.
All methods require replacing information from the saturated photodiodes 1 with that from the non-saturated photodiodes 2 (with the slower integration rate). It is like doing an HDR shot combined from 2 images which were taken SIMULTANEOUSLY (except (3)).

I understand what you're getting at, but it isn't quite the same as doing HDR. With HDR, you're using the full total photodiode area with multiple exposures. In what you have described, you're reducing your photodiode capacity by using one half for the fast saturation and the other half for slow saturation. Total light sensitivity is determined by the area of the sensor that is sensitive to light...your approach effectively reduces sensitivity by 25% by reducing the saturation rate of half the sensor by one stop.

If your photodiode has 50% Q.E. and a capacity of 50,000e-, you have a photonic influx rate of 15,000/sec, and you expose for five seconds, your photodiode ends up with a charge of 37,500e-. In your sensor design, assuming the same scenario, the amount of photons striking the sensor is the same, but you end up with a charge of 18,750e- in the fast-sat half and 9,375e- in the slow-sat half, for a total saturation of 28,125e-. You gathered 75% of the charge that the full single photodiode did, and therefore require increased gain, which means increased noise.
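The numbers above fall straight out of the arithmetic:

```python
qe = 0.5          # quantum efficiency
influx = 15_000   # photons per second reaching the pixel
t = 5.0           # exposure time, seconds

full = qe * influx * t     # single full-area photodiode
fast = full / 2            # half the area, full saturation rate
slow = full / 2 / 2        # half the area, one stop slower
print(full, fast + slow, (fast + slow) / full)  # 37500.0 28125.0 0.75
```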

I thought about this fairly extensively a while back. I also ran through Don's idea of using ISO 100 for one half and ISO 800 for the other, but ultimately it's the same fundamental issue: sensitivity (true sensitivity, i.e. quantum efficiency) is a fixed trait of any given sensor. Aside from a high speed cyclic readout which constantly reads the charge from the sensor and stores it in high capacity accumulators for each pixel, for standard sensor designs (regardless of how wild you get with materials), there isn't any magic or clever trickery that can increase the amount of light gathered beyond what the base quantum efficiency would dictate. The best way to maximize sensitivity is to:

 A) Minimize the amount of filtration that occurs before the light reaches the photodiode.
 B) Maximize the quantum efficiency of the photodiode itself.

I think, or at least hope, that color filter arrays will ultimately become a thing of the past. Their name says it all: color FILTER. They filter light, meaning they eliminate some portion of the light that reached the sensor in the first place, before it reaches the photodiode. Panasonic designed a new type of sensor called a Micro Color Splitting array, which, instead of using filters, used tiny "deflectors" (SiN) to either deflect or pass light that made it through an initial layer of microlenses, by taking advantage of the diffracted nature of light. The SiN material, used on every other pixel, deflected red light to the neighboring photodiodes, and passed "white minus red" light to the photodiode of the current pixel. The alternate "every other pixel" had no deflector, and passed all of the light without filtration. Here is the article:

The ingenuity of this design results in only two "colors" of photodiode, instead of three: W+R and W-R, or White plus Red and White minus Red. I think that, if I understand where you're going with the descriptions both above and below, this is ultimately where you would end up if you took the idea to its extreme. Simply do away with filtration entirely, and pass through the microlenses as much light as you possibly can. Panasonic claims "100%" of the light reaches the photodiodes...I'm doubtful of that, there are always losses in every system, but it's certainly a hell of a lot more light reaching the photodiodes than is currently possible with a standard bayer CFA.
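For what it's worth, the sum-and-difference reconstruction implied by the W+R / W-R pairing can be sketched like this (a simplification of whatever Panasonic's actual processing does, assuming idealized, noiseless readings):

```python
def recover_red_and_white(w_plus_r: float, w_minus_r: float):
    """Recover R and W from neighbouring W+R and W-R photodiode readings.

    (W+R) + (W-R) = 2W   and   (W+R) - (W-R) = 2R.
    """
    white = (w_plus_r + w_minus_r) / 2.0
    red = (w_plus_r - w_minus_r) / 2.0
    return red, white

# W = 1000, R = 300  =>  W+R = 1300, W-R = 700
print(recover_red_and_white(1300.0, 700.0))  # -> (300.0, 1000.0)
```

Note that no photons are discarded in either measurement, which is the whole point versus a filter that simply absorbs the unwanted wavelengths.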

I think Micro Color Splitting is probably the one truly viable alternative to the standard Bayer CFA. The sad thing is that it's Panasonic that owns the patent, and I highly doubt that Sony or Canon will be licensing the rights to use the design any time soon. Once again, I suspect the trusty old standard Bayer CFA will continue to persist throughout the eons of time. :P

Enhancing resolution (perhaps) slightly (according to 3kramd5's or caruser's description): ( <=EDIT)
Typical pattern is (for DPAF sensor in current config, AF and exposure): ( <=EDIT)

rr  GG  rr  GG  rr  GG  rr  GG
GG  bb  GG  bb  GG  bb  GG  bb
rr  GG  rr  GG  rr  GG  rr  GG
GG  bb  GG  bb  GG  bb  GG  bb

Just resort to this (after AF is done) to the following readout with 20MPix but 2 colors per (virtual) pixel: (<=EDIT)

r  rG  Gr  rG  Gr  rG  Gr  rG  G
G  Gb  bG  Gb  bG  Gb  bG  Gb  b
r  rG  Gr  rG  Gr  rG  Gr  rG  G
G  Gb  bG  Gb  bG  Gb  bG  Gb  b

You are right (and that was my feeling too) that this will not dramatically enhance resolution, but I see one special case where it might help a lot: monochromatic light sources, which will be used more and more as signs (street signs, logos, etc.) are lit by LEDs. I have observed that de-bayering works badly with LED light, especially blue and red light, because the neighbouring green photosites aren't excited enough. I very often see artifacts in that case that vanish if you downsample the picture by a factor of 2 (linear).

I understand the general goal, but I think Micro Color Splitting is the solution, rather than trying to use DPAF in a quirky way to increase the R/B color sensitivity. Also, LED lighting is actually better than sodium or mercury vapor lighting, or even CFL lighting. Even a blue LED with a yellow phosphor has a more continuous spectrum than any of those forms of lighting, albeit at a lower intensity level. However, progress in the last year or so with LED lighting has been pretty significant, and we're starting to see high-CRI LED bulbs at around 89-90 CRI. Specially designed LED bulbs are starting to come onto the market that I suspect will ultimately replace the 95-97 CRI CFL bulbs that have long been used in photography applications where clean, broad-spectrum light is essential.

Regardless of what kind of light sources we'll have in the future, though, I think that, assuming Panasonic can get more manufacturers using their MCS sensor design, or maybe if they sell the patent to Sony or Canon, standard Bayer CFA designs will ultimately disappear, as they simply filter out too much light. MCS preserves the most light possible, which is really what we need to improve total sensor efficiency. Combine MCS with "black silicon", which employs the "moth eye" effect at the silicon substrate level to nearly eliminate reflection, and we have ourselves one hell of a sensor.  ;D

(Sadly, I highly doubt Canon will be using any of these technologies in their sensors any time soon...most of the patents for this kind of technology are held by other manufacturers...Panasonic, Sony, Aptina, Omnivision, SiOnyx, etc. There have been TONS of CIS innovations over the last few years, some with amazing implications (like black silicon)...the only thing Canon developed that barely made it onto the sensor-innovation radar is DPAF, and it was like someone dropped a pebble into the ocean; the DPAF innovation was pretty much ignored entirely...)

Animal Kingdom / Re: Show your Bird Portraits
« on: April 26, 2014, 11:57:26 PM »
Northstar, If you mean me - YEP - I like to hear all the details.  Better than "I shot this yesterday". ;)

Still, it takes all kinds to make an interesting world, have to admit.

Jrista, it's hard to let go of the setup and head out into the woods now that I've become lazy!  I still mutter silent thankyous to you and Don and others.  There is more to a shot than the shot, like observing behaviour.


Yeah, setups are fun. I have to get mine going again, although this year I am going to try and build a multi-level apparatus that will let me set up multiple trays with different types of perches all in a single general area, so I can get different kinds of shots from within my hide without having to move around. I also want to try taking my setup out to Cherry Creek state park, which is nearby, and see if I can attract a greater variety of birds in a new environment. My yard gets a limited range of finch, house sparrow, black-capped chickadee, american robin, grackle, red wing blackbird, mourning dove, and eurasian collared dove. Every so often, a horde of bushtits will blow through, and sometimes I'll get mountain chickadees and the rare american goldfinch. I know there are a lot more species of birds than that out in Cherry Creek, including a couple varieties of tits, more varieties of finches, warblers, sparrows, waxwings, nuthatches, creepers, larkspurs, hummingbirds and probably some others I haven't seen yet (as I can hear songs I don't recognize.) There are also pheasants and probably some other groundfowl that I'd like to lure up to a perch using some of Alan Murphy's setup techniques.

Sometimes you just gotta get up and take the setup with you! :P Then you can have fun in the forest, and still gain the benefit of your setups. ;) And observe behavior (especially if you have a blind...blinds are amazing for getting birds to settle down and get's amazing the behavior you'll see.)

EOS Bodies / Re: 1d IV vs. 7D II
« on: April 26, 2014, 11:22:11 PM »
Here is another real-world example of a critically sharp 7D photo. Again, of a Mourning Dove, specifically its eye, the feathers around its ear, and its neck feathers. This is a 1:1 unscaled crop. It has had zero extra sharpening applied (still at the default 25), and it has had fairly significant noise reduction applied (+40 in LR, as well as masking in the sharpening tool, which further reduces noise in the background).

Furthermore, this is one frame out of a burst. AF hit the eye, but missed the beak. However, in this situation even the 1D X would have done that, as I was pointing at the eye with a thin DOF/wide aperture and the bird's head was pointed away. I'd have had the option of focusing on the beak with the 1D X thanks to its greater number of AF points, a definite bonus there. However, I just want to be clear...this is a real-world shot, single frame, single AF action (I use rear-button AF: press it to focus, let go, start making photos).

As you can imagine, I'm a very strong believer that the 7D CAN INDEED realize its full potential in the field, in the real world. For those of you who are wondering whether a 7D II will be worth it when it is finally released, I have no doubt that it will be. In reach-limited scenarios, I think it will be a phenomenal camera, especially if it hits with 10fps and an improved AF system. I don't think the 7D II will be much better at high ISO than the 70D, which is itself marginally improved over the 7D. High ISO performance and noise performance in general is where sensor area kicks in, and APS-C is's always the same total sensor area. If you are not as concerned about reach, and are more concerned about noise levels, then you want a larger sensor. If you want the best of both worlds, and can get your hands on a 1D IV, that is still the best's got a larger sensor area, so it will have less noise, but it is cropped and generally has smaller pixels than FF cameras (the D800 is probably the one exception.)

If you want the best reach with the best resolution possible, go APS-C. If you want the best noise performance possible, go full frame. Any full frame will have considerably less noise than any APS-C, however if you are interested in the lowest per-pixel noise, then you want both the largest sensor you can find AND the largest pixels. If you are interested in the highest resolution possible, you want the largest sensor you can find with the SMALLEST pixels.
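To make the reach point concrete, here is a rough pixels-on-target comparison. The pixel pitches and the 5mm projected subject width are illustrative assumptions, not exact specs:

```python
def pixels_on_target(pixel_pitch_um: float, subject_width_mm: float) -> float:
    """Horizontal pixel count across a subject of given projected width."""
    return subject_width_mm * 1000.0 / pixel_pitch_um

# Approximate pixel pitches: 7D ~4.3 um, 5D III ~6.25 um, 1D X ~6.9 um.
# With the same lens at the same distance, the subject projects to the
# same width on every sensor, so smaller pixels put more samples on it.
for name, pitch in [("7D", 4.3), ("5D III", 6.25), ("1D X", 6.9)]:
    print(name, round(pixels_on_target(pitch, 5.0)), "px across the subject")
```

In a reach-limited shot the APS-C body simply records more samples of the subject; the full-frame body's extra sensor area goes to background you will crop away anyway.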

Animal Kingdom / Re: Show your Bird Portraits
« on: April 26, 2014, 11:00:44 PM »
Finch Candids (I guess they eat crab apple blossoms??  :o):

These are just a couple from-the-hip shots of house finches as they flitted about my yard. No setups were used or anything, which makes it practically impossible to get them out in the open for a clear shot (or get clean, creamy backgrounds):

EOS Bodies / Re: 1d IV vs. 7D II
« on: April 26, 2014, 10:30:10 PM »
Look, I was addressing one very specific point, the difference between a crop camera image with much higher pixel density and a cropped ff image with much fewer pixels, that was all. I wasn't addressing people's ability to learn, people's ability to buy bigger, better and more expensive gear, or their ability to maximise the IQ from the gear they have.

If you were solely addressing the IQ difference, then the data leads to only one simple conclusion: The 7D offers a measurable, visible improvement in sharpness in reach-limited scenarios.

The reason I counter you is because you stated this:

" if you accept these conditions are not attainable in the real world"


"That is because you like being obtuse and ignoring all the other factors that go into making an image in the real world."

You are clearly making the argument that it is impossible to realize the full potential of the 7D in the real world. You stated such quite clearly with the words "are not attainable"...hard to misinterpret that. I posted a real world example of a CRITICALLY SHARP (sharpest possible) image taken with the 7D. It was the best frame out of about 12, slightly over one second of continuous shooting. From a technique standpoint, I wouldn't have done anything differently if I had the 1D X in my hands...I shoot short bursts, as is the recommended best practice, and pick the best frame. Even if there are more frames that are better with the 1D X, the same technique is used, and I would still have picked the best out of 14-16 frames. It's how you achieve critical sharpness in the field, regardless of which camera you are using.

Here is an example of comparable sharpness differences between the 7D, 5D III, and 1D X assuming you achieved critical sharpness (same image resampled to simulate the various pixel sizes of each camera; 7D versions are 1:1 full size crops, no scaling; note: noise not indicative of real-world noise, as this was saved as a limited-palette GIF for animation purposes):

Assuming you achieve the BEST focus with all three cameras from the exact same location (which IS an attainable goal, regardless of whether you are using the 61pt or 19pt AF systems), with the exact same lens, on the exact same tripod, that eliminates all the other variables except two: pixel size (primarily) and AA filter strength (a very distant second, although any AA filter at all is going to be additive on top of the differences in pixel size, it cannot make bigger pixels better than their size would dictate).

One other thing I'd like to point out. The differences in sharpness between one generation of sensors and the next is never very large. The difference between the 7D and the 70D, for example, is much less than the difference between the 7D and the 5D III or 1D X. The difference between the 5D III and the 5D II is also much smaller than the difference between the 7D and either the 5D III or 1D X. These "small" and "meaningless" differences in sharpness are what we all scramble about spending thousands of dollars for in the first place! :P The 5D III was about a 5% improvement in spatial resolution over the 5D II, but the 7D is a 111% improvement in spatial resolution!  :o The difference in spatial resolution between the 5D III and D800 is 63%. The difference between the 1D X and 5D III is 24%. The 7D still has a 30% advantage over the D800 (although that's going to be hard to see in a side-by-side comparison)! :P  8) Even when you throw in the 7D's "strong" AA filter (which really isn't too strong, it's just right), it STILL has a greater resolution edge over all of Canon's Full Frame alternatives.
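Reading "spatial resolution" here as areal pixel density (pixels per mm² of sensor), the percentages above can be reproduced, give or take a point or two depending on which exact pixel counts you plug in:

```python
# Nominal effective pixel counts and sensor dimensions (mm):
cams = {
    "7D":     (17.9e6, 22.3 * 14.9),
    "5D II":  (21.0e6, 36.0 * 24.0),
    "5D III": (22.1e6, 36.0 * 24.0),
    "1D X":   (17.9e6, 36.0 * 24.0),
    "D800":   (36.3e6, 35.9 * 24.0),
}
dens = {name: mp / area for name, (mp, area) in cams.items()}

def gain(a, b):
    """Percent pixel-density advantage of camera a over camera b."""
    return 100 * (dens[a] / dens[b] - 1)

for a, b in [("5D III", "5D II"), ("7D", "5D III"),
             ("D800", "5D III"), ("5D III", "1D X"), ("7D", "D800")]:
    print(f"{a} vs {b}: {gain(a, b):+.0f}%")
```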

I just thought I would clarify why I think it is important not to underestimate the advantage of smaller pixels in reach-limited scenarios. We all strive for more resolution, for that last little improvement that gives our photography the edge. It doesn't even matter if it is the sensor, or some other aspect of the camera that improves our IQ; from generation to generation, the actual measurable differences are not huge by any means. Canon's 61pt AF system is about 35-45% better than the 45pt system they used in the 1D IV (which was a pretty massive improvement over the 1D III; however, the 1D III had some pretty major and persistent firmware bugs that crippled the system, which would have otherwise been excellent.) A majority of photographers are pixel peepers at one point or another...even the pros are (just dig into a few pro sports and wildlife/bird photographer blogs, and see how often they post 100% crops to show off their low noise at ISO 51200, or examine sharpness, or something else like that.) As much as I know my images are primarily published to the web, and most frequently at around 1200px on the long edge (over a 4x downsample), even the web is becoming more demanding. I've started uploading my photos to 500px at 1920x1200. In a couple of years time, I suspect I'll start uploading "High DPI/4k Ready" images at 3480x2400. Every little bit counts. It's what we spend thousands of dollars for. If you don't have the budget for high frame rate FF or the lenses to maximize its potential, higher resolution APS-C parts are going to become increasingly valuable (and the differences in spatial resolution between APS-C pixel sizes, even with strong AA filters, are a lot more than most of the differences between full frame options themselves, even across generations.)

Animal Kingdom / Re: Show your Bird Portraits
« on: April 26, 2014, 08:30:41 PM »
Another Mourning Dove

EOS Bodies / Re: 1d IV vs. 7D II
« on: April 26, 2014, 08:30:00 PM »
"I don't know why this comparison, which clearly gives the edge to the 7D, is always downplayed as showing that the 7D doesn't "really" have an does."

That is because you like being obtuse and ignoring all the other factors that go into making an image in the real world. You might like to say "the 7D definitely shows a marked improvement in resolution." but it doesn't show anything like the difference all you "more pixels on target" guys always hypothesize about. The difference is small, these are massive crops, the whole setup was artificially created to maximise the crop sensor advantage and, you will never attain that level of detail in real world shooting situations. If you ignore all that it is easy to say the 7D is substantially better; if you accept these conditions are not attainable in the real world and the small difference is easily eaten up by AF inconsistencies, lower contrast, higher iso, less stability etc etc, then you would see the example as most practical people, and those who have owned both, do: the difference is marginal in optimal artificial conditions. That doesn't mean there is no reason to shoot with a crop camera, just that the difference in resolution between real world cropped ff images and images from a crop camera of the same generation is not compelling in itself. I only did the tests because I wanted an excuse to buy a 7D, but realised for me there was no point.

But don't take my word for it, there are countless people who have had 5D MkIIs and 7Ds out there that will tell you the same thing. Neuroanatomist is one, so was the complete pixel-on-target theoretician AlanF, until he got a FF camera.

I think what you may be failing to account for is that photographers often find a way of negating, at least partially, the reach issues when using FF cameras. Take bird photography as an example case, as I think that is the most common use case for the Canon 7D and 400mm lenses. A lot of the people who buy a 7D and a 400mm lens (or the 100-400mm zoom) are those who are just starting out and don't really have the option of spending the kind of money necessary for FF and longer lenses (either a 300/2.8 + TCs or a 600/4). They are the most reach handicapped.

One does not remain at a reach disadvantage forever, though. With time comes skill, and eventually most novice bird photographers learn how to get closer (just take a look at Jack in the bird photography forum here to see an example of a guy who took some advice to heart, and is now hardly reach limited at all for most of his work). Once you learn to get closer, you both learn how to maximize the potential of your existing gear, and develop a need for better gear. If you really stick with it and hone the skill of getting close, you can, with care, completely eliminate the reach handicap with FF and a 400mm lens (it isn't easy, it definitely takes skill, and a lot of pro photographers prefer not to get that close, as it is generally disruptive to the birds and runs a high risk of scaring off whole flocks, which in turn can be disruptive to other bird most stick with much longer lenses...600mm, 800/840mm, 1200mm.)

Neuro is an accomplished photographer. He has the skill to maximize the potential of a full frame camera. He is obviously not one of the reach handicapped, and is therefore also not one of the people the 7D line is marketed to. That does not, however, mean that there is no market for the 7D line at all. The 6D isn't exactly an alternative for beginner birders or wildlife's got a slower frame rate (and if the rumors end up true, a much slower one), and its larger frame puts them at an even greater disadvantage. The Tamron 150-600 will certainly make the 6D a more viable wildlife and birding camera; however, there is still a lot of value in the intrinsic reach of the 7D II and the potential 10fps frame rate.

Plus, at 24mp, it would offer an even better resolution advantage over the 7D. The 7D, as much as you try to downplay its resolution advantage, is definitely resolving more detail in your comparison images. The 7D II would resolve another 30% more on top of that. The 5D III is a mere 5.7% improvement over the 5D II, and the 1D X is, from a resolution standpoint, a step back (thus effectively necessitating the use of great white lenses to maximize its potential.)

I have no question the 7D II will offer a significant reach benefit. It'll be an excellent camera for beginner bird photographers and a great option for beginning wildlifers (especially those who want to photograph wildlife in action), although those who shoot after sunset and don't care as much about action could probably do quite well with a 6D.

The advantages of full frame sensors are clear, there is no question about that and I do not deny that. But I think it's unfair to make the ASSUMPTION that you cannot realize the advantages of a high resolution crop sensor in the field. The resolution advantage of the 7D is fully realizable in the field, even with the relatively lowly 100-400mm lens, if you have the skill. Some professional bird photographers have used nothing but the 7D and 100-400mm lens for all of their work, and it's stellar work, too. There are advantages on both sides. FF may have more, but that does not mean that APS-C has none. I am also clearly not the only one who sees the resolution advantage that the 7D has, either. Bird photography is also not the only form of photography where APS-C can be useful, still scene photography is not the only way one can realize the full potential of smaller APS-C pixels.

I'm trying to be objective, give credit where it's due, and having used the 7D for years now, I know full well what it is capable of (and what it is NOT capable of, and where FF is clearly better.) I'm also well beyond reaching the limits of what the 7D can do. I have a hell of a lot more skill now than when I first picked up a 7D. Back then, I had no ability to get close to birds, APS-C was essential. Today I can easily get close, and half the time the birds come right to me when I'm properly hidden. I'll be another one of the people joining the FF camera ranks soon enough here, but that isn't going to give me reason to suddenly ignore the advantages of APS-C, nor forget that there is a prime target market for the 7D line of cameras (one of which is as a backup camera for FF 1D X and 5D III and maybe even 6D users.)

Just as an example, here is a photo I took recently in a reach-limited situation. My yard is currently home to at least four Mourning Dove couples. They are quite happy to hang out and munch down on my bird seed even when I'm out there, so long as they know where I am, but they still seem to be camera shy, so I have to hide to photograph them. That usually means setting up behind one of the corners of my house, or hiding inside and poking my lens out my sliding glass door. I don't have the option of getting closer. This photo shows off how sharp a 7D can really get; if I'd used a 5D III, 6D, or 1D X, I'd have been putting fewer pixels on the subject. This is the exact reason I'd get a 7D II: as a reach-limited backup camera to a 5D III or 1D X:

EOS Bodies / Re: dual pixel tech going forward
« on: April 26, 2014, 08:06:32 PM »
which would be a little odd to demosaic and might not produce the best quality results.

Interesting. Why could it not produce results as good, if not better? I'm assuming an array like this:

Code:

Each pixel would read two colors: green, plus either red or blue. That maintains the same color ratio as a Bayer-type CFA (2G/1R/1B); it's just collocating each green with another color.

You have a pixel size ratio issue here: you have twice as many samples horizontally as you do vertically. I think this was the first thing Neuro mentioned. To correct that, you would have to merge the two halves, in which case...why do it at all? You lose the improved resolution once you "bin", regardless of whether your binning is electronic or a digital algorithmic blend.
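The binning objection can be sketched in a few lines of pure Python. This is a toy illustration under my own assumptions, not any real sensor readout pipeline: each full pixel is read as two horizontally adjacent half samples, then merged back together.

```python
# Illustrative toy of the split-pixel binning argument, NOT a real
# sensor readout pipeline. Each full pixel is read as two horizontally
# adjacent half-photosite samples, then merged ("binned") back together.

def split_pixel_row(full_pixels):
    """Read each pixel as two half samples (2x horizontal sample count)."""
    halves = []
    for value in full_pixels:
        # In reality the two halves see slightly different light; perturb
        # them so the extra samples carry some distinct information.
        halves.append(value // 2 - 1)
        halves.append(value // 2 + 1)
    return halves

def bin_halves(halves):
    """Merge each pair of half samples back into one pixel value."""
    return [halves[i] + halves[i + 1] for i in range(0, len(halves), 2)]

row = [100, 120, 90, 110]       # 4 "full" pixels
halves = split_pixel_row(row)   # 8 samples: a 2:1 horizontal/vertical ratio
binned = bin_halves(halves)     # back to 4 samples; extra resolution is gone

print(len(halves), len(binned))  # 8 4
print(binned)                    # [100, 120, 90, 110]
```

Whether the merge happens electronically on-chip or as a digital blend in the demosaicing algorithm, the halved horizontal sample count comes out the same.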

Regarding color fidelity, I don't know of any evidence that your particular design would improve it. There have been a LOT of attempts to use alternative CFA designs to improve color fidelity. Some may have helped: Sony added an emerald pixel (basically a blend of blue and green), and Kodak experimented with various arrangements of white pixels. Fuji has used a whole range of alternative pixel designs as well, including a 6x6 pixel matrix with lots of green, some red and blue, extra luminance pixels, and a variety of other layouts. Sony has even designed sensors with triangular and hexagonal pixels, paired with alternative demosaicing algorithms, to improve color fidelity and sharpness and reduce aliasing.

None of these other designs has ever been PROVEN to offer better color fidelity than the simple, standard RGBG Bayer CFA. The D800 is an excellent example of how good a plain old Bayer can be; its color fidelity is second to none (and even bests most MFD sensors).

Anyway...DPAF isn't a magic bullet. It solved an AF problem, and solved it quite nicely, while concurrently offering the most flexibility by rendering the entire sensor (well, 80% of it) usable for AF purposes. To get more resolution, use more, smaller pixels. If you want better dynamic range, reduce downstream noise contributors (bus, downstream amps, ADC units). If you want better high ISO sensitivity, increase quantum efficiency. If you want improved light-gathering capacity, make the photodiodes larger, increase the transparency of the CFA, employ microlenses (even in multiple layers), move to BSI, use color splitting instead of color filtration, etc. Sensor design is pretty straightforward. There isn't anything magical here, and you also have to realize that a LOT of ideas have already been tried; most of them, even if they get employed at one point or another, end up failing in the end. The good, old, trusty, straightforward Bayer CFA has stood the test of time and withstood the onslaught of countless alternative layouts.

EOS Bodies / Re: dual pixel tech going forward
« on: April 26, 2014, 07:54:07 PM »
"It won't improve resolution (since the photodiode is at the bottom of the pixel well, below the color filter and microlens),"

I think that the whole structure below the filter is the photodiode - to discriminate both "phases" you need to discriminate light that hits both photodiodes of 1 pixel. So there is a chance to enhance resolution SLIGHTLY by reading out of both photodiodes separately.

Trust me, the entire structure below the filter is not the photodiode. The photodiode is a specially doped area at the bottom of what we call the "pixel well". The diode is doped, then the substrate is etched, then the first layer of wiring is added, then more silicon, then more wiring. Front-side illuminated sensors are designed exactly as I've depicted: the photodiode is very specifically the bit of properly doped silicon at the bottom of the well.

The "well", or better the potential well, of a photodiode is the part of the photodiode where the charge is stored during exposure. It is made of (doped) silicon, which is opaque. The image you provided seems a little strange to me: how could the light hit the photodiode at the bottom if the well is opaque? Please send me the source of the image, and hopefully I can find some enlightening information about it!

Thanks in advance - Michael

Silicon is naturally semitransparent to light, even well into the UV range, and particularly in the IR range. The natural response curve for silicon tends to peak somewhere around the yellow-greens or orange-reds, and tapers slowly off into the infrared (well over 1100nm). The entire structure I've drawn is only a dozen or so microns thick at most. Light can easily reach the bottom of the well or photochannel or whatever you want to call it. The photodiode is indeed at the bottom. Sometimes the entire substrate is doped, sometimes it's an additional layer attached to the readout wiring. Sometimes it's filled with something. Here is an image of an actual Sony sensor:

Here is Foveon, which clearly shows the three layers of photodiodes (cathodes) penetrating deeper into the silicon substrate for each color (the deeper you go, the more of the higher frequencies are filtered out, hence the blue photodiode is at the top and the red at the bottom), and there is no open well; it's all solid material:

Here is an actual image of one of Canon's 180nm Cu LightPipe sensor designs. This particular design fills the pixel "well", as I called it, with a highly refractive material; the well itself is also lined with a highly reflective material, and the photodiode is the darker material at the bottom, attached to the wiring:

Regardless of the actual material in the well, which is usually some silicon-based compound, the photodiode is always at the bottom. Even in the case of backside illuminated sensors, the photodiode is still beneath the CFA, microlens layers, and all the various intermediate layers of silicon:

This image is from a very small sensor; its overall thickness is much less than that of your average APS-C or FF sensor. The entire substrate is apparently photodiode cathodes; you can see the CFA, microlenses, and some wiring at the bottom. The readout wiring is at the top. The photodiode layer is in the middle.

Every sensor design requires light to penetrate silicon to reach the photodiode.
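To put rough numbers on that penetration, here is a Beer-Lambert sketch. The 1/e absorption depths below are order-of-magnitude illustrative figures for room-temperature silicon (my own assumed values, not authoritative data), but they show why blue light is absorbed near the surface while red and IR reach deep, which is also the principle Foveon exploits:

```python
# Beer-Lambert sketch of light penetrating silicon. The 1/e absorption
# depths below are rough order-of-magnitude figures chosen to illustrate
# the trend (shorter wavelengths absorb shallower), not exact data.
import math

ABSORPTION_DEPTH_UM = {  # wavelength (nm) -> approximate 1/e depth (um)
    400: 0.1,
    550: 1.0,
    700: 5.0,
    800: 10.0,
}

def transmitted_fraction(wavelength_nm, depth_um):
    """Fraction of light still unabsorbed after depth_um of silicon."""
    return math.exp(-depth_um / ABSORPTION_DEPTH_UM[wavelength_nm])

# Fraction of light reaching a photodiode 3 microns down the "well":
for wl in sorted(ABSORPTION_DEPTH_UM):
    print(wl, round(transmitted_fraction(wl, 3.0), 4))
# 400 0.0
# 550 0.0498
# 700 0.5488
# 800 0.7408
```

With these assumed depths, essentially no 400nm light survives 3 microns of silicon, while most 800nm light sails right through, which is consistent with both the deep FSI pixel structure and the depth-stacked Foveon photodiodes above.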
