

Messages - jrista

1171
EOS Bodies / Re: More Sensor Technology Talk [CR1]
« on: May 03, 2014, 05:45:15 PM »
Don't pretend you don't have your own biases, though.  You are proud of, and trumpet often, your bias against an entire company, Sigma.

I've never pretended. I'm pretty straight up about what I think of Sigma. I am not against the entire company. I've said on many occasions that I think their new lenses from the last couple of years are excellent, and that I appreciate the competitive force they bring in that arena.

I have NEVER hidden my feelings about how Sigma has handled Foveon. I have been quite open about it. I think they do Foveon, a technology I believe has a lot of potential, a severe disservice by misleadingly selling it as having some magical power to increase resolution, when it does nothing of the sort. Spatial resolution is determined by pixel size, plain and simple. Foveon's strengths lie in areas other than spatial resolution, and they are good strengths: no color moire, good sharpness (for the resolutions that Foveon sensors come in), and excellent color fidelity.

Sigma wastes far too much time, money, and effort trying to trick potential customers into thinking they will get more resolution with a Foveon than with a Bayer sensor, which is just a blatant, outright lie. I don't appreciate that, and yes, I fault Sigma for it. If Sigma would take a big chunk of their false advertising budget and inject it into their R&D department instead, I think they could make Foveon viable on both the color fidelity and spatial resolution fronts, and actually have a real competitor on their hands. But sadly, they keep pushing their misleading advertising.

Your bias and the need to feel proud of it somehow, is rather juvenile, don't you think?

Bait. Hmm. I'll let another fish bite.

Since you are very concerned about having the highest image quality, you should never use an aps-c camera, yet you do, very often.  Practice what you preach.

I use an APS-C camera because I haven't had the money to buy a full-frame camera. I spent over ten grand on a lens last year. No one who isn't independently wealthy spends that kind of money, then turns right around and spends thousands more on MORE equipment. I do practice what I preach. As soon as I have the funds, I'll be using a full-frame camera. Until then, my 7D has more reach, thanks to its higher spatial resolution, and that's a fact I greatly appreciate. Oh, it's also a fact I preach, too. ;P
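To put rough numbers on what "reach" means here, a quick back-of-the-envelope comparison (the pixel pitches below are approximate figures I'm assuming, not official specs):

```python
# Rough "reach" comparison: how many pixels a distant subject spans on
# sensors with different pixel pitches, at the same focal length and distance.
# Pixel pitch values are approximate assumptions, not official specs.
aps_c_pitch_um = 4.3   # ~7D-class APS-C pixel pitch (assumed)
ff_pitch_um    = 6.25  # ~5D III-class full-frame pixel pitch (assumed)

subject_size_on_sensor_um = 2000.0  # a subject that projects to 2 mm on the sensor

px_aps_c = subject_size_on_sensor_um / aps_c_pitch_um
px_ff    = subject_size_on_sensor_um / ff_pitch_um

print(f"APS-C: {px_aps_c:.0f} px across the subject")
print(f"FF:    {px_ff:.0f} px across the subject")
print(f"Reach advantage: {px_aps_c / px_ff:.2f}x more pixels on target")
```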

1172
EOS Bodies / Re: More Sensor Technology Talk [CR1]
« on: May 03, 2014, 05:38:43 PM »
I've been through every free Chipworks article they have ever published.

Hmm. Obviously not, because Chipworks has a free partial die photo of the 70D sensor:
https://chipworks.secure.force.com/catalog/ProductDetails?sku=CAN-EOS-70D_Pri-Camera&viewState=DetailView

Take a careful look and consider the geometry of a dual-photodiode pixel:
- you can have two rectangular photodiodes that form a square pixel
- or, you can have two square photodiodes that form a rectangular pixel
- finally, you can have two square photodiodes plus wasted space on the die that form a square pixel

The image you are referring to does not even contain any of the dual pixels, assuming those are pixels at all. On the contrary, they look more like readout pins in a land grid array, which would be on the BOTTOM of the sensor, the opposite side from the actual pixels. And assuming they are not readout pins, I would know a CMOS sensor pixel if I saw one...those are not even remotely close to what a CMOS pixel looks like. They don't even have microlenses or color filters; it's just wiring and bare silicon substrate. That image is from the outer periphery of the sensor die, which is usually riddled with power regulation transistors and other non-pixel logic. Canon's DPAF pixels are only in the center 80% of the part of the die that actually contains pixels, so even if the image WAS of pixels (which it is not), they wouldn't be DPAF pixels...they would be standard single-photodiode pixels.


Now, as I said, take a careful look at the partial sensor die and tell me if you see:
a) any rectangular features in this photo
b) any apparently wasted space

A partial die photo is certainly not a definitive proof.

It isn't proof, because you are gravely mistaken about what that photo is actually of. There is even some kind of stamp on top of the electronics in the region of that photo that ChipWorks has shared. You don't stamp the actual pixels...and usually such stamps are again on the back side or very outer periphery of the sensor, not the side with the pixels. This photo is either of peripheral logic on the top side of the sensor, or circuitry or pinning on the bottom side of the sensor.

It's a very good clue, though, that the 70D sensor is in fact using a quad photodiode design, not a dual one.
Again, just think of the geometry of a dual pixel design and make your own conclusions.

Again, you're completely misinterpreting what that image is.

As for the resolution of a non-bayer filter: I should have been more clear.
The 70D sensor is a bayer sensor, where each pixel has a monochromatic R/G/B color filter.
Thus, each of the four constituent photodiodes of that pixel lies under a single, common monochromatic filter - that happens to throw away 2/3 of the incoming light.

Now, imagine if each of the photodiodes had their own, individual color filters.

I don't need to imagine, as that is exactly what a sensor with split photodiodes WITHOUT DPAF or QPAF would be...each photodiode would have its own color filter, because each photodiode would be a pixel. :P Thus, what you are proposing is the removal of DPAF technology, a factor-of-two reduction in pixel size, and a higher resolution. That's it! There really, truly, honestly isn't anything special about giving each smaller photodiode its own filter. That just means you have a sensor with four times as many pixels, which is pretty much what each new generation of sensors gets anyway. (Well, not four times as many pixels, but a pixel size reduction and an increase in pixel count is a pretty consistent fact of just about every new still photography camera release.)

You still have a single pixel with a single microlens.

If you do this, then you are going to have problems properly distributing light into each photodiode. The entire purpose of the microlens is to guide as much light as possible onto the photodiode. If you try to increase the pixel resolution below the microlens, the problem you have is that one of those four subpixels gets more light than the rest, as the microlens, just like any other lens, FOCUSES LIGHT. The focal point, where the majority of the light is concentrated, is rarely dead center underneath the microlens (the farther from the center of the sensor you go, the more off-center the focal point from the microlens will be). So, if you split the color filter and photodiode underneath the microlens, you'll greatly increase noise levels...one of the four subpixels will get most of the light, and the other subpixels will get significantly less. Your idea effectively trades noise for resolution.

Your counter might be: well, just use more layers of microlenses, one for each photodiode. If you throw in more layers of microlenses, then you further screw with the AF capability of the subpixels, as you would be mucking with the phase of the light below the initial microlens. Muck with phase, and you can no longer "phase detect" (PD), or at least not detect it as well or as accurately. So again, as I said before, all you are proposing is a factor-of-two reduction in pixel size, or a factor-of-four increase in pixel count. In other words, a standard (non-AF capable) sensor with higher resolution...and more noise.
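To put rough numbers on the shot-noise side of that argument, here's a toy sketch (the photon count and the uneven split are made-up, purely illustrative):

```python
import numpy as np

# Toy shot-noise model for one pixel's worth of light split unevenly among
# four sub-pixels under a single microlens. Numbers are illustrative only.
photons_total = 4000                          # photons collected by the whole pixel
shares = np.array([0.55, 0.20, 0.15, 0.10])   # assumed uneven split (focal point off-center)

sub_signals = photons_total * shares
sub_snr = np.sqrt(sub_signals)                # photon shot noise: SNR = sqrt(signal)
whole_pixel_snr = np.sqrt(photons_total)

print("Per-sub-pixel SNR:", np.round(sub_snr, 1))        # brightest ~46.9, dimmest ~20.0
print("Undivided pixel SNR:", round(whole_pixel_snr, 1)) # ~63.2
```

The starved sub-pixels end up with roughly a third of the SNR of an undivided pixel, which is exactly the noise-for-resolution trade I'm describing.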

Underneath,  however, there are four individual color filters - one for each photodiode.
Here's the thing about the individual color filters: they don't have to be monochromatic R/G/B filters anymore.
Instead, you can use a combination of di/poly-chromatic filters, from which you can derive the overall pixel color.
And instead of deriving a single R/G/B color, as in a bayer sensor, you derive all three primary colors.

Look up Micro Color Splitting Sensor. Panasonic's design is vastly superior to any kind of di/poly-chromatic filter, because it simply doesn't filter. It splits light, but directs all of it into photodiodes.

In summary, if you have a single, monochromatic filter for the entire pixel, you can only get one color per pixel (either R, G, or B).
But if you use individual di/poly-chromatic filters for each photodiode, you can derive all three primary colors per pixel (R+B+G).
Plus, you have a more sensitive/efficient pixel, as di/poly-chromatic filters by definition are wasting less light than a monochromatic filter.

And, by definition, MCS wastes zero light. Why invest time, money, and effort into a very complicated pixel design, one that is prone to being much noisier due to improper use of a microlens, when there are proven techniques that eliminate filtration entirely?

Back to the topic of extra resolution:
The increase in resolution comes from the fact that you have all three primary colors per pixel vs the single color per pixel in a bayer sensor.
Admittedly, the resolution increase is not all that big - but it's still an increase.

What you're proposing is a significant increase in resolution. The fact that you don't understand even that demonstrates that you don't understand sensor technology all that well, which indicates that you're just speculating and dreaming. Nothing wrong with dreaming, but you should be aware that's what you're doing. ;) You're DOUBLING resolution in both the horizontal and the vertical by making each photodiode one quarter the size. The D800 clearly has a lot more resolution than the 1D X, and it's basically the same thing...twice the resolution.

You are really just talking about a pixel size reduction. Again...there isn't anything special here, and because you're proposing that one single microlens be used for multiple pixels, you're going to have an increase in noise due to what I described above. The increase in noise is going to be a severe drag on IQ, so again...you're talking about, at the very best, a net neutral difference, and at worst you're going to get WORSE IQ with your sensor design because of the increased noise.

Think about all those things.
You seem to be dismissing the quad-photodiode tech - seemingly without fully realizing its potential.
If you believe that Foveon is better than Bayer, just consider that a quad-photodiode design with individual non-bayer color-filters (one per photodiode) is a better solution than Foveon.

I fully understand what DUAL-photodiode technology is, how it works, why it's designed the way it's designed, and I also understand that it isn't some magical technology that will suddenly slingshot Canon ahead of the competition. You are dreaming, pure and simple, that somehow Canon has solved their IQ problems with an AF invention. It's just a dream, though. It's the same dream a lot of Canon users have, because they all want better IQ out of Canon sensors, but it's still just a dream. It's an ill-educated dream, I am sorry to say, and you're misinterpreting a lot of information (such as the Chipworks photo of the OUTER PERIPHERY of the 70D sensor...anyone who knows anything about die fabrication understands that the outer periphery of any CMOS die, sensor, CPU, memory, whatever, is the domain of power regulation, control circuitry, wiring, and pin solder points, not core logic, memory cells, or pixels.)

Canon does not have quad pixel technology. If they had already used it in the 70D, then they would have received patents for it years ago. I've read all of Canon's photography-related patent releases for the last three years. They have several for DPAF technology, some new ones since the 70D that have not been implemented anywhere. Their patents, being patents, MUST be extremely precise and explicit about the design (that's what patents are, specific details about specific implementations of a concept). Not one single patent Canon has ever filed for DPAF has ever detailed quad photodiodes. Neither would Canon have sold themselves short by announcing DUAL pixel technology if in reality they had QUAD pixel technology...if they had QPAF, they would have told the world. It would be big news.

Finally, Canon also already has patents for layered sensor technology that really, truly DOES have the potential to increase image quality. Given some of the things their patents discuss, such as the use of what is basically akin to the nanocoating technology they use on some of their lenses on the second and third photodiode layers, Canon has the potential to improve the total amount of light their red and green photodiodes are sensitive to by reducing the chance of reflection at those lower layers, thereby increasing Q.E. Canon's Foveon-like technology has the potential to be superior to Sigma's Foveon technology, and with Canon's R&D budget, they certainly have the power to bring the technology to market and continue improving it.

If you want to root for Canon, and really want better image quality (which has less to do with photodiode count, and more to do with pixel design quality, quantum efficiency, etc.), then you should look into their layered sensor patents and root for them to actually make a DSLR camera that uses it. If Canon is indeed using nano-crystal technology to reduce reflection and increase the Q.E. of the photodiodes in their layered sensors, I think they really have something that could outdo Sigma's Foveon, and outdo it enough that Canon could produce a 30 or 40 megapixel layered sensor that not only has the benefit of higher color fidelity, but also higher native, non-bayer spatial resolution. THAT is where a meaningful increase in IQ for Canon DSLRs will come from...not DPAF.

1173
You should change your metaphor; it is Canon who is chasing today.
Sony, as one example, sold more cameras in South Korea than Canon and Nikon in 2013, and I think the rest of the world will go the same way, from large SLRs to smaller bodies with a FF sensor.

Canon has never chased anyone. They never chased anyone in the past, and they are not chasing anyone now. Canon does what Canon does, for whatever reasons Canon decides to do them. People are constantly complaining about how "Canon hasn't responded to <pickyourpoison>" and "Canon MUST respond to <yaddayadda>"...they constantly complain, because Canon is not in the business of "responding" to anyone for anything...never have, and I don't have reason to suspect they ever will.

Canon builds products for THEIR customers. They build EXCELLENT products for THEIR customers. The fact that Canon builds excellent products for their customers is the reason why they are one of the top imaging companies in the world, and the top photography company in the world. Canon delivers what their customers ASK for, and they make sure that what they deliver lives up to the expectations their customers have, and their own reputation.

Nikon is a very different company. Nikon has practically made a reputation out of doing two things: responding to competitors' products (and responding extremely late, well beyond the time when the ship sailed and the train left the station), and creating hyperniche products like the Df or a 24-karat gold-plated, lizard-skin gripped $12,000 trophy camera that no one cares about other than as a curiosity on the internet every so often (oh yes, that thing really does exist...which actually blows my mind... ???).  Sony doesn't even seem to have a plan, it's just "*BLAMM!* Shotguun and Ho'yeah! Let's see wut sticks!  :o" wild-west product design and production that's burning their funds and burying them in a hole so deep and filled to the top with debt they will never be able to see sunlight again (let alone pay off).


Canon is not, and will not, be responding with anything to any competitor's product any time soon. Canon will release the 7D II, or the 5D IV, or the 1D X II or whatever the next big thing is when THEY decide it meets the necessary requirements and is capable of maintaining and building up Canon's reputation as the world's top (and most profitable) photography company. When the next big thing is released, it WILL be a phenomenal product that DOES live up to Canon's reputation as a top-notch photography company, and even if it doesn't have 25 stops of DR, 150 megapixels, 100fps, a 900 image frame buffer, a 200 point AF system that works in both mirror mode and live view/video mode, a 12000ppi 10-bit full-color high DR 60fps EVF and quad memory card slots supporting both CF and CFast2 all for the rock bottom low price of $500....good grief ppl....do you realize what you all sound like when you bring up the "Canon MUST respond!" and "Canon charges too much!" and "I want this, and this, and that, and OH YEAH THIS THING TOO! AND IT HAS TO BE $1500!!!!!!1!1!111111~~! *gimmegimmehgimmeeeenglfheee* *gasp* *GASP* *SUUCKING IN AIR....*"?   :o ::)

Bleh...it would be a wonderful day if everyone could just be happy with the fact that pretty much every single camera on the market today puts nearly every camera from the film era to complete and total, utter shame when it comes to IQ. Even when it comes to drum-scanned large format film, while you gain in resolution, even that can't really touch the color depth and brilliance of a high resolution digital sensor these days.


your reliance on the Canon brand is astonishing
at the same time I can read from you that Canon is behind in sensor tech, and I and many, many Canon owners long for a high resolution, high DR camera now.
a bit contradictory.

It's only contradictory if you assume that the sensor is the sole source of image quality, or that Canon's sensor IQ is the single source of their success as a photography company. Clearly, given the plethora of evidence, the fact that Canon's sensor IQ is no longer "the best of the best of the best" has nothing to do with the fact that Canon makes excellent cameras, excellent lenses, has the best customer service department of any camera company, and sells more cameras than any other camera company.

It's also only contradictory if you assume Canon is incapable of progressing and leapfrogging the competition, again. There is only one individual I know of who has persistently pushed the notion that Canon is literally incapable of competing. He was permanently banned from these forums for his constant antagonism...I certainly hope you are not him.

The fact that you insist that Canon specifically provide you with a high megapixel, high DR camera indicates that you seem to rely on Canon more than I do. Unless one of Canon's next camera releases has a notable improvement on DR, I myself will be picking up a Sony A7r for my high DR, high resolution landscape work. If you really, truly, honestly NEED more dynamic range, and you are really, truly, honestly not completely and utterly dependent upon Canon, then you would have stopped complaining about Canon offering a high DR camera a very long time ago...because there are other options out there that already offer what you supposedly need! :P

1174
Animal Kingdom / Re: Show your Bird Portraits
« on: May 03, 2014, 02:17:14 PM »
Thanks Hank. Yours is a great shot. How do you manage to get so close, without scaring off the bird?

Thanks, Eldar. It helps A LOT that the Japanese White-eye is well-adapted to the urban park environment, and that the particular tree is right next to a walkway with people coming and going pretty much throughout the day. To them I was probably just another guy with a stick (monopod) standing next to a park bench...for a long time. My concern was less with disturbing the nest than attracting attention from passersby.
That explains it. The willow tit lives in the high mountain birch forest and can live its entire life without being close to any human being, so they are more easily scared off.

I have posted a couple on the BIF thread, but I am happy enough with them to also republish one here. The reason for chasing this little fellow was to get shots of just when it jumps off from the tree. When a bird jumps off, before it has retracted its legs, you can either get a very energetic take-off look or, at the other end of the scale, you get this hanging-in-the-air, almost ballet-like posture. This one is one of the latter. (The thing in its beak is wood carved from the nest cavity he is excavating in the trunk, which he disposes of at a safe distance from the tree.)

It was very difficult to nail focus, because it is extremely fast. The 1DX AF never picked up the bird in the air, so I had to get the bird's jump-off within the focal plane I had set. I think I shot about 200 take-offs and managed roughly a 10% keeper rate from a focus perspective, and about 25% of those had a good posture. To me this was exactly what I was looking for, and the bokeh is amongst the best I have ever managed to get, so I was happy.

1DX, 600mm f4L IS II
1/4000s, f11, ISO4000

Superb shot, Eldar! Good to see your kind of dedication paying off.

1175
Perhaps I am a bit dense, but I guess I don't understand this company loyalty business.  People are "rooting" for Canon, or dismayed with Canon for not "competing" adequately against their opponents.

<snip>

The reality (at least from my perspective) is that all of these companies make excellent cameras.  As someone else mentioned, compared to film cameras, all of them are heads and shoulders better.  So, why does it matter so much what Canon does?

Quite right.  Generally the pro- Canon talk is in response to the anti-Canon talk.  There seems to be a small, vocal group who think their photo needs are broadly representative of the entire market, so they're put out when Canon doesn't give them exactly what they want, and they try to argue that Canon's business will decline if they don't listen to their customers.  Others, including me, point out that Canon is selling quite well, and has a history of incorporating tech and features when demanded by the market, but that they are a conservative, consistently profitable business.

That's really it, there seem to be two camps: (1) those who believe their niche demand represents the market as a whole; (2) those who may have individual preferences for new features, but acknowledge that the customer base (the "market") speaks with its own voice.

Of course we all want all the best features of the competing products, without losing the benefits of what we already have, or suffering an increase in price.  Well, that ain't gonna happen.  As you rightly point out, those with particular needs should choose the gear that meets those needs.

+1 Well put.

1176
EOS Bodies / Re: More Sensor Technology Talk [CR1]
« on: May 03, 2014, 02:09:29 PM »
How is it that in your treatise on software and noise reduction you didn't include the method used by Canon cameras?

Probably because he doesn't shoot JPG, which is the only situation where the method used by Canon cameras is relevant, and if you're shooting SOOC JPG images, then achieving the best IQ of which your camera is capable is certainly not your priority.

I didn't know that dark frame subtraction was limited to JPEG.

Dark Frame Subtraction (LENR, Long Exposure Noise Reduction, which I explicitly excluded in my response) is not the standard JPEG noise reduction. LENR is user-togglable for either RAW or JPEG, and its sole purpose is to remove hot pixels. However, because of how LENR works, it actually tends to make deep shadow random noise worse. This is why astrophotographers generally do not use in-camera LENR, and instead choose to take 30-50 "dark frames" which are then averaged together. The averaging greatly reduces the random noise component to the point where it is practically non-existent, and makes the hot pixel information more accurate. A master dark frame is then subtracted from each light frame, as that again is superior to using in-camera LENR.
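For anyone curious what that workflow looks like, here's a minimal sketch (assuming the frames are already loaded as floating-point numpy arrays; dedicated stacking tools obviously do far more than this):

```python
import numpy as np

def make_master_dark(dark_frames):
    """Average many dark frames so the random noise component cancels out,
    leaving mostly the repeatable hot-pixel / dark-current pattern."""
    return np.mean(np.stack(dark_frames, axis=0), axis=0)

def calibrate(light_frame, master_dark):
    """Subtract the master dark from a light frame and clip negatives."""
    return np.clip(light_frame - master_dark, 0.0, None)

# Illustrative usage with synthetic data (stand-ins for real RAW frames):
rng = np.random.default_rng(0)
hot_pixels = np.zeros((100, 100)); hot_pixels[::17, ::23] = 500.0
darks = [hot_pixels + rng.normal(0, 10, (100, 100)) for _ in range(40)]
light = 200.0 + hot_pixels + rng.normal(0, 10, (100, 100))

master = make_master_dark(darks)
clean  = calibrate(light, master)
```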

1177
EOS Bodies / Re: More Sensor Technology Talk [CR1]
« on: May 03, 2014, 02:06:34 PM »
...
A sharper lens used with the 6D, when demosaiced with something like Lightroom, will produce superior sharpness compared to the Foveon (even the 15mp Foveon.)

People that have actually used 20+MP APS-C cameras and used the Sigma DP2M would disagree with you and say that the Sigma is the sharper camera/lens combination so it goes without saying that the Sigma DP2M (15MP) should also be considered sharper than the 6D.

"so it goes without saying"? LOL, love that.   ::)

Oh, and...prove it! (I have already proven the opposite with visual examples on multiple occasions, so the burden of proof is on you this time.)

Sigma DP2M review:
http://www.luminous-landscape.com/reviews/cameras/sigma_dp2m_review.shtml

This doesn't prove anything. For one, the reviewer is highly biased. He talks about a resolution advantage for the Foveon in comparison to the M9, despite the fact that the M9 image CLEARLY suffers from camera-shake blur yet still holds the resolution advantage. The guy is comparing 100% results rather than normalizing the image size. The M9, for example, has about a 30% advantage in spatial resolution, meaning it should have been downsampled a bit. Once downsampled, any perception of a resolution advantage for the Foveon disappears entirely. (And, of course, if the guy had actually used an appropriately stable tripod to snap his images, the M9 would have trounced the Foveon hands down, with or without downsampling.)
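Normalizing before comparing isn't hard, either; here's a minimal sketch using Pillow (the file names are just placeholders):

```python
from PIL import Image

def normalize_for_comparison(path_a, path_b):
    """Downsample the higher-resolution image to the other's dimensions so
    per-pixel 'sharpness' comparisons are made at a common output size."""
    a, b = Image.open(path_a), Image.open(path_b)
    if a.width * a.height > b.width * b.height:
        a = a.resize(b.size, Image.LANCZOS)
    else:
        b = b.resize(a.size, Image.LANCZOS)
    return a, b

# Usage (placeholder file names):
# m9_img, foveon_img = normalize_for_comparison("m9_crop.tif", "dp2m_crop.tif")
```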

The NEX is in the same boat as the M9...it has a significant true resolution (spatial resolution) advantage over the Foveon. The reviewer was also pretty clear that he used a sucky lens on the NEX, but used it anyway because he was more interested in framing parity (which is naive; you can achieve that by moving the camera). The NEX comparison had what appears to be an intentional handicap vs. the Foveon, not because the sensor isn't as sharp...but because a soft lens was used. Despite that, downsample the NEX image and most of the resolution advantage of the Foveon disappears. Use an appropriately sharp lens, and the NEX will best the Foveon both at native size and downsampled.

The Sony RX100? MASSIVELY diffraction limited, as it's only a 1" sensor. Fuji X100? The X100 sensor has never been one for the sharpest images. The sensor uses shifted microlenses. That helps reduce the kind of vignetting that occurs when you place the lens so close to the sensor, but it doesn't really do anything to improve resolution. The X100 is softer than pretty much any Canon camera except the Rebels.
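To put a rough number on the diffraction point, compare the Airy disk diameter to the pixel pitch (the wavelength, f-number, and pitches below are assumptions for illustration, not measured specs):

```python
# Airy disk diameter vs. pixel pitch -- a rough diffraction check.
# All numbers here are assumptions for illustration, not measured specs.
wavelength_um = 0.55          # green light
f_number      = 5.6
airy_diameter_um = 2.44 * wavelength_um * f_number   # ~7.5 um

one_inch_pitch_um = 2.4       # ~20 MP 1"-type sensor (assumed)
aps_c_pitch_um    = 4.3       # ~18 MP APS-C sensor (assumed)

for name, pitch in [('1" sensor', one_inch_pitch_um), ("APS-C", aps_c_pitch_um)]:
    print(f"{name}: Airy disk spans ~{airy_diameter_um / pitch:.1f} pixel widths")
```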

And, finally, that article does NOTHING to prove that the DP2M offers more resolution than the Canon 6D.

1178
EOS Bodies / Re: More Sensor Technology Talk [CR1]
« on: May 03, 2014, 01:54:48 PM »
Because DP tech has nothing to do with base image quality.

It may not be quite obvious but the dual-pixel tech has a lot to do with image quality.

First of all, let's clarify that the dual-pixel tech is actually a quad-pixel tech.
Canon is not advertising it but Chipworks has a photo of the sensor die, which shows a quad pixel design.

I've been through every free Chipworks article they have ever published. They do not have any images of Canon's 70D sensor that shows anything like quad pixel. Their full teardown of the 70D sensor costs $16,000, so I highly doubt anyone on these forums has seen those documents, but I have little doubt they show a dual-pixel design, not a quad pixel design.

I've also read Canon's actual patents on the technology, which show two separate photodiodes, not four. Even their revised patent, which includes high and low sensitivity halves, is still just HALVES, so there are only two photodiodes.

So sorry, but I'm calling bulls*it on this one. :P Unless you can furnish an actual image of the actual sensor die itself that proves otherwise (and believe me, I spent a lot of time looking for that image after the 70D was released). I've actually found real sensor images for prototype DPAF sensors from Omnivision and Panasonic, which basically steal Canon's design...however, they again clearly show two photodiodes per pixel, not four. (It also means Canon won't be the only company with DPAF technology in the relatively near future...so their potential advantage in this area will probably disappear. Omnivision's patents have DPAF pixels in 100% of the sensor pixels, vs. Canon's 80%.)

The term "dual PIXEL" is also very missleading terminology. A pixel is a single image element, however in Canon's designs, each single pixel has a split photodiode. The photodiode exists below the microlenses and CFA. As such, there is nothing special that can be done to make it magically provide higher resolution or anything like that, and changing the design to actually allow higher resolution

So, what does this have to do with image quality?

The quad-pixel tech will eventually allow Canon to use non-bayer color filters.
I can easily see them using dichromatic or polychromatic filters for each sub-pixel - and then deriving the overall pixel color from the values of the different sub-pixels.
In principle, this would be the same as Foveon, as each pixel will have full color information (rather than a single R/G/B color as in bayer).
But it would be better than Foveon, as the implementation complexity and poor color separation of Foveon are avoided.

This is the same old logical fallacy that everyone seems to be making. DPAF is not a magic bullet for higher image resolution. What happens if you make a green quad-pixel a green, red, blue, and, say, white (luma) "quad pixel"? You no longer have a quad pixel!! You have separate, smaller pixels with one quarter of the area...that's all! You probably lose the ability to do focal-plane autofocus as well, as having a SINGLE microlens over the split photodiodes is essential to how focal-plane AF works. If you try to keep one microlens over a "quad pixel", then you have problems with distributing the right amount of light over each of the RGBW "sub pixels", which would increase noise.

There is nothing special here about this technology. It has ONE purpose: To support autofocus. That's it. Anything else, and you simply have smaller color pixels.

Using non-bayer color filters has the advantage of both better resolution and better light sensitivity, as a bayer filter 'throws' away two thirds of incoming light vs one third for a dichromatic filter, for example.

In addition to allowing the use of non-bayer color filters, the quad-pixel tech also has the known ability that sub-pixels can be read at different ISO/amplification.
MagicLantern has shown that Canon already implements this in the 5DIII - but they are not using it.

Just these two improvements to the dual-pixel tech (quad-pixel, as noted) have the potential to improve sensor performance with maybe ~2 stops of ISO.
That's 4x more sensitivity, which is huge

Again, this is a lot of wishful thinking. For one, it has NEVER been demonstrated that the pixel halves can be read at different ISO settings. Even if they could, I've debunked the notion that it would improve anything on several occasions...in the end, assuming you read one half at ISO 100 and the other half at ISO 800 or 1600, you have a net neutral result: you're jacking up the ISO on HALF the amount of light (since it's half the photodiode), which does NOT get you the same result as what Magic Lantern is doing with current Canon sensors (which uses a downstream amplifier, not the per-pixel amplifiers, to achieve their results...plus, they are using full pixels, not half pixels). I highly doubt that we will ever see Canon's DPAF technology used in such a way that it improves dynamic range. Not even from Magic Lantern.

Sensitivity has to do with total photodiode (or rather sensor) area and quantum efficiency. Splitting the photodiode does nothing to improve quantum efficiency...Q.E. is a fixed trait of the silicon, how the silicon is doped, and the design of the photodiodes. We're already at about 50% Q.E., so to gain ONE stop of improved ISO, we would need to double that to 100% Q.E. That isn't ever going to happen in a consumer-grade device.

Reducing the amount of light filtered is also a means of increasing the rate at which light fills the sensor, however, again, total sensor area is the real limiting factor here. A reduction in filtering simply means you fill the sensor with charge faster. That lets you use a lower gain, however it also means that you could end up clipping your signal for any given exposure length...and the only way to avoid that is to increase the total sensor area (i.e. move from APS-C to FF, which is the only way to really improve dynamic range.)  Switching to some kind of dichromatic filter still means you're using a filter, and still means you're losing light. You're actually losing about 50% of the light per pixel color, which isn't anywhere close to a two-stop improvement in high ISO (it's actually only about half a stop of improvement at best).
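Putting those sensitivity claims into the same units (stops), with the transmission fractions treated as rough assumptions:

```python
from math import log2

# Sensitivity gains expressed in stops (log2 of the light ratio).
# Transmission fractions are rough assumptions for illustration.
qe_now, qe_ideal = 0.50, 1.00
bayer_pass, dichromatic_pass = 1/3, 1/2   # fraction of light passed per pixel

print(f"Doubling Q.E. 50% -> 100%: {log2(qe_ideal / qe_now):.2f} stops")                # 1.00
print(f"Bayer -> dichromatic filter: {log2(dichromatic_pass / bayer_pass):.2f} stops")  # ~0.58
```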

The only real improvement on a bayer sensor design is the use of MCS, or Micro Color Splitting. This concept replaces filtration entirely with light splitting, using a special kind of prism-like component between the microlens and the photodiodes that exploits the diffractive nature of light to either channel one part of the light downward or deflect the other part to the neighboring pixels. You end up with White-Red and White+Red light in this kind of sensor. MCS preserves nearly 100% of the light. This does indeed improve low light sensitivity, but again, the gain is at most one stop, and you still benefit from it most by using a larger sensor. And I doubt we will ever actually achieve 100% Q.E.: the only sensors I know of that achieve higher than 70% Q.E. are Grade 1 CCDs, extremely expensive commercial and scientific grade parts, and generally speaking the Q.E. curve peaks somewhere in the greens or yellows and falls off again to around 50-60% at other wavelengths.
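The arithmetic behind recovering color from split (rather than filtered) light is simple; here's a toy sketch assuming ideal W+R / W-R splitting and ignoring crosstalk:

```python
import numpy as np

def reconstruct_mcs(w_plus_r, w_minus_r):
    """Toy recovery of white (luminance) and red components from neighboring
    'W+R' and 'W-R' samples, assuming ideal micro color splitting."""
    white = (w_plus_r + w_minus_r) / 2.0
    red   = (w_plus_r - w_minus_r) / 2.0
    return white, red

# Example with made-up sample values:
w, r = reconstruct_mcs(np.array([1.6]), np.array([0.8]))
print(w, r)   # white = 1.2, red = 0.4
```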


So, the dual pixel tech has a lot to do with image quality.

Note that I have no inside info or anything like that.
I'm just making informed guesses of where Canon might take this technology.

You're making wild assumptions, not informed guesses. ;) I spend a considerable amount of time reading actual sensor patents, reading everything Chipworks and Image Sensors World posts, and while I do not have inside info, I do have a lot of references I can point to that back up my conclusions.

Canon's dual "pixel" technology is not dual pixels. It is dual photodiodes within each pixel, and it is quite LITERALLY "dual", not "quad". Even if they had "quad" photodiodes, that still isn't going to improve image quality. The photodiodes are split below the microlenses and color filters, by necessity (as that is required for the AF functionality to work). All  you've really talked about is Canon making the pixels (the whole pixels, microlenses and filters and all) smaller...which really isn't anything special, and it precludes the option of having split photodiodes (which must exist below the microlens and CFA in order to function properly for AF purposes.)

1179
Nikon is a very different company. Nikon has practically made a reputation out of doing two things: responding to competitors' products (and responding extremely late, well beyond the time when the ship sailed and the train left the station), and creating hyperniche products like the Df or a 24-karat gold-plated, lizard-skin gripped $12,000 trophy camera that no one cares about other than as a curiosity on the internet every so often (oh yes, that thing really does exist...which actually blows my mind... ???).  Sony doesn't even seem to have a plan, it's just "*BLAMM!* Shotguun and Ho'yeah! Let's see wut sticks!  :o" wild-west product design and production that's burning their funds and burying them in a hole so deep and filled to the top with debt they will never be able to see sunlight again (let alone pay off).

Relax, I am sure Sony and Nikon needn't be bashed either ;)

Hmm, not so sure about that. Given how much people slobber and drool all over them like love-sick puppies all the time, I think a little dose of SoNikon Reality was in order. ;P It does boggle my mind that anyone actually bought Nikon's gold-plated lizard skin camera....

1180
EOS Bodies / Re: More Sensor Technology Talk [CR1]
« on: May 03, 2014, 12:33:07 AM »
We’re told by a few other people that Canon is working on a “foveon like” sensor for their next generation of full frame cameras.

Any sensor expert can testify that Foveon is an impractical technology.

Why would you say Foveon-type sensors are impractical? Sigma doesn't even rank within throwing distance of the radar, let alone on the radar, when it comes to sensor design and manufacture. Foveon's struggles have far less to do with it being "impractical", and more to do with the fact that it's Sigma, a company that doesn't really have the resources to bring Foveon to bear and make it as competitive as it has the potential to be.

Fundamentally speaking, gathering full color information at each and every pixel is a superior means of imaging with digital image sensors. It's a more complicated sensor design, requiring advanced techniques and the use of the proper silicon-based compounds in both the substrate and the photodiodes in order to actually allow enough light to penetrate deeply enough to work. Given that you end up with three times as many photodiodes to read for any given pixel count, you need a faster readout system, which generally requires higher-frequency components to support a reasonable frame rate. Sigma just hasn't demonstrated that they have the R&D budget, the technological resources, or the prowess to develop the technology that would allow them to build a truly high resolution Foveon-style sensor with a high-speed readout that doesn't junk up the signal with a crap ton of read noise.

Canon, on the other hand...they very possibly DO have the resources to make a Foveon-style sensor both high resolution and a truly viable alternative to bayer. I would actually bet on Sony as being the most likely to have the resources to do it, but Sony hasn't shown any interest, whereas Canon actually has patents for the technology.

Quote
At the same time, Canon has its own 'dual pixel' tech, which has a lot of potential and room to improve.
So, why would Canon go chasing Foveon when they already have a new, promising technology in-house??

Because DP tech has nothing to do with base image quality. DP adds an ALTERNATIVE option for performing autofocus, one that has not yet proven to be better than the tried and true approach of using a dedicated AF unit and a mirror. A Foveon-style sensor design, on the other hand, if done right with new techniques to increase parallelism and readout rate without increasing read noise, improve pixel structure, and increase pixel count, has the potential to radically improve image quality.

So...the real question is, why wouldn't Canon pursue the technology?

1181
You should change your metaphor; it is Canon who is chasing today.
Sony, as one example, sold more cameras in South Korea than Canon and Nikon in 2013, and I think the rest of the world will go the same way, from large SLRs to smaller bodies with a FF sensor.

Canon has never chased anyone. They never chased anyone in the past, and they are not chasing anyone now. Canon does what Canon does, for whatever reasons Canon decides to do them. People are constantly complaining about how "Canon hasn't responded to <pickyourpoison>" and "Canon MUST respond to <yaddayadda>"...they constantly complain, because Canon is not in the business of "responding" to anyone for anything...never have, and I don't have reason to suspect they ever will.

Canon builds products for THEIR customers. They build EXCELLENT products for THEIR customers. The fact that Canon builds excellent products for their customers is the reason why they are one of the top imaging companies in the world, and the top photography company in the world. Canon delivers what their customers ASK for, and they make sure that what they deliver lives up to the expectations their customers have, and their own reputation.

Nikon is a very different company. Nikon has practically made a reputation out of doing two things: responding to competitors' products (and responding extremely late, well beyond the time when the ship sailed and the train left the station), and creating hyperniche products like the Df or a 24-karat gold-plated, lizard-skin gripped $12,000 trophy camera that no one cares about other than as a curiosity on the internet every so often (oh yes, that thing really does exist...which actually blows my mind... ???).  Sony doesn't even seem to have a plan, it's just "*BLAMM!* Shotguun and Ho'yeah! Let's see wut sticks!  :o" wild-west product design and production that's burning their funds and burying them in a hole so deep and filled to the top with debt they will never be able to see sunlight again (let alone pay off).


Canon is not, and will not, be responding with anything to any competitor's product any time soon. Canon will release the 7D II, or the 5D IV, or the 1D X II or whatever the next big thing is when THEY decide it meets the necessary requirements and is capable of maintaining and building up Canon's reputation as the world's top (and most profitable) photography company. When the next big thing is released, it WILL be a phenomenal product that DOES live up to Canon's reputation as a top-notch photography company, and even if it doesn't have 25 stops of DR, 150 megapixels, 100fps, a 900 image frame buffer, a 200 point AF system that works in both mirror mode and live view/video mode, a 12000ppi 10-bit full-color high DR 60fps EVF and quad memory card slots supporting both CF and CFast2 all for the rock bottom low price of $500....good grief ppl....do you realize what you all sound like when you bring up the "Canon MUST respond!" and "Canon charges too much!" and "I want this, and this, and that, and OH YEAH THIS THING TOO! AND IT HAS TO BE $1500!!!!!!1!1!111111~~! *gimmegimmehgimmeeeenglfheee* *gasp* *GASP* *SUUCKING IN AIR....*"?   :o ::)

Bleh...it would be a wonderful day if everyone could just be happy with the fact that pretty much every single camera on the market today puts nearly every camera from the film era to complete and total, utter shame when it comes to IQ. Even when it comes to drum-scanned large format film, while you gain in resolution, even that can't really touch the color depth and brilliance of a high resolution digital sensor these days.

1182

Pixel peeping has its place, but it should be thought of as a tool, not the end result. I zoom to 100% when I am denoising or sharpening, to see what the effect looks like at the pixel level. It's how I choose the right attenuation of the various denoising or sharpening settings.

But pixels aren't a picture, they are only components of a picture. You have to look at the whole picture to see the photograph. The problem with "pixel peepers" is that those are the whiny group of individuals who see nothing BUT the pixels, or to steal a phrase "missing the photo for the pixels".

I don't see why it has to be either-or; just because a photo is well composed and looks good overall, and was taken with a view to being seen as such, doesn't mean that there aren't details in it that one might want to peer into as well or that it can't be cropped to create a different composition that stands as well on its own.  Some photos lend themselves to it more than others, perhaps, but we don't all see and look in the same ways.  (And  if there really are people who take photos merely to peer at pixels, that's their business, not mine - unless they whine in my presence...). 

Cropping does not change anything I've stated. A cropped photo is still a conglomerate of millions of pixels. Maybe not the tens of millions your sensor has, but still millions. If you are cropping so much that your final image can only be printed at native size on a 4x6, or cannot be downsampled, then you're cropping way too much, and you seriously need a better camera. :P

As for detail to draw viewers in, sure, but again...you are either downsampling to some acceptable "web size", or printing, and in both cases, the amount of detail that can be effectively displayed at a comfortable viewing distance is generally going to be significantly less than what your photo started out with at 100%.

Now, as 4k displays hit the market and eventually become mainstream, we as photographers will certainly have more demanding viewers expecting better results. But even then, the pixels of 4k screens are going to be even farther below the resolving limit of most viewers, and harder to resolve for viewers with exceptional vision, which again mitigates the impact of pixel-level IQ details.
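A rough angular-resolution check illustrates the point (the screen size, viewing distance, and the ~1 arcminute acuity figure are all assumptions):

```python
from math import atan, degrees

# How finely does a 4K display need to be resolved at a typical viewing distance?
# Screen size, distance, and the ~1 arcminute acuity figure are assumptions.
screen_width_mm = 1210.0    # ~55" 16:9 display
horizontal_px   = 3840
viewing_dist_mm = 2000.0    # ~2 m

pixel_pitch_mm = screen_width_mm / horizontal_px
pixel_angle_arcmin = degrees(atan(pixel_pitch_mm / viewing_dist_mm)) * 60

print(f"One pixel subtends ~{pixel_angle_arcmin:.2f} arcmin "
      f"(typical acuity limit ~1 arcmin)")   # ~0.54 arcmin, below the limit
```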

1183
EOS Bodies / Re: More Sensor Technology Talk [CR1]
« on: May 02, 2014, 02:31:09 PM »
...
Software is a difficult thing to discuss. The biggest reason why is: Which software? There are countless ways of, countless algorithms for, reducing noise. There are your basic averaging/blurring algorithms, your wavelet algorithms, your deconvolution algorithms, etc. Some denoising tools are more complex, and thus more difficult to use effectively, but when used effectively, can produce significantly better results. Some denoising tools are extremely simple, but don't produce as good of results.
...

How is it that in your treatise on software and noise reduction you didn't include the method used by Canon cameras?

Because Canon's in-camera NR (excluding LENR), which only applies to JPEG, really sucks. It blurs the crap out of the data, so IMO it is not a viable option. It isn't particularly advanced, either, given the limited horsepower of Canon's DIGIC processors, so it can't be much more advanced than a tweaked averaging algorithm anyway.

1184
EOS Bodies / Re: More Sensor Technology Talk [CR1]
« on: May 02, 2014, 02:29:31 PM »
...
A sharper lens used with the 6D, when demosaiced with something like Lightroom, will produce superior sharpness compared to the Foveon (even the 15mp Foveon.)

People that have actually used 20+MP APS-C cameras and used the Sigma DP2M would disagree with you and say that the Sigma is the sharper camera/lens combination so it goes without saying that the Sigma DP2M (15MP) should also be considered sharper than the 6D.

"so it goes without saying"? LOL, love that.   ::)

Oh, and...prove it! (I have already proven the opposite with visual examples on multiple occasions, so the burden of proof is on you this time.)

1185
EOS Bodies / Re: More Sensor Technology Talk [CR1]
« on: May 02, 2014, 01:13:46 PM »
Jrista, I was comparing DP2M to 6D - 15 Mp physical pixels (which Sigma has called "46 Mp" in the past) to 20 physical Bayer Mp. In low-ISO situations, the DP2M does look a tad sharper than the 6D, despite the 5 Mp disadvantage. I attribute this to the color fidelity, because my subjects are generally landscapes with subtle color variation in leaves, grasses, etc. It is not a 100% fair comparison because my current 50mm lens is a pre-computer-design manual AIS Nikkor 50mm f/1.2 used on an adapter on the 6D, which does look pretty darn sharp at f/4 to f/5.6 and still sharp at f/8. The DP2M's fixed Sigma 30mm f/2.8 lens (45 mm equivalent) at the same f-stops looks sharper, but then again, the lens is 30 years younger. The real test would be the new Sigma 50mm f/1.4 Art - new design, no adapter, best affordable lens resolution-wise for the EF mount.

Thanks for the additional facts! That's always helpful when trying to understand things like this.

One of the things photographers don't quite understand, possibly in no small part due to Sigma's marketing of Foveon, is that DSLRs capture full luminance data...they only really suffer a loss in color resolution, and therefore color fidelity. Standard bayer interpolation uses 2x2 matrices of RGBG sensor pixels, in overlapping fashion, to produce RGB pixels in an image rendered to screen or saved to a file (i.e. TIFF). Effectively, the dimensions of an image from a bayer sensor are a count of the intersections between 2x2 pixel matrices on the sensor. This tends to cost you a little in luminance spatial resolution, and a fair bit in chroma spatial resolution, and is prone to artifacts like stair-stepping.

More advanced algorithms, like AHD, or Adaptive Homogeneity-Directed demosaicing, aim to maximize the luminance detail in each and every individual pixel (so it uses the luma information directly, rather than interpolating), while concurrently interpolating chroma data from neighboring pixels in such a manner that it maximizes chroma spatial resolution while eliminating stair-stepping and other artifacts. Lightroom, Apple Aperture, RawTherapee, and darktable all use or support AHD-style demosaicing, which means that generally speaking, demosaiced RAW images retain nearly the full resolution of modern bayer sensors.
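To make the demosaicing idea concrete, here's a minimal bilinear sketch (far cruder than AHD, just to show how a Bayer mosaic becomes a full-color image; real converters are much more sophisticated):

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Very simple bilinear demosaic of a single-channel RGGB Bayer mosaic.
    Far cruder than AHD-style algorithms, but it shows the basic idea:
    each pixel's missing color samples are interpolated from its neighbors."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True   # R at even row/col
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True   # B at odd row/col
    g_mask = ~(r_mask | b_mask)                                  # G everywhere else

    # Standard bilinear interpolation kernels for sparse Bayer channels.
    kern_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0
    kern_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0

    out = np.zeros((h, w, 3), float)
    for ch, (mask, kern) in enumerate([(r_mask, kern_rb),
                                       (g_mask, kern_g),
                                       (b_mask, kern_rb)]):
        out[..., ch] = convolve(np.where(mask, raw, 0.0), kern, mode="mirror")
    return out

# Usage with a synthetic mosaic (stand-in for real RAW data):
mosaic = np.random.default_rng(0).uniform(0, 1, (8, 8))
rgb = bilinear_demosaic(mosaic)   # shape (8, 8, 3)
```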

A sharper lens used with the 6D, when demosaiced with something like Lightroom, will produce superior sharpness compared to the Foveon (even the 15mp Foveon). It will have lower color fidelity, but because of the higher resolution luminance detail, that won't matter all that much. Color depth on a bayer sensor can be extremely high. Canon sensors don't have the best color fidelity, but Sony Exmor sensors have very high color fidelity.
