EOS 7D Mark II & Photokina

Re: EOS 7D Mark II & Photokina

pierlux said:
So the 6 MP P&S S3IS has better IQ than the 13 MP full-frame 5D?

No, the 2.04 micron pixels of the S3IS provide better IQ behind the same lens than the 8.2 micron pixels of the 5D do. In other words, small pixels win.

Could you provide a link please so that I can see all by myself without asking you more detail? Thanks.

I don't know what link you want. They're my pictures.

The reason this works is pretty simple - the same amount of light falls on the same area of the sensor regardless of how you divide that sensor up into pixels. Lots of small ones or one large one, it's all the same light. The difference is, with the small ones, you can choose to reduce your resolution down to the same as you get from the large one (and reduce noise along with it), or not. You can't make that choice with the larger pixels. Further, when you reduce your resolution, you can choose far more effective techniques than simple block-averaging, which is effectively what the larger pixels do - averaging over the large block area of each large pixel.
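To make that concrete, here's a minimal Python sketch (illustrative, not from the original post) that simulates the same photon flux landing on one large pixel versus a 2x2 block of small pixels, models the photon counts as Poisson shot noise, and then block-sums the small ones:

Code:
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000
photons = 1000  # mean photons falling on one large-pixel-sized patch of sensor

# Same light, two ways of dividing the same area:
large = rng.poisson(photons, size=trials)           # one big pixel
small = rng.poisson(photons / 4, size=(trials, 4))  # a 2x2 block of small pixels

binned = small.sum(axis=1)  # "software large pixel": block-sum the small ones

print(large.std(), binned.std())  # both ~sqrt(1000) ~ 31.6 - identical shot noise

The block-summed small pixels match the large pixel's noise, and a smarter resampling filter than plain block-summing can do better still.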
 
Re: EOS 7D Mark II & Photokina

Lee Jay said:
Famateur said:
Dare I wade into the pizza war? :P

Perhaps I can translate it into a wooden pizza to fit one of my other hobbies: If I have a 15" maple disc, cutting it into 6 pieces WOULD give me more maple surface area than cutting it into 8 pieces. Why? because there is waste from blade kerf. If I have a 1/8" kerf, I lose an approximately 1/8" slice of material with each cut. Let's say now that we fill in each cut with a 1/8" slice of ebony so we don't lose overall surface area when we glue it all up. The disc maintains its original surface area, but there is still less maple surface area with 8 slices than with 6. Using a 1/16" kerf blade will increase the ratio of maple surface area to ebony, but there will still be less maple surface area with 8 slices than with 6.

Now imagine the disc is actually a rectangle, and the pieces are squares instead of pizza slices. The maple is the photo-sensitive portion of the sensor, and the ebony is the border around each pixel. If sensor size and transistor size are constant, doesn't increasing the number of pixels increase the number of borders and transistors, and doesn't that reduce the portion of the overall sensor that receives light? Is moving from a 500nm process to a 180nm process like going from a 1/8" kerf to 9/200" kerf?

I'm obviously not a sensor geek, so I might be completely misunderstanding pixels, borders, et cetera. What am I missing in this analogy? :P

What you're missing is gapless microlenses, which essentially render the "blade kerf" largely moot by concentrating the light into the light-sensitive area between the "kerf lines".

Interesting. So you're saying that the microlenses can redirect virtually all the light (that would have fallen on the border) into the photo site (or that if any is lost, the final result is not appreciably different)? Good to know.

So moving to a smaller process to shrink the borders does not affect the amount of light captured for each pixel because of the gapless microlenses?

If microlens size = photo site + border, then it would seem that a larger pixel-with-microlens would gather more light than a smaller pixel-with-microlens. Are you saying that the resolution (given the same sensor dimensions) is higher for the smaller pixels, so that when you compress the image to the same resolution as the sensor with the larger (fewer) pixels, the overall light/data collected by the multiple smaller pixels, now sized down to the lower resolution, ends up producing essentially the same image quality? Am I understanding this right? Does this mean that if I want to enjoy the same image quality as the sensor with fewer pixels, I have to compress the resolution of my images to match?

One other thought: microlenses perfectly focusing the light on the photo site sounds great on paper. How precisely do the lenses do this in the real world? If they're nearly perfect, how in the world do they accomplish such precision on such a small scale? Simply amazing to me...

If the microlenses do their job, then I guess it's not light/surface-area that makes the difference between crop and full frame. Could it be that for the smaller pixels, there's more opportunity for noise to be introduced by the supporting circuitry? Something must be happening, because sensors with larger pixels seem to do better for noise at high ISO.

I'm obviously showing my ignorance here, at the risk of inviting the sensor-tech-savvy among us to bury me in information over my head...but hey, why not? :P
 
Re: EOS 7D Mark II & Photokina

Famateur said:
So moving to a smaller process to shrink the borders does not affect the amount of light captured for each pixel because of the gapless microlenses?

Pretty much, yes.

If microlens size = photo site + border, then it would seem that a larger pixel-with-microlens would gather more light than a smaller pixel-with-microlens. Are you saying that the resolution (given the same sensor dimensions) is higher for the smaller pixels, so that when you compress the image to the same resolution as the sensor with the larger (fewer) pixels, the overall light/data collected by the multiple smaller pixels, now sized down to the lower resolution, ends up producing essentially the same image quality?

Yes, though if you do the down-sizing properly, the smaller pixels will generally win, and quite easily.

Am I understanding this right? Does this mean that if I want to enjoy the same image quality as the sensor with fewer pixels I have to compress the resolution of my images to match?

That depends on what you mean by image quality. Resolution? Noise? With the smaller pixels, you have the option to reduce noise at the expense of resolution. On the larger pixels, that part has been done for you and you have no choice.

One other thought: microlenses perfectly focusing the light on the photo site sounds great on paper. How precisely do the lenses do this in the real world? If they're nearly perfect, how in the world do they accomplish such precision on such a small scale? Simply amazing to me...

The efficiency varies with the design, but it's quite close to all of the light. They do this using the techniques of photolithography, which is quite a precise thing, especially in the more modern versions.

If the microlenses do their job, then I guess it's not light/surface-area that makes the difference between crop and full frame. Could it be that for the smaller pixels, there's more opportunity for noise to be introduced by the supporting circuitry? Something must be happening, because sensors with larger pixels seem to do better for noise at high ISO.

Larger-pixel sensors do better because they use more sensor area, not because they use larger pixels. When you have the same f-stop, the light intensity (called "illuminance" - light per unit of area) is the same (for a given scene), and that means a sensor with more area captures more light. Since signal-to-noise ratio goes with sqrt(total light captured), more area (bigger sensor) means a better signal-to-noise ratio for the same f-stop. That's why larger sensors perform better in low light.

Another way to look at the same thing is to express f-stop as its definition - focal length / aperture diameter. So, a lens with a 100mm focal length and a 25mm aperture has an f-stop of 4 (often written as f/4 or 1:4).

Well, let's say you want to use your 100/4 on your full-frame camera. To what do you compare? Well, on a 1.6-crop camera, you might use the same lens zoomed out to 62.5mm so that you have the same angle of view. 62.5mm / 4 = 15.625mm compared with 25mm on the full-frame camera. That's a lot smaller hole for the light to squeeze through, and so you get a lot less.

The two explanations are equivalent.
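Here's a quick Python sanity check of that equivalence (my numbers, assuming nominal 36x24mm full-frame dimensions): both routes - comparing sensor areas at the same f-stop, or comparing entrance-pupil diameters at the same angle of view - give the same 2.56x light ratio.

Code:
import math

# Route 1: sensor area at the same f-stop and framing
ff_area = 36 * 24              # mm^2, nominal full frame
crop_area = ff_area / 1.6**2   # same framing on a 1.6-crop sensor
print(ff_area / crop_area)     # 2.56x more total light on full frame

# Route 2: entrance-pupil diameter at the same angle of view, both at f/4
ff_pupil = 100 / 4             # 100mm on full frame -> 25mm pupil
crop_pupil = 62.5 / 4          # 62.5mm on 1.6-crop -> 15.625mm pupil
print((ff_pupil / crop_pupil) ** 2)      # also 2.56x

# Shot-noise-limited SNR goes with sqrt(total light):
print(math.sqrt(2.56), math.log2(2.56))  # 1.6x better SNR, ~1.36 stops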
 
Re: EOS 7D Mark II & Photokina

pierlux said:
And Sony? Compare the 36 MP Alpha A7r(esolution) and the 12 MP A7s(ensitivity), then say again that more MP does not mean more noise if you dare. At base ISO maybe, but try going to 800 and beyond...

I'll take your ISO 800 and beyond and raise you a comparison between the A7r and A7s at ISO 3200:

http://www.dpreview.com/reviews/image-comparison/fullscreen?attr65_0=sony_a7s&attr65_1=sony_a7r&attr66_0=3200&attr66_1=3200&attr67_0=raw&attr67_1=raw&attr68=12mp&normalization=full&widget=119&x=0.15845192244437983&y=0.31115156636410585.

See any difference between the two when normalized to 12MP? I don't think so. Everything was controlled in this test, down to using the same exact lens, same shutter speed, aperture, ISO, etc.

In fact, I don't see any appreciable advantage over the A7R until you hit ISOs of 25.6k and beyond, because at that point the crumbs start encroaching on the slices, in the pizza analogy. For deep, deep shadows, the A7S takes over at ISOs of 6400 and beyond. It really depends on the tone you're looking at, because different tones are affected differently by the different sources of noise (shot noise, read noise, etc.).

But ISO 800? 1600? You're unlikely to see any appreciable difference.

LTRLI also made an interesting point about finer-grained noise of higher resolution sensors. So there's also that to consider.
 
Re: EOS 7D Mark II & Photokina

You've made a lot of logical mistakes, so I'll address them one by one below.

pierlux said:
Sorry, it's a long post, maybe too long.

In physics, the Carnot cycle is an idealized representation of the most efficient heat engine possible. It is a useful thermodynamic representation to explain energy conversion. In the real world, such a system does not exist.

Too many of you claim that a higher MP sensor does not have more noise than a lower MP one, all conditions being equal, and support this claim with mathematics, but that's not true in the real world. Some even claim that smaller photosites have less noise than bigger ones: now, that's the kind of claim that should make all of us invoke Santa, bigfoot and rainbow-pooping unicorns going on vacation together with a flying saucer. The real world behaves differently.

I would say the exact same thing about the notion that bigger pixels gather enough "more light" to be meaningful, given how sensors are designed today. You're missing how all the small changes to sensor design have largely nullified the impact of fill factor. Fill factor is the ratio of sensor die space dedicated to light-sensitive area vs. non-sensitive area (i.e. wiring, transistors, etc.).

pierlux said:
Lee Jay said:
Saying bigger pixels let more light in is like saying cutting a 15 inch pizza into 6 slices instead of 8 gets you more pizza.

No. Your logic is flawed.

Nope, his logic is actually flawless, and here is why:

pierlux said:
If the pizza represents the sensor, the number of slices corresponds to the number of pizza eaters, represented by the sensels, i.e. photosites, i.e. pixels. An eater who eats 1/6 of the pizza eats more than one eating 1/8 of it.

You're only thinking about the individual pizza slices here. You're missing the bigger picture: the eater who eats 1/6th of a pizza eats 1/6th of a pizza SIX TIMES!! Therefore, the eater is not eating one pizza slice...the eater is eating A WHOLE PIZZA! ;D This is the critical point that everyone seems to miss. If an eater eats two 15" pizzas, one cut into 6ths and one cut into 8ths...has the eater eaten less total pizza when eating the one cut into 8ths? NOPE!! He's still eaten a whole 15" pizza, same as he did when he ate the one cut into 6ths.

We don't generally use image sensors in slices. We frame our scene and we take a photo. Ignoring cropping for the moment, our sensor, the ENTIRE sensor, regardless of how many pixels it has, was used to create an image. Now, cropping does play a factor. If we crop, we are using less area, and throwing away some of the sensor area. That might be like eating two or three slices of pizza, instead of a whole entire pizza. However, what role does pixel size play? I mean, if we have one pizza cut into 6 slices, and another pizza cut into 12 slices...if we eat two of one or four of the other, we've still eaten the same amount of pizza.

pierlux said:
An ideal 24 x 36 mm sensor being a single light-sensitive unit is a one-pixel sensor which, let's say, collects 1 billion photons in a given time unit at a given luminous intensity; it has the lowest resolution possible, but it would be capable of letting you know whether even a bunch of photons have hit it or not with 100% certainty, i.e. with zero noise. Ideally, if you divide that single huge photosite into 1 million smaller photosites (1 MP sensor), each photosite receives IDEALLY 1000 photons under the same conditions; in practice it's less than 1000 because of the wiring and the spacing between photosites, which absorb photons equally but do not convert them into a useful signal - instead they convert them into heat, which is detrimental. This 1 MP sensor has sufficient resolution to resolve enough detail for a very small print, and with today's tech you could probably use it at 204,800 ISO or more with very low noise (and before any of you reply that you can reduce the size of the image and thereby reduce noise and equally obtain the print, try exposing a 36 MP sensor at 204,800 ISO or higher if you can...).

You started to get there with the one photodiode bit...then you lost it.

pierlux said:
Again, the same sensor with 20 MP exposed in the same conditions does not collect 50 photons per photosite, but MUCH LESS this time due to massive wiring and lots of wasted space between photosites, so you have a high resolution image, but with a lot of noise.

You are correct, smaller pixels collect less light per photodiode. IN THE PAST, there WERE losses due to fill factor. Because wiring and transistors used up some die space, incident light that struck any non-photodiode space would either reflect or convert to heat. THAT IS NO LONGER THE CASE TODAY! Technology has improved considerably, with microlenses, BSI sensors, deep photodiodes, increased charge hole density, and a whole host of other improvements.

pierlux said:
Actually, in my example with 1,000,000,000 total photons hitting a 24 x 36 mm sensor I think you'd have only random noise at 20 MP, but it was for the sake of explaining. I'm not talking quantum efficiency at all here, it's just the number of photons you can effectively use I'm talking about. Moreover, we don't have a linear relationship between number of photons and noise, so it's not as if halving the photons per photosite doubles the noise; the situation is worse in the real world.

You are correct about the non-linear relationship that noise has with signal strength. However, you are forgetting about the relationship of smaller pixels to larger pixels, and the impact that averaging smaller pixels together has on noise. If you have 10µm pixels, they gather four times as much light as 5µm pixels. Now, let's say our 10µm pixels have a full well capacity (FWC) of 100,000 electrons, or 100ke-. The 5µm pixels have a FWC of 25ke-. In terms of photon shot noise, the larger pixels have SQRT(100,000), or ~316e- noise, while the smaller pixels have SQRT(25,000) or ~158e- noise. If we simply ADD the noise of four smaller pixels together, we get 632e-. Oh no, more noise!!

You're forgetting that noise is not simply added together when normalizing...it is AVERAGED (strictly speaking, independent noise sources combine in quadrature, but the end result here is the same). We do get a total of 632e- noise if we add linearly across a 2x2 matrix of five micron pixels; however, if we average those pixels together by downsampling the larger output image from the sensor with smaller pixels to the same dimensions as the output image from the sensor with larger pixels, the noise drops by the square root of the difference in pixel count. Since we can fit FOUR times as many five micron pixels in the same area as ten micron pixels, the 632e- noise drops by a factor of SQRT(4), or 632e-/2, which is? Oh, look at that...316e-!! Smaller pixels are NOT noisier on a normalized basis.
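Here's that bookkeeping as a minimal Python sketch (using the FWC figures from the example above; independent noise combining in quadrature lands on the same 316e- answer):

Code:
import math

fwc_large = 100_000  # e-, one 10 micron pixel (example figures from this post)
fwc_small = 25_000   # e-, one 5 micron pixel (quarter the area)

noise_large = math.sqrt(fwc_large)  # ~316 e- shot noise at saturation
noise_small = math.sqrt(fwc_small)  # ~158 e- per small pixel

# Sum a 2x2 block of small pixels: signals add, independent noises in quadrature
noise_binned = math.sqrt(4 * fwc_small)         # = 2 * ~158 ~ 316 e-
print(round(noise_large), round(noise_binned))  # same noise for the same signal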

pierlux said:
The Canon 1Dx is 18 MP; in the Nikon D800, being 36 MP, each photosite collects LESS than one half of each of the Canon's photosites, that's why the 1Dx is much better in low light. And the D800 holds because of its superior sensor tech (let's face it, fortunately Canon's system is better as a whole), otherwise they wouldn't have made it 36 MP in the first place.

The 1D X does marginally better in low light. No camera does "much better" in low light, because at high ISO we are primarily physics-bound. But let's use some real numbers here. The 1D X has an FWC of 90367e-, and the D800 has an FWC of 44972e-. Now, that is the full well capacity, the maximum amount of charge each pixel can hold. If we just compute the grand total amount of light both sensors can gather if they were exposed to maximum saturation:

Code:
1DX: (5205px*3533px)*90367e-/px = 1,661,782,710,255e-
D800: (7424px*4924px)*44972e-/px = 1,643,986,358,272e-

The 1D X has slightly more. In terms of percent, if we divide those two numbers, we get:

Code:
((1,661,782,710,255e-/1,643,986,358,272e-) - 1) * 100= 1.0825%

The 1D X has a GRAND TOTAL light gathering capacity lead over the D800 of ONE PERCENT. That's it! One percent. Now, let's look at high ISO. At ISO 12800, the 1D X has a saturation point of 735e-, and the D800 has a saturation point of 507e-. The 1D X's pixels should do much better, right, because they are so much larger? Hmm, not according to the math:

Code:
1DX: (5205px*3533px)*735e-/px = 13,516,109,775e-
D800: (7424px*4924px)*507e-/px = 18,533,778,432e-

The D800 should be suffering because of its smaller pixels. Each one, after all, is gathering LESS total light at ISO 12800 for any given exposure than the 1D X. However, in terms of the absolute amount of light gathered...

Code:
((18,533,778,432e-/13,516,109,775e-) - 1) * 100 = 37.12%

Fundamentally, ignoring any other factors for the moment (I'll get into those shortly), the D800, despite its smaller pixels, is actually gathering THIRTY-SEVEN PERCENT more light in total! That not only demonstrates the point that smaller pixels don't really have anything to do with light gathering capacity, but it also speaks volumes about the technology in the Exmor sensor.

Now, these numbers don't jibe with tests. Why? Pixel size does not matter when it comes down to how much light a given sensor, regardless of its pixel size, can gather. If you know anything about equivalence, you know that the primary thing that really matters from the standpoint of noise is total light gathered. This is why larger sensors perform better than smaller sensors, and always will (given they both use the same technology).

So, why does the 1D X perform better at high ISO? Canon currently has Correlated Double Sampling (CDS) technology that performs better than Sony or Nikon's, so their dark current is extremely low at high ISO, and because of their use of a bias offset, Canon's read noise (dark current plus downstream RN) is lower in total. The 1D X has 1.6e- RN at ISO 12800, vs. the D800's 3.3e- RN at ISO 12800. That leads to increased dynamic range at high ISO, which gives the 1D X the edge:

Code:
1DX: 20*log(735/1.6) = 53.24dB
D800: 20*log(507/3.3) = 43.72dB

The 1D X has ~8.8 stops of dynamic range at ISO 12800, while the D800 has ~7.3 stops. Ironically, the technology that flattened the read noise floor for Exmor, and gave it a significant lead at low ISO, actually seems to hurt it a bit at high ISO (at least, in the D800).

Why does the D800 suffer here? When it comes to dynamic range at high ISO, pixel size does matter, because it affects the maximum amount of charge that can be held in each pixel. Since dynamic range is the ratio between saturation point and read noise, the higher read noise of the D800 at high ISO costs it the lead it had at low ISO. If the D800 used a bias offset the same way Canon does, the offset could be used to minimize read noise at high ISO. The D800 with 0.5e- read noise at ISO 12800 would have a dynamic range of 60dB. The D810 now actually uses a bias offset, and its read noise drops to a very low 1.3e- at ISO 51200.
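For reference, a small Python helper (mine, using the figures above) that reproduces those dynamic range numbers from saturation and read noise:

Code:
import math

def dynamic_range(sat_e, read_e):
    """DR = saturation / read noise, returned as (stops, dB)."""
    ratio = sat_e / read_e
    return math.log2(ratio), 20 * math.log10(ratio)

print(dynamic_range(735, 1.6))  # 1D X @ ISO 12800 -> ~8.8 stops, ~53.2 dB
print(dynamic_range(507, 3.3))  # D800 @ ISO 12800 -> ~7.3 stops, ~43.7 dB
print(dynamic_range(507, 0.5))  # hypothetical 0.5e- RN D800 -> ~10 stops, ~60 dB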

pierlux said:
In the pizza analogy, the more you cut the pizza, the more breadcrumbs, morsels, atomies you produce, leaving the eaters with less and less pizza to eat to the point that, putting together all the slim slices of pizza eaten by all, they add up to not even a quarter of the original one. And, actually, a pie should be a better fitting analogy.

Sure, there are crumbs and other small losses. However, those losses are small. Not just small, minuscule. They are, for all intents and purposes, meaningless. You could even shave off a little bit off the sides of each pizza slice (which often happens when using a classic pizza cutting wheel), and it would still largely be meaningless in terms of the grand total area of pizza that is still eaten.

Plus, the pizza crumb analogy ignores technological improvements in sensor design. Modern sensors are NOT just an open matrix of photodiodes and exposed wiring and transistors. Modern sensors use one of two designs: Front Side Illuminated (FSI) and Back Side Illuminated (BSI). An FSI design does have wiring and transistors on the face of the sensor, however they no longer receive incident light in a modern sensor design. Canon started using microlenses a few generations ago, and all their current sensors use gapless microlenses. Microlenses direct incident light through the CFA and onto the photodiode. Sony sensors now actually use a double layer of microlenses...an upper layer that directs incident light into the CFA, and a lower layer that directs any dispersed light that actually makes it through the CFA into the photodiode. (Part of the reason why Sony sensors have better Q.E.) That cuts the losses due to fill factor to nearly nothing. There are still losses, but they are a tiny fraction of what they used to be. Fill factor is no longer the concern it used to be.

pierlux said:
It's like having a 100 x 100 ft room all for yourself; 10,000 square feet is plenty of space. But if you want to accommodate 100 people inside it and offer them a bit of privacy, you have to build walls, which eat space, not to mention furniture, so you end up with much less than 100 sq. ft for each dweller.

The primary concern with fill factor now is grand total photodiode area. Smaller pixels have smaller total photodiode area, and since historically the only thing that mattered from a total charge capacity standpoint was the area of the photodiode, smaller photodiodes hold less charge than larger photodiodes. The greater losses in sensor die area that can be dedicated to photodiodes when using smaller pixels can affect the grand total amount of light that a sensor with smaller pixels gathers. A 10µm pixel fabricated with a 500nm process will have a 9µm-square photodiode, whereas a 5µm pixel will have a 4µm-square photodiode. Instead of a 4x difference in area, that's a 5x difference.
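That 4x-versus-5x point in quick Python form (a sketch assuming the process border costs each pixel roughly 1µm of its pitch, per the 9µm/4µm figures above):

Code:
# Fixed wiring border eats a roughly constant ~1 micron of each pixel's pitch
for pitch_um in (10, 5):
    diode_um = pitch_um - 1                    # 9um and 4um photodiodes
    area = diode_um ** 2                       # 81um^2 and 16um^2
    print(pitch_um, area, area / pitch_um**2)  # fill factors: 81% vs. 64%
# Pixel areas differ by 4x, photodiode areas by 81/16 ~ 5x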

A BSI sensor does not have these problems. With a BSI design, all of the wiring and transistors are relegated to the "front side", and the photodiodes, CFA, and microlenses are placed on the "back side" of the sensor substrate. This effectively gets you a 100% fill factor. Generally speaking, any sensor technology that can be used for smaller sensors can be used for larger sensors. Technically speaking, BSI COULD be used for large full frame sensors; however, because both sides of the silicon substrate are etched, the increase in fragility results in a yield that is well below acceptable for larger sensors (at the moment this includes APS-C sensors, however there are some patents out there that cover means of strengthening the wafer to allow sensors larger than 1/2" to be fabricated with a BSI design).

A small BSI sensor will have a nearly 100% fill factor as far as the back side goes. Therefore, there are no losses in die space, and since BSI is a benefit that only smaller sensors can currently use, it levels the playing field. The differences in photodiode capacity even for FSI are largely moot these days, with new techniques allowing "deep photodiodes", where the fundamental silicon doping and structure of the photodiodes themselves can be tweaked to increase charge capacity with depth as well as area. There are even patents that describe means of increasing "electron hole density" in a photodiode, allowing greater charge to be stored in a smaller photodiode area. The more sensor technology marches on, the less fill factor will play a role in grand total light gathering capacity.

If we ignore furniture and other accommodations, which really have absolutely ZERO relevance to a sensor analogy...then your walls don't really reduce the capacity. We could still fit 100 people in a 10,000 square foot room if we added walls. The people would just be more densely packed. We would need to pack a lot more than 100 people into 10k square feet before the addition of walls started to matter. The walls eventually would matter, once you start packing more people into smaller and smaller rooms...however, that's where BSI sensor designs come into play. You then build another floor on top of the walled rooms and move everyone upstairs. Suddenly, you have plenty of room again.

pierlux said:
Still not convinced? OK, you may say "who are you to stand up and make such claims against my maths?", so let's look at what Canon's engineers have done, I bet they know more than me or anyone else on CR about silicon performance and noise. This is what I wrote in a previous post:

"There's a reason the 1Dx has the best (to my eye) IQ of all the DSLRs available to date (yes, better than any Nikon I think): its 18 MP FF sensor. And there's a reason why Canon developed a prototype sensor with photosites 7.5 times larger than the 1Dx: to capture quality video in candlelight (candledrkness sounds better, though). Don't know if you remember, but check these:

http://www.canon.com/news/2013/mar04e.html

http://petapixel.com/2013/09/13/canon-debuts-exciting-prototype-sensor-exceptional-low-light-capability/

And Sony? Compare the 36 MP Alpha A7r(esolution) and the 12 MP A7s(ensitivity), then say again that more MP does not mean more noise if you dare. At base ISO maybe, but try going to 800 and beyond...

And should somebody dare claim again that smaller photosites mean less noise, as I've read too many times, remember Santa & Co. are watching us from their flying saucer... And again, at base ISO maybe, but what's the point of shooting 36 MP and then reducing resolution in post to lower the noise and make small prints or web-sized images?

I'm going to spend the weekend with my son, so I'll be having a look at CR every now and then, but I'm not going to post, sorry. Have a nice weekend you all, too!

Yup, still not convinced. :P

The 1D X's high ISO performance, as I ACTUALLY demonstrated with REAL math, is not primarily due to its pixel area. In terms of pixel size, total sensor area, and sensor efficiency (Q.E.), the D800 is actually 37% more efficient than the 1D X at ISO 12800. The difference between the 1D X and the D800 at high ISO is actually the amount of read noise...the 1D X's lower read noise gives it more dynamic range, and THAT is what makes it better. Read noise has more to do with sensor technology and complementary electronics (i.e. the ADC unit) than it does with pixel size...so again, sorry...but you're still wrong. ;)
 
Re: EOS 7D Mark II & Photokina

Hello again.
www.500px.com/Vgramatikov
I'm a wildlife photographer.

If I have to put it in just a few words, I would say:

In practice, a lower megapixel count is better than a higher one, simply because of the workflow.
I speak as a wildlife photographer. Shooting sports, fast-moving subjects and so on is better with lower MP.
Higher MP is always better and more effective if the sensors share the same production design and innovation, BUT:
when we use fast cameras like the 7D/7D II, we use them mainly for fast-moving subjects in natural light.
A bigger count is better, but a bigger count means a lot more post-processing to get a good image. That's why the D4 and 1D X are lower-MP cameras: for sports and wildlife, a higher count usually means a slower frame rate, a shorter buffer, and a lot more work in post.
I prefer 16MP on a crop camera to 20+...I lose all that extra resolution to slower shutter speeds, the less-than-perfect optical design of the lens, motion blur and so on.

If you just check how the 300 2.8 IS mk2 performs on the 1Ds III at DxO, it shows 21MP of 21MP resolving power,
which means an almost perfect lens for this sensor size/MP.
The same lens on the Canon 70D produces 17MP from 20MP. This means 3MP are thrown away due to the smaller sensor. And this is in perfect test conditions. In the field you will throw away even more, 10MP for sure.
That's why the D4 and 1D X are lower-MP: it makes no sense to use a higher-MP camera for sports and wildlife.

I prefer a better workflow, a bigger buffer, cleaner images, more DR and fewer MP.

But for APS-C sensors, it's already impossible for anyone to announce a 16MP sensor.
If I could make a wish, I'd want a 7D II with some kind of Fuji interpretation of the Sony 16MP sensor and a 40-50 shot RAW buffer.

Just check the Nikon D7100...good camera, good sensor, and only 6fps with a very, very slim buffer...that's one big joke in practice. The 7D has a 24-shot RAW buffer at 8fps with an 18MP sensor - way, way better, if I'm not wrong about the details.

Sorry for my bad English again!!!
Good luck, and don't worry - the 7D II will be as good as any Canon body. But it will be a crop-sensor camera. Do not expect 6D low-light performance; it may be just a little better than my 70D. But as an instrument it will be a lot better.

I don't care about +2 stops of DR for Sony sensors in the ISO 100-800 range, because I don't use ISO 100-400 very much when I'm in the field. When I shoot landscapes, Canon is just fine. None of my images would become better with 2 more stops of DR at ISO 100 :)

If I were a landscape or portrait photographer, I would go for the Canon 6D or Nikon D610. Not a hard choice.
But for wildlife, the best balance between cheaper and better is for sure the 7D and 70D among available cameras, and the future 7D II. Canon has the greatest long telephoto lenses, starting from the 400/5.6L.

None of the other companies have a lens like this. I have owned the 300 2.8 IS mk1 and the 600/4 in the past.
The Canon 400 5.6L is sharper than the 600 f/4 IS mk1, and as sharp as or better than the 300 2.8 IS mk1 + 1.4x.
I sold the 300 2.8 and bought a 200 2.8L for low light (500 euro) and a 400 5.6L as my main telephoto lens (700 euro).

Who cares about the sensors if they are just fine, even if not the greatest?! :)
 
Re: EOS 7D Mark II & Photokina

Vgramatikov said:
If you just check how the 300 2.8 IS mk2 performs on the 1Ds III at DxO, it shows 21MP of 21MP resolving power,
which means an almost perfect lens for this sensor size/MP.
The same lens on the Canon 70D produces 17MP from 20MP. This means 3MP are thrown away due to the smaller sensor. And this is in perfect test conditions. In the field you will throw away even more, 10MP for sure.

And, at the same range, the 21MP will be reduced by a factor of 1.6^2 (2.56), leaving 8.2MP.

In other words, the larger sensor only has an advantage if you can use a larger lens or wait until the subject is closer. At the same range, it has a disadvantage because of its large pixels.
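That arithmetic as a quick Python check (using the numbers from this exchange):

Code:
ff_mp = 21         # 1Ds III resolution
crop_factor = 1.6
print(ff_mp / crop_factor ** 2)  # ~8.2MP of the FF frame covers the APS-C field
# vs. the ~17MP the 70D puts on the same subject at the same distance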
 
Re: EOS 7D Mark II & Photokina

Mt Spokane Photography said:
cnardo said:
But if Canon said: "Canon today announces DETAILS of it presence at Photokina 2014..." where are they ??????

[snip]

Canon exacts revenge on leakers. If an employee leaks something, the company he works for is punished.

That's it! That explains everything. Every time a Canon employee leaks something the release date for the new 7D is pushed out by 2 months. That is why it has taken so long and why it is so hard to find information.
 
Re: EOS 7D Mark II & Photokina

@Lee Jay: Thanks for the detailed replies. Reducing noise at the expense of resolution -- that sums it up well, I think.

In the past (and this is often how it's presented by "experts"), I've thought that, given identical sensor technology, going from 20.2MP to 24MP would just translate into more noise, and I guess it does, but if I scale it back down to 20.2MP, I haven't lost anything -- or maybe have even slightly gained. Then in optimal conditions I have more resolution, and in noise-producing conditions I can scale the image down and be no worse off than with the 20.2MP version of the same sensor tech.

Does that sound right? If so, then bring on the pixels! I wouldn't mind the flexibility to compress for noisy images and have extra resolution for low-noise images.
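As a rough Python check of that intuition (pure shot-noise accounting, assuming identical sensor area and technology; my numbers):

Code:
import math

mp_hi, mp_lo = 24.0, 20.2

# Smaller pixels each get proportionally less light -> per-pixel noise penalty:
penalty = math.sqrt(mp_hi / mp_lo)   # ~1.09x noisier per pixel
# An ideal downsample back to 20.2MP averages mp_hi/mp_lo pixels' worth of
# light per output pixel, recovering SNR by the same factor:
recovery = math.sqrt(mp_hi / mp_lo)
print(penalty / recovery)            # 1.0 -> no net loss after scaling down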

---

Interesting stuff. I'm open to explanations if anyone else wants to add, but this seems to make sense...
 
Re: EOS 7D Mark II & Photokina

Famateur said:
@Lee Jay: Thanks for the detailed replies. Reducing noise at the expense of resolution -- that sums it up well, I think.

In the past (and this is often how it's presented by "experts"), I've thought that, given identical sensor technology, going from 20.2MP to 24MP would just translate into more noise, and I guess it does, but if I scale it back down to 20.2MP, I haven't lost anything -- or maybe have even slightly gained. Then in optimal conditions I have more resolution, and in noise-producing conditions I can scale the image down and be no worse off than with the 20.2MP version of the same sensor tech.

Does that sound right? If so, then bring on the pixels! I wouldn't mind the flexibility to compress for noisy images and have extra resolution for low-noise images.

---

Interesting stuff. I'm open to explanations if anyone else wants to add, but this seems to make sense...

Yup. You pretty much have it. Assuming equivalent or better sensor technology, more pixels is never bad. It may not necessarily be better - beyond the core theory, a lot of factors play a role - but more pixels can pretty much never be bad, and certainly not worse.

The reason that full frame sensors usually perform better than APS-C sensors is the greater total area. If you frame identically with both cameras, then the larger the sensor, the more total light you're gathering. That means less noise (on a normalized basis), and usually, even though the pixels tend to be bigger, more detail (since you're getting more pixels onto the subject than you can with a smaller sensor).

The only real caveat to the above is reach-limited situations. In those cases, smaller pixels will always resolve more detail than larger pixels. Doesn't necessarily matter if they are in a smaller sensor (although they usually are)...all that matters is that when you are at a fixed distance from your subject with a specific lens, your subject is going to fill the same absolute sensor area regardless of how big the sensor is. Then, smaller pixels mean more detail.
 
Re: EOS 7D Mark II & Photokina

Lee Jay, jrista, and also sarangiman , sorry for the delay, but my ex-wife lives 40 km away from me so it took me a while. Plus, I'm very slow in writing.

Thank you guys for the effort spent in replying; you almost convinced me. I think I have to re-read your replies another couple of times to fully understand it all, but I'm nearly convinced. I have a couple of questions. I still don't get what
Lee Jay said:
...the 2.04 micron pixels of the S3IS provide better IQ behind the same lens as the 8.2 micron pixels of the 5D.
actually means. Problem is, comparing sensors having different resolutions, we are far from "all other conditions being equal" here. Are you sure those images are ISO 800? My old 6 MP 300D at ISO 800 is noisy as hell, and it's APS-C. The fact is that I still have problems regarding normalization as a fair means of comparison.

Moreover, I still think the following logic is flawed. jrista, you say
jrista said:
You're only thinking about the individual pizza slices here. You're missing the bigger picture: the eater who eats 1/6th of a pizza eats 1/6th of a pizza SIX TIMES!! Therefore, the eater is not eating one pizza slice...the eater is eating A WHOLE PIZZA! ;D This is the critical point that everyone seems to miss. If an eater eats two 15" pizzas, one cut into 6ths and one cut into 8ths...has the eater eaten less total pizza when eating the one cut into 8ths? NOPE!! He's still eaten a whole 15" pizza, same as he did when he ate the one cut into 6ths.
but I still think the eater here is not the whole sensor, it's each individual photosite. If the eater were the whole sensor, you'd obtain zero resolution, no detail. But you need detail to produce a meaningful image, so you must compare the eater to the single photosite. So the more the eaters (the higher the resolution), the less pizza each eater eats.

I know BSI sensors have the wiring on the opposite side, and that large sensors only marginally benefit from this configuration - that's why this more expensive, lower-yield technology is not used in larger sensors - but this is true for (relatively) low densities of photosites per unit area. The higher the density, thus the smaller the photosites, the greater the benefit. I don't know the numbers, but are you sure the wiring of conventional sensors matters so little in blocking light from the photosites? Does a 24 MP APS-C sensor, even at 180 nm, still marginally suffer from the interposed wiring? jrista, it seems you know where to find such information, and I'd like to deepen my knowledge; why don't you post the most interesting links you find, every now and then, on CR? I mean not now, don't get me wrong, but you're one of the most active posters here, and I'm sure at least some CR members would appreciate some technical reading sometimes - I for sure would.

Thank you all guys, it's very late here, good night!

Ah, my son and I had pizza for dinner, irony!
 
Re: EOS 7D Mark II & Photokina

pierlux said:
but I still think the eater here is not the whole sensor, it's each individual photosite. If the eater were the whole sensor, you'd obtain zero resolution, no detail. But you need detail to produce a meaningful image, so you must compare the eater to the single photosite. So the more the eaters (the higher the resolution), the less pizza each eater eats.

The eater ate the slices at different times. The total number of pepperoni slices consumed is still the same whether the person counted them on the entire pizza all at once or on each piece of pizza individually. By counting them a slice at a time, you get more detail about their distribution, but they still contribute the same amount to your total fullness either way. :)
 
Re: EOS 7D Mark II & Photokina

pierlux said:
Lee Jay, jrista, and also sarangiman , sorry for the delay, but my ex-wife lives 40 km away from me so it took me a while. Plus, I'm very slow in writing.

jrista said:
You're only thinking about the individual pizza slices here. You're missing the bigger picture: the eater who eats 1/6th of a pizza eats 1/6th of a pizza SIX TIMES!! Therefore, the eater is not eating one pizza slice...the eater is eating A WHOLE PIZZA! ;D This is the critical point that everyone seems to miss. If an eater eats two 15" pizzas, one cut into 6ths and one cut into 8ths...has the eater eaten less total pizza when eating the one cut into 8ths? NOPE!! He's still eaten a whole 15" pizza, same as he did when he ate the one cut into 6ths.
but I still think the eater here is not the whole sensor, it's each individual photosite. If the eater were the whole sensor, you'd obtain zero resolution, no detail. But you need detail to produce a meaningful image, so you must compare the eater to the single photosite. So the more the eaters (the higher the resolution), the less pizza each eater eats.

You're still thinking about it wrong. The eater is not the sensor, nor the photosite. The eater is light. The pizza is the sensor. The slices are the pixels. When you create an exposure, light illuminates the ENTIRE sensor...not only one or two or ten thousand pixels...but the whole thing.

Hence the analogy. The eater is light. The eater is always eating a WHOLE PIZZA. It doesn't matter if that pizza is sliced into 6ths, 8ths, 12ths, or 40 millionths. The eater is STILL going to eat the whole darn thing....every single time (i.e. every single exposure). :P

Make sense now?


pierlux said:
I know BSI sensors have the wiring on the opposite side, and that large sensors only marginally benefit from this configuration - that's why this more expensive, lower-yield technology is not used in larger sensors - but this is true for (relatively) low densities of photosites per unit area. The higher the density, thus the smaller the photosites, the greater the benefit. I don't know the numbers, but are you sure the wiring of conventional sensors matters so little in blocking light from the photosites?

Yes, I am sure. It used to matter, but that was near a decade ago now. Canon was using microlenses almost that long ago, and that alone changed the game considerably. Pretty much everyone is using gapless microlenses, and some are using two layers of microlenses. The non-sensitive die area is no longer blocking, reflecting, or absorbing much light. It's a percent or two, which has a small impact on overall Q.E., but it isn't significant enough to worry about.

A vastly more impactful issue is the CFA itself. The color filter over each pixel is rejecting anywhere from 60% to over 70% of the incident light!!! The CFA is the real killer of sensitivity, by a very, very LONG shot. The small differences in photodiode area are pretty minor these days when talking about pixel sizes 4µm and larger. Canon is actually suffering a lot more because of that than their competitors...their process is 500nm. I have long suspected that Canon has not made a 24mp APS-C sensor yet because that would put them into a pixel pitch range where the 500nm wiring and transistor size WOULD hurt the performance of smaller pixels (i.e. fill factor, DESPITE the use of microlenses, becomes an issue again, simply because the ratio of photodiode area to total die area is too small).

Most other manufacturers are making sensors with 180nm or smaller processes. At 180nm, the difference between a 4µm pixel and a 6µm or 10µm pixel is pretty moot. There is a small percentage difference, but it doesn't generally amount to a significant enough difference in total light gathering capacity to matter in the grand scheme of things. I mean, look at the D800 vs. 1D X. The former has only a 1% loss vs. the latter at low ISO, and a massive 37% LEAD over the latter at high ISO. Granted, the D800 has better technology...it's got a Q.E. of 56%, the sensor does use a 180nm fabrication process, it's got better microlensing, etc. The 1D X only managed to scrape back the lead because Canon's analog CDS is (currently) superior at higher ISO.


Anyway...the use of microlenses, light pipes, BSI designs, etc. pretty much negates the fill factor issue. The remainder of the issue is being resolved with tall photodiodes, materials science that allows more electrons of charge to be held in a smaller photodiode area, etc. At some point, I figure all sensors will be BSI with some kind of reinforcement technology to keep the sensor substrate rigid, minimizing yield losses. When every sensor is BSI, the whole fill factor issue will be gone for good.

pierlux said:
Does a 24 MP APS-C sensor, even at 180 nm, still marginally suffer from the interposed wiring? jrista, it seems you know where to find such information, and I'd like to deepen my knowledge; why don't you post the most interesting links you find, every now and then, on CR? I mean not now, don't get me wrong, but you're one of the most active posters here, and I'm sure at least some CR members would appreciate some technical reading sometimes - I for sure would.

I periodically have these sensor patent and technology binges. :) I dig around looking for the latest and greatest news on what the image sensor world is doing, find and read patents (if I can, that's often a MAJOR PITA, as a lot of patents are only written in Japanese, and translating them can be a confusing endeavor.) I regularly read http://image-sensors-world.blogspot.com/ and browse through http://www.chipworks.com/ for the latest teardowns. Those two places are usually where I learn about new technology, and from there, I'll go digging for more information.

One thing to note...the VAST majority of the high end sensor technology I talk about is actually only really used in tiny sensors. Stuff 1/2" in size or smaller, the kind of sensors used in hand held devices, cars, specialized video cameras, etc. Even though Sony's FF and APS-C sensors are currently the creme of the crop, and Canon's are up to a couple generations behind just about everyone at this point...in the grand scheme of things, FF and APS-C sensor technology across the board is quite primitive. There are some AMAZING things being done at the ultra tiny end of the scale. We're talking about pixel sizes of 1.4µm, 1.2µm, and 1.1µm in the current smallest generation. Most new small form factor sensors use 1.2 and 1.1 micron pixels, however soon they will be reaching the theoretical limit for visible light image sensors, 0.9µm or 900nm. At 900nm, we're starting to close in on the wavelengths of visible light, which range from around 750nm at the red end down to around 380nm at the violet end. (To my knowledge, light cannot pass through an aperture smaller than its wavelength. At some point, someone may figure out a way to get around that hurdle, but so far, I think that the 900nm pixel pitch is considered the minimum size for a pixel such that the luminous flux of an incoming wavefront can actually pass through the pixel aperture, or be refracted by a microlens, and still reach the photodiode.)

It took a lot of innovations just to make 1.4 micron pixels possible, and the current cutting edge 1.1 micron pixels required even more, not just to be possible, but to still have enough sensitivity to be useful. Keep in mind, many of these sensors are only 1/8th of an inch in size at their largest, so the total light gathering capacity is minuscule. The proliferation of video into everything, including phones and tablets, and now the booming market for car rear-view video, etc. has demanded a level of high speed sensitivity that still cameras never dreamed of. That's where we got innovations like black silicon (similar to Canon's SWC lens nanocoating), which significantly increases the absorption rate of incident photons, multibucket pixels for high dynamic range, color-splitting filters as an alternative to color filter arrays that don't waste any light, and a whole host of other improvements that radically improve the overall quantum efficiency of devices in very low light at high frame rates. Other innovations have succeeded in reducing dark current to practically meaningless levels (current generation FF and APS-C DSLR sensors have massive amounts of dark current...at their operating temperatures, probably as much as many electrons per second of exposure, so CDS and bias offsetting or black point clipping were essential to minimize dark current signal and reduce thermal noise)...dark current in modern cutting edge sensors is as little as a small fraction of an electron per second (in other words, it takes many seconds for even one electron to be freed in a pixel due to dark current).

I have an ultra high sensitivity 74% Q.E. Aptina sensor in the guide and planetary camera that I use for astrophotography. It has dark current of about 0.005e-/px/s. Sony's new ExView II CCD sensors (used in thermally regulated CCDs for astrophotography), which have 77% Q.E., have dark current of 0.003e-/px/s or less! Both of these sensors come in 1/2" and 1/3" varieties...so they are pretty small, a fraction of the size of an APS-C.
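To put those dark current figures in perspective, a tiny Python sketch (using the numbers just quoted):

Code:
# Dark signal accumulated per pixel over long exposures
sensors = (("Aptina, 74% Q.E.", 0.005), ("ExView II CCD", 0.003))
for name, dark_e_per_px_s in sensors:
    for exposure_s in (60, 300):
        print(name, exposure_s, "s:", dark_e_per_px_s * exposure_s, "e-/px")
# Even a 5-minute exposure frees only an electron or so per pixel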

At some point, I figure the technology from the cutting edge, and ultra small, will eventually work its way up to the larger form factors. However, on the SENSOR innovation front...the only things Canon has shown in the last couple of years are DPAF and a couple of multi-layered sensor patents. Canon is an innovative company, however it seems most of their innovation is focused in areas other than image sensor development. In the same time frame, Aptina, Sony, Omnivision, Toshiba, and a number of other major players in the sensor market have each filed as many as dozens of patents for sensor technology. If and when these cutting edge 1.1 micron sensor technology innovations find their way into FF and APS-C sensors...I don't suspect it will be Canon who does it first.
 
Re: EOS 7D Mark II & Photokina

Lee Jay said:
Famateur said:
Dare I wade into the pizza war? :P

Perhaps I can translate it into a wooden pizza to fit one of my other hobbies: If I have a 15" maple disc, cutting it into 6 pieces WOULD give me more maple surface area than cutting it into 8 pieces. Why? because there is waste from blade kerf. If I have a 1/8" kerf, I lose an approximately 1/8" slice of material with each cut. Let's say now that we fill in each cut with a 1/8" slice of ebony so we don't lose overall surface area when we glue it all up. The disc maintains its original surface area, but there is still less maple surface area with 8 slices than with 6. Using a 1/16" kerf blade will increase the ratio of maple surface area to ebony, but there will still be less maple surface area with 8 slices than with 6.

Now imagine the disc is actually a rectangle, and the pieces are squares instead of pizza slices. The maple is the photo-sensitive portion of the sensor, and the ebony is the border around each pixel. If sensor size and transistor size are constant, doesn't increasing the number of pixels increase the number of borders and transistors, and doesn't that reduce the portion of the overall sensor that receives light? Is moving from a 500nm process to a 180nm process like going from a 1/8" kerf to 9/200" kerf?

I'm obviously not a sensor geek, so I might be completely misunderstanding pixels, borders, et cetera. What am I missing in this analogy? :P

What you're missing is gapless microlenses, which essentially render the "blade kerf" largely moot by concentrating the light into the light-sensitive area between the "kerf lines".

Most DSLR sensors still waste some space that could be used for well size etc., I believe.
Some P&S sensors get around this by going backside-illuminated and so on, so the tech to get around it pretty much already exists. It is more expensive, though, especially at FF sensor size, so manufacturers haven't yet bothered with it; the cost is not yet worth the tiny gains at this point (if FF went to super high MP, then BSI and so on would help enough for some to maybe want to pay the extra).

At ultra high MP counts, compared to really low MP counts, current FF DSLR methods might start showing reduced performance. But with the type of tech currently used, I don't think the MP counts of the highest-resolution DSLRs are high enough yet for it to noticeably matter, and manufacturers do have options for when it would start to matter a bit (even with the Sony RX100, very high density plus BSI helps, but it's really not all that radical a difference).
 
Re: EOS 7D Mark II & Photokina

jrista said:
Assuming equivalent or better sensor technology, more pixels is never bad.

You mean assuming better technology only.
For equivalent technology, this works only up to a point - at least for front-illuminated sensors.

In a front-illuminated sensor, the photodiode of a pixel is located at the bottom of a well, basically (see the left diagram):

[Image: Sony_cmos_A-thumb-450x229.jpg]


The well is formed by the layers of metal wiring above the photodiode.

As pixels shrink, this well becomes narrower and narrower.
At some point, the well becomes so narrow that the micro-lenses on top can no longer focus the light on the photodiode.
This leads to light losses - and the resulting image quality degradation.

Thus, to further shrink the pixels, you need to switch to a finer CMOS process (or maybe BSI).

The 5DIII likely has 'only' 22mp not because Canon no longer believes in megapixels (they do).
Rather, Canon appears to have hit the shrinking limit of their 500nm CMOS process, on which the 5DIII sensor is made.

The 70D is likely made on a finer CMOS process (180nm?), though, as I can't imagine that they've
been able to stretch their 500nm process to make the 20mp/dual-pixel sensor of the 70D.

So, smaller pixels are indeed generally better.
It's not a free ride, though; there are limits to how much you can shrink with a given technology.
Beyond that, you need to change your technology - or image quality degrades with smaller pixels.
 
Re: EOS 7D Mark II & Photokina

x-vision said:
jrista said:
Assuming equivalent or better sensor technology, more pixels is never bad.

You mean assuming better technology only.
For equivalent technology, this works only up to a point - at least for front-illuminated sensors.

In a front-illuminated sensor, the photodiode of a pixel is located at the bottom of a well, basically (see the left diagram):

[Image: Sony_cmos_A-thumb-450x229.jpg]


The well is formed by the layers of metal wiring above the photodiode.

As pixels shrink, this well becomes narrower and narrower.
At some point, the well becomes so narrow that the micro-lenses on top can no longer focus the light on the photodiode.
This leads to light losses - and the resulting image quality degradation.

Thus, to further shrink the pixels, you need to switch to a finer CMOS process (or maybe BSI).

The 5DIII likely has 'only' 22mp not because Canon no longer believes in megapixels (they do).
Rather, Canon appears to have hit the shrinking limit of their 500nm CMOS process, on which the 5DIII sensor is made.

The 70D is likely made on a finer CMOS process (180nm?), though, as I can't imagine that they've
been able to stretch their 500nm process to make the 20mp/dual-pixel sensor of the 70D.

So, smaller pixels are indeed generally better.
It's not a free ride, though; there are limits to how much you can shrink with a given technology.
Beyond that, you need to change your technology - or image quality degrades with smaller pixels.

I know exactly what a sensor pixel looks like. ;) I also know that manufacturers have been using a double layer of microlenses to solve the problem with the FSI design you showed. They have been for years. I know some BSI designs even use double microlens layers. Further, I know Canon has a small form factor sensor design that uses a double layer of microlenses AND a lightpipe in an FSI design to minimize the problem even further. I'll raise your crude diagram with an actual cross-section electron micrograph of Canon's 180nm lightpipe sensor with Cu interconnects (note, this is a rather old image, from about two years ago...Canon moves slowly, but I'd certainly hope they have sensors using BSI for pixels this small now...also note, given this is a 180nm process, the pixels pictured here are really quite small, I'd say less than 2 microns, so don't take this as an example of FF or APS-C fill factor):

[Image: dlsr-2-fig3b.jpg]


As I said. Technology has been marching on. The simplistic grid of photodiodes and bare wiring/transistors is a thing of the past. Even gapless microlenses are a thing of the past. For really small pixels, BSI is ubiquitous these days, and we know Canon has some patents for BSI technology as well...so even the double-microlens layer/lightpipe design from the image above is a thing of the past.


I do agree that Canon is probably at the limits of the 500nm process, at least on a competitive front. They already make pixels much smaller than the 5D III's in their APS-C sensors, so they aren't at the limit of the process for their FF parts. They are obviously at the limits of their ability to remain competitive at 500nm for those larger sensors, though.
 
Re: EOS 7D Mark II & Photokina

jrista said:
As I said. Technology has been marching on.

Right.

But even with light guides (to guide the light onto the photodiode), there are still limits to how much you can shrink pixels.
These are physical entities, and you cannot shrink them indefinitely with a given technology.
The light guide cannot have a diameter of zero - which is obvious even from the picture you posted - if you keep shrinking the pixels.

You make it sound as if smaller pixels are always better - and that's not unconditionally true.
That's the only point that I'm making.

There's a physical limit that cannot be crossed.
That's why manufacturers are using finer and finer CMOS processes (Panasonic is down to 65nm now).
And also looking for alternative solutions - like BSI, Sony's stacked technology, etc..

So, smaller pixels are generally better - but only when newer, more advanced technologies are used.

There's also the issue of the full-well capacity of a photodiode.
Smaller full-well capacity automatically lowers the maximum achievable SNR. You should know that.

So, it's a balancing act, really, for pixel engineers.
A blanket statement like 'smaller pixels are always better' is just that - a blanket statement.
Some necessary small print needs to be added to the discussion 8).
 
Re: EOS 7D Mark II & Photokina

x-vision said:
jrista said:
As I said. Technology has been marching on.

Right.

But even with light guides (to guide the light onto the photodiode), there are still limits to how much you can shrink pixels.
These are physical entities, and you cannot shrink them indefinitely with a given technology.

That's the only point I'm making.
You make it sound like smaller pixels are always better - and that's not unconditionally true.

There's a physical limit that cannot be crossed.
That's why manufacturers are using finer and finer CMOS processes (Panasonic is at 65nm now).
And also looking for alternative solutions - like BSI, Sony's stacked technology, etc..

So, smaller pixels are generally better - but only when newer, more advanced technologies are used.

There's also the issue of the full-well capacity of a photodiode.
Smaller full-well capacity automatically lowers the maximum achievable SNR. You should know that.

So, it's a balancing act, really, for pixel engineers.
A blanket statement like 'smaller pixels are always better' is just that - a blanket statement.
Some necessary small print needs to be added to discussion 8).

Sure, there is no question there are limits to how small you can shrink pixels with an FSI design. I already mentioned that ALL small form factor sensors that use 1.2µm pixels and smaller use BSI designs now.

But we are primarily talking about larger sensors. In larger sensors, we don't have the same kinds of problems with maintaining the ratio of incident light reaching the photodiodes. We don't even need lightpipes...a single microlens layer works well enough to control over 90% of the light. A second layer would focus any dispersal from the color filter back into the "well", minimizing any remaining losses.

There is also a limit to how far the use of finer processes will improve things for larger sensors. For smaller sensors they are essential, even with BSI, as you're packing so much into increasingly small spaces. I mean, when the 0.9µm generation hits, the pixels will be smaller than the majority of the infrared band! But large sensors still have huge pixels. It will be many generations before we drop below 3 micron pixels, assuming we ever do. It's a lot harder to make large optics perform well outside of the central FoV, and I think lens design will ultimately become the bottleneck for keeping the megapixel race alive with larger sensors.

Assuming we do reach 3 micron pixels at some point, on either a 180nm or 90nm process...that would be a 96 MEGAPIXEL sensor in full frame, and a 37 megapixel sensor in APS-C. That's WAY up there. The highest resolution sensors will probably sit around 4.5µm to 3.7µm pixel sizes for a while still, a couple DSLR generations, which puts us out another eight years approximately?
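Those figures check out; here's the arithmetic as a quick Python sketch (assuming nominal sensor dimensions):

Code:
def megapixels(width_mm, height_mm, pitch_um):
    """Pixel count for a sensor of the given dimensions and pixel pitch."""
    return (width_mm * 1000 / pitch_um) * (height_mm * 1000 / pitch_um) / 1e6

print(megapixels(36, 24, 3))      # full frame at 3 micron pixels -> 96.0 MP
print(megapixels(22.3, 14.9, 3))  # Canon APS-C at 3 microns -> ~36.9 MP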

Assuming everything is manufactured on a 180nm or smaller process soon, I don't think fill factor will be a primary or even significant issue for APS-C and FF sensors for a long time to come. In that light, I still assert that you can always do more with smaller pixels. As far as I am concerned, BRING ON THE 96-MEGAPIXEL MONSTROSITIES!! MUHAHAHA!!
 