Lee Jay, jrista, and also sarangiman, sorry for the delay, but my ex-wife lives 40 km away from me so it took me a while. Plus, I'm very slow at writing.
You're only thinking about the individual pizza slices here. You're missing the bigger picture: the eater who eats 1/6th of a pizza eats 1/6th of a pizza SIX TIMES!! Therefore, the eater is not eating one pizza slice...the eater is eating A WHOLE PIZZA! This is the critical point that everyone seems to miss. If an eater eats two 15" pizzas, one cut into 6ths and one cut into 8ths...has the eater eaten less total pizza when eating the one cut into 8ths? NOPE!! He's still eaten a whole 15" pizza, same as he did when he ate the one cut into 6ths.

...but I still think the eater here is not the whole sensor, it's each individual photosite. If the eater were the whole sensor, you'd get zero resolution, no detail. But you need detail to produce a meaningful image, so you have to compare the eater to a single photosite. So the more eaters there are (the higher the resolution), the less pizza each eater eats.
You're still thinking about it wrong. The eater is not the sensor, nor the photosite. The eater is light. The pizza is the sensor. The slices are the pixels. When you create an exposure, light illuminates the ENTIRE sensor...not just one or two or ten thousand pixels...but the whole thing.
Hence the analogy. The eater is light. The eater is always eating a WHOLE PIZZA. It doesn't matter if that pizza is sliced into 6ths, 8ths, 12ths, or 40 millionths. The eater is STILL going to eat the whole darn thing....every single time (i.e. every single exposure).
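To put rough numbers on the "whole pizza" point, here's a minimal sketch (the photon flux value is made up purely for illustration): the total light collected depends only on sensor area, exposure time, and scene brightness, not on how many pixels the area is sliced into.

```python
# Hypothetical illustration: total photons striking the sensor are fixed by sensor
# area, photon flux, and exposure time; pixel count only changes each pixel's share.

sensor_area_mm2 = 36.0 * 24.0   # full-frame sensor area (mm^2)
photon_flux = 1.0e9             # assumed photons per mm^2 per second (made-up number)
exposure_s = 0.01               # 1/100 s exposure

total_photons = sensor_area_mm2 * photon_flux * exposure_s

for pixels in (18e6, 36e6):
    print(f"{pixels / 1e6:.0f} MP: {total_photons:.3e} photons total, "
          f"{total_photons / pixels:.0f} per pixel")
```

Same total either way; only the per-pixel slice changes.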
Make sense now?
I know BSI sensors have the wiring on the opposite side, and that large sensors only marginally benefit from this configuration; that's why this more expensive, lower-yield technology isn't used in larger sensors. But that holds for (relatively) low photosite densities per unit of area. The higher the density, and thus the smaller the photosites, the greater the benefit. I don't know the numbers, but are you sure the wiring of conventional sensors matters so little in blocking light from the photosites?
Yes, I am sure. It used to matter, but that was nearly a decade ago now. Canon was using microlenses almost that long ago, and that alone changed the game considerably. Pretty much everyone is using gapless microlenses now, and some are using two layers of microlenses. The non-sensitive die area is no longer blocking, reflecting, or absorbing much light. It's a percent or two, which has a small impact on overall Q.E., but it isn't significant enough to worry about.
A vastly more impactful issue is the use of a CFA itself. The color filter over each pixel rejects (absorbs or reflects) anywhere from 60% to over 70% of the incident light!!! The CFA is the real killer of sensitivity, by a very, very LONG shot. The small differences in photodiode area are pretty minor these days when talking about pixel sizes of 4µm and larger. Canon is actually suffering a lot more from process geometry than their competitors...their fabrication process is 500nm. I have long suspected that Canon has not made a 24mp APS-C sensor yet because that would put them into a pixel pitch range where the 500nm wiring and transistor size WOULD hurt the performance of smaller pixels (i.e. fill factor, DESPITE the use of microlenses, becomes an issue again, simply because the ratio of photodiode area to total die area is too small).
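As a back-of-the-envelope sketch of that fill-factor argument (the border widths below are purely illustrative assumptions for what a 500nm-class vs. a 180nm-class process might claim around each pixel, not actual figures from any manufacturer):

```python
# Back-of-the-envelope fill factor: assume the wiring/transistors claim a fixed-width
# non-sensitive border around each (square) pixel. Border widths are illustrative
# guesses tied to process size, not measured values from any manufacturer.

def fill_factor(pixel_pitch_um, border_um):
    """Photodiode area divided by total pixel area."""
    photodiode_side = pixel_pitch_um - 2 * border_um
    if photodiode_side <= 0:
        return 0.0
    return (photodiode_side ** 2) / (pixel_pitch_um ** 2)

# ~6.25 um pitch is roughly an 18 MP APS-C pixel; ~4.1 um is roughly 24 MP APS-C.
for pitch in (6.25, 4.1):
    for process, border in (("500nm-class", 1.0), ("180nm-class", 0.4)):
        print(f"{pitch} um pixel, {process} process (assumed {border} um border): "
              f"fill factor ~{fill_factor(pitch, border):.0%}")
```

Whatever the exact border widths really are, the trend is the point: a coarse process costs a modest fraction of a big pixel, but a huge fraction of a small one.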
Most other manufacturers are making sensors on 180nm or smaller processes. At 180nm, the difference between a 4µm pixel and a 6µm or 10µm pixel is pretty moot. There is a small percentage difference, but it doesn't generally amount to a significant enough difference in total light gathering capacity to matter in the grand scheme of things. I mean, look at the D800 vs. 1D X. The former has a massive 37% LEAD over the latter at low ISO, and only a 1% loss at high ISO. Granted, the D800 has better technology...it's got a Q.E. of 56%, the sensor does use a 180nm fabrication process, it's got better microlensing, etc. The 1D X only manages to scrape back the lead at high ISO because Canon's analog CDS is (currently) superior there.
Anyway...the use of microlenses, light pipes, BSI designs, etc. pretty much negates the fill factor issue. The remainder of the issue is being resolved with tall photodiodes, materials science that allows more electrons of charge to be held in a smaller photodiode area, etc. At some point, I figure all sensors will be BSI with some kind of reinforcement technology to keep the sensor substrate rigid, minimizing yield losses. When every sensor is BSI, the whole fill factor issue will be gone for good.
Does a 24 MP APS-C sensor, even at 180 nm, still marginally suffer from the interposed wiring? jrista, it seems you know where to find this kind of information, and I'd like to deepen my knowledge. Why don't you post the most interesting links you find, every now and then, on CR? I mean not right now, don't get me wrong, but you're one of the most active posters here, and I'm sure at least some CR members would appreciate some technical reading from time to time. I certainly would.
I periodically have these sensor patent and technology binges.
I dig around looking for the latest and greatest news on what the image sensor world is doing, and find and read patents (when I can; that's often a MAJOR PITA, as a lot of patents are only written in Japanese, and translating them can be a confusing endeavor). I regularly read http://image-sensors-world.blogspot.com/ and browse through http://www.chipworks.com/ for the latest teardowns. Those two places are usually where I learn about new technology, and from there, I'll go digging for more information.
One thing to note...the VAST majority of the high end sensor technology I talk about is actually only really used in tiny sensors. Stuff 1/2" in size or smaller, the kind of sensors used in hand-held devices, cars, specialized video cameras, etc. Even though Sony's FF and APS-C sensors are currently the cream of the crop, and Canon's are up to a couple of generations behind just about everyone at this point...in the grand scheme of things, FF and APS-C sensor technology across the board is quite primitive. There are some AMAZING things being done at the ultra tiny end of the scale. We're talking about pixel sizes of 1.4µm, 1.2µm, and 1.1µm in the current smallest generation. Most new small form factor sensors use 1.2 and 1.1 micron pixels, but they will soon be reaching the theoretical limit for visible light image sensors, 0.9µm or 900nm. At 900nm, we're starting to close in on the wavelength range of visible light itself, which runs from around 700-750nm down to roughly 380-400nm. (To my knowledge, light cannot pass through an aperture smaller than its wavelength. At some point, someone may figure out a way around that hurdle, but so far, I think the 900nm pixel pitch is considered the minimum size for a pixel such that the luminous flux of an incoming wavefront can actually pass through the pixel aperture, or be refracted by a microlens, and still reach the photodiode.)
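Just to visualize how close those pitches get to the light itself (wavelength endpoints rounded), a quick sketch:

```python
# Ratio of pixel pitch to the wavelengths it has to admit (rounded endpoints of
# the visible band).

red_um, blue_um = 0.75, 0.40   # ~750 nm deep red, ~400 nm violet/blue

for pitch_um in (1.4, 1.2, 1.1, 0.9):
    print(f"{pitch_um} um pixel: {pitch_um / red_um:.1f}x red, "
          f"{pitch_um / blue_um:.1f}x blue")
```

At a 0.9µm pitch, the aperture is only about 1.2x the wavelength of deep red light, which is why that pitch gets quoted as the practical floor.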
It took a lot of innovations just to make 1.4 micron pixels possible, and the current cutting edge 1.1 micron pixels required even more, not just to be possible, but to still have enough sensitivity to be useful. Keep in mind, many of these sensors are only 1/8th of an inch in size at their largest, so the total light gathering capacity is minuscule. The proliferation of video into everything, including phones and tablets, and now the booming market for car rear-view video, has demanded a level of high speed sensitivity that still cameras never dreamed of. That's where we got innovations like black silicon (similar in spirit to Canon's SWC lens nanocoating), which significantly increases the absorption rate of incident photons; multibucket pixels for high dynamic range; color splitting filters, an alternative to color filter arrays that doesn't waste any light; and a whole host of other improvements that radically improve the overall quantum efficiency of devices in very low light at high frame rates. Other innovations have succeeded in reducing dark current to practically meaningless levels (current generation FF and APS-C DSLR sensors have massive amounts of dark current...at their operating temperatures, probably several electrons per second of exposure, so CDS and bias offsetting or black point clipping were essential to minimize the dark current signal and reduce thermal noise)...dark current levels in modern cutting edge sensors are as little as a small fraction of an electron per second (in other words, it takes many seconds for even one electron to be freed in a pixel by dark current).
I have an ultra high sensitivity 74% Q.E. Aptina sensor in my guide and planetary camera that I use for astrophotography. It has dark current of about 0.005e-/px/s. Sony's new ExView II CCD sensors (used in thermally regulated CCDs for astrophotography), which have 77% Q.E., have dark current of 0.003e-/px/s or less! Both of these sensors come in 1/2" and 1/3" varieties...so they are pretty small, far smaller than an APS-C sensor.
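To see what those dark current figures mean in practice over a long exposure, here's a rough sketch; the DSLR rate is an assumption in line with the "several electrons per second" guess above, not a measured figure.

```python
import math

# Accumulated dark signal and its shot noise over a 5-minute exposure. The DSLR rate
# is a rough assumption, not a measurement; the other two are the figures quoted above.

exposure_s = 300

dark_rates = {
    "Aptina guider (0.005 e-/px/s)": 0.005,
    "ExView II CCD (0.003 e-/px/s)": 0.003,
    "warm DSLR, assumed (2 e-/px/s)": 2.0,
}

for name, rate in dark_rates.items():
    signal = rate * exposure_s    # mean dark signal in electrons
    noise = math.sqrt(signal)     # dark current shot noise (e- RMS)
    print(f"{name}: {signal:.1f} e- dark signal, ~{noise:.1f} e- noise over {exposure_s}s")
```

The low-dark-current sensors accumulate only an electron or two over the whole sub-exposure, while the assumed warm DSLR accumulates hundreds, which is exactly why bias/black-point handling matters so much on the latter.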
At some point, I figure the technology from the cutting edge, ultra small sensors will eventually work its way up to the larger form factors. However, on the SENSOR innovation front...the only things Canon has shown in the last couple of years are DPAF and a couple of multi-layered sensor patents. Canon is an innovative company, but it seems most of their innovation is focused in areas other than image sensor development. In the same time frame, Aptina, Sony, Omnivision, Toshiba, and a number of other major players in the sensor market have filed as many as dozens of patents for sensor technology...each. If and when these cutting edge 1.1 micron sensor innovations find their way into FF and APS-C sensors...I don't suspect it will be Canon who does it first.