Re: EOS 7D Mark II & Photokina
You've made a lot of logical mistakes, so I'll address them one by one below.
pierlux said:
Sorry, it's a long post, maybe too long.
In physics, the Carnot cycle is an ideal representation of a perfect engine having 100% efficency. It is a useful thermodynamic representation to explain energy conversion. In the real world, such a system does not exist.
Too many of you claim that a higher MP sensor does not have more noise than a lower MP one, all conditions being equal, and support this claim with mathematics, but that's not true in the real world. Some even claim that smaller photosites have less noise than bigger ones: now, that's the kind of claim that should make all of us invoke Santa, bigfoot and rainbow-pooping unicorns going on vacation together with a flying saucer. The real world behaves differently.
I would say the exact same thing about the notion that bigger pixels gather enough "more light" to be meaningful given how sensors are designed today. You're missing how all the small changes to sensor design have largely nullified the impact of fill factor. Fill factor is the ratio of sensor die space dedicated to light-sensitive area vs. non-sensitive area (i.e. wiring, transistors, etc.).
pierlux said:
Lee Jay said:
Saying bigger pixels let more light in is like saying cutting a 15 inch pizza into 6 slices instead of 8 gets you more pizza.
No. Your logic is flawed.
Nope, his logic is actually flawless, and here is why:
pierlux said:
If the pizza represents the sensor, the number of slices correspond to the number of the pizza eaters, represented by the sensels, i.e. photosites, i.e. pixels. An eater who eats 1/6 of the pizza eats more than one eating 1/8 of it.
You're only thinking about the individual pizza slices here. You're missing the bigger picture: the eater who eats 1/6th of a pizza eats 1/6th of a pizza SIX TIMES!! Therefore, the eater is not eating one pizza slice...the eater is eating A WHOLE PIZZA! ;D
This is the critical point that everyone seems to miss. If an eater eats two 15" pizzas, one cut into 6ths and one cut into 8ths...has the eater eaten less total pizza when eating the one cut into 8ths? NOPE!! He's still eaten a whole 15" pizza, same as he did when he ate the one cut into 6ths.
We don't generally use image sensors in slices. We frame our scene and we take a photo. Ignoring cropping for the moment, our sensor, the ENTIRE sensor, regardless of how many pixels it has, was used to create an image. Now, cropping does play a factor: if we crop, we are using less area and throwing away some of the sensor, which might be like eating two or three slices of pizza instead of the whole pizza. However, what role does pixel size play? If we have one pizza cut into 6 slices and another pizza cut into 12 slices, and we eat two of the first or four of the second, we've still eaten the same amount of pizza.
pierlux said:
An ideal 24 x 36 mm sensor being a single light sensitive unit is a one pixel sensor which, let's say, collects 1 billion photons at a given time unit and luminous intensity; it has the lowest resolution possible, but it would be capable of letting you know if even a bunch of photons have hit it or not with 100% certainty, i.e. with zero noise. Ideally, if you divide that single huge photosite into 1 million smaller photosites (1 MP sensor), each photosite receives IDEALLY 1000 photons under the same conditions; in the practice it's less than 1000 because of the wiring and the spacing between photosites which equally absorb the photons, but do not convert them into a useful signal, instead convert them into heat, which is detrimental. This 1 MP sensor has sufficient resolution to resolve enough detail for a very small print, and with today's tech you could probably use it at 204,800 ISO or more with very low noise (and before any of you reply that you can reduce the size of the image and therefore reduce noise and equally obtain the print, try exposing a 36 MP sensor at 204,800 ISO or higher if you can...).
You started to get there with the one photodiode bit...then you lost it.
pierlux said:
Again, the same sensor with 20 MP exposed in the same conditions does not collect 50 photons per photosite, but MUCH LESS this time due to massive wiring and lots of wasted space between photosites, so you have a high resolution image, but with a lot of noise.
You are correct, smaller pixels collect less light per photodiode.
IN THE PAST, there WERE losses due to fill factor. Because wiring and transistors used up some die space, incident light that struck any non-photodiode space would either reflect or convert to heat.
THAT IS NO LONGER THE CASE TODAY! Technology has improved considerably, with microlenses, BSI sensors, deep photodiodes, increased charge hole density, and a whole host of other improvements.
pierlux said:
Actually, in my example with 1,000,000,000 total photons hitting a 24 x 36 mm sensor I think you'd have only random noise at 20 MP, but it was for the sake of explaining. I'm not talking quantum efficency at all here, it's just the number of photons you can effectively use I'm talking about. Moreover, we don't have a linear relationship between number of photons and noise, so it's not as if you have half of the photons per photosite you double the noise, the situation is worse in the real world.
You are correct about the non-linear relationship that noise has with signal strength. However, you are forgetting about the relationship of smaller pixels to larger pixels, and the impact that averaging smaller pixels together has on noise. If you have 10µm pixels, they gather four times as much light as 5µm pixels. Now, let's say our 10µm pixels have a full well capacity (FWC) of 100,000 electrons, or 100ke-. The 5µm pixels have a FWC of 25ke-. In terms of photon shot noise, the larger pixels have SQRT(100,000), or ~316e- noise, while the smaller pixels have SQRT(25,000), or ~158e- noise. If we simply ADD the noise of four smaller pixels together, we get 632e-. Oh no, more noise!!
You're forgetting that noise is not simply added together when normalizing...it is AVERAGED. We do get a total of 632e- noise in a 2x2 matrix of five micron pixels, however if we average those pixels together by downsampling the output image from the sensor with smaller pixels to the same dimensions as the output image from the sensor with larger pixels, the noise drops by the square root of the difference in pixel count. Since we can fit FOUR times as many five micron pixels in the same area as ten micron pixels, the 632e- noise drops by a factor of SQRT(4), or 632e-/2, which is? Oh, look at that...316e-!!
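If you want to check that yourself, here's a quick Python sketch of the same comparison. The 100ke- and 25ke- full well figures are just the assumed example values from above, and the sketch combines the independent noise sources in quadrature, which lands at the same ~316e- answer as the averaging argument:
Code:
import math

# Assumed example values from above (illustrative, not measured data)
fwc_large = 100_000   # full well of a 10 micron pixel, in electrons
fwc_small = 25_000    # full well of a 5 micron pixel, in electrons

# Photon shot noise is the square root of the collected signal
noise_large = math.sqrt(fwc_large)            # ~316 e- for one large pixel
noise_small = math.sqrt(fwc_small)            # ~158 e- for one small pixel

# Bin a 2x2 block of small pixels: signals add, independent noise adds in quadrature
binned_signal = 4 * fwc_small                 # 100,000 e- -- same as the large pixel
binned_noise = math.sqrt(4 * noise_small**2)  # ~316 e- -- same as the large pixel

print(noise_large, binned_noise)              # both ~316.2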
Smaller pixels are NOT noisier on a normalized basis.
pierlux said:
The Canon 1Dx is 18 MP; in the Nikon D800, being 36 MP, each photosite collects LESS than one half of each of the Canon's photosites, that's why the 1Dx is much better in low light. And the D800 holds because of its superior sensor tech (let's face it, fortunately Canon's system is better as a whole), otherwise they wouldn't have made it 36 MP in the first place.
The 1D X does marginally better in low light. No camera does "much better" in low light, because at high ISO we are primarily physics bound. But let's use some real numbers here. The 1D X has an FWC of 90367e-, and the D800 has an FWC of 44972e-. Now, that is the full well capacity, the maximum amount of charge each pixel can hold. If we compute the grand total amount of light both sensors could gather if they were exposed to maximum saturation:
Code:
1DX: (5205px*3533px)*90367e-/px = 1,661,782,710,255e-
D800: (7424px*4924px)*44972e-/px = 1,643,986,358,272e-
The 1D X has slightly more. In terms of percent, if we divide those two numbers, we get:
Code:
((1,661,782,710,255e-/1,643,986,358,272e-) - 1) * 100 = 1.0825%
The 1D X has a GRAND TOTAL light gathering capacity lead over the D800 of ONE PERCENT. That's it! One percent. Now, let's look at high ISO. At ISO 12800, the 1D X has a saturation point of 735e-, and the D800 has a saturation point of 507e-. The 1D X's pixels should do much better, right, because they are so much larger? Hmm, not according to the math:
Code:
1DX: (5205px*3533px)*735e-/px = 13,516,109,775e-
D800: (7424px*4924px)*507e-/px = 18,533,778,432e-
The D800 should be suffering because of its smaller pixels. Each one, after all, gathers LESS light at ISO 12800 for any given exposure than each of the 1D X's pixels. However, in terms of absolute amount of light gathered...
Code:
((18,533,778,432e-/13,516,109,775e-) - 1) * 100 = 37.12%
Fundamentally, ignoring any other factors for the moment (I'll get into those shortly), the D800, despite its smaller pixels, is actually gathering THIRTY SEVEN PERCENT more light in total! That not only demonstrates that smaller pixels don't really have anything to do with total light gathering capacity, it also speaks volumes about the technology in the Exmor sensor.
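For anyone who wants to rerun those numbers, here is the whole comparison as a short Python sketch (the pixel counts and per-pixel saturation values are the ones quoted above, taken as given):
Code:
# Per-camera pixel counts and per-pixel saturation values quoted above (taken as given)
pixels_1dx  = 5205 * 3533
pixels_d800 = 7424 * 4924

# Grand total charge at base-ISO saturation (full well capacity)
total_sat_1dx  = pixels_1dx  * 90367   # ~1.66e12 e-
total_sat_d800 = pixels_d800 * 44972   # ~1.64e12 e-

# Grand total charge at the ISO 12800 saturation point
total_hi_1dx  = pixels_1dx  * 735      # ~1.35e10 e-
total_hi_d800 = pixels_d800 * 507      # ~1.85e10 e-

print((total_sat_1dx / total_sat_d800 - 1) * 100)  # ~1.08%  -> 1D X lead at saturation
print((total_hi_d800 / total_hi_1dx  - 1) * 100)   # ~37.1%  -> D800 lead at ISO 12800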
Now, these numbers don't jibe with real-world tests, where the 1D X still comes out ahead at high ISO. Why? It isn't pixel size: pixel size does not matter when it comes down to how much light a given sensor can gather. If you know anything about equivalence, you know that the primary thing that really matters from the standpoint of noise is total light gathered. This is why larger sensors perform better than smaller sensors, and always will (given they both use the same technology).
So, why does the 1D X perform better at high ISO? Canon currently has Correlated Double Sampling (CDS) technology that performs better than Sony or Nikon's, so their dark current is extremely low at high ISO, and because of their use of a bias offset, Canon's read noise (dark current plus downstream RN) is lower in total. The 1D X has 1.6e- RN at ISO 12800, vs. the D800's 3.3e- RN at ISO 12800. That leads to increased dynamic range at high ISO, which gives the 1D X the edge:
Code:
1DX: 20*log(735/1.6) = 53.24dB
D800: 20*log(507/3.3) = 43.72dB
The 1D X has ~8.8 stops of dynamic range at ISO 12800, while the D800 has ~7.3 stops. Ironically, the technology that flattened the read noise floor for Exmor, and gave it a significant lead at low ISO, actually seems to hurt it a bit at high ISO (at least, in the D800).
Why does the D800 suffer here? When it comes to dynamic range at high ISO, pixel size does matter, because it affects the maximum amount of charge that can be held in each pixel. Since dynamic range is the ratio between saturation point and read noise, the higher read noise of the D800 at high ISO costs it the lead it had at low ISO. If the D800 used a bias offset the same way Canon does, the offset could then be used to minimize read noise at high ISO. The D800 with 0.5e- read noise at ISO 12800 would have a dynamic range of 60dB. The D810 now actually uses a bias offset, and its read noise drops to a very low 1.3e- at ISO 51200.
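Here is the dynamic range math from above as a small Python sketch, including the hypothetical 0.5e- read noise case (the saturation and read noise values are the ones quoted above):
Code:
import math

def dr_db(saturation_e, read_noise_e):
    """Dynamic range as the ratio of saturation point to read noise, in dB."""
    return 20 * math.log10(saturation_e / read_noise_e)

def dr_stops(saturation_e, read_noise_e):
    """Same ratio expressed in stops (powers of two)."""
    return math.log2(saturation_e / read_noise_e)

print(dr_db(735, 1.6), dr_stops(735, 1.6))  # ~53.2 dB, ~8.8 stops (1D X at ISO 12800)
print(dr_db(507, 3.3), dr_stops(507, 3.3))  # ~43.7 dB, ~7.3 stops (D800 at ISO 12800)
print(dr_db(507, 0.5))                      # ~60.1 dB (hypothetical D800 with 0.5e- RN)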
pierlux said:
In the pizza analogy, the more you cut the pizza, the more breadcrumbs, morsels, atomies you produce, leaving the eaters with less and less pizza to eat to the point that, putting together all the slim slices of pizza eaten by all, they add up to not even a quarter of the original one. And, actually, a pie should be a better fitting analogy.
Sure, there are crumbs and other small losses. However, those losses are small. Not just small, minuscule. They are, for all intents and purposes, meaningless. You could even shave a little bit off the sides of each pizza slice (which often happens when using a classic pizza cutting wheel), and it would still largely be meaningless in terms of the grand total area of pizza that is eaten.
Plus, the pizza crumb analogy ignores technological improvements in sensor design. Modern sensors are NOT just an open matrix of photodiodes and exposed wiring and transistors. Modern sensors use one of two designs: Front Side Illuminated (FSI) and Back Side Illuminated (BSI). An FSI design does have wiring and transistors on the face of the sensor, however they no longer receive incident light in a modern sensor design. Canon started using microlenses a few generations ago, and all their current sensors use gapless microlenses. Microlenses direct incident light through the CFA and onto the photodiode. Sony sensors now actually use a double layer of microlenses...an upper layer that directs incident light into the CFA, and a lower layer that directs any dispersed light that actually makes it through the CFA into the photodiode. (That is part of the reason why Sony sensors have better Q.E.) This cuts the losses due to fill factor to nearly nothing. There are still losses, but they are a tiny fraction of what they used to be. Fill factor is no longer the concern it used to be.
pierlux said:
It's like having a 100 x 100 ft room all for yourself, 10,000 square feet is plenty of space. But if you want to accomodate 100 people inside it and offer them a bit of privacy, you have to build walls which eat space, not to mention furniture, so you end up with much less than 100 sq. ft for each dweller.
The primary concern with fill factor now is grand total photodiode area. Smaller pixels have smaller total photodiode area, and since historically the only thing that mattered from a total charge capacity standpoint was the area of the photodiode, smaller photodiodes hold less charge than larger photodiodes. The greater losses in sensor die area that can be dedicated to photodiodes when using smaller pixels can affect the grand total amount of light that a sensor with smaller pixels gathers. For example, a 10µm pixel fabricated with a 500nm process will have 9µm square photodiodes, whereas a 5µm pixel will have 4µm square photodiodes. Instead of a 4x difference in area, that's a 5x difference.
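As a rough illustration of that penalty, here is the arithmetic as a Python sketch. The assumption, per the example above, is that wiring and transistors cost roughly 1µm of each pixel's linear dimension on a 500nm process:
Code:
# Assumption from the example above: wiring/transistors cost roughly 1 micron
# of each pixel's linear dimension on a 500nm process (illustrative only)
lost_per_pixel_um = 1.0

def photodiode_area_um2(pixel_pitch_um):
    side = pixel_pitch_um - lost_per_pixel_um
    return side * side

large = photodiode_area_um2(10.0)  # 9 x 9 = 81 square microns
small = photodiode_area_um2(5.0)   # 4 x 4 = 16 square microns

print(large / small)               # ~5.06x, versus the 4x difference in raw pixel area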
A BSI sensor does not have these problems. With a BSI design, all of the wiring and transistors are relegated to the "front side", and the photodiodes, CFA, and microlenses are placed on the "back side" of the sensor substrate. This effectively gets you a 100% fill factor. Generally speaking, any sensor technology that can be used for smaller sensors can be used for larger sensors. Technically speaking, BSI COULD be used for large full frame sensors, however because both sides of the silicon substrate are etched, the increased fragility results in yields well below acceptable for larger sensors (at the moment this includes APS-C sensors, however there are some patents out there that cover means of strengthening the wafer to allow sensors larger than 1/2" to be fabricated with a BSI design).
A small BSI sensor will have a nearly 100% fill factor as far as the back side goes. Therefore, there are no losses in die space, and since BSI is a benefit that only smaller sensors can use currently, it levels the playing field. The differences in photodiode capacity even for FSI are largely moot these days, with new techniques allowing "deep photodiodes", where the fundamental silicon doping and structure of the photodiodes themselves can be tweaked to increase charge capacity with depth as well as area. There are even patents that describe means of increasing "electron hole density" in a photodiode, allowing greater charge to be stored in a smaller photodiode area. The more sensor technology marches on, the less fill factor will play a role in grand total light gathering capacity.
If we ignore furniture and other accommodations, which really have absolutely ZERO relevance to a sensor analogy...then your walls don't really reduce the capacity. We could still fit 100 people in a 10,000 square foot room if we added walls; the people would just be more densely packed. We would need to pack a lot more than 100 people into 10k square feet before the addition of walls started to matter. The walls eventually would matter, once you start packing more people into smaller and smaller rooms...however, that's where BSI sensor designs come into play. You then build another floor on top of the walled rooms and move everyone upstairs. Suddenly, you have plenty of room again.
pierlux said:
Still not convinced? OK, you may say "who are you to stand up and make such claims against my maths?", so let's look at what Canon's engineers have done, I bet they know more than me or anyone else on CR about silicon performance and noise. This is what I wrote in a previous post:
"There's a reason the 1Dx has the best (to my eye) IQ of all the DSLRs available to date (yes, better than any Nikon I think): its 18 MP FF sensor. And there's a reason why Canon developed a prototype sensor with photosites 7.5 times larger than the 1Dx: to capture quality video in candlelight (candledrkness sounds better, though). Don't know if you remember, but check these:
http://www.canon.com/news/2013/mar04e.html
http://petapixel.com/2013/09/13/canon-debuts-exciting-prototype-sensor-exceptional-low-light-capability/
And Sony? Compare the the 36 MP Alpha A7r(esolution) and the 12 MP A7s(ensitivity), then say again that more MP does not mean more noise if you dare. At base ISO maybe, but try going at 800 and beyond...
And should somebody dare claim again that smaller photosites means less noise as I've read too many times, remember Santa & Co. are watching us from their flying saucer... And again, at base ISO maybe, but what's the point of shooting 36 MP and then reduce resolution in post to lower the noise and make small prints or web sized images?
I'm going to spend the weekend with my son, so I'll be having a look at CR every now and then, but I'm not going to post, sorry. Have a nice weekend you all, too!
Yup, still not convinced.
The 1D X's high ISO performance, as I ACTUALLY demonstrated with REAL math, is not primarily due to its pixel area. In terms of pixel size, total sensor area, and sensor efficiency (Q.E.), the D800 actually gathers about 37% more total light than the 1D X at ISO 12800. The difference between the 1D X and the D800 at high ISO is actually the amount of read noise...the 1D X's lower read noise gives it more dynamic range, and THAT is what makes it better. Read noise has more to do with sensor technology and complementary electronics (i.e. the ADC unit) than it does with pixel size...so again, sorry...but you're still wrong.
