Re: Canon 5D3 Announcement Date & Poll (and your thoughts)
KeithR said:
CanonFanNum1 said:
if Canon pushes resolution further, the IQ per pixel will decrease (along with increased noise).
Seriously - this is rubbish. It just doesn't happen like that.
Same argument for why the 1DX isn't more megapixels, it's better pixels.
There's no such thing as "better" pixels.
You can have better pixels.
I think the thing you've forgotten (or don't understand) is that there are varying standards for ISO performance as far as the photographer's concerned:
a.) Per-frame (or per unit area) ISO performance - this is what you're talking about. For most people, this is most relevant.
b.) Per a fixed number of pixels (e.g. 4 pixels from one sensor compared with 4 from another). For some photographers with certain requirements, this may in fact be more relevant. If your target is bird photography (for example) and you typically shoot with your longest lens in challenging light (e.g. sunrise or sunset) and can't approach as closely as you'd like, you want both high pixel density and output that is as high quality as possible.
Alternatively (and I would agree with you that this following argument isn't a clear winner to me), some people argue that for less resolution-demanding applications - moderate-sized prints or web presentation - they may get better performance out of a camera with larger pixels than out of the closest-specified one with smaller pixels. For example, the 5D Mark II actually comes fairly close to the 7D in terms of per-pixel sharpness for birding, once you uprez the roughly 8MP center area of the 5D that corresponds to the 7D's full frame (a rough version of that crop arithmetic is sketched below), and of course there's no contest for full frame landscape shots if you want to preserve your wider perspective - and that's before taking dynamic range and color saturation capacity into account as well.
One of these standards isn't better than the other; they're different.
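For anyone wondering where that "8MP center area" figure comes from, here's a minimal back-of-the-envelope sketch (Python) using the nominal published pixel dimensions - 5616 x 3744 for the 5D Mark II, 5184 x 3456 for the 7D - and Canon's roughly 1.6x APS-C crop factor; treat the exact numbers as illustrative assumptions:

```python
# Rough check of the "8MP center area" figure quoted above.
# Pixel dimensions are the nominal published specs (assumed here):
# 5D Mark II: 5616 x 3744 (full frame), 7D: 5184 x 3456 (APS-C, ~1.6x crop).
CROP_FACTOR = 1.6

def apsc_crop_megapixels(width_px, height_px, crop=CROP_FACTOR):
    """Megapixels left after cropping a full-frame image to APS-C framing."""
    return (width_px / crop) * (height_px / crop) / 1e6

mp_5d2_crop = apsc_crop_megapixels(5616, 3744)   # ~8.2 MP
print(f"5D Mark II center crop: {mp_5d2_crop:.1f} MP vs. the 7D's 18 MP")
print(f"linear uprez factor to match 7D output: {(18.0 / mp_5d2_crop) ** 0.5:.2f}x")
```

Cropping the full-frame image to the 7D's framing leaves a little over 8MP, which is why the 5D Mark II shot has to be uprezzed by roughly 1.5x linearly before comparing at the same output size.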
I've been banging the drum for months against the weight given to the "lower pixel counts = higher IQ" meme. I criticized it when a Canon rep from Germany allegedly made comments about lower pixel counts being equivalent to higher quality, and I criticized it (although somewhat less enthusiastically) when Chuck Westfall started falling into the same pattern. We have to realize that this is merely an argument about the best balance - not about a "best" strategy for all situations.
I am one of the first people to post reminders that you can't talk about the IQ of a "single pixel"; IQ comparisons only make sense when you have more than one (and, realistically, far more than four) pixels to compare. But that wasn't the original intent of the poster you were replying to. It is an important concept, though, because it's one of the two extreme cases of pixel density we can take up to show that the "pixel density vs. ISO performance" cliche actually is true - but only as a balance struck against the current state of technology.
For fun, consider the extreme cases. If you have one monolithic, sensor-sized pixel, clearly we can't say anything about its quality other than how effective it is at capturing light (it comes closer to capturing 100% of the photons hitting the sensor plane than smaller pixels do; typical camera photosites lose quite a bit, especially on older models without microlenses, and even the microlenses require the light to arrive from a fairly precise direction - allegedly a problem with some lenses, wideangles being the prime example quoted). Start to break that sensor up into smaller pixels, and each of those pixels will still capture far more light in total than any photosite on a camera sensor currently on the market - it still requires fewer tradeoffs to be made in terms of noise reduction, and the bar for rejecting a sensor due to faulty photosites is set higher (bad for the camera maker - good for the consumer).

The other extreme is a sensor with a pixel count approaching infinity: each sensor site might capture only one photon, or none, and at that point it is challenging at best to determine whether the signal from any individual photosite is genuine or wholly spurious. Readings could only be made by statistical analysis of all sensor sites.
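To make that second extreme concrete, here's a toy simulation (Python, entirely made-up numbers) of a sensor whose sites each see far less than one photon on average - individual readings are nearly meaningless, but pooling them statistically recovers the light level:

```python
# Toy illustration of the "pixel count approaching infinity" extreme:
# each site sees at most a photon or two, so only aggregate statistics
# carry usable information. (All numbers are arbitrary assumptions.)
import numpy as np

rng = np.random.default_rng(0)
mean_photons_per_site = 0.05            # assumed: far below one photon per site
sites = rng.poisson(mean_photons_per_site, size=1_000_000)

print("single site readings:", sites[:10])             # mostly zeros, a few ones
print("flux estimated from all sites:", sites.mean())  # ~0.05, recovered statistically
# Pool sites into 1000-site "super-pixels" and the estimate stabilizes per region too:
blocks = sites.reshape(-1, 1000).mean(axis=1)
print("per-block estimates: mean %.3f, std %.3f" % (blocks.mean(), blocks.std()))
```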
In practice, larger photosites capture more light, so less aggressive gain needs to be applied to the signal read from them. They should be relatively more immune to breakage (as I mentioned earlier, the threshold for the number of bad photosites needed to reject a sensor should be higher on a sensor with smaller, and hence more, photosites) and they should be more immune to charge bleeding from one photosite to another. Smaller photosites, on the other hand, can overcome the problem of individually capturing fewer photons by the fact that they are still statistically significant in aggregate - and four pixels in place of one is a massive increase in translating captured light into information, so clearly one goal of sensor development is to push that trend as far as the technology allows. A good sensor design strives to find the balance between these two competing tendencies that best takes into account the current state of sensor technology.
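As a rough illustration of that balance - a sketch, not a model of any real sensor - here's a comparison of one large photosite against four small ones covering the same area, with Poisson shot noise plus an assumed per-read read noise:

```python
# Sketch (not any manufacturer's model): one big photosite vs. four small ones
# covering the same area, with Poisson shot noise plus Gaussian read noise.
# All numbers here are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
photons_per_area = 400        # photons falling on the patch during the exposure
read_noise_e = 3.0            # read noise in electrons per photosite (assumed)
trials = 200_000

# One large photosite: collects all photons from the patch, one read.
big = rng.poisson(photons_per_area, trials) + rng.normal(0, read_noise_e, trials)

# Four small photosites: each sees a quarter of the light, four reads, then
# summed (i.e. downsampled back to the same output area).
small = sum(rng.poisson(photons_per_area / 4, trials) +
            rng.normal(0, read_noise_e, trials) for _ in range(4))

def snr(x):
    return x.mean() / x.std()

print(f"SNR, one large photosite : {snr(big):.1f}")
print(f"SNR, four small (binned) : {snr(small):.1f}")
```

Per unit area the shot noise is identical; the small-pixel version pays only the extra read noise from the additional reads, which is precisely the part that improving readout technology keeps shrinking.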
KeithR said:
JR said:
Makes you wonder why Nikon would go all the way to 36MP, doesn't it?
Because they know that the "more pixels = more noise" meme is nonsense?
Their ISO performance will likely take a hit
Where has this ever happened so far? I'll tell you: nowhere.
Every increase in pixel density so far has been accompanied by an improvement in noise (and, often, DR) performance.
I'm not going to discount the tendency of the camera manufacturers to generalize the truth to something useless but presentably marketable.
However, you are dismissing the reality that every increase in pixel density, in any case you wish to cite, has also been accompanied by improvements in technology (and also that we are discussing the per-frame or per-area metrics of image quality while setting aside the fixed-pixel-count metric).
In other words, the camera manufacturers have already, prior to releasing their cameras, done the job of determining how many pixels to fit on their sensors. They have had to factor in the practicality of producing sensors with the proposed pixel count on current processes (i.e. can it physically be made in quantity, at what baseline cost, and what will the reject rate be like) as well as making the determination about where to balance noise against pixel count. There is also the matter of data transfer rates and, less relevantly, the storage capacities of commonly available media, both of which act to hold pixel counts back. So far these pale in relevance against the factors pushing pixel counts forward (although the argument that people want to speed up their workflow and get more use out of their hard drives has perhaps gained some traction recently).
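To put rough numbers on the data-rate point, here's a back-of-the-envelope sketch assuming uncompressed 14-bit raw and ignoring the lossless compression real cameras apply - the megapixel and frame-rate figures are purely illustrative:

```python
# Back-of-the-envelope throughput behind the data-rate point above
# (uncompressed 14-bit raw, no compression - all figures are assumptions).
def raw_frame_megabytes(megapixels, bits_per_pixel=14):
    return megapixels * 1e6 * bits_per_pixel / 8 / 1e6   # MB per frame

for mp, fps in [(18, 8), (22, 4), (36, 4)]:   # illustrative camera-ish figures
    frame_mb = raw_frame_megabytes(mp)
    print(f"{mp} MP: ~{frame_mb:.0f} MB/frame, ~{frame_mb * fps:.0f} MB/s at {fps} fps")
```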
Since they have already done the job of balancing ISO performance against pixel count, it should be no mystery that your observation holds - the relative lack of dud cameras on the market is exactly what that balancing act should produce.
I have made the case before (purely my own supposition) that the relationship between noise and pixel count is much more complicated than many people will admit, with noise probably rising more slowly than pixel count does. As far as I can see, that wouldn't alter the overall outline of the case I've made here.
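For what it's worth, a shot-noise-only toy calculation shows the kind of sub-linear scaling I mean: on a fixed sensor area, per-pixel relative noise grows only with the square root of the pixel count (real sensors add read noise, fill-factor losses and so on, which is exactly where the technology balance comes in). The total photon figure below is just an assumption:

```python
# Shot-noise-only sketch of how per-pixel noise scales with pixel count on a
# fixed sensor area (idealized; real sensors add read noise, fill-factor
# losses, etc.).
total_photons = 40_000_000_000   # photons collected over the whole frame (assumed)

for pixel_count in (10e6, 20e6, 40e6):
    photons_per_pixel = total_photons / pixel_count
    relative_noise = 1 / photons_per_pixel ** 0.5      # shot noise / signal
    print(f"{pixel_count / 1e6:.0f} MP: relative per-pixel noise ~{relative_noise:.4f}")
# Quadrupling the pixel count only doubles per-pixel relative noise, and after
# downsampling to a common output size the shot-noise contribution is unchanged.
```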