2 Small or 1 big pixel is better?

Status: Not open for further replies.

LifeAfter

Dec 1, 2011
What do you think: could a 42MP sensor be made with usable ISO 6400 (4 fps),
and also offer a kind of sRAW mode at 18-22MP with usable ISO 12800 (8 fps)?

Somehow 2 cameras in 1?

While smaller pixels are individually noisier,
more pixels doesn't necessarily mean a noisier image if they're combined by in-camera processing...

What do you think about it?
 
What you are referring to is called pixel binning. It's been used in camera sensors, but has not really caught on for DSLRs. There is not enough improvement, if any, to justify the cost.

Fujifilm has used it in their digital cameras.

Google it!
 
Mt Spokane Photography said:
What you are referring to is called pixel binning. It's been used in camera sensors, but has not really caught on for DSLRs. There is not enough improvement, if any, to justify the cost.

Fujifilm has used it in their digital cameras.

Google it!

Beat me to it.
Yes, I believe Hasselblad uses pixel binning on their medium format cameras. Making 4 pixels work together as one can achieve better ISO results. Using sRAW does not do the same thing.
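The effect is easy to demonstrate: averaging four independent, equally noisy pixel readings roughly halves the random noise. A minimal numpy simulation (illustrative only, not any manufacturer's actual binning pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0  # mean photons per small pixel; shot noise is Poisson

# 100,000 independent 2x2 blocks of four small pixels each
pixels = rng.poisson(signal, size=(100_000, 4)).astype(float)

single_std = pixels[:, 0].std()          # noise of one small pixel, ~10
binned_std = pixels.mean(axis=1).std()   # noise after 4:1 binning, ~5

print(single_std / binned_std)  # ~2, i.e. binning four pixels halves the noise
```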
 
bvukich said:
Phase One does 4:1 binning on their P45+ and P65+ 645 backs.

On the P65+ (for example) it drops the resolution from 60MP to 15MP, and you gain +2 stops of ISO (max goes from 800 to 3200).

Good point. They probably use 2 x 2 (4:1) binning. Cost is not as big an issue at that price level. I think it's one of the things Canon can do, but as long as simpler and lower-cost methods exist, they are holding it back.
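The numbers in the quote line up with simple arithmetic: each binned output pixel collects the light of four photosites, and log2(4) = 2 stops. A quick sanity check:

```python
import math

pixels_per_bin = 4                    # 4:1 (2 x 2) binning
stops_gained = math.log2(pixels_per_bin)

max_iso = 800 * 2 ** stops_gained     # 800 -> 3200
resolution_mp = 60 / pixels_per_bin   # 60MP -> 15MP

print(stops_gained, max_iso, resolution_mp)  # 2.0 3200.0 15.0
```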
 
Does Canon not implement a hardware-level analogue of "binning" by making their pixel sensors more sensitive?

For example, the 1D X will probably have quite good ISO performance, though the pixel count isn't extremely high. Assuming there is no actual pixel "binning", the resulting pixels are larger, but work about as efficiently as 4 smaller ones combined (probably better, since they use newer technology).

And now your question is: are these "bigger" pixels better than 4 smaller ones? (I say 4 for geometry's sake; 4 makes a nice 2x2 box.)
I think that it depends on how the pixels are designed. If they are equally designed, then from a physics perspective it probably wouldn't matter much: if 2x2 pixels are averaged into one pixel, you're basically just using the photon count for the entire area, which is the same as what one larger pixel would have done.
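Under the idealized assumption that photon shot noise is the only noise source (no read noise, no gaps between photosites), this can be checked directly: the sum of four small Poisson photon counts has exactly the same distribution as one big pixel collecting four times the photons. A numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 50.0       # mean photons per small pixel
n = 200_000     # number of simulated exposures

# Four small pixels, photon counts summed per 2x2 block...
small_summed = rng.poisson(mu, size=(n, 4)).sum(axis=1)
# ...versus one big pixel covering the same area (4x the photons).
big = rng.poisson(4 * mu, size=n)

# Both are Poisson with mean 4*mu: same signal, same shot noise.
print(small_summed.mean(), big.mean())  # both ~200
print(small_summed.std(), big.std())    # both ~sqrt(200), i.e. ~14.1
```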
 
Averaging pixels will both reduce noise (more precisely, improve the signal-to-noise ratio, or SNR) and decrease the effective resolution (just like having a sensor with larger pixels).

The noise will scale approximately as 1/sqrt(N), where N is the number of pixels you are averaging (Google "central limit theorem" for an explanation of this in the case where N is large; if N is small, it still works, albeit with a prefactor that varies slowly with N and with the distribution function of the noise). A rule of thumb for small changes is that an X% increase in resolution is an even trade-off noise-wise with an (X/2)% increase in noise.

Long story short, 2 small 'noisy' pixels averaged together will give you the same noise as one large pixel, provided the noisy pixels each have about 40% more noise than the large pixel. If your dominant source of noise is photon shot noise, this cancels out exactly, so that the noise of two pixels averaged together will equal that of one pixel of twice the size. I do not know how the other noise terms scale with photosite size in image sensors; there does seem to be some penalty for increasing sensor resolution, but it will not be nearly as large as those who compare images of different resolutions at 100% think it is (that is effectively comparing a larger print made with the high-resolution sensor to a smaller print made with the lower-resolution sensor).
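The 1/sqrt(N) rule and the ~40% figure can be made concrete: two pixels each carrying sqrt(2) ≈ 1.41 times the noise of one big pixel average down to exactly the big pixel's noise level, assuming the noise is independent between pixels.

```python
import math

def averaged_noise(per_pixel_noise, n):
    # Independent, equal noise falls as 1/sqrt(n) when n pixels are averaged.
    return per_pixel_noise / math.sqrt(n)

big_pixel_noise = 1.0
small_pixel_noise = math.sqrt(2) * big_pixel_noise  # ~41% more noise each

print(averaged_noise(small_pixel_noise, 2))  # 1.0, same as the big pixel
```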

Binning like this does not need to be done in-camera, it is easy to do in post-processing (and any halfway intelligent program should do this when you downsize an image for printing, though I do not know for a fact whether any actually do); the advantage to doing it in-camera is that it saves a bit on bandwidth, computation, and storage space.
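Doing it in post is indeed simple; here is a sketch of a 2x2 block-average downsample (the function name is mine, and real resampling filters in image editors are more sophisticated than plain block averaging):

```python
import numpy as np

def bin2x2(img):
    # Average each 2x2 block; halves the resolution in each dimension.
    # Assumes a single-channel image with even height and width.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
out = bin2x2(img)
print(out.shape)   # (2, 2)
print(out[0, 0])   # 2.5, the mean of the top-left block: 0, 1, 4, 5
```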

(Note that I implicitly neglect the Bayer pattern in the sensor; it's not going to change the results a lot, but someone who knows more about demosaicing might be able to contribute a better analysis if you specified the light spectrum you were interested in.)
 
Wouldn't binning 4x4 pixels -> 1, i.e. 4x oversampling in each dimension, also be useful for video? E.g. go to (I think it's) about 36MP, to then get 4x-oversampled 1080p (or similar for 2K)?

For small JPEGs it'd be darn useful too (the reason being that the smaller JPEG settings are sometimes better anyway when you want a small file size, presumably).
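The megapixel guess is close: 1080p is 1920 x 1080, so 4x oversampling in each dimension needs a 7680 x 4320 sensor, about 33MP (a rough calculation, ignoring aspect-ratio and Bayer considerations):

```python
w, h = 1920, 1080   # 1080p frame
oversample = 4      # 4x4 sensor pixels per output pixel

sensor_w, sensor_h = w * oversample, h * oversample
sensor_mp = sensor_w * sensor_h / 1e6

print(sensor_w, sensor_h, sensor_mp)  # 7680 4320 33.1776
```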
 
Tijn said:
For example, the 1D X will probably have quite good ISO performance, though the pixel count isn't extremely high. Assuming there is no actual pixel "binning", the resulting pixels are larger, but work about as efficiently as 4 smaller ones combined (probably better, since they use newer technology).

Not really. The smaller the pixels, the more dead area you have in total because of the gaps between them, which even the 'gapless microlenses' don't fully resolve. Also, 2 smaller pixels will always (unless we get non-linear sensors or something) have a lower dynamic range than 1 big pixel, no matter how you slice it.

No, I guess no pixel binning, but if the raw ISO is more than 1 stop better than the 5DII, I will strongly suspect raw processing (NR) in-camera. I suppose people aren't interested in 3MP images with 1-2 extra stops on these cameras because they can do the averaging in PP. For MF it makes more sense because you have more and larger pixels to begin with, less 'dead' area, and expensive electronics with low sensor read noise.

What's this 'newer technology' you speak of? I guess we might find out at some point, but I'd really like to know how they are going to get this new super-ISO. For instance, the 1DIV is better than the 5DII mainly because of lower sensor read noise (which means more expensive electronics), which is now less than 2 photons. Better microlenses can account for a bit, but not even 1/2 stop. So where is it going to come from? A: in-camera raw NR is my bet.
 
Mt Spokane Photography said:
bvukich said:
Phase One does 4:1 binning on their P45+ and P65+ 645 backs.

On the P65+ (for example) it drops the resolution from 60MP to 15MP, and you gain +2 stops of ISO (max goes from 800 to 3200).

Good point. They probably use 2 x 2 (4:1) binning. Cost is not as big an issue at that price level. I think it's one of the things Canon can do, but as long as simpler and lower-cost methods exist, they are holding it back.

I need to check my S95, but it has a low-light mode that outputs a 2-3MP JPEG (no RAW). I assumed this did some pixel binning. But I found it better to shoot raw, do NR, and then downsample, which is exactly what I would expect for such small photosites. But if I were either shooting JPEG or not doing much PP, I think it might help. I might have to check it out again.

If this is the case, then I guess it is more of a marketing thing not having it in DSLRs. But then my S95 seems to have a lot of better features than my 5DII (metering, auto ISO, auto modes that actually choose decent values, and faster focusing for landscapes in LV or for f4 lenses with a 2x extender are things that come immediately to mind).
 
This is a myth born from many misunderstandings of what in photography is relative, what is absolute and what is perceptually relevant to humans, and also from a desire to believe that there is a magical recipe which gets stunningly clean images. Having larger pixels does not make photos displayed on screen / paper look cleaner. (Sure, some technologies may not scale beyond a certain threshold.)

Having 1 pixel as large as 2 smaller ones makes no difference (in noise terms) for the entire image (which is the only thing that matters for a human viewer). What matters is technology and sensor size.

You can read a detailed explanation (with a link to samples) here: http://www.canonrumors.com/forum/index.php/topic,255.msg3911.html#msg3911


In fact, http://www.cambridgeincolour.com/tutorials/image-noise-2.htm shows that a higher resolution (= smaller pixels) is perceived as "less noisy".

The explanation is:

"If the two patches above were compared based solely on the magnitude of their fluctuations (as is done in most camera reviews), then the patch on the right would seem to have higher noise. Upon visual inspection, the patch on the right actually appears to be much less noisy than the patch on the left. This is due entirely to the spatial frequency of noise in each patch."

(Search for this explanation and look at the images above it.)


You should also consider that a compact camera sensor is (for simplicity) 16 times smaller than a full frame sensor. This means that it captures 16 times less light, meaning that its noise should be 4 stops higher than that from a full frame sensor. And it's about there.

What I am saying here is that while a full frame sensor has 10...20 MP, the full frame equivalent resolution of a compact camera sensor is 200...300 MP. Despite this stupefying density, images from small sensors still have a similar noise level to an area of equivalent physical size cut from the full frame sensor.
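Both figures are simple powers of two: with an area ratio of 16, the light deficit is log2(16) = 4 stops, and scaling a compact's pixel density up to full-frame area multiplies its megapixels by 16 (the 14MP compact below is a hypothetical example):

```python
import math

area_ratio = 16                 # full-frame area / compact-sensor area
stops = math.log2(area_ratio)   # 16x less light -> 4 stops more noise
print(stops)  # 4.0

compact_mp = 14                 # hypothetical compact camera resolution
ff_equivalent_mp = compact_mp * area_ratio  # same pixel density on full frame
print(ff_equivalent_mp)  # 224, in the "200...300 MP" ballpark
```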


Here are even more details:

http://www.josephjamesphotography.com/equivalence/#8

(It's good to read the entire myth section http://www.josephjamesphotography.com/equivalence/#myths and also the "Noise" section http://www.josephjamesphotography.com/equivalence/#noise )
 