You can use software (noise reduction and downsampling) to trade all that extra resolution for much lower noise in the overall image at the same sharpness (resolution), and in fact that's what you end up doing when you compare the two images at the same final size.

That's something else again. This discussion was about the intensity and total quantity of light, and whether (and how) that is affected by the size of the sensor [before post processing].
Yeah... and you can't do what I said unless you have all that extra light captured in all those extra pixels.
All the photons collected by all those extra pixels count toward the signal-to-noise ratio of the final overall image, and that's the reason a larger sensor out-performs a smaller sensor in low light despite having the same-sized pixels.

No, those extra pixels don't count (again, before post processing).
Yes, they do.
One assumption we always make is that quantization noise is negligible. That means you can't see the individual pixels. If you can, that's a whole other problem.
Since you can't see the individual pixels, your eye is essentially averaging some small number of pixels together. The averaging works like this: the noise goes down with the square root of the number of pixels averaged. Average 4 pixels and you cut the noise in half; average 9 and you cut it by a factor of three.
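The square-root rule above is easy to check numerically. Here's a minimal sketch, assuming pure Gaussian per-pixel noise with sigma = 1 (the sample size and block sizes are just illustrative choices, not from the discussion):

```python
# Averaging n uncorrelated pixels should cut the noise by a factor of sqrt(n):
# n = 4 -> half the noise, n = 9 -> a third of the noise.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=1_000_000)  # per-pixel noise, sigma = 1

for n in (4, 9):
    # average non-overlapping blocks of n pixels
    usable = noise[: len(noise) - len(noise) % n]
    averaged = usable.reshape(-1, n).mean(axis=1)
    print(n, averaged.std())  # ~0.5 for n=4, ~0.333 for n=9
```

Running this shows the averaged noise landing very close to 1/sqrt(n), which is exactly the "average 4, halve the noise" claim.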
This works out the same as the reduction in shot noise from all that extra light: SNR goes with the square root of the number of photons collected.
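That shot-noise relationship can also be simulated directly, since photon arrivals are Poisson-distributed (the photon counts below are made-up numbers for illustration):

```python
# For a Poisson process, mean = N and std = sqrt(N), so SNR = mean/std = sqrt(N).
# Collect 100x the photons and the SNR improves by 10x.
import numpy as np

rng = np.random.default_rng(0)
for n_photons in (100, 10_000):
    counts = rng.poisson(n_photons, size=200_000)
    snr = counts.mean() / counts.std()
    print(n_photons, snr)  # ~10 for 100 photons, ~100 for 10,000 photons
```

This is the sense in which more collected light and more averaged pixels are two routes to the same square-root improvement.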
In reality, all larger pixels do is block-average. It turns out that block averaging is about the worst-performing noise-reduction method there is. Even the most basic noise reduction is better, and modern advanced methods are enormously better. So smaller pixels, which block-average less, combined with modern noise-reduction software will out-perform larger pixels, since the larger pixels are doing the dumbest kind of noise reduction there is.
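A toy 1-D sketch of that point: compare block averaging (what big pixels effectively do) against even a very crude edge-aware average. The step-edge signal, noise level, window size, and threshold below are all invented for this demo and stand in for far more sophisticated real noise-reduction software:

```python
# Block averaging vs a simple edge-aware average on a noisy step edge.
# The edge-aware filter skips neighbors that differ by more than a threshold,
# so it averages noise without smearing the edge; block averaging smears both.
import numpy as np

rng = np.random.default_rng(0)
clean = np.where(np.arange(100) < 50, 0.0, 100.0)  # a sharp edge
noisy = clean + rng.normal(0.0, 10.0, size=100)

# "Large pixels": average non-overlapping blocks of 4, then repeat back up
block = np.repeat(noisy.reshape(-1, 4).mean(axis=1), 4)

# Edge-aware: average only neighbors that look like the same side of the edge
edge_aware = np.empty_like(noisy)
for i in range(100):
    window = noisy[max(0, i - 2): min(100, i + 3)]
    same_side = window[np.abs(window - noisy[i]) < 50.0]
    edge_aware[i] = same_side.mean()

def rmse(x):
    return np.sqrt(np.mean((x - clean) ** 2))

print(rmse(block), rmse(edge_aware))  # block averaging comes out clearly worse
```

Both filters average roughly the same number of samples, so they cut random noise comparably; the difference is that block averaging also destroys the edge, which is why even this crude content-aware scheme beats it.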