Is it just me, or does this seem like a bunch of made-up nonsense?
We're already capturing 50% of the light that enters the camera, and noise and clarity under dark conditions are a result of the quantum statistics of photon arrival. That means the noise you see in a noisy photo comes from the light itself, not from the camera. You cannot capture less noise than exists in the incoming light, and you cannot capture more light than exists.
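To put a number on that: photon arrival is a Poisson process, so a pixel that collects $N$ photons on average carries shot noise of $\sqrt{N}$:

$$\mathrm{SNR} = \frac{N}{\sqrt{N}} = \sqrt{N}$$

Quadrupling the captured light (two stops) only doubles the signal-to-noise ratio.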
These videos seem to show a 4-stop improvement. My guess is that they were simulated by a marketing company and that this is designed to be misleading.
Actually, if something TheSuede mentioned recently is correct, we are capturing only about 16-18% of the light entering a camera. We capture between 40% and 60% of the light incident on the photodiode.
That means 40-60% of the photons that pass through the lens, through the IR-cut and AA filters, and through the CFA to actually reach the photodiode effectively free an electron. However, only 30-40% of the light that reaches the CFA makes it through, as the CFA is explicitly designed to filter out light of certain frequencies. So...50% of 35% is 17.5%...modern cameras are currently working with VERY LITTLE light. We have a long, long way to go before we are recording as much light as we can...and with a Bayer-type sensor, that would still be at most 40% of the light that makes it through the lens. The lens itself, assuming a multicoating, can cost 15% or more in light loss (depending on the angle to a bright light source); nanocoating reduces that to only a few percent. The IR-cut and AA filters cost a few percent as well.
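Just to make the arithmetic explicit, here is a toy sketch of that transmission chain in Python (the percentages are the rough midpoints quoted above, not measurements of any real camera):

```python
# Rough transmission chain, using the midpoint figures from the post.
cfa = 0.35   # 30-40% of light passes the Bayer CFA
qe  = 0.50   # 40-60% of photons at the photodiode free an electron

# Relative to light reaching the CFA (the "50% of 35%" figure):
print(f"recorded vs. light at the CFA:  {cfa * qe:.1%}")                  # 17.5%

# Folding in the losses upstream of the CFA:
lens    = 0.85   # multicoated lens, ~15% loss
filters = 0.97   # IR-cut + AA filters, "a few percent"
print(f"recorded vs. light at the lens: {lens * filters * cfa * qe:.1%}") # ~14.4%
```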
The only way we could preserve more of the light that makes it through the lens would be to either move to grayscale sensors (eliminating the CFA) or use some kind of color splitting in place of a CFA. Combined with nanocoatings on the lens elements and an efficient filter stack over the sensor, total light loss ahead of the photodiode could drop to 10% or less, meaning the Q.E. of the photodiode itself determines the rest. 50% of 90% means we would preserve ~45% of the light passing through the lens.
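Same toy arithmetic for the improved stack (the 10% optics loss and 50% Q.E. are just the estimates above, nothing more):

```python
optics = 0.90   # nanocoated lens + efficient filter stack, no CFA: ~10% total loss
qe     = 0.50   # photodiode Q.E., midpoint of the 40-60% range
print(f"light preserved: {optics * qe:.0%}")   # -> 45%
```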
As for "noise in the incoming light"...that is kind of a misnomer. Photon shot noise is caused by the random distribution of photon strikes on the sensor's pixels. With larger pixels, noise caused by that physical fact is reduced, as for any given level of light, each pixel on a large-pixel sensor picks up more light than in a small-pixel sensor. To some degree, assuming the same physical characteristics of the silicon used in both a high density vs. very low density sensor, the high density sensor will sense almost the same amount of light in total as the low density sensor...minus small losses due to a greater amount of wiring which reduces the total surface area that is sensitive to light (and yes, losses will occur despite the use of microlenses.) On a size-normal basis (i.e. scaling the higher resolution image down to the same image dimensions of the lower resolution image), the higher resolution image should perform nearly as well as the lower resolution image....assuming the physical characteristics of the sensors are otherwise identical (same temperature, same Q.E., same CFA efficiency, etc.)