Taking pictures or videos throughout the day has become part of our everyday lives, no longer reserved for capturing special events. Whip out your mobile camera to immortalize a delectable-looking meal, to record your latest dance moves, or simply because you’re having a good hair day, and you’re ready to share your images with friends right away. These seamless experiences have become possible thanks to remarkable advancements in mobile photography in recent years, and at the very heart of this revolution are the mobile chips that transform light into digital data – image sensors.

The image sensors we ourselves perceive the world through – our eyes – are said to match a resolution of around 500 megapixels (Mp). Compared to most DSLR cameras today, which offer 40Mp resolution, and flagship smartphones with 12Mp, we as an industry still have a long way to go before we can match human perception.

Simply packing as many pixels as possible into a sensor might seem like the easy fix, but this would result in a massive image sensor that takes over the entire device. In order to fit millions of pixels into today’s smartphones, which also feature other cutting-edge specs like high screen-to-body ratios and slim designs, pixels inevitably have to shrink so that sensors can be as compact as possible.

On the flip side, smaller pixels can result in fuzzy or dull pictures, because each pixel receives light from a smaller area. The tension between how many pixels a sensor has and how large those pixels are has become a balancing act that demands real technological prowess.
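
As a rough illustration of why this trade-off bites (a simplified sketch assuming the light a pixel gathers scales with its photosensitive area, ignoring microlenses, fill factor and quantum efficiency; the pitches are the ones discussed later in this article):

```python
# Per-pixel light gathering scales roughly with area, i.e. pixel pitch squared.
baseline_um = 0.8  # the pitch long considered the practical floor
for pitch_um in (2.4, 0.8, 0.7):
    relative = (pitch_um / baseline_um) ** 2
    print(f"{pitch_um}um pixel: ~{relative:.2f}x the light of a {baseline_um}um pixel")
```

Under this simple model, a 0.7μm pixel collects only about three-quarters of the light of a 0.8μm pixel, while a 2.4μm pixel collects nine times as much.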

Cutting-Edge Pixel Technologies

Drawing on the technology leadership and experience of our memory business, Samsung has expertly navigated this balance in our image sensors. In May 2019, we announced the industry’s first 64Mp sensor, and just six months later brought 108Mp sensors to market.

For our latest 108Mp image sensor, the ISOCELL Bright HM1, we implemented our proprietary ‘Nonacell technology,’ which dramatically increases the amount of light the pixels can absorb. Compared to the previous Tetracell technology, which features a 2×2 array, Nonacell’s 3×3 pixel structure allows, for instance, nine 0.8μm pixels to function as one 2.4μm pixel. This also mitigates the problem of low-light settings, where light information is often scarce.
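
A minimal sketch of the idea behind this kind of 3×3 binning (an illustration of the general technique rather than the HM1’s actual pipeline; it ignores the color filter array, under which only same-color pixels are grouped, and the bin_3x3 helper is hypothetical):

```python
import numpy as np

def bin_3x3(raw: np.ndarray) -> np.ndarray:
    """Sum each 3x3 block of pixels so nine small pixels act as one large one."""
    h, w = raw.shape
    assert h % 3 == 0 and w % 3 == 0, "dimensions must be multiples of 3"
    # Carve the frame into 3x3 tiles, then sum within each tile.
    return raw.reshape(h // 3, 3, w // 3, 3).sum(axis=(1, 3))

# Toy low-light exposure: each pixel counts only a few photons on average.
sensor = np.random.poisson(lam=2.0, size=(9, 12))
binned = bin_3x3(sensor)
print(sensor.shape, "->", binned.shape)  # (9, 12) -> (3, 4)
```

Summing nine small wells into one large one trades resolution for signal, which is exactly the point in low light.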

In 2019, Samsung was also the first to introduce image sensors based on 0.7μm pixels. The industry had considered 0.8μm the smallest size pixels could be reduced to, but to our engineers, ‘technological limitations’ are just another challenge that motivates innovation.

Sensors that Go Beyond Our Senses

Most cameras today can only capture light that is visible to the human eye, at wavelengths between 450 and 750 nanometers (nm). Sensors able to detect wavelengths outside that range are hard to come by, but they can benefit a wide range of areas. For example, image sensors equipped for ultraviolet light can aid in diagnosing skin cancer by capturing pictures in which healthy cells and cancerous cells appear in different colors. Infrared image sensors can be harnessed for more efficient quality control in agriculture and other industries. Someday, we might even have sensors that can see microbes invisible to the naked eye.

Not only are we developing image sensors, but we are also looking into other types of sensors that can register smells or tastes. Sensors that go even beyond human senses will soon become an integral part of our daily lives, and we are excited by their potential to make the invisible visible and to help people by surpassing what our own senses can do.

Aiming for 600Mp for All

To date, the major applications for image sensors have been in the smartphone field, but this is expected to expand soon into other rapidly emerging fields such as autonomous vehicles, IoT and drones. Samsung is proud to have been leading the small-pixel, high-resolution sensor trend that will continue through 2020 and beyond, and is prepared to ride the next wave of technological innovation with a comprehensive product portfolio that addresses the diverse needs of device manufacturers. Through relentless innovation, we are determined to open up endless possibilities in pixel technologies that might one day even deliver image sensors capable of capturing more detail than the human eye.


Comments

  1. Now just the 500 MPix beamer (projector) is missing ... and that is the only reason I am interested in 8K video cameras (at the moment): there will be "cheap" displays in three years or so to replace my 5-year-old 4K TV that I bought for 500 EUR/$ (OK, it's tiny at 40" and has a strong color shift off axis, but from the right position it has gorgeous colors).
    Just one thing: this time it should be OLED to get the right black level for sci-fi stuff - which also reduces the power consumption of these much larger displays - at least during scenes with small spaceships against dark space backgrounds with crisp stars.
  2. The image sensors we ourselves perceive the world through – our eyes – are said to match a resolution of around 500 megapixels (Mp).

    I think that's only partially true.

    The retina has a variable pixel pitch. At its maximum, it has a very high density, which--if spread out over the entire retina--might hit this figure. But that max res area is actually quite small. You don't notice this because your eyes will simply shift to what you want to focus on (really good servos there).

    The difficulty with a photograph, of course, is that your eye can wander freely over it, but the photograph's point of focus can't move; if a camera sensor were ever laid out like our retinas, the high-res area of a photograph would be fixed.

    So to look "right" to us, the camera actually has to exceed the human eyeball, because the eyeball can be pointed without us even realizing it, while the photograph can't.
  3. Somehow I highly doubt that my eyes are capable of resolving 500mp.

    They definitely aren't. But when you scan a scene, your brain stitches together the details delivered by the macula, the most resolving part of the eye.
    Our photographs are scanned by our eyes in the same way, hence you need maximum resolution everywhere.
    This may become even more important if we use VR environments with glasses/sensors that show every detail around us and maybe let us zoom into our VR environment.

    That's the reason I would like more resolution (lens, camera) for wide or ultra-wide angles - on the other hand, I am more of a tele guy who sees a 100mm lens (on FF) as a standard focal length. The 24-26MPix are absolutely OK (at the moment).
  4. How can 500 or 600Mp be achieved, when diffraction usually limits sharpness at a certain point?
    That was my first question as well.

    Even worse: how do 0.7um-sized (sub)pixels handle the wavelengths of visible light ... 400-700nm = 0.4-0.7um? (See the rough check after the comments.)
  5. The 600 Mpx equivalent of the human eye statement is very misleading. It comes from assuming that your eye is moving around and stitching together all the scenes it is seeing as it scans through them. It's like saying that if you take 50 shots from your 12 Mpx mobile phone and stitch them together then you have a 600 Mpx mobile.
  6. The 600 Mpx equivalent of the human eye comes from assuming that your eye is moving around and stitching together all the scenes it is seeing as it scans through them.

    Exactly as you said. And do not forget that on the macula no "image data" is recorded. That's why the eye is unable to capture one single moment: the brain has to stitch the previously recorded images together. Therefore we don't need high megapixel sensors with small pixels. And I think it's better to have a smaller resolution sensor with a higher DLA than a higher resolution sensor with a smaller DLA.
  7. Exactly as you said. And do not forget that on the macula no "image data" is recorded. That's why the eye is unable to capture one single moment: the brain has to stitch the previously recorded images together. Therefore we don't need high megapixel sensors with small pixels. And I think it's better to have a smaller resolution sensor with a higher DLA than a higher resolution sensor with a smaller DLA.

    Actually, no--that's why we need higher resolution over a wide area.

    Your eye looks at the real world with a very small, high-pixel-density area, and stitches it... you're right about that part.

    But it cannot do that with a low-res photograph, because the actual detail isn't there to stitch together.

    When you're looking at a photograph intended to represent the real world, your eye is free to travel around it just as it is in the real world, and wherever it lands, it expects to see high res. So the detail has to be there, edge to edge, or your eye won't see it on whatever part of the picture it is looking at.
  8. Without running the numbers, I would guess that the well improvements discussed in this video offer at most an incremental improvement in quantum efficiency, from perhaps 0.42 to 0.45 or something like that. Definitely not moving the needle in any significant way, and not enabling any significant improvement in resolution. Even if one could approach a well sensitivity of 1, that offers at most a 1.4X improvement in resolution holding dynamic range constant (see the arithmetic check after the comments). Physics is the limiting factor more than technology.

    The noise reduction here is the big plus, but I imagine that other manufacturers are working on similar sensor technologies.

    Now if you purchase a 98,000 lux flash compatible with your existing camera body, that might generate the dynamic range needed to downsize a camera which fits the impossibly shriveled hands of your average camera complainer. ;)
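
A rough check of the diffraction question raised in comment 4 (a back-of-the-envelope sketch, not from the article; it assumes green light and a typical f/1.8 smartphone lens, and uses the standard Airy-disk diameter estimate d ≈ 2.44λN):

```python
# Estimate the diffraction-limited spot (Airy disk) for a typical phone camera.
wavelength_um = 0.55   # green light, middle of the visible band (assumed)
f_number = 1.8         # typical bright smartphone lens (assumed)
pixel_pitch_um = 0.7   # the smallest pitch mentioned in the article

airy_diameter_um = 2.44 * wavelength_um * f_number  # first-minimum diameter
print(f"Airy disk ~{airy_diameter_um:.2f}um across, "
      f"~{airy_diameter_um / pixel_pitch_um:.1f} pixels at {pixel_pitch_um}um pitch")
```

On these assumptions a single diffraction spot spans roughly three to four 0.7μm pixels, consistent with the commenters' point: individual sub-pixels oversample the optics, and binning schemes like Nonacell trade that oversampling back for sensitivity.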
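
Comment 8's "at most a 1.4X improvement" can likewise be sanity-checked, assuming (an assumption on the editor's part; the commenter does not show the arithmetic) that linear resolution scales with the square root of the light collected at constant dynamic range:

```python
# If quantum efficiency rose from ~0.45 to a perfect 1.0, and linear resolution
# scales with the square root of the photons collected at constant dynamic range:
qe_now, qe_ideal = 0.45, 1.0
gain = (qe_ideal / qe_now) ** 0.5
print(f"max linear resolution gain ~ {gain:.2f}x")  # ~1.49x, close to the cited ~1.4X
```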
