Computational Photography in late 2018 ... as implemented in the Google Pixel 3 smartphone

Jul 31, 2018
Just came across this interesting write-up on DPReview about the state of computational photography, as implemented in Google's latest smartphone, the Pixel 3.
https://www.dpreview.com/articles/7...s-the-boundaries-of-computational-photography

I certainly don't share 100% of the hype for this specific product. And up to now ;-) I still prefer a "dedicated camera device with lots of ground glass in front of some decently large image sensor" for my purposes.

BUT ... progress in computational photography seems to be happening even faster than I thought. I fully welcome this and hope to get an "ultra-compact, fully-capable, solid-state camera" ... some day.

Generally speaking, I am all for making the mundane/technical aspects of photography easier, more supportive and more intuitive, so I can focus my attention on the "creative side": composition, light, moment, subject interaction, and "access to locations and events not open to the public" ... to capture and create the images I want.

Not sure whether traditional camera makers and their octogenarian-studded boards are really on top of these developments, or whether they will again just sit around like lame ducks, doing nothing and watching their "traditional camera and lens" business go down the drain - like in the last 10 years, when smartphones totally swallowed their sh*tty compact digital cameras. Interesting times for sure.
 
Mar 25, 2011
Smartphone sales have stalled, so manufacturers are looking to the camera as a way to entice new buyers. They can afford to invest a lot more money than traditional camera makers into enhancing photos for the average consumer. That's good; some of those consumers will be inspired to get a large-sensor camera. The tiny sensor in a cell phone has a huge depth of field and does not require much focusing, and the computational side helps simulate a reduced depth of field. I think smartphones are adequate for 90+ percent of photographers. The tricks used to compensate for a small sensor's poor low-light sensitivity, such as frame stacking, are already available in some mirrorless cameras and DSLRs, but they're limited to still scenes.
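For illustration, here is a toy sketch in Python/NumPy of why stacking helps a small, noisy sensor: averaging N frames cuts random noise by roughly the square root of N. (This assumes perfectly aligned frames and simple Gaussian noise; real pipelines are far more involved.)

```python
import numpy as np

def stack_average(frames):
    """Average a burst of already-aligned frames of a static scene."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a 15-frame burst of the same static scene with sensor noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 255.0, size=(480, 640))             # "true" scene
burst = [scene + rng.normal(0.0, 25.0, scene.shape) for _ in range(15)]

merged = stack_average(burst)
print("single-frame noise:", np.std(burst[0] - scene))       # ~25
print("stacked noise:     ", np.std(merged - scene))         # ~25 / sqrt(15) ~ 6.5
```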

I'm pretty happy to see the improvements, we will all benefit in the long run.
 
Jul 31, 2018
I have real doubts about how stacking 15 images with moving subjects/elements in the frame is really done. As a simple example, take a street scene where (only) one person in the foreground walks across the street. How does the software "align" that subject in the frame? At the starting position (frame 1), at the end position (frame 15), or at some frame in between?

If the person is taken from a single capture only, then they will be rendered much noisier / at lower IQ than the surrounding scene. But the software also cannot simply "average" the person from all 15 frames, because that would produce one big blur or a "strobe effect".

Or will the result be an image with the moving person edited out of the scene entirely, which would technically be the easiest thing to do (see typical "HDR"/stacking software)? Users might not be happy with getting "background only" images, with the main (moving) subject cloned out. :)

Can't see how software would really handle this dilemma.
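For what it's worth, published descriptions of burst pipelines such as Google's HDR+ suggest one answer: pick a single reference frame, align the others to it, and merge each pixel only from frames that agree with the reference. A moving person then comes (noisier) from the reference frame alone, while the static background gets the full stack. A crude spatial-domain sketch of that idea in Python/NumPy (the function name and threshold are invented, and real pipelines are far more sophisticated):

```python
import numpy as np

def robust_merge(frames, ref_index=0, threshold=30.0):
    """Merge a burst around a reference frame, rejecting moved content.

    Pixels in a frame that differ from the reference by more than
    `threshold` are assumed to be motion (or misalignment) and skipped,
    so moving subjects fall back to the reference frame alone.
    """
    frames = [f.astype(np.float64) for f in frames]
    ref = frames[ref_index]
    total = np.zeros_like(ref)
    count = np.zeros_like(ref)
    for f in frames:
        consistent = np.abs(f - ref) < threshold   # all True for ref itself
        total += np.where(consistent, f, 0.0)
        count += consistent
    # count >= 1 everywhere because the reference always agrees with itself,
    # so static areas are denoised by up to 15 frames and moving areas by 1.
    return total / count
```

This matches the first guess above: the walking person would indeed be rendered from a single capture and end up noisier than the background, rather than ghosted or cloned out.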
 

LDS

Sep 14, 2012
how does the software "align" that subject in the frame?

That's the real issue with "computational photography" - an algorithm will decide what to do. Maybe you'll be able to choose among different algorithms (just as Instagram became successful because of its filters), but the risk is that, unless you're a programmer with the required skills, you'll be restricted to the software's results.

It is true we have boundaries already, but one thing is being limited by the laws of physics, another by a Google programmer...
 
Mar 25, 2011
I have real doubts about how stacking 15 images with moving subjects/elements in the frame is really done. [...] Can't see how software would really handle this dilemma.

If subjects are moving in stacked photos, they usually disappear or appear as ghosts. That's why stacked images are generally only useful for still subjects.

However, if you want a photo of a scene without people in it, then stacking can effectively remove them, which might be a good result.
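A per-pixel median over the burst is the classic way to get that "people removed" result: it works whenever each background pixel is visible in the majority of frames. A toy sketch in Python/NumPy (the moving "person" here is just a bright block painted into each simulated frame):

```python
import numpy as np

def remove_transients(frames):
    """Per-pixel median of a burst: transient subjects vanish as long as
    each pixel shows the background in the majority of frames."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return np.median(stack, axis=0)

# Static background plus a bright block "walking" across 15 frames.
rng = np.random.default_rng(1)
background = rng.uniform(0.0, 255.0, size=(100, 100))
burst = []
for i in range(15):
    frame = background + rng.normal(0.0, 5.0, background.shape)
    frame[40:60, 6 * i : 6 * i + 20] = 255.0   # the moving "person"
    burst.append(frame)

cleaned = remove_transients(burst)
# Each pixel is covered by the "person" in only a few of the 15 frames,
# so the median recovers the background almost exactly.
print("max error vs background:", np.abs(cleaned - background).max())
```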
 
Jul 28, 2015
Computational systems now rely more and more on recognising what the subject is - they use it for face/eye recognition and for the new algorithms that advise the photographer on what makes a "good composition". It is not beyond the realms of possibility that, if you focus on a face, the software will map the edges of the person and overlay what it sees as the "best" image of the walking person onto a composite of the background. It probably won't be perfect for some time, but it would be interesting to see the capabilities.
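Purely speculative, but that idea could look something like the sketch below: median-stack the burst to get a person-free background, then paste back the single capture judged to have the "best" pose, using a per-frame subject mask from some segmentation model. The masks and the "best frame" choice are assumed inputs here; nothing in the article says the Pixel actually does this.

```python
import numpy as np

def composite_best_subject(frames, masks, best_index):
    """Hypothetical composite: person-free background + one chosen capture.

    frames:     list of HxW float arrays (the aligned burst)
    masks:      list of HxW boolean arrays marking the person in each
                frame, assumed to come from a segmentation model
    best_index: index of the frame judged to have the "best" subject pose
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    background = np.median(stack, axis=0)          # moving person mostly gone
    best = frames[best_index].astype(np.float64)
    # Overlay the chosen capture of the person onto the clean background.
    return np.where(masks[best_index], best, background)
```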
 