Cheap / simple lens image quality improved by computational imaging algorithm

Any noise that gets introduced is nothing compared to the overall improvement in the image.

It's been well over a year now since Canon included a lens correction feature in DPP that takes lens flaws into account. It takes a huge amount of computing power and makes the image file much larger, but it works.
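Roughly speaking, that kind of correction means resampling the image through a model of the lens's flaws. Here's a toy sketch in Python of just the geometric-distortion part, using a simple radial model with made-up coefficients k1 and k2 (the kind of numbers a lens profile would supply); this is only an illustration, not Canon's actual DLO algorithm:

```python
import numpy as np

def correct_radial_distortion(img, k1, k2=0.0):
    """Resample an image through a simple radial distortion model.

    k1, k2 are hypothetical profile coefficients; a real corrector
    would invert the lens model rather than reuse it directly."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.indices((h, w), dtype=np.float64)
    # Normalized coordinates relative to the image center.
    xn, yn = (xx - cx) / cx, (yy - cy) / cy
    r2 = xn ** 2 + yn ** 2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    # For each output pixel, look up the corresponding source position
    # (nearest-neighbor sampling keeps the sketch short).
    sx = np.clip(np.rint(xn * scale * cx + cx), 0, w - 1).astype(int)
    sy = np.clip(np.rint(yn * scale * cy + cy), 0, h - 1).astype(int)
    return img[sy, sx]
```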

As personal computers get more powerful, this kind of computation will allow all kinds of image corrections that are currently possible only on NASA supercomputers or at spy agencies.
 
It makes me wonder how long it will be before we can take the next step, and dispense with the lens altogether. Just point your camera at the scene you want to photograph, let the light fall directly on the sensor and let the processor do the rest. It must be possible in principle - all the data is there in the light.
 
Mt Spokane Photography said:
As personal computers get more powerful, this kind of computation will allow all kinds of image corrections that are currently possible only on NASA supercomputers or at spy agencies.

Perhaps...but as far as image processing goes, supercomputers aren't necessarily the be-all and end-all. For example, several years ago, deconvolution microscopy was a new technology (mathematical reconstruction of an optically thin section, based on estimated or empirically measured point spread functions). Running on an SGI Octane workstation (granted, that's not a personal computer), it would take about 10 hours to deconvolve a typical image stack. I thought, hey, my institute has a Cray T3E, why not run on that? Well, it turned out that it took about 9.75 hours to run the same sort of image stack on the supercomputer. The difference? The Octane could do one stack at a time, whereas the Cray could do close to 100 stacks at once. So, much faster overall throughput...but not really better for a single image.
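For the curious, the PSF-based deconvolution described above can be sketched in a few lines of Python. This is a toy Richardson-Lucy iteration, assuming a known 2-D PSF and a grayscale image; it is not the package that ran on the Octane:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, num_iter=30):
    """Iteratively estimate the unblurred image given a point spread function."""
    psf = psf / psf.sum()                      # PSF must integrate to 1
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(blurred, 0.5, dtype=np.float64)
    for _ in range(num_iter):
        # Blur the current estimate, compare it to the observed image,
        # and push the estimate toward whatever explains the data better.
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + 1e-12)  # avoid division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```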
 
neuroanatomist said:
Perhaps...but as far as image processing goes, supercomputers aren't necessarily the be-all and end-all. [...] So, much faster overall throughput...but not really better for a single image.

Still, people often take more than a single image, so doing 100 in the time of 1 is sort of useful.
 
LetTheRightLensIn said:
Still, people often take more than a single image, so doing 100 in the time of 1 is sort of useful.

Absolutely. But the supercomputer wouldn't really be making it possible so much as making it more practical. Similarly, I notice that with a multicore processor my RAW converter has higher throughput; however, processing 1-4 images takes about the same amount of time, and I only realize the benefits of the multicore CPU when processing a larger batch of images.
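The throughput-versus-latency point can be shown with a toy batch in Python (file names are hypothetical, and a sleep stands in for a CPU-bound RAW conversion):

```python
import time
from multiprocessing import Pool

def develop_raw(path):
    # Stand-in for a CPU-bound RAW conversion; each "image" takes ~1 s.
    time.sleep(1.0)
    return path

if __name__ == "__main__":
    files = [f"img_{i:04d}.CR2" for i in range(8)]  # hypothetical file names
    start = time.perf_counter()
    with Pool(processes=4) as pool:
        pool.map(develop_raw, files)
    # Eight 1 s images on four workers finish in ~2 s of wall time,
    # but any single image still takes its full 1 s.
    print(f"batch of {len(files)} took {time.perf_counter() - start:.1f} s")
```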
 
bainsybike said:
It makes me wonder how long it will be before we can take the next step, and dispense with the lens altogether. Just point your camera at the scene you want to photograph, let the light fall directly on the sensor and let the processor do the rest. It must be possible in principle - all the data is there in the light.

Errrr - no. The light would still have to be focused. Unless you fancy an insect's view of the world ;)
 
bainsybike said:
Sporgon said:
Errrr - no. The light would still have to be focused.

But couldn't that be done by software, instead of a physical lens?

What's missing here is the direction of the light: a bare sensor records only how much light hits each pixel, not which direction it came from. A lens "sorts" the light rays. You need something like a light field camera to do the job.

You could also just use a hole (as pinhole cameras do). But that's usually not what people mean when they say "without a lens".
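To make that concrete: a light field camera records many slightly offset sub-aperture views, and software can refocus after the fact by shifting each view in proportion to its position in the aperture and averaging. A toy shift-and-add sketch in Python, where the (U, V, H, W) array layout and all names are assumptions:

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-add refocus of a (U, V, H, W) stack of sub-aperture views.

    alpha picks the synthetic focal plane; 0 leaves the views unshifted."""
    U, V, H, W = views.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view by an amount proportional to its
            # offset from the aperture center, then accumulate.
            dy = int(round(alpha * (u - u0)))
            dx = int(round(alpha * (v - v0)))
            out += np.roll(views[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```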
 
neuroanatomist said:
Perhaps...but as far as image processing goes, supercomputers aren't necessarily the be-all and end-all. [...] So, much faster overall throughput...but not really better for a single image.

Absolutely; software must be written specifically for the configuration of a specific computer type, and even then it might not process faster. Some computer designs are optimized for images or video and run those tasks faster than a general-purpose computer. Game consoles fall into that category: configured specifically for games, they run them faster than a general-purpose computer would.

Still, I think things are a bit faster than my first computers, an Atari 400 followed by an XT clone.
 
What I'd like to see is a lens profile for each of my lenses individually. Didn't Tom Clancy (so long, we hardly knew ye) make a passing mention of using a laser to profile a lens to improve the resultant images? I doubt it could be usefully done in-camera, but I could make time on my computer for that - Folding@home can wait!
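Profiling an individual lens copy essentially means measuring its point spread function: photograph a point source (a star, or a laser-lit pinhole), crop and normalize it, then hand the result to a deconvolution like the Richardson-Lucy sketch earlier in the thread. A rough Python sketch, with all names hypothetical:

```python
import numpy as np

def estimate_psf(point_img, center, size=32):
    """Crop a window around an imaged point source and normalize it to unit sum."""
    y, x = center
    half = size // 2
    patch = point_img[y - half:y + half, x - half:x + half].astype(np.float64)
    patch -= np.median(patch)          # crude background subtraction
    patch = np.clip(patch, 0.0, None)  # no negative light
    return patch / patch.sum()

# e.g. feed it straight into the Richardson-Lucy sketch above:
# sharp = richardson_lucy(blurred, estimate_psf(point_img, (y0, x0)))
```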

Jim
 