I mentioned this under another topic, but I thought it might be interesting to explore a little more directly.
Canon's announcement of the 1D X emphasized that the images could be enlarged through up-sampling. Maybe this is just hype to cover up for the reduced resolution. Or maybe not.
I'm neither an engineer nor a software specialist, so I confess that I don't know anything about the technology. But I do wonder if Canon engineers have concluded that, rather than cramming ever more megapixels onto a sensor, it is more efficient, and produces better image quality, to interpolate additional information through software.
Amazing things are already being done with software (content-aware fill, for example). Some of the stuff on the horizon is also mind-boggling (Adobe's sneak peek of a proposed Photoshop feature that corrects out-of-focus images and the "light field camera" are just two examples).
Granted, up-sampling cannot add detail where none exists, but why should it be difficult to take a studio portrait shot at 18 mp and increase the effective resolution by two or three times, interpolating the data from existing pixels?
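For what it's worth, here is roughly what that kind of interpolation looks like in practice. This is just a sketch using the Pillow library in Python; the filenames and the 2x factor are made-up assumptions, not anything Canon has described.

```python
# Rough illustration of up-sampling by interpolation (not Canon's method).
# New pixel values are estimated from neighborhoods of existing pixels,
# so the image gets bigger without any new detail being captured.
from PIL import Image

img = Image.open("portrait_18mp.jpg")   # hypothetical 18 MP source file
w, h = img.size

# Bicubic interpolation computes each new pixel from surrounding originals.
upsampled = img.resize((w * 2, h * 2), Image.BICUBIC)
upsampled.save("portrait_upsampled.jpg")
```

A standalone resize like this only smooths between existing samples; presumably anything Canon builds into the camera or its raw software would be more sophisticated than a plain bicubic filter.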
Perhaps we are entering an era where sensors are used for dynamic range and light sensitivity and software is used to expand the resolution.
Your thoughts?