Question about image quality

I opined on another forum that by using various PS photomerging techniques, one could obtain 50 MP images from a 22 MP camera (a 1Ds Mark III).

The thread progressed as expected until the term "Nyquist limit resolution" was introduced.
It was indicated that my gear produces about 4.7 LP/mm, whereas the new 5Ds coughs out 7.1 LP/mm.

My question...does the term "Nyquist limit resolution" have any relationship to IQ?
 
chauncey said:
Can you expand on that a bit?

If you look at some of the other cameras on the market, you see a mode called "sensor shift" where they take multiple images of the same scene, with the sensor shifted by a fraction of a pixel between shots, and interpret the differences between pixels to create a single higher-resolution image from several lower-resolution images. The more pictures combined, the more pixels you can create in a high-res composite... but the process is not linear. You are moving along a curve, and eventually you hit a point where additional pictures make no difference.
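
A toy 1-D sketch of the idea, under idealized assumptions (an exact half-pixel shift, no noise, no alignment step; this is not any camera's actual pipeline):

```python
import numpy as np

# A "scene" with detail too fine for the base pixel pitch:
# 0.7 cycles/pixel exceeds the single-shot Nyquist limit of 0.5.
def scene(x):
    return np.sin(2 * np.pi * 0.7 * x)

pixels = np.arange(32)          # base sampling grid, pitch = 1 pixel
shot_a = scene(pixels)          # first exposure
shot_b = scene(pixels + 0.5)    # second exposure, sensor shifted by half a pixel

# Interleaving the two exposures halves the effective pixel pitch,
# raising the combined Nyquist limit to 1.0 cycles/pixel.
combined = np.empty(2 * pixels.size)
combined[0::2] = shot_a
combined[1::2] = shot_b
```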

This is very different than traditional stitching, where the images are all partially overlapping and the more you add, the bigger it gets.
 
The idea is that with a single image, your maximum resolution is limited by the Nyquist theorem, a limit set by the size of the pixels. So, in lp/mm, a 24 MP APS-C image will out-resolve a 24 MP FF image, for example, because the pixels of the APS-C sensor are smaller. A pano image won't break that limit, obviously, but it renders that limit practically irrelevant.

Consider something like the Mont Blanc pano, a 365-gigapixel pano comprising 70,000 images shot using a 70D with a 400/2.8 II + 2x III, and imagine comparing the zoomed-out image to a single shot with the 11-24L on a 5DsR. Zooming in on the latter, you'd quickly hit the Nyquist limit... zooming in on the huge pano, you can see climbers on the very distant mountain face (easiest to spot in PetaPixel's writeup).
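
For concreteness, the single-shot limit follows directly from pixel pitch: one line pair needs at least two pixels. A quick sketch using approximate pitches for the two bodies discussed above (my figures of roughly 6.4 µm for the 1Ds Mark III and 4.14 µm for the 5Ds may be slightly off; these come out in lp/mm at the sensor, which is presumably not the unit behind the figures quoted earlier in the thread):

```python
# Nyquist limit of a pixel grid: one line pair spans at least two pixels,
# so f_nyquist = 1 / (2 * pixel_pitch).
def nyquist_lp_per_mm(pixel_pitch_um: float) -> float:
    return 1000.0 / (2.0 * pixel_pitch_um)

print(f"1Ds Mark III (~6.4 um pitch): {nyquist_lp_per_mm(6.4):.0f} lp/mm")   # ~78
print(f"5Ds (~4.14 um pitch):         {nyquist_lp_per_mm(4.14):.0f} lp/mm")  # ~121
```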
 
chauncey said:
I opined on another forum that by using various PS photomerging techniques, one could obtain 50 MP images from a 22 MP camera (a 1Ds Mark III).

The thread progressed as expected until the term "Nyquist limit resolution" was introduced.
It was indicated that my gear produces about 4.7 LP/mm, whereas the new 5Ds coughs out 7.1 LP/mm.

My question...does the term "Nyquist limit resolution" have any relationship to IQ?

Don't worry about Nyquist for stacking images. If false color or herringbone patterns are not in the original images, they will not appear in the stacked ones.

There are some good write-ups on the Nyquist frequency; it's all about the highest frequency that can be represented when an analog signal is sampled into a digital one. In the case of cameras, the analog image projected by the lens is sampled by the discrete grid of photosites, and that spatial sampling is where Nyquist comes into play.

It's the same basic issue as in digital audio.
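
A tiny sketch of the audio case (made-up sample rate and tones): a tone above half the sample rate doesn't disappear, it aliases to a lower frequency.

```python
import numpy as np

fs = 1000.0                  # sample rate in Hz, so the Nyquist frequency is 500 Hz
t = np.arange(0, 1, 1 / fs)

tone_ok = np.sin(2 * np.pi * 400 * t)    # 400 Hz < 500 Hz: represented faithfully
tone_bad = np.sin(2 * np.pi * 700 * t)   # 700 Hz > 500 Hz: undersampled...
alias = -np.sin(2 * np.pi * 300 * t)     # ...and folds exactly onto a 300 Hz tone

# The sampled 700 Hz tone is numerically identical to the folded 300 Hz tone.
print(np.allclose(tone_bad, alias))      # True
```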

https://en.wikipedia.org/wiki/Nyquist_frequency
 
There are two ways to produce 50 MP images from 22 MP files.

1. Side-by-side stitching: simply add two 22 MP images to make a 44 MP image, add another image, and so on...

2. Fractionally offset stacking: adding images together to increase information content (i.e. offset by ~1/2 pixel). Sometimes used in astronomy, it can increase resolution by upsizing each image and then stacking; this is sometimes called drizzling (see the sketch below). It ideally needs decent sharpening to pull out all the detail, as the very highest-frequency details will be somewhat attenuated. Personally I'd want to use wavelet sharpening, but I don't think Photoshop has that.

Nyquist simply means the original data is limited by the number of photosites (basically, it says you can't detect differences in brightness across a single photosite). Stacking offset images means you're adding more information; stacking 100% aligned images doesn't add more spatial resolution.
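
Here's a bare-bones sketch of method 2 under ideal conditions: four frames offset by exactly half a pixel, with known offsets, no noise, and no real resampling kernel. It illustrates the information argument rather than being a usable drizzle implementation:

```python
import numpy as np

def drizzle_stack(frames, offsets, scale=2):
    """Place each low-res frame onto a grid `scale`x finer at its known
    sub-pixel offset, then average wherever samples accumulate."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        ys = np.arange(h) * scale + int(round(dy * scale))
        xs = np.arange(w) * scale + int(round(dx * scale))
        acc[np.ix_(ys, xs)] += frame
        hits[np.ix_(ys, xs)] += 1
    hits[hits == 0] = 1                  # leave unfilled cells at zero
    return acc / hits

# Simulate four captures of the same scene, each shifted by half a pixel:
# together they tile the 2x grid completely, recovering the fine detail.
fine = np.add.outer(np.sin(np.linspace(0, 8, 32)), np.cos(np.linspace(0, 8, 32)))
frames = [fine[dy::2, dx::2] for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]]
offsets = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
recovered = drizzle_stack(frames, offsets)
print(np.allclose(recovered, fine))      # True: ideal offsets rebuild the fine grid
```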

Of course ultimately a great image taken on an old low pixel count camera will blow away any dumb image taken with the latest cutting edge camera.
 
Don Haines said:
If you look at some of the other cameras on the market, you see a mode called "sensor shift" where they take multiple images of the same scene and interpret the differences between pixels to create a single higher resolution image from several lower resolution images.

In addition, full-pixel shifts give you R, G, and B samples at each photosite position.
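
A sketch of why (assuming an RGGB Bayer layout; real pixel-shift modes differ by vendor): four exposures, each shifted by a whole pixel, put every filter color over the same scene position, so no demosaicing interpolation is needed there.

```python
import numpy as np

# One RGGB Bayer tile: the filter pattern repeats every 2 pixels.
bayer = np.array([["R", "G"],
                  ["G", "B"]])

# Four exposures shifted by whole pixels: a different filter samples
# the same scene position each time, so that position gets R, G, G and B.
for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"shift {(dy, dx)}: filter {bayer[dy % 2, dx % 2]} over the reference point")
```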
 
chauncey said:
Aah...My question...does the term "Nyquist limit resolution" have any relationship to IQ?

It limits what fine detail you can represent. One could contrive a scene that produces artifacts due to undersampling which adversely affect IQ, similarly to moiré, for example, but I wouldn't claim there is a cut-and-dried relationship between the objective Nyquist limit and the subjective image quality.
 
There are probably applications where it is important, but given the resolution of current cameras and the way most people view the final photograph, it is irrelevant to most people.

Going back to the original thought, assuming similar generation sensors, wouldn't a stitched image have (1) better dynamic range and noise control and possibly (2) less distortion if you are able to use a good normal / telephoto lens vs a typical wide angle lens? Therefore, while the Nyquist limit of your camera hasn't changed, the resulting image bypasses that and leaves you with a higher quality image compared with a single image from a higher MP camera?
 
Hillsilly said:
wouldn't a stitched image have (1) better dynamic range and noise control

Perhaps in the areas of overlap you could get a noise/DR advantage, but I don't think many (any?) stitching programs actually integrate the overlapping regions.

Or do you mean if you shoot a pano to simulate wider lenses and then downsample?
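
Either way, the expected gain is the standard stacking result: averaging K independent exposures of the same region cuts the noise standard deviation by a factor of sqrt(K). A quick numerical sanity check (arbitrary signal and noise levels):

```python
import numpy as np

rng = np.random.default_rng(42)
k = 4                                     # exposures covering the same overlap region
signal = 100.0
noise_sigma = 5.0

exposures = signal + rng.normal(0.0, noise_sigma, size=(k, 100_000))
single = exposures[0]
averaged = exposures.mean(axis=0)

print(f"single-frame noise: {single.std():.2f}")    # ~5.0
print(f"{k}-frame average:   {averaged.std():.2f}")  # ~5.0 / sqrt(4) = 2.5
```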
 
It sounds as if you guys are saying that if I take a series of images using longer glass and photomerge them to the same field of view as an image captured using a high-MP camera, then, assuming my merging skills are up to the task, my merged image will be of equal or greater quality.

Is that right?
 
chauncey said:
It sounds as if you guys are saying that if I take a series of images using longer glass and photomerge them to the same field of view as an image captured using a high-MP camera, then, assuming my merging skills are up to the task, my merged image will be of equal or greater quality.

Is that right?

Yes, this is exactly true.
You don't need any merging skills, there is software that does that automagically. Hugin, for one, is open source (pronounced: free) and although not very intuitive at first, it does a *great* job with stitching.
To improve your skill at taking "stitchable" pictures, look up what the no-parallax point of a lens is (which some also refer to as the nodal point). While on this subject, there are tripod heads specifically designed for panoramic pictures with no parallax error, and if you are creative but on a low budget, you can even build one yourself (I did).
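
If you'd rather script it than click through the GUI, Hugin also ships command-line tools that chain into a full stitch. A rough sketch of that pipeline driven from Python (the image filenames are placeholders, and flags can vary between Hugin versions, so check each tool's --help):

```python
import subprocess

# Typical Hugin CLI pipeline: project -> control points -> optimise -> remap -> blend.
steps = [
    ["pto_gen", "-o", "pano.pto", "img_01.jpg", "img_02.jpg", "img_03.jpg"],
    ["cpfind", "--multirow", "-o", "pano.pto", "pano.pto"],          # find control points
    ["autooptimiser", "-a", "-m", "-l", "-s", "-o", "pano.pto", "pano.pto"],
    ["pano_modify", "--canvas=AUTO", "--crop=AUTO", "-o", "pano.pto", "pano.pto"],
    ["nona", "-m", "TIFF_m", "-o", "remapped", "pano.pto"],          # remap each frame
    ["enblend", "-o", "panorama.tif", "remapped0000.tif",
     "remapped0001.tif", "remapped0002.tif"],                        # blend the seams
]
for cmd in steps:
    subprocess.run(cmd, check=True)
```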
 
There are limits other than Nyquist.
The air around us is not perfectly transparent or uniform, and more distance means less potential resolution. There isn't a fancy name and equation for it because it depends: the air over Mauna Kea is clearer than the air in Beijing.
Lenses have limited resolving power; your 1200 mm L will never resolve the sand grains on Pluto, and a super-cheap long lens, like the mirror lenses from Rokinon, would probably be inadequate for this kind of work.

That said, the most effective path to boost resolution is to use a longer lens to assemble a wider scene. Other techniques might apply if you can't get that longer lens, or have limited time to take the images, etc.

I'm saying this because once you decide to boost resolution (after deciding that you really need to), you need to look at each situation in detail and solve the myriad problems that might arise. Mechanics of the pan are gonna frustrate you if you haven't done your homework first.
 
chauncey said:
It sounds as if you guys are saying that if I take a series of images using longer glass and photomerge them to the same field of view as an image captured using a high-MP camera, then, assuming my merging skills are up to the task, my merged image will be of equal or greater quality.

Is that right?

Why do you think I haven't bought a 5Ds? ;)

You can't beat physical image size and greater magnification for IQ. Stitching software is so good now that it makes it very easy - even without a tripod :-X As you say, to get the same FoV on a larger (stitched) format you have to use a longer lens. On an old 10x8" camera, the standard '50mm-equivalent' lens was 340 mm.

A 5DIII shot in three portrait-orientation frames with a normal overlap can finish with the same 1.5:1 format as the single frame from a 5Ds; your stitch will have slightly fewer MP, but greater size and magnification.

It also means that you can get away with carrying fewer focal lengths, so for a photographic skinflint like me it works great.
 
Here is an example of stitching (not as good as Sporgon's stuff, but he's on a different level), but not necessarily to "go wide."

When I shot this, I had two lenses with me: wide, and wider. I opted to stitch vertical frames with the longer of the two lenses (28mm) to retain better detail throughout.

The result is a roughly standard-looking FOV (I could have cropped 13% from the horizontal to make it 3:2, and I doubt anyone would be the wiser that it's not a single frame - I liked the space behind the boat and the convergence at the LHS, however), but made up of close to 66 million pixels (50% more than the native resolution of the camera) with less distortion than an ultra-wide, etc.

[Attached image: MAB-20150907-1417-XL.jpg]
 