Where has the post with the complaint about the 80D's IQ gone?

jrista said:
To properly compare a 50mp camera to a 21mp camera, you must first downsample the 50mp images to 21mp.

I don't understand why.
If someone owns the 5D3 and then buys the 5DSR, they don't think 'the 5D3 was 22MP, I am now using 50MP, so I should downsample before creating my final output'.
They follow the same workflow and compare with what they used to get.
 
I own a 6D and a 7D Mark II. I have been toying with the idea of purchasing the 80D, and although it doesn't have all the bells and whistles of the 7D II, I am curious whether the 24 MP sensor with the on-chip A/D converter will give better IQ and reduced noise over the 7D II. I went to DPReview and studied their noise charts to compare the differences. They did not have a comparison with the 7D II, but they did have one with the 6D. Their charts appeared to show the 80D's noise levels looking much better than the 6D's. Is this possible? Anyone with real-life comparisons between these 3 cameras?
 
AlanF said:
Jon
It is easy to argue, and does appear to be logical, that downsampling to the same megapixel size will give the same resolution, noise, etc. as on the less dense sensor. But, in practice, it does not appear to be that straightforward. I have done lots of comparisons of different lenses on a 5DS R, 5DIV, 5DIII and 7DII (plus some on the 80D and 7D), and can write from experience.
1. Transitions on a 50 MP sensor are smoother, and when downsampled have a different quality from those taken directly on a 20 or 30 MP sensor, maybe not as crisp. The downsampling algorithms do not give the same results as direct measurements.
2. The 50 MP sensor is more sensitive to diffraction effects and lens defects. For example, my 300/2.8 II + 2xTC and 400mm DO II f/4 + 2xTC do not perform that well on the 5DS R but give very crisp images on the 5DIV and 5DIII. The 5DIV gives much better results than a downsampled 5DS R image.

#1 Seems highly subjective. It depends a lot on the algorithm used, and there are lots of algorithms. Use one that preserves as much of the information as possible, such as Lanczos, and you should find the 50mp trounces pretty much everything, the possible exception being the 43mp Sony sensor (as it lacks a low pass filter entirely rather than using a blur reversal approach, although Sony employs some spatial filtering in their Bionz X chip that could hurt that sensor's performance).
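
For reference, here is a minimal sketch of that kind of normalization using Pillow's Lanczos resampling (the file names and exact target dimensions are just illustrative):

Code:
from PIL import Image

img = Image.open("5dsr_frame.tif")                  # hypothetical 50mp frame (8688x5792)
target = (5760, 3840)                               # ~22mp, roughly the 5D III's output size
small = img.resize(target, resample=Image.LANCZOS)  # Lanczos preserves detail well when downsampling
small.save("5dsr_frame_22mp.tif")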

#2 is absolutely false. Diffraction is an optical effect; it is what it is regardless of the sensor's pixel size. Smaller pixels are not more sensitive or susceptible to diffraction...they are simply capable of resolving a diffraction spot better than larger pixels can. This is a common misconception about diffraction, one that endlessly circles the internet despite being totally wrong. Diffraction cannot ever make smaller pixels perform worse than larger pixels; that is a physical impossibility.

This is actually a key factor in astrophotography resolution...we generally prefer to be oversampled, because objective measurements indicate that smaller pixels almost always produce higher resolution (measured as smaller stars, or more specifically, smaller FWHM). Ideal oversampling is when the pixels of the sensor are ~3.3x smaller than the best resolved spot that the optical system (which in astrophotography also includes the atmosphere) can deliver. Under ideal band-limited conditions, we want to sample by 2x, which allows the original information to be reconstructed with proper processing.
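
To put rough numbers on that rule of thumb (a minimal sketch; the seeing value is just an assumed example):

Code:
# Target pixel scales from a delivered spot size (FWHM), per the sampling rule above.
seeing_fwhm = 2.5                  # arcsec; assumed total FWHM from optics + atmosphere
ideal_scale = seeing_fwhm / 3.3    # ~0.76 arcsec/pixel for ~3.3x oversampling
nyquist_scale = seeing_fwhm / 2.0  # 1.25 arcsec/pixel at the 2x band-limited minimum
print(ideal_scale, nyquist_scale)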

By terrestrial photography standards, 2-3x oversampled data would, based on the comments here, appear "very soft and blurry" to you guys...when in reality, under measurement, such data is better and resolves more detail. You guys just aren't familiar with the best techniques for making the most of oversampled data, such as deconvolution...true deconvolution (NOT sharpening; sharpening is a totally different concept that is destructive in nature, whereas deconvolution is reconstructive), using a PSF to reverse the blurring caused by the lens. Deconvolution can recover a significant amount of detail, and it works best when your data is oversampled (e.g. a 50, 80, or 100 MP sensor) and worst when your data is undersampled (e.g. a 21 MP sensor).
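
For anyone curious what that looks like in practice, here is a minimal sketch of PSF-based deconvolution using scikit-image's Richardson-Lucy routine. It assumes a monochrome frame and a measured PSF already on disk; the file names and iteration count are illustrative only.

Code:
import numpy as np
from skimage import io
from skimage.restoration import richardson_lucy

image = io.imread("oversampled_frame.tif").astype(np.float64)
psf = io.imread("measured_psf.tif").astype(np.float64)

psf /= psf.sum()      # the PSF must sum to 1
image /= image.max()  # keep the data in [0, 1] so the default clipping behaves

restored = richardson_lucy(image, psf, 30)  # 30 iterations; more recovers detail but can ring
io.imsave("deconvolved_frame.tif", restored.astype(np.float32))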

This is true for DSLRs, CCDs and CMOS cameras. Even with a low pass filter, the filter would have to be ludicrously strong on a small-pixel sensor to degrade resolution so far that it fell below that of a sensor with larger pixels.
 
Mikehit said:
jrista said:
To properly compare a 50mp camera to a 21mp camera, you must first downsample the 50mp images to 21mp.

I don't understand why.
If someone owns the 5D3 and then buys the 5DSR, they don't think 'the 5D3 was 22MP, I am now using 50MP, so I should downsample before creating my final output'.
They follow the same workflow and compare with what they used to get.

You misunderstand. To make an effective comparison, data must be normalized. I am not saying you have to downsample your images to use them. Although if you think about the most common distribution medium, the internet...normalization is generally implicit, as we share images so they fit on screen, which means they are downsampled; most of the time we aren't sharing images at native size (certainly not these days...a 30, 40, or 50 mp image doesn't fit on the average screen by any measure). Even if you consider other common distribution media...wedding albums and books, the affordable print, even larger prints...50mp is well more than necessary for an 11x24 or even a 20x30, so again, normalization is effectively imposed. Whether you use a 20, 30, 40, 50, or 100mp camera, if you share online or print anything smaller than a 20x30" print, the physical dimensions of the image are usually going to be roughly the same.

Comparing non-normalized 5D III and 5Ds data to each other is comparing apples to oranges. The scale of the information is totally different. The noise and signal level of a 5Ds pixel represents a fraction of the noise and signal of a 5D III pixel. However, if you combine the information from ~2.3 5Ds pixels, you have pixel noise and signals that are directly comparable to 5D III pixel noise and signals. The comparison now makes sense and can be understood.
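
A quick numerical illustration of why binning normalizes the comparison (pure NumPy; the noise level is arbitrary, and a 2x2 block average stands in for the ~2.3x area ratio):

Code:
import numpy as np

rng = np.random.default_rng(0)
pixel_noise = rng.normal(0.0, 1.0, size=(1000, 1000))  # per-pixel noise, sigma = 1

# Averaging N independent pixels divides the noise sigma by sqrt(N).
binned = pixel_noise.reshape(500, 2, 500, 2).mean(axis=(1, 3))
print(pixel_noise.std(), binned.std())  # ~1.0 vs ~0.5 (sqrt(4) = 2x reduction)
print(np.sqrt(50.6 / 22.3))             # ~1.5x expected reduction at the 5Ds -> 5D III scale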
 
jrista said:
#1 Seems highly subjective.

As opposed to objective measurements like 'the 5DIII has unacceptable IQ'?


jrista said:
By terrestrial photography standards, 2-3x oversampled data would, based on the comments here, appear "very soft and blurry" to you guys...when in reality, under measurement, such data is better and resolves more detail. You guys just aren't familiar with the best techniques for making the most of oversampled data, such as deconvolution...true deconvolution (NOT sharpening; sharpening is a totally different concept that is destructive in nature, whereas deconvolution is reconstructive), using a PSF to reverse the blurring caused by the lens. Deconvolution can recover a significant amount of detail, and it works best when your data is oversampled (e.g. a 50, 80, or 100 MP sensor) and worst when your data is undersampled (e.g. a 21 MP sensor).

I always find it's better to empirically determine a PSF by imaging point sources of light at various depths through the focal range, rather than relying on a theoretical PSF, don't you? Oh, sorry, I forgot us guys aren't familiar with stuff like that. Who's Nyquist, anyway? Oh yeah, a horse. Didn't he win a race of some sort recently?
 
neuroanatomist said:
jrista said:
#1 Seems highly subjective.

As opposed to objective measurements like 'the 5DIII has unacceptable IQ'?


jrista said:
By terrestrial photography standards, 2-3x oversampled data would, based on the comments here, appear "very soft and blurry" to you guys...when in reality, under measurement, such data is better and resolves more detail. You guys just aren't familiar with the best techniques for making the most of oversampled data, such as deconvolution...true deconvolution (NOT sharpening; sharpening is a totally different concept that is destructive in nature, whereas deconvolution is reconstructive), using a PSF to reverse the blurring caused by the lens. Deconvolution can recover a significant amount of detail, and it works best when your data is oversampled (e.g. a 50, 80, or 100 MP sensor) and worst when your data is undersampled (e.g. a 21 MP sensor).

I always find it's better to empirically determine a PSF by imaging point sources of light at various depths through the focal range, rather than relying on a theoretical PSF, don't you? Oh, sorry, I forgot us guys aren't familiar with stuff like that. Who's Nyquist, anyway? Oh yeah, a horse. Didn't he win a race of some sort recently?

When did I say anything about a theoretical PSF? I assume by that you mean a "synthetic PSF"? That would really be more of a configured Gaussian (or possibly Moffat or Lorentzian) spot, rather than a PSF. I use PSFs modeled from measuring actual point sources when doing deconvolution myself. That is the only way to get deconvolution accurate enough to avoid severe artifacts and other issues that can arise during deconvolution.
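
For what it's worth, a bare-bones sketch of building that kind of measured PSF (pure NumPy; the star coordinates and stamp size are placeholders you would supply from your own frame):

Code:
import numpy as np

def empirical_psf(frame, star_xy, half=15):
    """Estimate a PSF by stacking normalized cutouts of isolated stars."""
    stamps = []
    for x, y in star_xy:             # stars assumed isolated and away from the frame edges
        stamp = frame[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
        stamp -= np.median(stamp)    # remove the local background
        stamp /= stamp.sum()         # give each star equal weight
        stamps.append(stamp)
    psf = np.median(stamps, axis=0)  # median stack rejects neighbors and hot pixels
    psf[psf < 0] = 0.0
    return psf / psf.sum()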
 
jrista said:
Mikehit said:
jrista said:
To properly compare a 50mp camera to a 21mp camera, you must first downsample the 50mp images to 21mp.

I don't understand why.
If someone owns the 5D3 and then buys the 5DSR, they don't think 'the 5D3 was 22MP, I am now using 50MP, so I should downsample before creating my final output'.
They follow the same workflow and compare with what they used to get.

You misunderstand. To make an effective comparison, data must be normalized. I am not saying you have to downsample your images to use them. Although if you think about the most common distribution medium, the internet...normalization is generally implicit, as we share images so they fit on screen, which means they are downsampled; most of the time we aren't sharing images at native size (certainly not these days...a 30, 40, or 50 mp image doesn't fit on the average screen by any measure). Even if you consider other common distribution media...wedding albums and books, the affordable print, even larger prints...50mp is well more than necessary for an 11x24 or even a 20x30, so again, normalization is effectively imposed. Whether you use a 20, 30, 40, 50, or 100mp camera, if you share online or print anything smaller than a 20x30" print, the physical dimensions of the image are usually going to be roughly the same.

Comparing non-normalized 5D III and 5Ds data to each other is comparing apples to oranges. The scale of the information is totally different. The noise and signal level of a 5Ds pixel represents a fraction of the noise and signal of a 5D III pixel. However, if you combine the information from ~2.3 5Ds pixels, you have pixel noise and signals that are directly comparable to 5D III pixel noise and signals. The comparison now makes sense and can be understood.

Thanks jrista for the clarification. Actually, I agree with you, but my terminology and frame of reference were a bit different, which diverted my brain.
 
Can one get sharp images with the 80D?

I've attached a few screenshots from Lightroom where I zoomed in to 300%:
 

Attachments: 80D sharpness 300% crop (ISO 160).jpg, 80D sharpness 300% crop (ISO 200).jpg, 80D sharpness 300% crop (ISO 400).jpg
OK. I have a question that to some might seem silly. Assuming I am doing a nighttime photo of the Milky Way or another astrophotography image, and assuming my options are a 14mm lens vs. a 24mm lens with stitching, which would give me better IQ? I know that's a vague and broad question, but let's assume the same body and sensor are used with each lens, and the same equivalent exposure and processing are used. Let's also assume that there are no DOF or focus variations between the two setups and both are done on a rock-solid set of legs. The only variable is the lens.

Seems to me that the total amount of information recorded is higher with a two-shot stitched exposure than with a single-shot, single exposure. Therefore, would the two-shot stitched image (of course perfectly put together into a single image) have better resolution than a perfectly exposed single image? Or would any cropping make them about the same?
 
JPAZ said:
OK. I have a question that to some might seem silly. Assuming I am doing a nighttime photo of the Milky Way or another astrophotography image, and assuming my options are a 14mm lens vs. a 24mm lens with stitching, which would give me better IQ? I know that's a vague and broad question, but let's assume the same body and sensor are used with each lens, and the same equivalent exposure and processing are used. Let's also assume that there are no DOF or focus variations between the two setups and both are done on a rock-solid set of legs. The only variable is the lens.

Seems to me that the total amount of information recorded is higher with a two-shot stitched exposure than with a single-shot, single exposure. Therefore, would the two-shot stitched image (of course perfectly put together into a single image) have better resolution than a perfectly exposed single image? Or would any cropping make them about the same?
Stitching results in a higher resolution file. Whether that translates into a more detailed representation of the subject depends on the quality of the lenses being compared, as well as what settings are being used.

FYI, two shots with a 24mm will not give you a 14mm angle of view. The aspect ratio is 3:2, so you'd need a 20mm lens shot in counter-orientation (portrait), from which you can then trim to a 14 or 15mm-equivalent angle of view (a quick geometry check follows the attachment):
 

Attachments: 15mm via 20mm stitching.jpg
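
A quick sanity check of the geometry above, assuming a 36x24mm full-frame sensor (the angle of view for one dimension is 2*atan(d / 2f)):

Code:
import math

def aov_deg(dim_mm, f_mm):
    return math.degrees(2 * math.atan(dim_mm / (2 * f_mm)))

print(aov_deg(24, 14))  # 14mm, landscape, vertical: ~81 deg
print(aov_deg(24, 24))  # 24mm, landscape, vertical: ~53 deg -- panning sideways can't widen this
print(aov_deg(36, 20))  # 20mm, portrait, vertical:  ~84 deg -- covers the 14mm frame's height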
What is the purpose of JPAZ asking the question about astro-photography in this thread?
That subject has no relation to the subject of this thread.
Or is he just killing the interesting and informative replies about the 80D's IQ and IQ in general that keep coming?
 
Gosh, sorry to make you think I was hijacking. Just a train of thought that came into my befuddled brain with all the discussion about pixel sensitivity and density, and because I do appreciate the knowledge of others. Have a great weekend.....
 
YuengLinger,
Just trying to be helpful here so please take my random thoughts with a grain of salt.
For birding with my 7D Mark II I usually use my 400mm f/5.6L prime lens. That should be about the same as your 100-400mm lens wide open. With that lens, even in bright sunlight, there is noise in the photograph even at ISO 100. There is virtually no noise in my pictures when I use my 70-200mm f/2.8. Using the 400mm on my 6D significantly reduces noise. This leads me to believe that a crop-sensor camera will always have more noise than a full-frame camera, settings being equal. I am not sure how DPAF affects the noise levels.
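
As a rough back-of-the-envelope for why that is (only a sketch, assuming photon shot noise dominates and Canon's ~1.6x crop factor):

Code:
import math

crop_factor = 1.6
area_ratio = crop_factor ** 2      # full frame gathers ~2.56x the total light at equal settings
snr_ratio = math.sqrt(area_ratio)  # whole-image shot-noise SNR scales with sqrt of the light
stops = math.log2(area_ratio)      # ~1.35 stops of advantage for the full-frame image
print(area_ratio, snr_ratio, stops)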

The black line on the right underside of the cormorant's throat and along the top of the heron's beak is, I have found, usually caused by chromatic aberration correction along with over-sharpening. By reducing the sharpening I can usually get rid of that, unless the CA is extreme. In both cases the CA was caused by blown-out highlights.

IMHO the heron crop at 100% is way over-sharpened. I usually keep the sharpening to a minimum during RAW processing and then do my final sharpening on the JPEG image.

Often I find the photos straight out of the camera have too much contrast, so in RAW processing I usually have to reduce the brightness, contrast, shadows and highlights to get as much detail as possible out of the blown-out areas, and then use curves to bring up the dark areas of the photo.

I have also noticed that if you set the lens correction to a minimum (assuming you are using that feature), you will decrease the noise by quite a bit.

I do not know about comparing the photos to those of earlier camera bodies, but I don't think your photos looked too bad. The eyes of the cormorant were sharp and properly focused. As I said, I think the heron could be improved by reducing the sharpening.

Hope that helps.
 
haggie said:
What is the purpose of JPAZ asking the question about astro-photography in this thread?
That subject has no relation to the subject of this thread.
Or is he just killing the interesting and informative replies about the 80D's IQ and IQ in general that keep coming?

Hey haggie, HELP!!! You are urgently needed over in this other thread, where the topic of a new Canon 85mm lens has degenerated into discussions of gapless microlenses and global shutters. Please put on your forum police sirens and get over there!!

::) ::) ::)
 
WRT noise at ISO 100, what is the baseline for comparison?

Here are a couple of 300% crops at ISO 100. I don't have any complaints.
 

Attachments: 80D Noise - ISO 100a.jpg, 80D Noise - ISO 100b.jpg, 80D Noise - ISO 100c.jpg
StudentOfLight said:
WRT noise at ISO 100, what is the baseline for comparison?

Here are a couple of 300% crops at ISO 100. I don't have any complaints.

At first I was sure you were trying to be funny posting screen shots to make a point about IQ, but I'm beginning to wonder. ::)

Tried the 80D one more time, hopeful and open-minded. Unfortunately, the last one I got had a sensor that appeared to be slightly misaligned, as lenses that were fine on other bodies had to be AFMA'd to high positive numbers. Not a single lens needed less than +6; I was as high as +17. (Right, not one in negative territory.) I'm guessing a little QC lapse here, as it seemed new and the shipping and camera boxes were pristine...I've banged bodies around and not had this happen.

If you like your 80D, you can keep your 80D. Great on paper, adequate in person. I'd rather get close to the promised IQ in a smaller package, so...

Final result, Canon, is you've inspired me to explore the world of mirrorless, and I'm now taking a hard look at the Fuji X-T2. No, I wouldn't expect it to be excellent for shots of birds, but I really need something small, light, and with good IQ to have for family, travel, and the OCD photography I do every chance I get.

Still love my full-frame Canon! But I'm not waiting around for Canon to release a mediocre mirrorless, then another one incrementally better, and so on.
 
YuengLinger said:
StudentOfLight said:
WRT noise at ISO 100, what is the baseline for comparison?

Here are a couple of 300% crops at ISO 100. I don't have any complaints.

At first I was sure you were trying to be funny posting screen shots to make a point about IQ, but I'm beginning to wonder. ::)

I thought he was posting screenshots to demonstrate noise at ISO 100.
 
Mikehit said:
YuengLinger said:
StudentOfLight said:
WRT noise at ISO 100, what is the baseline for comparison?

Here are a couple of 300% crops at ISO 100. I don't have any complaints.

At first I was sure you were trying to be funny posting screen shots to make a point about IQ, but I'm beginning to wonder. ::)

I thought he was posting screenshots to demonstrate noise at ISO 100.
I decided to post screenshots (which included the settings I was using) in order to be helpful. Settings like detail and masking, if applied incorrectly, will emphasize noise. Let me post another side-by-side comparison screenshot with "bad" settings to demonstrate the difference:
 

Attachments: 80D Noise - ISO 100 with 'bad' settings.jpg
davidj said:
That's a shame. There was a comment in there by Mount Spokane Photography (I think) about the M5 being slightly disappointing by having the features of a P&S, based on a read-through of the manual, and I was keen to see that thought elaborated.

The M5 still retains the PowerShot firmware...that is the one thing I am most concerned about. IMHO, it creates problems that wouldn't exist if they had just used the same Rebel firmware the M1 did. And the UI is NOWHERE near as fast as the M1's. Stupid Canon is stupid.
 