DxOMark Sensor Performance: Nikon vs. Canon

jrista said:
I do not believe I have made the mistake of confusing resolution with DR. I've never made any such argument. The point I have been trying to make is that the DR gain indicated by Print DR is explicitly dependent upon a TRADE for something else, in this case detail. The net result is really nil, as you're potentially gaining more DR (at least DR as DxO defines it) at the loss of potentially significant amounts of detail. My argument has been that DxO does not make this fact clear in the way they score cameras, which is rather misleading.

That seems like a sudden change in tune. For the last six months you were saying that the Print DR plots were garbage and that the only true way to compare cameras relative to one another was using the Screen DR numbers... and that you didn't believe in the Print normalization whatsoever; it was others who pointed out the tradeoffs that you now claim you were making all along. But whatever, if you are finally on board, then it's about time. ;D
 
Upvote 0
LetTheRightLensIn said:
jrista said:
I do not believe I have made the mistake of confusing resolution with DR. I've never made any such argument. The point I have been trying to make is that the DR gain indicated by Print DR is explicitly dependent upon a TRADE for something else, in this case detail. The net result is really nil, as you're potentially gaining more DR (at least DR as DxO defines it) at the loss of potentially significant amounts of detail. My argument has been that DxO does not make this fact clear in the way they score cameras, which is rather misleading.

That seems like a sudden change in tune. For the last six months you were saying that the Print DR plots were garbage and that the only true way to compare cameras relative to one another was using the Screen DR numbers... and that you didn't believe in the Print normalization whatsoever; it was others who pointed out the tradeoffs that you now claim you were making all along. But whatever, if you are finally on board, then it's about time. ;D

My argument has always been that you cannot realize a beneficial improvement in DR when you downscale, at least by the definition of DR that I was using. I freely admit I'm not generally very eloquent in wording my arguments, and I am trying to be clearer and more specific. According to elflord's explanation, the black point (and thus the S/N zero point) shifts closer to pure black when you average noise. While I'm willing to accept that description of DR as the math DxO uses to produce their specific numbers, it does not actually describe the kind of dynamic range explained by TheSuede in his reply to me just a few posts above. Theoretically it's sound... in the pure, ideal environment it is described within. I believe there are extenuating circumstances that are not generally factored into that neat and tidy theory. I could reiterate them, but I've done that so much that, if you want to know my stance on any particular argument, just reread my posts.

Just as I have always been arguing, Screen DR actually tells you about THE HARDWARE. Print DR is more like SQF: a normative but otherwise subjective (as it needs to be) mechanism for comparing IMAGES, or more specifically the amount of noise present in an image, and the resultant S/N when noise frequencies are normalized, produced by cameras on a level playing field. I understand the purpose of normalizing images to put NOISE into the same frequency range. I also understand the purpose of normalizing images for the sole, pure purpose of producing a workable model within which to score sensors on that same level playing field. But there are scores, and then there are realities...

I refuse to accept that any movement in the black point results in anything useful, as in an increased ability to recover detail. The simple act of averaging costs you a significant amount of detail (in the case of the D800, by a factor of 4.5). Additionally, the kind of leeway we are all familiar with when it comes to RAW exposure latitude is reduced by orders of magnitude once you convert to RGB (namely, the brightest highlights and deepest shadows are relatively rigid and do not have much leeway to be adjusted... they are essentially as "baked in" as noise; push them too far and you either clip or block, and end up with muddy gray/brown lifted shadows or dull, grayish sorta-highlights). So assuming you wanted to try to recover those deeper shadows from a TIFF, you might be able to recover a little, but nowhere near the four to six stops you might with the original, and thus unscaled, RAW. I consider the normalization of noise to be an entirely different concept, for an entirely different purpose, than dynamic range... always have. This whole argument hinges on what DxO is describing with the terms "Print DR" and "Landscape Score". Referring to the change as a useful improvement in dynamic range, one that should thus give you the ability to recover even more detail from shadows that would otherwise be even deeper into noise than you could recover before, is simply not accurate. The information buried that deeply in the noise floor is well and truly gone; it cannot be recovered by any means. All you can do is make noise darker by averaging, but that further destroys USEFUL detail and simply makes the detail that was already consumed by noise (as well as the noise itself) a deeper shade of black. It does not make it any more usable, useful, or "recoverable".
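For what it's worth, the mechanics being argued over here can be made concrete with a minimal simulation (an idealized sketch assuming pure Gaussian read noise only; this is not DxO's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "black frame": pure Gaussian read noise in arbitrary units.
# (An idealized model only -- real sensors add pattern noise, dark current, etc.)
read_noise = 3.0
frame = rng.normal(0.0, read_noise, size=(1200, 1200))

# Downscale by 3x3 block averaging: a 9:1 pixel reduction, i.e. the detail
# trade being argued about above.
h, w = frame.shape
small = frame.reshape(h // 3, 3, w // 3, 3).mean(axis=(1, 3))

print(round(frame.std(), 2))   # ~3.0 -> noise floor of the native-resolution file
print(round(small.std(), 2))   # ~1.0 -> std drops by sqrt(9) = 3 after averaging

# In stops, the measured DR gain from averaging k pixels together is 0.5*log2(k).
k = 36.0 / 8.0                 # D800 example: 36 MP down to an 8 MP reference = 4.5
print(round(0.5 * np.log2(k), 2))   # ~1.08 stops of "Print DR" gain
```

Whether that roughly one stop counts as a "useful" improvement is exactly the disagreement in this thread; the sketch only shows where the number comes from mechanically, and that it is bought with the 4.5x reduction in pixel count.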

If the mathematical definition of DxO's Print DR simply refers to the normalization of noise, which concurrently reduces detail as it reduces the noise floor (black point, S/N = 0 dB), so be it. But I do not believe that is how most people grasp the concept of dynamic range, hence the complaint about misleading scores, numbers, and terminology; hence the general confusion about what, exactly, DxO's "Landscape" score actually means, and the frustration and anger that the "Landscape" score carries so much weight in DxO's model. Now, I am happy to accept that it is DxO's prerogative to decide how they weight and distribute points within their own scoring model. It's just that there are reasons, valid reasons IMO, why people have a hard time with DxO's scores. I've tried to put a logical voice to those reasons.

I am trying to be clearer about my position in this grand debate. I'm trying to refine my stance, based on a better understanding of the opposing parties' positions, so we all know where everyone stands.
 
Upvote 0
jrista said:
LetTheRightLensIn said:
jrista said:
I do not believe I have made the mistake of confusing resolution with DR. I've never made any such argument. The point I have been trying to make is that the DR gain indicated by Print DR is explicitly dependent upon a TRADE for something else, in this case detail. The net result is really nil, as you're potentially gaining more DR (at least DR as DxO defines it) at the loss of potentially significant amounts of detail. My argument has been that DxO does not make this fact clear in the way they score cameras, which is rather misleading.

That seems like a sudden change in tune. For the last six months you were saying that the Print DR plots were garbage and that the only true way to compare cameras relative to one another was using the Screen DR numbers... and that you didn't believe in the Print normalization whatsoever; it was others who pointed out the tradeoffs that you now claim you were making all along. But whatever, if you are finally on board, then it's about time. ;D

My argument has always been that you cannot realize a beneficial improvement in DR when you downscale, at least by the definition of DR that I was using. I freely admit I'm not generally very eloquent in wording my arguments, and I am trying to be clearer and more specific. According to elflord's explanation, the black point (and thus the S/N zero point) shifts closer to pure black when you average noise. While I'm willing to accept that description of DR as the math DxO uses to produce their specific numbers, it does not actually describe the kind of dynamic range explained by TheSuede in his reply to me just a few posts above. Theoretically it's sound... in the pure, ideal environment it is described within. I believe there are extenuating circumstances that are not generally factored into that neat and tidy theory. I could reiterate them, but I've done that so much that, if you want to know my stance on any particular argument, just reread my posts.

Just as I have always been arguing, Screen DR actually tells you about THE HARDWARE. Print DR is more like SQF: a normative but otherwise subjective (as it needs to be) mechanism for comparing IMAGES, or more specifically the amount of noise present in an image, and the resultant S/N when noise frequencies are normalized, produced by cameras on a level playing field. I understand the purpose of normalizing images to put NOISE into the same frequency range. I also understand the purpose of normalizing images for the sole, pure purpose of producing a workable model within which to score sensors on that same level playing field. But there are scores, and then there are realities...

I refuse to accept that any movement in the black point results in anything useful, as in an increased ability to recover detail. The simple act of averaging costs you a significant amount of detail (in the case of the D800, by a factor of 4.5). Additionally, the kind of leeway we are all familiar with when it comes to RAW exposure latitude is reduced by orders of magnitude once you convert to RGB (namely, the brightest highlights and deepest shadows are relatively rigid and do not have much leeway to be adjusted... they are essentially as "baked in" as noise; push them too far and you either clip or block, and end up with muddy gray/brown lifted shadows or dull, grayish sorta-highlights). So assuming you wanted to try to recover those deeper shadows from a TIFF, you might be able to recover a little, but nowhere near the four to six stops you might with the original, and thus unscaled, RAW. I consider the normalization of noise to be an entirely different concept, for an entirely different purpose, than dynamic range... always have. This whole argument hinges on what DxO is describing with the terms "Print DR" and "Landscape Score". Referring to the change as a useful improvement in dynamic range, one that should thus give you the ability to recover even more detail from shadows that would otherwise be even deeper into noise than you could recover before, is simply not accurate. The information buried that deeply in the noise floor is well and truly gone; it cannot be recovered by any means. All you can do is make noise darker by averaging, but that further destroys USEFUL detail and simply makes the detail that was already consumed by noise (as well as the noise itself) a deeper shade of black. It does not make it any more usable, useful, or "recoverable".

If the mathematical definition of DxO's Print DR simply refers to the normalization of noise, which concurrently reduces detail as it reduces the noise floor (black point, S/N = 0 dB), so be it. But I do not believe that is how most people grasp the concept of dynamic range, hence the complaint about misleading scores, numbers, and terminology; hence the general confusion about what, exactly, DxO's "Landscape" score actually means, and the frustration and anger that the "Landscape" score carries so much weight in DxO's model. Now, I am happy to accept that it is DxO's prerogative to decide how they weight and distribute points within their own scoring model. It's just that there are reasons, valid reasons IMO, why people have a hard time with DxO's scores. I've tried to put a logical voice to those reasons.

I am trying to be clearer about my position in this grand debate. I'm trying to refine my stance, based on a better understanding of the opposing parties' positions, so we all know where everyone stands.

1. I think you are trying to normalize your claims to match what the others had been telling you for a long time. ;)
2. What do you think the bottom-end measurement for DR is? You measure the SNR at the black point. There is nothing more or less magical about their Print plots for DR compared to their Print plots for middle-gray SNR. You compare the darkest-level noise at the same noise scale to be fair. And yes, of course you can't both maintain the full MP count of detail and expect to get the Print DR at the same time, but you might find out that your 40 MP camera doesn't actually pale compared to your 8 MP camera, and maybe even beats it, if you compare them at the same scale for DR and SNR (see the sketch after the list below), even if at 100%, and thus at different scales, the new 40 MP might look noisier.

Anyway:
a. the fairer way to compare sensors is the Print plot, and DxO is not doing anything horrendous there

b. yes, the absolute numbers reported for the Print plots are basically arbitrary, in the sense that they are not anything to care about unless you happen to print at one very specific scale, view from one very specific distance, and downscale in one particular way; but they are the way to make relative comparisons between cameras and sensors that is a lot fairer than using the Screen plots (and for the longest time you had been insisting one must only use the Screen plots to compare cameras relative to one another, but whatever)

d. yes, it's generally better to compare the plots on DxO and pay less attention to the overall scores, since how do you possibly sum up a sensor in one single number that would satisfy everyone at once, or even a single person for all circumstances? You can't; it is just some chosen weighting and summation, and that only gives you a very general, mushed-together hint. Does a high score come because the cam is great at low-ISO DR, at high-ISO DR, at SNR, at color purity, etc.? Who knows. So it's better to look at the plots, or at least the lower-level scores (although even there the plots give a much clearer picture)

e. yeah, the lens tests at DxO DO seem to be pretty suspect; not sure what they are doing there. A different group tests them, I believe, and lens testing is MUCH trickier, with copy variation more relevant, but with all of the 300mm primes worse than L zooms, L zooms worse than non-L zooms, the 2.8 II worse than the original-version 70-200, and so on, it is kinda bizarre. I honestly don't bother even looking at their lens tests any more. The 300 2.8 IS II is trash? The 70-200 2.8 IS II worse than the 70-200 2.8 IS, better than the 70-200 2.8 non-IS? The 70-300 non-L better than the 70-300L and 300 f/4?? Not sure what to say.
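A rough sketch of the same-scale comparison described in point 2 and (b) above, assuming read-noise-limited shadows and a DxO-style 8 MP normalization; the camera figures and the helper name below are invented purely for illustration:

```python
from math import log2

REF_MP = 8.0  # assumed print-normalization reference

def print_dr(screen_dr_stops: float, mp: float) -> float:
    """Per-pixel ('Screen') DR plus the gain from averaging down to REF_MP."""
    return screen_dr_stops + 0.5 * log2(mp / REF_MP)

# Hypothetical bodies: a dense 40 MP sensor that is noisier per pixel, and an
# older 8 MP sensor that looks cleaner at 100%.
cameras = [("40 MP body", 11.0, 40.0), ("8 MP body", 11.5, 8.0)]

for name, screen_dr, mp in cameras:
    print(f"{name}: Screen {screen_dr:.2f} EV, Print {print_dr(screen_dr, mp):.2f} EV")
# At 100% (Screen) the 8 MP body wins; normalized to the same output size
# (Print), the 40 MP body pulls ahead: 11.0 + 0.5*log2(5) ~= 12.16 EV vs 11.50 EV.
```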
 
Upvote 0
Woody said:
Has anyone looked at the results posted at Senscore:

http://www.senscore.org/

Even though I am aware their dynamic range results are averaged over the entire available ISO range, I cannot see how the 5D3 can be that much better than the 5D2 and 1Ds3. Comments?

thanks for the link, never looked at them before

The 5D3 is significantly better than the 5D2 at high ISO from what I've seen of it, possibly enough to cause this scoring, since we don't really know what weighting they give to which areas of performance to come up with a vague final score like that.
Perhaps they also counted the 5D3's excellent AF system as worth more points.
 
Upvote 0
And the other take-away from that link is the rumors section mentioning the D4x at 54 MP, right where I'd expect it to be, 2.25x the D3200's sensor.

When that comes out we can start a whole new round of, uhm, sophisticated deliberation when its merits are measured by DxO and published against whatever Canon will have to compare to it.
 
Upvote 0
Aglet said:
And the other take-away from that link is the rumors section mentioning the D4x at 54 MP, right where I'd expect it to be, 2.25x the D3200's sensor.

When that comes out we can start a whole new round of, uhm, sophisticated deliberation when its merits are measured by DxO and published against whatever Canon will have to compare to it.

Oh my God. I would hate to shoot 54 MP. The difference from 22 to 36 in print is small, but the file size increase is huge. I would imagine the jump from 36 to 54 would be a nightmare to post-process, with only a couple of inches gained in print size. So not worth it. Just my two cents.
 
Upvote 0
Aglet said:
And the other take-away from that link is the rumors section mentioning the D4x at 54 MP, right where I'd expect it to be, 2.25x the D3200's sensor.

When that comes out we can start a whole new round of, uhm, sophisticated deliberation when its merits are measured by DxO and published against whatever Canon will have to compare to it.
LOL! You are choosing your words beautifully! I love your description of a normal everyday C vs. N bar brawl ;)
 
Upvote 0
Woody said:
Has anyone looked at the results posted at Senscore:

http://www.senscore.org/

Even though I am aware their dynamic range results are averaged over the entire available ISO range, I cannot see how the 5D3 can be that much better than the 5D2 and 1Ds3. Comments?

Who knows. When you mish-mash so many factors into a single score and nothing is explained....
It does have better DR than the 5D2 and 1Ds3 at higher ISO (although a trace worse than the 5D2 at lower ISO and worse than the 1Ds3 at lower ISO).
 
Upvote 0
compupix said:
Canon has some catching up to do with respect to sensor performance as measured by http://www.DxOMark.com. Canon doesn't even come close to the top-performing Nikons (higher score is better):

Pts Model
=======
96 Nikon D800E
95 Nikon D800
94 Nikon D600
81 Canon 5D III
79 Canon 5D II

(The Canon 1Dx is not yet rated.)
What are the chances that one of the reasons for the new sensor in the 6D is to catapult Canon's sensor performance into the mid 90's? I can't see Canon doing that considering the $3,500 EOS 5D III just came out and has a score of just 81. But Nikon's new $2,100 D600 kicks butt with a score of 94!

Sensor performance isn't everything... but, if I were to choose Nikon or Canon today, I wouldn't be choosing Canon.

Are these numbers linear? Is 100 points two times better than 50 points?

I don't think so. My 40D has something around 60 points and produces very good image quality (cleanliness, plasticity, contrast). The 5D models produce extremely good image quality at around 80 points, and the latest Nikons have 90-95 points and also produce extremely good image quality. But I am sure that the difference between the 40D and the 5D II/III plus Nikon's latest FF models is obvious for every parameter, while the 15-point difference between the 5D II/III and the Nikon models is less obvious and depends on the shooting scenario.

So in my opinion the DxOMark values fall short in two respects:
  • What does a difference of, say, 10 points mean in relation to its starting point?
  • Are they measuring all the parameters, and the right parameters, that define image quality for the broad spectrum of applications?

I really like the two stops of additional DR which the Nikons have as a real advantage. They are, for now, the winner in that discipline. But should I throw away all my Canon glass when Canon will likely enhance its sensors in the next few years to reach similar DR (I am sure that will happen)? NO, that would be a waste of money for me.
Canon has one problem with the current situation: they could buy Sony sensors, but perhaps they don't want to do that. Or they have to work around Sony's patents and find another way to increase DR. That's a guess of mine, but that's the way these things go: someone holds a patent which protects a broadly defined technical measure, so others have to invent essentially new ways to increase their performance.
That is a chance for us photographers: Canon has to fight for an alternative sensor concept with increased DR, and perhaps a new design will also improve other parameters like color fidelity, readout speed, etc.

Today the bodies behind the lenses change much more often than the lenses do! So I stay calm and observe the market while I keep taking photographs.
 
Upvote 0
neuroanatomist said:
sanj said:
I had NOT paid ANY attention to the Canon/Nikon debate so far. But the pictures posted here by Mr. Risedal make me sit up and take notice.
And take notice is the only thing I can do, as I have Mr. X, 3 and a whole bunch of lenses already.
I was happily cruising along and then I see these photos... :(

So...one guy takes a few pictures with a specific agenda in mind, deliberately choosing an exposure that is not optimal (and not just a little off - several stops underexposed), and then processes them in ways which may be totally irrelevant to your images, and that makes you doubt your decision to shoot with Canon gear?

Neuro, you are one of the most respected guys on this forum and I love your photos on Flickr. But if the two photos which Risedal posted were at IDENTICAL settings, then to my eye the difference in IQ is significant. I do realize there is much more to photography and cameras, but I guess here we are discussing only IQ. I do not find anything lacking in my cameras, but I would like to know if some other camera has better IQ than mine, all else being equal. Just to be aware. Am trying to learn..
 
Upvote 0
sanj said:
I would like to know if some other camera has better IQ than mine all else being equal. Just to be aware. Am trying to learn..

Yes, some other camera has better IQ than yours. Several others, in fact. It does depend, of course, on how you define 'better'. DxOMark doesn't define it, for me.
 
Upvote 0
neuroanatomist said:
sanj said:
I would like to know if some other camera has better IQ than mine all else being equal. Just to be aware. Am trying to learn..

Yes, some other camera has better IQ than yours. Several others, in fact. It does depend, of course, on how you define 'better'. DxOMark doesn't define it, for me.

Well, that settles it then: retail stores need to totally reorganize their stock. We need to see every camera in the store tagged with an index card indicating the DxO score in big bold numbers. That way we don't have to worry about brand or price.
 
Upvote 0
jrista said:
I fully understand how print works. I've been printing for many years, I calibrate my own papers, etc. Don't confuse PPI and DPI. Dots per inch (DPI) in a print is not necessarily the same as pixels per inch (PPI). For a normal inkjet print, printers are usually 2400x1200 or 2880x1440 dpi, depending on the brand. That is the number of discrete ink droplets per inch; it is usually constant (some printers allow you to change DPI) and has little to do with the print resolution other than possibly having a fixed ratio to the PPI. One can choose to print at a variety of "resolutions", or "print pixel densities". Technically speaking one could print at any PPI, although it is best to print at one that evenly divides the driver's highest native input resolution. In the case of Epson, that would be anything that cleanly divides 720, and for the rest anything that cleanly divides 600. Thus we get 720/600 ppi, 360/300 ppi, 180/150 ppi, and possibly 90/75 ppi for those rare gargantuan prints at 60 inches plus.
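As an aside, the divisor logic in the quoted explanation can be sketched as follows (the 720 and 600 ppi native driver input resolutions are taken from the quote; exact figures vary by printer driver, and the helper name is hypothetical):

```python
def clean_ppis(native_ppi: int, minimum: int = 72) -> list[int]:
    """Output resolutions that divide the driver's native input resolution evenly."""
    return [p for p in range(minimum, native_ppi + 1) if native_ppi % p == 0]

print(clean_ppis(720))  # [72, 80, 90, 120, 144, 180, 240, 360, 720]  (Epson-style drivers)
print(clean_ppis(600))  # [75, 100, 120, 150, 200, 300, 600]          (600 ppi drivers)
```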

Thank you for correcting that (dpi vs ppi); one shouldn't write stuff like this while still intoxicated... :) Though I do think you understood what I meant. I've read the rest of your replies as well, and I think I see what you mean.

I'm not too sure I fully agree with your seemingly total dismissal of the validity of DxO's "print DR". But I think I understand why you object to the word "print" being used to signify the scaling operation. "Print DR" may be a semantically misleading label; what they really should have written is:
"per pixel when scaled to the MP amount necessary to make an A4-size 300dpi print"
-but that's a bit long-winded when your available space for labels in the graph/value boxes is about 5-15 characters long... :)
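For reference, the arithmetic behind that proposed label (A4 paper dimensions, plus the commonly cited 8 MP "Print" reference, which is an assumption here rather than DxO's published internals):

```python
# A4 is about 8.27 x 11.69 inches; at 300 ppi that works out to:
w_px, h_px = round(8.27 * 300), round(11.69 * 300)   # 2481 x 3507 pixels
print(round(w_px * h_px / 1e6, 1))                   # ~8.7 MP

# DxO's print reference is usually quoted as 8 MP (roughly an 8 x 12 in print
# at 300 dpi), so an "A4 at 300 dpi" description lands close to the same target.
```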
.......

Other than this, my only criticism is that they over-inflate the DR values in the "print" mode, by a constant error of about +0.3 to +0.35 EV (or bits). But since they do so equally for all cameras with 8/(0.7^2) ≈ 16 MP or more, it hardly matters. It still shows the comparative difference correctly; it does not favor or handicap any camera.
jrista said:
Thanks to dithering in the RIP, the total number of dots per pixel printed, and the placement of dots of each color within each pixel, can amount to a HUGE volume of colors. "Dots" need not be placed purely side by side; they can overlap in different colors as necessary to create a tremendous range of color and tonality, largely limited only by the type of paper (which dictates ink/black density and white point). It should also be noted that the human eye cannot actually differentiate 16 million colors. Most scientific estimates put the number of "colors" at around 2-3 million. Our eyes are much more sensitive to tonality, the grades of shades, which is also not necessarily the same thing as color. Tonality in print is more dependent on the paper than on the inks used or the dots placed. Gamut, the range of color (as well as maximum potential black density), is more dependent on the inks used.

In terms of PPI, pixel size in print can indeed be translated to/from pixel size on screen. So long as you know the pixel densities of both, there is a clear translation factor. My screen is 103 ppi, which means I have to zoom images down to around 33% of their original size to get a rough idea of how all that detail will look in print. Zooming will NEVER tell the whole picture, though, since zooming or scaling on a computer works by averaging information. A print DOES NOT average, at least not the way I print. I can print one of my 7D photos without any scaling at all on a 13x19" page with a small border, at 300 ppi, with the printed area itself covering 17.28x11.52". The print contains exactly the same information as my 100%, uncropped, native image straight out of the camera. That print simply stores the information more densely. A 13x19" print is comfortably viewed (at full visual acuity) within a few feet. My point about print is that it is not scaling... it is the same original information that came out of the camera (plus any PP), just represented in a denser manner.

Only a very incompetent RIP engine will translate an image pixel to a certain square piece of real estate on the paper. All modern RIP engines that I know of actually upsample the base image by quite a lot to be able to extract the maximum amount of detail per print dot in the end result. A pixel in the image sent to the printer is NOT directly translated into ink dots on a square area of paper. Try viewing a print under a microscope and see for yourself. Not an important point, but your argument about "no scaling" is still invalid in reality with all modern printers and commercial RIP engines.

And your text later on, about the eye's behavior, inherently shows that you do actually understand exactly what I'm talking about (downsampling / area-averaged noise scaling), just by backing away from the print! A very large print that looks slightly unsharp, and also noisy (when you inspect it really close up), will:
a) seem to be sharper
b) look less noisy
-when you take one or a few steps back.

This is the downsampling effect. Since the linear resolution of the eye cannot resolve individual screen pixels or print dots when you take a step back, the eye itself averages (downsamples!) the target area's actual information content into a lower-resolution, lower-noise image, just as it downsamples the RIP's print-dot formations into a constant-tone interpretation.
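The "step back" effect can be put into rough numbers using the usual ~1 arcminute acuity rule of thumb (an assumption about an average viewer, not a measurement; the function name is hypothetical):

```python
import math

def merge_distance_inches(ppi: float, acuity_arcmin: float = 1.0) -> float:
    """Distance beyond which a ~20/20 eye can no longer resolve individual pixels."""
    pitch = 1.0 / ppi                                   # pixel pitch in inches
    return pitch / math.tan(math.radians(acuity_arcmin / 60.0))

print(round(merge_distance_inches(300), 1))  # ~11.5 in: a 300 ppi print "merges"
print(round(merge_distance_inches(180), 1))  # ~19.1 in: coarser prints need more distance
```

Past that distance the eye is effectively doing the area averaging described above: per-pixel noise and per-pixel detail blur together, which is the same trade the Print normalization models numerically.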
 
Upvote 0
TheSuede said:
Other than this, my only criticism is that they over-inflate the DR values in the "print" mode, by a constant error of about +0.3 to +0.35 EV (or bits). But since they do so equally for all cameras with 8/(0.7^2) ≈ 16 MP or more, it hardly matters. It still shows the comparative difference correctly; it does not favor or handicap any camera.

In the real world you would likely use advanced NR anyway, though, which might bring those 0.3 EV back too.
 
Upvote 0
TheSuede said:
Other than this, my only criticism is that they over-inflate the DR values in the "print" mode, by a constant error of about +0.3 to +0.35 EV (or bits). But since they do so equally for all cameras with 8/(0.7^2) ≈ 16 MP or more, it hardly matters. It still shows the comparative difference correctly; it does not favor or handicap any camera.

Hmm, where does that additional inflation come from? You have me curious now...

TheSuede said:
jrista said:
Thanks to dithering in the RIP, the total number of dots per pixel printed, and the placement of dots of each color within each pixel, can amount to a HUGE volume of colors. "Dots" need not be placed purely side by side; they can overlap in different colors as necessary to create a tremendous range of color and tonality, largely limited only by the type of paper (which dictates ink/black density and white point). It should also be noted that the human eye cannot actually differentiate 16 million colors. Most scientific estimates put the number of "colors" at around 2-3 million. Our eyes are much more sensitive to tonality, the grades of shades, which is also not necessarily the same thing as color. Tonality in print is more dependent on the paper than on the inks used or the dots placed. Gamut, the range of color (as well as maximum potential black density), is more dependent on the inks used.

In terms of PPI, pixel size in print can indeed be translated to/from pixel size on screen. So long as you know the pixel densities of both, there is a clear translation factor. My screen is 103 ppi, which means I have to zoom images down to around 33% of their original size to get a rough idea of how all that detail will look in print. Zooming will NEVER tell the whole picture, though, since zooming or scaling on a computer works by averaging information. A print DOES NOT average, at least not the way I print. I can print one of my 7D photos without any scaling at all on a 13x19" page with a small border, at 300 ppi, with the printed area itself covering 17.28x11.52". The print contains exactly the same information as my 100%, uncropped, native image straight out of the camera. That print simply stores the information more densely. A 13x19" print is comfortably viewed (at full visual acuity) within a few feet. My point about print is that it is not scaling... it is the same original information that came out of the camera (plus any PP), just represented in a denser manner.

Only a very incompetent RIP engine will translate an image pixel to a certain square piece of real estate on the paper. All modern RIP engines that I know of actually upsample the base image by quite a lot to be able to extract the maximum amount of detail per print dot in the end result. A pixel in the image sent to the printer is NOT directly translated into ink dots on a square area of paper. Try viewing a print under a microscope and see for yourself. Not an important point, but your argument about "no scaling" is still invalid in reality with all modern printers and commercial RIP engines.

Is it that they "upsample" the image? Or is it more that they "transform" the image into an entirely different form...a form of layers of tiny dots of a specific color, selected from the range of ink colors available in the printer, arranged (dithered) in such a way as to produce an accurate color reproduction of the original source, which is ultimately exactly what gets laid down onto paper by the printer hardware itself? In a sense, the information streaming out of the RIP is almost always at a higher density, as DPI is usually at least two, and often many more, times greater than PPI. Depending on the printer, it may be an "image" representing ink droplets containing 8-11 color components in resolutions as high as 2400x1200, 4800x2400, 2880x1440, 5760x1440 dots per inch, which in the case of say a 13x19" print might be as high as 74,880x27,360 dots per page, or 2,048,716,800 (2 billion) dots total!
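A quick sketch of that dot-budget arithmetic (the 13x19 in sheet, the 5760x1440 dpi driver setting, and the 300 ppi image resolution are the figures from the example above):

```python
# Ink-dot budget for a 13 x 19 in sheet at 5760 x 1440 dpi, versus the pixel
# count actually sent to the driver at 300 ppi.
w_in, h_in = 13.0, 19.0
dots = (5760 * w_in) * (1440 * h_in)   # 74,880 x 27,360 dots
pixels = (300 * w_in) * (300 * h_in)   # 3,900 x 5,700 pixels

print(f"{dots:,.0f} dots")             # 2,048,716,800 (~2 billion)
print(f"{pixels:,.0f} pixels")         # 22,230,000
print(round(dots / pixels))            # ~92 ink dots available per image pixel
```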

I actually own a loupe that I used to use to examine the actual dots laid down by my printer when I first started printing. I was pretty fascinated with the whole thing back then (and we're talking many years ago now). I tend to examine my prints for other quality factors now... like white point and Dmax, the amount of detail as tones fall off into black, tonality across the board, color gamut, bronzing and metamerism (if I'm printing on a paper type that exhibits those), etc. None of that requires that I look at the actual dots laid down on the paper surface. But I know what you are talking about.

TheSuede said:
And your text later on, about the eye's behavior, inherently shows that you do actually understand exactly what I'm talking about (downsampling / area-averaged noise scaling), just by backing away from the print! A very large print that looks slightly unsharp, and also noisy (when you inspect it really close up), will:
a) seem to be sharper
b) look less noisy
-when you take one or a few steps back.

This is the downsampling effect. Since the linear resolution of the eye cannot resolve individual screen pixels or print dots when you take a step back, the eye itself averages (downsamples!) the target area's actual information content into a lower-resolution, lower-noise image, just as it downsamples the RIP's print-dot formations into a constant-tone interpretation.

Well, I see what you are saying. I am not sure I would call what the brain (rather than the eye, since it is not really the eye doing the processing) does as you back farther and farther away from a print "downsampling". I tend to think of the brain more along the lines of a highly efficient super resolution processor. Our eye/brain vision center has a "refresh rate" of about 500 frames per second. However, due to the way our brain additively processes that information in a kind of "circular buffer", it is always adding more recent information to information it already has (while discarding the oldest information), to produce the crystal-clear, high resolution, ultra high dynamic range world we see. From what I understand, the cones (color sensitive cells) in our eyes don't have anywhere near the kind of density to support 1 arc second of color visual acuity, and the rods are barely close enough. A lot of our visual acuity is due to how our brains process the visual information received...and our acuity is a bit higher than the biological devices of our eyes would really attest to.

There are other complexities with vision as well... such as the way the brain maximizes perception in the central 2° foveal spot while purposely diminishing perception and acuity in the outer 10° region. There are also our blind spots, which kind of throw a wrench into the mix when trying to determine what "resolution" our eyes see at, or what the brain is actually doing with the constant stream of visual information it receives from the eyes.

So, I'm not sure I would call anything that is done with a print downsampling in any manner. When it comes to vision, I consider what our brain does to be more along the lines of additive supersampling (super resolution), the output of which does diminish as distance increases, but I still wouldn't call it downsampling.
 
Upvote 0
neuroanatomist said:
sanj said:
I would like to know if some other camera has better IQ than mine all else being equal. Just to be aware. Am trying to learn..

Yes, some other camera has better IQ than yours. Several others, in fact. It does depend, of course, on how you define 'better'. DxOMark doesn't define it, for me.

I have no clue about DxO. Never been to their site, and I do not know enough technical stuff to understand complex charts etc. I do know what 'better' IQ means to me: less noise; crisper images BOTH at high ISO and at ISO 100; dynamic range (within reason, given the current technology and the lighting) that helps me avoid clipped whites and dead blacks; and the sharpest results from sensors of the same size in production.
Am I going to be shot for saying these things?
 
Upvote 0