Other than this, my only criticism is that they over-inflate the DR values in the "print" mode, by a constant error of about +0.3 to +0.35 EV (or bits). But since they do so equally for all cameras with 8/(0.7²) ≈ 16MP or more, it hardly matters: it still shows the comparative differences correctly, and it does not favor or handicap any camera.
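For reference, the normalization itself is easy to sketch: downsampling to an 8MP "print" reference averages noise over more pixels, so the expected DR gain is half the log2 of the pixel ratio. A minimal sketch, assuming that 8MP reference is what they use; the extra +0.3 EV offset I'm describing would sit on top of this expected gain:

```python
import math

def print_dr_gain(sensor_mp, reference_mp=8.0):
    """EV gained by downsampling a sensor_mp image to the reference_mp
    "print" size: noise averages down as the square root of the pixel
    ratio, so DR rises by half the log2 of that ratio."""
    return 0.5 * math.log2(sensor_mp / reference_mp)

# A 16MP sensor gains 0.5 EV over its "screen" value when
# normalized to the 8MP print size; an 8MP sensor gains nothing.
print(print_dr_gain(16))  # 0.5
print(print_dr_gain(8))   # 0.0
```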
Hmm, where does that additional inflation come from? You have me curious now...
Thanks to dithering by the RIP, the total number of dots printed per pixel, and the placement of dots of each color within each pixel, can amount to a HUGE volume of colors. "Dots" need not be placed purely side-by-side; they can overlap in different colors as necessary to create a tremendous range of color and tonality, largely limited only by the type of paper (which dictates ink/black density and white point). It should also be noted that the human eye cannot actually differentiate 16 million colors. Most scientific estimates put the number of distinguishable "colors" at around 2-3 million. Our eyes are much more sensitive to tonality, the grades of shades, which is not necessarily the same thing as color. Tonality in print is more dependent on the paper than on the inks used or the dots placed. Gamut, the range of color (as well as maximum potential black density), is more dependent on the inks used.
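The dot arithmetic behind that is standard halftoning math: a cell of n×n bi-level dots can render n²+1 tone levels per ink, and levels combine across inks. A rough sketch (the 8-dot cell and 4 inks here are just illustrative numbers, and real printable gamuts are far smaller than the naive product):

```python
def tones_per_cell(cell_size):
    """A cell of cell_size x cell_size bi-level dots can render
    cell_size**2 + 1 distinct tone levels for one ink
    (from zero dots on up to all dots on)."""
    return cell_size ** 2 + 1

def rough_color_count(cell_size, num_inks):
    """Upper-bound estimate: tone levels per ink, combined
    independently across inks. Real gamuts are much smaller."""
    return tones_per_cell(cell_size) ** num_inks

print(tones_per_cell(8))        # 65 tone levels per ink
print(rough_color_count(8, 4))  # 17850625 nominal combinations
```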
In terms of PPI, pixel size in print can indeed be translated to/from pixel size on screen. So long as you know the pixel densities of both, there is a clear translation factor. My screen is 103ppi, which means I have to zoom images down to around 34% of their original size (103/300) to get a rough idea of how all that detail will look in print. Zooming will NEVER tell the whole picture, though, since zooming or scaling on a computer works by averaging information. A print DOES NOT average, at least not the way I print. I can print one of my 7D photos without any scaling at all on a 13x19" page with a small border, at 300ppi, with the printed area itself covering 17.28x11.52". The print contains exactly the same information as my 100%, uncropped, native image straight out of the camera. That print simply stores the information more densely. A 13x19" print is comfortably viewed (at full visual acuity) within a few feet. My point about print is that it is not scaling...it is the same original information that came out of the camera (plus any PP), just represented in a denser manner.
Only a very incompetent RIP engine will translate an image pixel to a fixed, square piece of real estate on the paper. All modern RIP engines that I know of actually upsample the base image by quite a lot to be able to extract the maximum amount of detail per print dot in the end result. A pixel in the image sent to the printer is NOT directly translated into ink dots on a square area of paper. Try viewing a print under a microscope and see for yourself. Not an important point, but your argument about "no scaling" is still invalid in reality with all modern printers and commercial RIP engines.
Is it that they "upsample" the image? Or is it more that they "transform" the image into an entirely different form...a form of layers of tiny dots of a specific color, selected from the range of ink colors available in the printer, arranged (dithered) in such a way as to produce an accurate color reproduction of the original source, which is ultimately exactly what gets laid down onto paper by the printer hardware itself? In a sense, the information streaming out of the RIP is almost always at a higher density, as DPI is usually at least two, and often many more, times greater than PPI. Depending on the printer, it may be an "image" representing ink droplets containing 8-11 color components at resolutions of 2400x1200, 4800x2400, 2880x1440, or 5760x1440 dots per inch, which in the case of, say, a 13x19" print might be as high as 74,880x27,360 dots per page, or 2,048,716,800 (2 billion) dots total!
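That dot count checks out with simple arithmetic:

```python
def dots_per_page(dpi_x, dpi_y, width_in, height_in):
    """Total ink-dot positions addressable on a page at a given
    printer resolution."""
    return (dpi_x * width_in) * (dpi_y * height_in)

# 5760x1440 dpi across a full 13x19" sheet:
print(dots_per_page(5760, 1440, 13, 19))  # 2048716800, about 2 billion
```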
I actually own a loupe that I used to use to examine the actual dots laid down by my printer when I first started printing. I was pretty fascinated with the whole thing back then (and we're talking many years ago now). I tend to examine my prints for other quality factors now...like white point and dmax, as well as the amount of detail as tones fall off into black, tonality across the board, color gamut, bronzing & metamerism (if I'm printing on a paper type that exhibits those), etc. None of that requires I look at the actual dots laid down on the paper surface. But I know what you are talking about.
And your text later on, about the eye's behavior, shows that you do actually understand exactly what I'm talking about (downsampling, i.e. area-average noise scaling) - just by backing away from the print! A very large print that looks slightly unsharp, and also noisy (when you inspect it up really close), will:
a) seem to be sharper
b) look less noisy
-when you take one or a few steps back.
This is the downsampling effect. As the linear resolution of the eye cannot resolve individual screen pixels or print dots when you take a step back, the eye itself averages (downsamples!) the target area's actual information content into a lower-resolution, lower-noise image. Just as the eye downsamples the dot formations laid down by the RIP into a constant-tone interpretation.
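The noise half of that effect is easy to demonstrate: averaging blocks of samples (which is effectively what the eye does once it can no longer resolve individual dots) cuts the noise standard deviation by the square root of the block size. A toy 1D sketch:

```python
import random
import statistics

random.seed(0)

# A flat mid-gray "print" patch with additive noise, 1D for simplicity.
noisy = [0.5 + random.gauss(0, 0.1) for _ in range(40000)]

# "Step back": the eye can no longer resolve single dots, so each
# group of 4 samples blurs into one perceived value.
averaged = [sum(noisy[i:i + 4]) / 4 for i in range(0, len(noisy), 4)]

print(round(statistics.stdev(noisy), 3))     # ~0.100
print(round(statistics.stdev(averaged), 3))  # ~0.050 (noise roughly halves)
```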
Well, I see what you are saying. I am not sure I would call what the brain (rather than the eye, since it is not really the eye doing the processing) does as you back farther and farther away from a print "downsampling". I tend to think of the brain more along the lines of a highly efficient super-resolution processor. Our eye/brain vision center has a "refresh rate" of about 500 frames per second. However, due to the way our brain additively processes that information in a kind of "circular buffer", it is always adding more recent information to information it already has (while discarding the oldest information), to produce the crystal-clear, high-resolution, ultra-high-dynamic-range world we see. From what I understand, the cones (color-sensitive cells) in our eyes don't have anywhere near the kind of density to support one arc minute of color visual acuity, and the rods are barely close enough. A lot of our visual acuity is due to how our brains process the visual information received...and our acuity is a bit higher than the biological devices of our eyes would really suggest on their own.
There are other complexities with vision, as well...such as the way the brain maximizes perception in the central 2° foveal spot, while purposely diminishing perception and acuity in the outer 10° region. There are also our blind spots, which kind of throw a wrench into the mix when trying to determine what "resolution" our eyes see at, or what the brain is actually doing with the constant stream of visual information it receives from the eyes.
So, I'm not sure I would call anything that is done with a print downsampling in any manner. When it comes to vision, I consider what our brain does to be more along the lines of additive supersampling (super resolution). The output of which does diminish as distance increases, but I still wouldn't call it downsampling.