Ryan_ said:
When in photoshop, I make a document 70" wide and 36" tall and set a PPI of 100 and add my image to enlarge it. Then I do the same thing except I make the PPI 300. When viewing each at 100% the 300ppi one looks terrible and the 100ppi one looks decent. The 300ppi file is much larger in MB as well.
That's expected: if you create a "blank" image and then put a photo into it, you have to manage any needed resampling yourself to show the photo at the correct size and resolution. Simply enlarging it will just add pixels with the simplest interpolation algorithm (new pixels identical to their neighbors), which leads to ugly results, especially for large changes. The file size difference is just arithmetic, too: a 70"×36" canvas is 7000×3600 px at 100 PPI but 21000×10800 px at 300 PPI, nine times the pixels.
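To see the effect of the interpolation choice outside Photoshop, here's a minimal sketch using Pillow (Pillow ≥ 9.1 for the Resampling enum; the file names are placeholders, not anything from the original question):

```
from PIL import Image

src = Image.open("photo.jpg")            # e.g. a smaller source photo
target = (21000, 10800)                  # 70" x 36" at 300 PPI

# Nearest neighbor just duplicates existing pixels -> blocky enlargement
blocky = src.resize(target, resample=Image.Resampling.NEAREST)

# A proper filter like Lanczos computes new in-between values -> much smoother
smooth = src.resize(target, resample=Image.Resampling.LANCZOS)

blocky.save("blocky.tif")
smooth.save("smooth.tif")
```

Photoshop's own resampling options (Bicubic, Preserve Details, etc.) play the same role as the filter choice here.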
IMHO, creating a blank image is mostly useful for people who use Photoshop to draw and paint from scratch; being able to define the output size and resolution up front helps them create an image that matches the specifications of the target output device, or that can be used as part of a more complex work.
With a photo, unless you want to create something more complex, like an ad or a magazine page, there's little need to create a blank image and paste a photo into it; and even then, a DTP application would manage that kind of project better than Photoshop.
If you're printing yourself, you can simply select the output size (and sometimes the resolution, depending on the printer) and let the print engine apply the transformations needed to output the image at the desired size. Of course, if there are not enough pixels in the input image and the process has to "create" a lot of new ones by resampling, some quality loss is inevitable (the same goes for reducing an image too much). When that loss becomes unacceptable depends on several factors.
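To make "enough pixels" concrete, a back-of-the-envelope check (plain Python; the numbers are just illustrative):

```
src_w, src_h = 7000, 3600            # source pixel dimensions (example)
print_w_in, print_h_in = 70, 36      # desired print size in inches
target_ppi = 300                     # resolution the output device expects

need_w = print_w_in * target_ppi     # 21000
need_h = print_h_in * target_ppi     # 10800
scale = need_w / src_w               # 3.0 -> ~9x the pixels must be invented
print(f"needed: {need_w}x{need_h} px, upscale: {scale:.1f}x per axis")
```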
If you have to send an image to an external service and they won't perform the above step for you, or you want more control, you can resize the image yourself to the final output size and resolution, have Photoshop resample it with the proper algorithm, and then perform the final tuning for the output device (using the soft-proof features), including sharpening. Plug-ins like PixelGenius PhotoKit Sharpener have presets for different output devices that simplify this step a lot. Then send the image for printing, or copy it into another image (at the same resolution) if needed.
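Roughly the same pipeline can be sketched with Pillow; ImageFilter.UnsharpMask is only a crude stand-in for device-specific sharpening presets, and all the numbers are placeholders:

```
from PIL import Image, ImageFilter

OUT_PPI = 300
OUT_SIZE_IN = (20, 10)               # hypothetical final print size, inches

img = Image.open("photo.jpg")
out_px = (OUT_SIZE_IN[0] * OUT_PPI, OUT_SIZE_IN[1] * OUT_PPI)

# Resample once, with a good algorithm, to the exact output pixel count
resized = img.resize(out_px, resample=Image.Resampling.LANCZOS)

# Output sharpening (placeholder values, not a device-specific preset)
sharp = resized.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=3))

# Store the PPI as metadata so the service knows the intended print size
sharp.save("for_print.tif", dpi=(OUT_PPI, OUT_PPI))
```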
Basically, as image metadata, the PPI and DPI settings tell you the relationship between the pixel dimensions and the physical size of the source at the creation/capture stage, and can therefore be used later to work out how the image needs to be resampled to be shown at a given size on a device with a different resolution. Without them, devices with different resolutions would have no way to know how to transform the image properly.
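For example, Pillow exposes this metadata (which it, too, labels "dpi"), and the intended physical size follows from pixels / PPI:

```
from PIL import Image

img = Image.open("photo.jpg")
ppi = img.info.get("dpi", (300, 300))   # fall back to an assumed value if unset
width_in = img.width / ppi[0]
height_in = img.height / ppi[1]
print(f"{img.width}x{img.height} px at {ppi[0]} PPI -> "
      f"{width_in:.1f} x {height_in:.1f} inches")
```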
Because an image is made of pixels, not dots, PPI is the correct term, and Photoshop correctly uses it when creating a blank image.
Pixels and dots are not interchangeable. Why? Because a pixel can usually represent any color value for a given color depth and color space, while a dot may not, depending on the output technology (dots are typically used by output devices only). In many printing technologies, such as inkjet printers using four or more inks, a single dot can't represent the whole color range of a pixel; more dots, arranged in a dithering pattern, are needed to "fool" the eye into seeing the color of the pixel they have to reproduce.
The printing process analyzes the pixel data and turns it into the required dot pattern. The more dots per inch a device can output, the more complex and refined the dithering pattern can be. There was once a rule of thumb that the input resolution was fine at 1/3 of the output resolution, so an image set at 240 PPI (for a given size) would print well on a 720 DPI printer (at the same size). Today's photographic printers are far more capable.
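That rule of thumb, as plain arithmetic:

```
printer_dpi = 720
image_ppi = 240

per_axis = printer_dpi / image_ppi      # 3 dots per pixel along each axis
per_pixel = per_axis ** 2               # 9 dots available to dither one pixel
print(f"{per_pixel:.0f} printer dots per image pixel")
```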
Other output technologies can deliver more colors in a single dot and therefore need fewer dots per pixel. So comparing the quality of output devices by their DPI values alone, regardless of the technology, is useless.
Input sensors that capture images as pixels also have a PPI value (often incorrectly labeled DPI) defining the sensor resolution, but it is really useful only for devices like scanners, because they know the size of the source. From an image's pixel dimensions and its stored PPI metadata you can compute the original size (it's pixels / PPI). And when the scan resolution can be set, it's useful to capture only the number of pixels required for a given output device, to reduce the captured image size (for storage, transmission, etc.) and to avoid or reduce later resampling, which always alters an image somewhat.
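For instance, picking a scan resolution so the scan has just enough pixels for an enlarged print (illustrative numbers):

```
original_in = (6, 4)      # physical size of the original being scanned, inches
output_in = (12, 8)       # desired print size, inches
output_ppi = 300          # resolution the output device expects

# The scan must carry the enlargement factor in extra resolution
scan_ppi = output_ppi * output_in[0] / original_in[0]            # 600 PPI
scan_px = (original_in[0] * scan_ppi, original_in[1] * scan_ppi)
print(f"scan at {scan_ppi:.0f} PPI -> {scan_px[0]:.0f}x{scan_px[1]:.0f} px")
```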
For a camera, the sensor's PPI value has little meaning. For example, a 24 MP APS-C sensor has a higher PPI than a 24 MP full-frame one, but at the same equivalent focal length they deliver more or less the same image at the same pixel dimensions; it's the lens that changes the size of the image projected onto the sensor, and that's usually what you're interested in. There's also no strong relationship between the source image size and its final output size; the latter is usually chosen to fulfill display needs only. You could still capture fewer pixels if you only ever use low-res outputs, but because most cameras work best at their native resolution, that's what you mostly use, letting software resample the image later to the desired size for a given output.
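The numbers behind that example (sensor widths are approximate, and 24 MP is taken as 6000×4000 px):

```
mm_per_inch = 25.4
px_width = 6000                          # both sensors: 24 MP ~ 6000x4000 px

apsc_mm, ff_mm = 23.6, 36.0              # approximate sensor widths
apsc_ppi = px_width / (apsc_mm / mm_per_inch)   # ~6460 PPI
ff_ppi = px_width / (ff_mm / mm_per_inch)       # ~4230 PPI
print(f"APS-C: {apsc_ppi:.0f} PPI, full frame: {ff_ppi:.0f} PPI "
      f"-- same {px_width}x4000 px image either way")
```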
Screen resolution should also be defined in PPI, not DPI, but many developers seem to have little awareness of this. Also, back when the only people interested in these figures were graphics professionals, they were more used to the older DPI term than to the newer PPI, which was specific to the newer electronic devices; at that time, screen (and not only screen) resolutions were defined in "lines", not pixels.
Higher-resolution screens, again, allow larger images to be displayed without resampling while keeping the physical screen size manageable (though smaller pixels may be harder to see).
Higher resolution can be useful: for example, Lightroom asks you to perform sharpening at 1:1 because otherwise the image is resampled, and the changes the resampling applies make it harder to judge the "correct" sharpening settings. On a screen, though, you can resample an image on the fly and simulate different resolutions.
That's also why you should apply "input sharpening" only to correct for the loss of detail at the capture stage, and "output sharpening" specific to a given output device (screen, different types of printers, etc.) to ensure the image still looks good after being resampled for the final output.
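A sketch of that two-stage idea, again with Pillow; the parameters are placeholders, not recommended values:

```
from PIL import Image, ImageFilter

img = Image.open("photo.jpg")

# Input sharpening: gentle, at native resolution, to offset capture losses
img = img.filter(ImageFilter.UnsharpMask(radius=1, percent=50, threshold=2))

# ... editing would happen here ...

# Resample for the chosen output, then sharpen again for that device
out = img.resize((1920, 1280), resample=Image.Resampling.LANCZOS)
out = out.filter(ImageFilter.UnsharpMask(radius=1, percent=100, threshold=3))
out.save("for_screen.jpg")
```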