Yep...and that limit is reached as pixel density approaches infinity. What you end up approaching is the limit set by optical aberrations plus diffraction. Many of the best lenses have almost no aberrations, so you're approaching this limit:
@Lee Jay: You might want to check your facts. Even assuming a perfect lens, there is a limit on how many line pairs per millimeter (lp/mm, or cycles/mm) any optical system can resolve: even with an optically perfect system at its maximum aperture (i.e. no aberrations), diffraction still limits the resolution of the image the lens projects. For example, a "perfect" f/2.8 lens can resolve an absolute maximum of 532 lp/mm at the Rayleigh criterion, and the same lens stopped down to f/22 can only resolve 68 lp/mm. Detail at the Rayleigh criterion is at the lowest level of contrast the human eye can reasonably differentiate, about 9%. For reference, the 7D's 18mp sensor resolves 116 lp/mm, so that perfect lens at f/22 delivers only about half the resolution the sensor can capture; the sensor's Nyquist limit is nearly twice what the lens puts out. At a more reasonable level of contrast, say 80% (a pretty common target for camera lens MTF charts), the same lens would only be capable of resolving about 13 lp/mm @ f/22!!
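If anyone wants to check those numbers, the Rayleigh-criterion limit works out to 1/(1.22 * wavelength * f-number) line pairs per mm, and a sensor's Nyquist limit is one line pair per two pixels. A quick back-of-the-envelope script (my own sketch, assuming green light at 550 nm and a ~4.3 micron pixel pitch for the 7D):

```python
# Diffraction-limited resolution at the Rayleigh criterion (~9% contrast).
# Assumes a perfect (aberration-free) lens and green light at 550 nm.

WAVELENGTH_MM = 550e-6  # 550 nm expressed in millimetres

def rayleigh_lp_per_mm(f_number):
    """Maximum resolvable line pairs per mm at the Rayleigh criterion."""
    return 1.0 / (1.22 * WAVELENGTH_MM * f_number)

def sensor_nyquist_lp_per_mm(pixel_pitch_um):
    """Nyquist limit of a sensor: one line pair needs two pixels."""
    return 1.0 / (2.0 * pixel_pitch_um * 1e-3)

for n in (2.8, 8, 11, 22):
    print(f"f/{n}: {rayleigh_lp_per_mm(n):.0f} lp/mm")
# f/2.8 -> 532 lp/mm, f/22 -> 68 lp/mm, matching the figures above.

# Canon 7D: 18 MP APS-C, roughly 4.3 micron pixels
print(f"7D Nyquist: {sensor_nyquist_lp_per_mm(4.3):.0f} lp/mm")  # ~116 lp/mm
```

The 9% contrast at Rayleigh versus the much lower lp/mm at 80% contrast is just two points on the same MTF curve; the formula above only gives you the extinction point.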
This article on Luminous Landscape might shed some light on optical resolution and its physical limitations: http://www.luminous-landscape.com/tutorials/resolution.shtml
Assuming no optical aberrations, the physical size of the aperture determines the maximum resolution, and that maximum SHRINKS as the f-number is increased, since the Airy disc grows as the aperture gets smaller. (When I say image, I mean the virtual image of the world the lens captures and projects.) The only reason to increase sensor resolution beyond the resolution of the projected image is to minimize artifacts produced by the sensor itself, such as moire (in B&W or Foveon sensors) and color moire (in Bayer sensors), but only to a certain point. Many of the small sensors used in compacts already FAR out-resolve their lenses, with incredibly tiny pixels of 2 microns or less. The lenses on such cameras tend to have apertures so small that they are effectively pinhole cameras, producing enough diffraction to push well past the sensor's diffraction limit. At that point the entire system is diffraction limited and you are getting NEGATIVE returns: very soft images that, in the worst case, almost have a watercolor appearance. The simple fact of physics is that diffraction always exists, at any aperture, and the tighter the aperture, the larger the Airy disc.
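The Airy disc growth is easy to quantify: its diameter out to the first dark ring is about 2.44 * wavelength * f-number. A rough sketch (mine, again assuming 550 nm green light) showing why tiny compact-camera pixels are hopeless at small apertures:

```python
# Airy disc diameter (to the first dark ring) grows linearly with
# f-number: d = 2.44 * wavelength * N. Assumes green light at 550 nm.

WAVELENGTH_UM = 0.55  # 550 nm in microns

def airy_disc_diameter_um(f_number):
    return 2.44 * WAVELENGTH_UM * f_number

for n in (2.8, 5.6, 11, 22):
    print(f"f/{n}: Airy disc ~{airy_disc_diameter_um(n):.1f} microns")
# At f/22 the disc is ~30 microns across -- more than ten times wider
# than the ~2 micron pixels in a typical compact camera sensor.
```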
Most astrophotography I've done myself, or seen done by friends, is at f/8 or f/10, since those are very common focal ratios for high-quality consumer-grade telescopes. The sensors in dedicated CCD imagers, and in DSLRs attached with a T-ring, are usually diffraction limited at around f/11-f/16, and encounter detrimental softening at f/22-f/28 and beyond. And that's to say nothing of the atmospheric interference that further degrades the quality of the light. The only f/30 images I've seen from telescopes came from a Celestron EDGE telescope, and while edge definition on celestial objects seemed sharper, other detail was visibly soft.
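A common rule of thumb calls a sensor "diffraction limited" once the Airy disc spans some multiple of the pixel pitch; the exact threshold is debatable (around two to three pixel widths is typical), which is why quoted figures vary. A sketch of that estimate, with 550 nm light assumed:

```python
# Rough "diffraction-limited aperture" estimate: the f-number at which
# the Airy disc spans k pixel widths. The threshold k is debatable;
# k of 2-3 is typical. Assumes green light at 550 nm.

WAVELENGTH_UM = 0.55

def dla(pixel_pitch_um, k=2.0):
    # Solve 2.44 * wavelength * N = k * pitch for N
    return k * pixel_pitch_um / (2.44 * WAVELENGTH_UM)

for pitch in (4.3, 5.2, 5.6):
    lo, hi = dla(pitch, 2.0), dla(pitch, 3.0)
    print(f"{pitch} micron pixels: diffraction limited around f/{lo:.0f}-f/{hi:.0f}")
```

For 5-6 micron DSLR pixels this lands roughly in the f/8-f/13 range, a little lower than my f/11-f/16 figure above; where exactly you draw the line depends on how much softening you're willing to call "limited".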
This is why a lot of astro folks shooting planets with 5.6 micron pixels (roughly 40D sized) will shoot at f/28-f/40 (f/30 is common): that's where they can extract all the detail their aperture can give them. For example, these were shot by a dedicated amateur with a backyard scope with 14" of aperture at somewhere close to f/30:
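The planetary-imaging crowd usually reduces this to a rule of thumb: target a focal ratio of roughly 3.5x to 5x the pixel size in microns. The lower bound falls out of Nyquist-sampling the diffraction cutoff (N of about 2 * pitch / wavelength); the upper bound allows oversampling headroom for stacking and sharpening. A hedged sketch:

```python
# Rule of thumb used by planetary imagers: focal ratio roughly
# 3.5x-5x the pixel size in microns. The 3.5x end is close to
# Nyquist-sampling the diffraction cutoff (N ~ 2 * pitch / lambda
# at ~550 nm); 5x leaves oversampling headroom for stacking.

def target_f_ratio(pixel_pitch_um, factor=5.0):
    return factor * pixel_pitch_um

pitch = 5.6  # microns, roughly a Canon 40D pixel
print(f"Conservative (3.5x): f/{target_f_ratio(pitch, 3.5):.0f}")  # ~f/20
print(f"Common (5x):         f/{target_f_ratio(pitch, 5.0):.0f}")  # ~f/28
```

That 5x figure is where the f/28-f/30 numbers above come from; past that you're adding magnification without adding detail.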
Regarding the links, there does not appear to be any embedded EXIF metadata, so I can't verify the aperture. The shot of Jupiter appears fairly soft, but I'm honestly not sure whether that is due to atmospheric interference or a small aperture. Even in the astrophotography circles I know, it's well known that stopping down beyond a certain point reduces spatial resolution, it does not increase it (stopping down helps early on only because optical aberrations overpower diffraction at wide apertures). The resolution of any imaging system, lens and sensor/film included, is not an ever-increasing linear curve; it's more of a bell curve. Image clarity (resolution and acutance) peaks, and optical aberrations bottom out, at a certain midpoint. Before that midpoint optical aberrations diminish sharpness, while after it diffraction diminishes sharpness. The sweet spot is usually only about a stop wide for a given lens+sensor. The following link demonstrates the impact on image resolution with the Canon 450D. It has 5.2 micron pixels (similar to the 40D), which make it diffraction limited at just over f/10. I've configured the tool to compare f/2.8 to f/11 to demonstrate how diffraction visibly degrades resolution at the smaller aperture (mouse over the image to see f/11, mouse off to see f/2.8).
I've heard of astronomers using smaller-aperture telescopes to view very bright objects, but I think that's a matter of reducing the telescope's exit pupil relative to the pupil of your eye, rather than of increasing the detail in the image the optics project. A slightly smaller exit pupil in a telescope (or binoculars), relative to the entrance pupil of your eye, helps keep your eye from reacting to a perceptibly "bright" object by shrinking its pupil even further. That tends to darken the image too much, which reduces contrast, which in turn affects how easy it is to see detail in whatever you're observing. I don't know that the same rules apply when photographing stellar objects; from a physics standpoint, assuming you used a Canon 7D to photograph them, I would expect an f/8 telescope to produce the clearest images with the best detail.
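For what it's worth, the exit pupil math is simple: exit pupil = aperture diameter / magnification, where magnification = scope focal length / eyepiece focal length (equivalently, exit pupil = eyepiece focal length / focal ratio). A quick sketch with a hypothetical 200 mm f/10 scope:

```python
# Exit pupil of a telescope + eyepiece: aperture / magnification,
# where magnification = scope focal length / eyepiece focal length.
# Hypothetical example: 200 mm aperture, f/10 (2000 mm focal length).

def exit_pupil_mm(aperture_mm, scope_fl_mm, eyepiece_fl_mm):
    magnification = scope_fl_mm / eyepiece_fl_mm
    return aperture_mm / magnification

for ep in (32, 20, 10):
    print(f"{ep} mm eyepiece: exit pupil {exit_pupil_mm(200, 2000, ep):.1f} mm")
# A dark-adapted eye has roughly a 5-7 mm pupil; the 3.2 mm exit pupil
# from the 32 mm eyepiece sits comfortably inside that.
```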