To answer the OP, I don't think, with FF sensors at least, we will even reach the limits of current manufacturing technology for quite some time. From a density perspective, a FF sensor would need about 46.7mp of 4.3-micron pixels to have the same density as the 7D with its 18mp APS-C sensor. Assuming we use the same fabrication technique to create a hypothetical 46.7mp FF sensor, it's safe to assume it would have roughly the same characteristics, including the noise. There are obvious differences; for one, being larger, it affects FoV differently...but I think that's an out-of-scope factor here. All things being equal, we could assume this 46.7mp FF sensor functions the same as its 18mp APS-C counterpart.
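To sanity-check that density math, here's a rough Python sketch (the sensor dimensions and the 7D's 5184-pixel width are just the commonly published figures, so treat them as assumptions):

```python
# Back-of-the-envelope check of the pixel density comparison above.
aps_c_w, aps_c_h = 22.3, 14.9   # Canon APS-C (7D) sensor size, mm
ff_w, ff_h = 36.0, 24.0         # full frame sensor size, mm
mp_7d = 18.0                    # 7D megapixels

pixel_pitch_um = aps_c_w / 5184 * 1000                       # 7D is 5184 px wide
ff_mp_same_density = mp_7d * (ff_w * ff_h) / (aps_c_w * aps_c_h)

print(f"7D pixel pitch: {pixel_pitch_um:.1f} um")            # ~4.3 um
print(f"FF mp at 7D density: {ff_mp_same_density:.1f} mp")   # ~46.8 mp
```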
It's well known that the 7D is a bit noisier than one might expect at ISO100 when viewed at 100%. Scaled down to a 300dpi print size on screen, however, it looks fantastic and provides more fine detail than older cameras, so the noise doesn't seem to be a deal-breaking problem. According to DxO Labs, the 7D has the highest DR of any Canon APS-C sensor. It's rumored that Canon's newest L-series lenses can currently resolve about 45mp of FF resolution, which puts the 7D right at the limit...it's capturing all the detail it can from current lenses. Lens aberrations are an optical phenomenon, projected in the virtual image. Assuming you print at a consistent size, say A3+ 13x19", the 7D should not have any effect on optical aberrations at all, while concurrently producing clearer, sharper detail at that print size than a lower-resolution sensor. Scaling for preview on screen, say via the web, generally requires downscaling images to a small fraction of their original size, so there should be no reason for concern of any kind regarding optical aberrations, noise, or any other possibly undesirable artifact.
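To put some rough numbers on that print/web scaling point (the 7D's 5184-pixel width is the published spec; the web image width is just an arbitrary example):

```python
# How much detail the 7D delivers at A3+ print size, and how little a web copy keeps.
px_w = 5184                          # 7D output width in pixels
print_w_in = 19.0                    # A3+ long edge, inches
print(f"A3+ print: {px_w / print_w_in:.0f} ppi")   # ~273 ppi, near photo quality

web_w = 1200                         # arbitrary example of a web-sized image
print(f"Web copy keeps ~{(web_w / px_w) ** 2 * 100:.0f}% of the original pixels")  # ~5%
```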
Back to our hypothetical 46.7mp FF sensor...all the benefits of the 7D in realistic scenarios, like printing A3+ or scaling for the web, should be as good or better. One could also choose to print twice as large, say 24x36 or even 30x40, and expect the same amount of clarity and detail, without making optical aberrations any more visible than the larger print size itself would. This hypothetical 46.7mp sensor is still not significantly outresolving the best glass on the market today, either. Simply put:
A 46.7mp FF sensor shouldn't "stress" anything from an IQ standpoint beyond any level we have today (although it might indeed put stress on frame rate.)
That would mean a 36mp FF sensor also shouldn't stress anything, and should actually do very well considering it has about 10.7mp of headroom before it hits the limits of anything we have today. Given that the 5D Mark II has some of the best DR of any Canon sensor, and it's only marginally better than the 7D, one could assume a 36mp sensor would have even better DR, possibly by quite a margin, over the 7D. That's to say nothing of the kinds of design and manufacturing improvements that have apparently been realized in the 1D X's 18mp FF sensor, and in many new Sony sensors of a variety of sizes up to FF.
This is all assuming that other technology, such as optics and sensor fabrication, doesn't also improve right along with increases in MP. We're making progress with optics, getting there but not really close to the quality of a hypothetical diffraction-limited "perfect" lens. The effect an aperture has on diffraction is an aspect of the lens, and it limits the maximum spatial resolution of the virtual image the lens projects. That's an abstract concept, independent of other factors, and is the same regardless of what camera you may use that lens on. (NOTE: Don't read that to mean that the sensor does not affect IQ of the whole camera system...it most definitely does. The sensor just doesn't have anything to do with how an aperture affects IQ in the virtual image.) Assuming Canon improves lenses right along with MP and pushes beyond the current ~45mp ceiling, then so long as the sensor is not significantly outresolving the lens, we can continue to see benefit from more megapixels. I say significantly, because there is something to be gained by outresolving the lens a little bit, as it can help reduce moire from fine repetitive spatial frequencies as you approach the Nyquist rate of the sensor.
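As a rough illustration of how the aperture alone caps the resolution of the projected image, here's the standard Airy disk formula worked through for a few example f-numbers (550nm green light assumed):

```python
# Airy disk diameter (first minimum): d = 2.44 * lambda * N.
# This depends only on wavelength and f-number, not on the sensor behind the lens.
def airy_disk_um(f_number, wavelength_nm=550.0):
    return 2.44 * (wavelength_nm / 1000.0) * f_number

for n in (2.8, 5.6, 8, 11, 16):
    print(f"f/{n}: Airy disk ~{airy_disk_um(n):.1f} um")
# At f/8 the disk is already ~10.7 um, larger than a 4.3 um (7D-density) pixel,
# so stopping down limits the virtual image no matter what sensor records it.
```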
Taking the concept of outresolving the lens farther: the argument against more MP (excluding the anecdotal claims that outresolving the lens immediately means bad IQ) is that once you do outresolve the lens, more MP doesn't offer any benefit. Generally speaking this is true, but there are a few ways it can still be beneficial. From a physics standpoint, there is a hard wall to resolution (again, ignoring superresolution): imaging at or beyond the wavelength of light. It's light we're imaging, and the smallest spatial detail you can resolve in an image is on the order of the wavelength of light (which centers around 550nm, but ranges from roughly 380nm to 750nm across the visible spectrum.) That physical brick wall is a LONG way off, however, way beyond the scale of resolution we can realistically talk about for the foreseeable future.

Assume we take sensors to twice the maximum of current Canon lenses...90mp. We're WAY outresolving the lens now...so what's the benefit? For one, you can pretty much eliminate color moire, which is a problem that exhibits with Bayer-type sensors (where there are alternating rows of RGRG and GBGB pixels.) Second, you could utilize the pixels on the sensor more effectively. Normal Bayer demosaicing interpolates the intersections of every RGGB 2x2 pixel quad, reusing overlapping sets of pixels to produce many RGB output pixels. This effectively reduces the color resolution of the final image, although it maintains full luminance resolution; not to mention that you start out with half as many pixels for red and blue as you do for green, so you're already a little color-anemic. With a sensor at double the resolution the lens can project, you can now use one full Bayer quad for each single RGB output pixel. You eliminate interpolation entirely and use a full complement of color information for each and every output pixel. This has the added side effect that you are now averaging four Bayer pixels, and all the noise they contain, into a single RGB pixel, so noise should be noticeably improved (random noise averages down by roughly the square root of the number of samples combined). (As it turns out, Canon already does something like this in their cameras from the last couple of years. They call it mRAW and sRAW, or medium raw and small raw. Both formats use double or quadruple the Bayer pixel information per output pixel to produce a much cleaner, less noisy result...albeit at half or quarter the native sensor resolution.)

Outresolving a diffraction-limited lens may not be a horrid thing if we can make better use of the sensor pixels. Image output resolution would still contain the same amount of detail the lens projected, so you have to start questioning what it actually means to outresolve the lens. Could we use a 180mp sensor, and use 16 Bayer pixels per RGB pixel, to improve quality even more? To normalize noise even more? The 1-micron-or-less pixels of high-resolution point-and-shoot cameras do seem to be a problem at very narrow physical apertures (a millimeter or two). I am not sure if that's a physics problem, or a "cheapskate" problem, wherein the technology in P&S cameras is simply not up to snuff.
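Here's a minimal sketch of that "one full quad per output pixel" idea, assuming a standard RGGB Bayer layout; it's an illustration of the binning concept, not Canon's actual mRAW/sRAW pipeline:

```python
import numpy as np

def bin_bayer_to_rgb(bayer):
    """Collapse each non-overlapping 2x2 RGGB quad into one RGB output pixel."""
    r = bayer[0::2, 0::2]                               # red sites
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0   # average the two green sites
    b = bayer[1::2, 1::2]                               # blue sites
    return np.dstack([r, g, b])                         # (H/2, W/2, 3) output

# A noisy synthetic 8x8 mosaic becomes a 4x4 RGB image; each output pixel gets
# full color information from its own quad, with no interpolation, and the two
# green samples (and their noise) are averaged together.
mosaic = np.random.normal(loc=0.5, scale=0.05, size=(8, 8))
print(bin_bayer_to_rgb(mosaic).shape)   # (4, 4, 3)
```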
The only realistic stressors I can see with higher-resolution sensors are image readout rate and file size. Readout rate may already be addressed, as Canon prototyped a 120mp APS-H sensor back in 2010 that they proclaimed had a good readout rate. They did not state any explicit frame rate, however it's probably safe to assume 2-4fps, which is a pretty normal rate. The rate at which images can be saved to memory is also a factor, however 300MB/s CF/SD cards should soon be finding their way into photographers' hands, which will help address that problem. That leaves file size, which can also be addressed with faster image processors and more advanced compression algorithms, as well as cheaper storage.
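A quick bit of arithmetic on file size and card write time (the bit depth, compression ratio, and card speed here are assumptions for illustration, not measured values):

```python
# Rough file size and write time for a hypothetical 120mp sensor.
megapixels = 120e6
bits_per_px = 14                            # typical RAW bit depth (assumption)
raw_mb = megapixels * bits_per_px / 8 / 1e6 # ~210 MB uncompressed
compressed_mb = raw_mb * 0.6                # assume ~40% lossless compression
card_mb_s = 300.0                           # the ~300MB/s cards mentioned above

print(f"Uncompressed: {raw_mb:.0f} MB, compressed: ~{compressed_mb:.0f} MB")
print(f"Write time at {card_mb_s:.0f} MB/s: ~{compressed_mb / card_mb_s:.2f} s per frame")
```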