The fundamental problem arises when you encode the color information. It doesn't seem to matter whether it's a chrominance pair, an RGB triplet, or anything else. Once you encode the color information, taking it out of its separate storage values and binding those discrete red, green, and blue values together into a conjoined value set (e.g. the RGB sub-pixel values of a full TIFF pixel), you lose editing latitude.
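A toy example of what I mean by losing latitude (made-up numbers, not any particular tool's pipeline): once a clipped exposure push has been baked into a bounded RGB encode, no later adjustment can bring the highlight detail back, whereas the same correction applied to the raw numbers before encoding keeps it.

```python
import numpy as np

# Three neighbouring highlight samples from a hypothetical 14-bit sensor
# (0..16383) that still hold distinct detail.
raw = np.array([16000.0, 14000.0, 12000.0])

# Case 1: bake a +1 EV push into an 8-bit encode, then try to undo it
# afterwards by pulling -1 EV on the encoded values.
encoded = np.clip(raw * 2.0 / 64.0, 0, 255).round()  # all three clip to 255
recovered_late = (encoded / 2.0).round()             # 128, 128, 128 - detail gone

# Case 2: apply the same -1 EV correction to the raw numbers *before*
# the one-and-only encode, the way a raw editor can.
recovered_early = np.clip(raw * 2.0 / 2.0 / 64.0, 0, 255).round()  # 250, 219, 188

print(recovered_late)
print(recovered_early)
```

Same input, same total gain, completely different outcome depending on whether the adjustment happens before or after the values are bound into the encoded triplet.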
That sounds like it's just a problem with the way the software is handling the data. A TIFF is still assigning full RGB values to each pixel, the debayering is done, and I'm assuming the original values are lost. Whereas if you store the information in a pre-debayered state, even with one pixel averaged out (which was next on the to-do list anyway), it shouldn't be any different from reading the original RAW... with the slight exception that adjusting the value after averaging would be different from adjusting the values of two pixels and then averaging them (I assume that when you adjust things in post, it's playing with the RAW numbers before debayering).
But that still sounds like a fairly inconsequential concession to make compared to storing data after debayering.
There are some things in a RAW editor that must be done before debayering (e.g. white balance), and some that are usually done after debayering. It's just that some things are more effectively performed on the original digital signal, and others on a full RGB color image. Exposure and white balance are the two main things that benefit most from being processed in the original RAW, where the signal information is pure and untainted by any error introduced by conversion to RGB.
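To illustrate why white balance is natural to do pre-debayer: on the mosaic, every sensor site belongs to exactly one color channel, so white balance is just a per-site gain. This is a minimal sketch assuming an RGGB layout and made-up gains; real converters pull the gains from camera metadata.

```python
import numpy as np

def white_balance_bayer(mosaic, r_gain, b_gain, g_gain=1.0):
    """Scale each Bayer site by its channel's gain (RGGB layout assumed)."""
    out = mosaic.astype(np.float64).copy()
    out[0::2, 0::2] *= r_gain   # R sites
    out[0::2, 1::2] *= g_gain   # G sites on R rows
    out[1::2, 0::2] *= g_gain   # G sites on B rows
    out[1::2, 1::2] *= b_gain   # B sites
    return out

# Flat grey test mosaic: after balancing, each site carries only its own
# channel's gain - no cross-channel mixing has happened yet.
mosaic = np.full((4, 4), 100.0)
balanced = white_balance_bayer(mosaic, r_gain=2.0, b_gain=1.5)
```

After debayering, by contrast, every pixel is an interpolated blend of neighboring sites, so the same correction can no longer be applied to pure per-channel signal.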
You also have to realize that RGB binds the three color components together... they cannot be shifted around much independently, not the way you can with RAW, without introducing artifacts. At least, not with real-time algorithms. There are other tools, like PixInsight (an astrophotography editor), that have significantly more powerful, mathematically intense, and often iterative processes that put most of the tools in something like Lightroom to shame. One example is TGVDenoise, which is capable of pretty much obliterating noise without affecting larger-scale structures or stars at all. The problem is, at an ideal iteration count (usually around 500) on a full RGB color image, running TGVDenoise can take several minutes to complete. And that is just one small step in processing a whole image.
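To show why iteration counts like that get expensive, here is a deliberately simplified iterative smoother: plain diffusion with a fidelity term pulling the result back toward the input. This is *not* PixInsight's TGV algorithm (TGV uses a much more sophisticated gradient-based regularizer), just a sketch of the iterative structure; every iteration touches every pixel, so cost grows linearly with iterations times image size.

```python
import numpy as np

def iterative_denoise(img, iterations=100, step=0.1, fidelity=0.5):
    """Toy diffusion denoiser: smooth with a discrete Laplacian while a
    fidelity term keeps the result anchored to the noisy input."""
    u = img.astype(np.float64).copy()
    for _ in range(iterations):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += step * (lap + fidelity * (img - u))
    return u

rng = np.random.default_rng(0)
noisy = np.full((32, 32), 100.0) + rng.normal(0, 10, (32, 32))
smooth = iterative_denoise(noisy, iterations=200)
```

Scale that loop up to 500 iterations over a 20+ megapixel, three-channel float image, swap the Laplacian for TGV's heavier per-iteration math, and the multi-minute run times make sense.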
So sure, with the right tools, you can probably do anything with a 16-bit TIFF. It's just that with the lower-precision but significantly faster algorithms often found in standard tools like Lightroom, you either end up with artifacts, or run into limitations of the data or the algorithm that won't let you push the data around as much.