Removal of the AA filter is far from a high-end feature. It is a gimmick for all but a very few niche photographers whose work primarily involves subjects with essentially random detail that could not produce much aliasing anyway. For the vast majority of photographers, an AA filter is quite essential to producing BETTER image quality. Aliasing produces nonsense: noisy, useless false "detail" that was never in the scene. ANTI-aliasing trades that useless nonsense for a more accurate rendering.
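To make the "aliasing produces nonsense detail" point concrete, here's a minimal sketch (my own illustration, not from the post): a signal above the Nyquist limit of the sampling grid folds down to a lower frequency, producing "detail" that looks real but isn't in the scene. The frequencies and sample rate are arbitrary.

```python
import numpy as np

# Sampling a sine above the Nyquist limit folds it down to a lower
# "alias" frequency -- false detail that an AA filter would remove
# before sampling.
fs = 10.0                      # samples per unit length (pixel pitch analogue)
t = np.arange(0, 2, 1 / fs)    # sample positions

f_true = 9.0                   # signal frequency, above Nyquist (fs/2 = 5)
samples = np.sin(2 * np.pi * f_true * t)

# The samples are indistinguishable from a 1-cycle sine: 9 aliases to |9 - 10| = 1.
f_alias = abs(f_true - fs)
alias = np.sin(2 * np.pi * f_alias * t)
print(np.allclose(samples, -alias, atol=1e-9))
```

Once the samples exist, no software can tell the 9-cycle signal from the 1-cycle alias; the only fix is filtering before sampling, which is exactly what the AA filter does.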
On this I 100% agree with you though.
One side note: the Canon 7D maybe isn't the best body to use (not in the sense of equipment for taking pictures, but as a testbed for comparing detail against other cameras in a lab setting), since it has those heavily split greens in the CFA. The demosaic routines have to do very tricky things to handle them, which tends to leave a bit of residual loss of micro-contrast behind. (It's actually surprising that they manage to avoid major resolution loss; they must be doing some pretty sneaky stuff to handle the split greens and prevent mazing artifacts while barely touching the resolution.)
What do you mean by "heavily split greens"? The 7D has a pretty standard Bayer sensor as far as I know... they shouldn't need to do any special processing (and ACR/LR seem to handle 7D files just fine with their AHDD algorithm).
You know how a Bayer sensor uses two greens for each red and blue, so you have G1 R G2 B? Well, with the 7D they decided to make G1 very much not the same as G2. It seemed like they wanted to sneak just a little bit more light to the sensor by making one of the greens yet more color-blind.
If you developed 7D files with ACR (or even DPP!) right when the camera first came out, you'd notice all sorts of maze patterns appearing. It was especially bad, if I recall correctly, in orangey-yellow blocks of color. Some of us were like, what the heck is with the artifacts in these 7D images? And then we noticed weird things where measuring the SNR of the G1 channel in the RAW data always gave different results than measuring the G2 channel.
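The G1-vs-G2 check described above is easy to sketch. This is a hypothetical illustration (simulated data, assuming a standard RGGB layout, not actual 7D raw values): on a flat gray patch, if the two green filters differ, the statistics of the two green sub-channels diverge.

```python
import numpy as np

# Simulated RGGB mosaic of a uniform gray patch. We pretend the G2
# filter passes ~5% more light than G1 ("split greens") -- made-up
# numbers purely for illustration.
rng = np.random.default_rng(0)
h, w = 8, 8
mosaic = np.empty((h, w))
mosaic[0::2, 0::2] = rng.normal(800, 30, (h // 2, w // 2))    # R sites
mosaic[0::2, 1::2] = rng.normal(1000, 30, (h // 2, w // 2))   # G1 sites
mosaic[1::2, 0::2] = rng.normal(1050, 30, (h // 2, w // 2))   # G2 sites (brighter)
mosaic[1::2, 1::2] = rng.normal(600, 30, (h // 2, w // 2))    # B sites

# Extract the two green sub-channels and compare their statistics.
g1 = mosaic[0::2, 1::2]
g2 = mosaic[1::2, 0::2]
print(g1.mean(), g2.mean())   # on identical greens these would agree
```

On a camera with identical greens the two means agree to within noise; a consistent offset between them on flat patches is the signature the posters describe.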
A few of us brought the complaints to the converter makers' and Canon's attention, and some people even returned their initial 7D copies thinking that maybe they had the Bayer array somehow misaligned. Then a few weeks later Adobe released a new ACR, and I think shortly after that a new DPP came out. The early speculation was that working around the split greens would cause major loss of resolution or noticeable remaining artifacts, but somehow the converter makers found a way to solve pretty much all the mazing artifacts while barely touching the resolution at all (if you still have the early ACR beta that supported the 7D and process a file with it, you can get a touch more micro-contrast out of 7D files than with the later fixed beta and final releases). It's hard to say, but it seemed like the fix effectively knocked 1-2MP off the 18MP sensor. Not really a big deal; perhaps it made the files look a bit more film-like.
And you can see references by Adobe to split-green parameters in ACR. It's not just the 7D that needs them but a few other cameras as well, from some of the smaller players in the digital camera world. I think Canon is not splitting the greens so much anymore, and I'm not sure if Nikon ever did (I'm pretty unsure about this last part though).
If you didn't buy a 7D within the first 2-3 weeks of their very first arrival in the U.S., you probably missed the whole mazing thing with ACR (and maybe the first 3-4 weeks for DPP and the others).
Hmm. I can't imagine that such a thing is a huge problem. It's not all that different from Sony's "Emerald" CFA that they introduced many years ago (they called it RGBE). Their "Emerald" had more blue in it than the standard green. Based on all the sample images at the time, it actually produced better color accuracy... but it would be the exact same situation as what you're describing with the 7D.
There have been similar approaches in the past by other companies as well. Some simply do away with the second green and make it a "white". Fuji threw in an extra tiny white pixel between widely spaced primaries to gather extra luminance data. If what you say is correct, that would have caused even more problems for demosaicing; however, it improved resolution a bit and improved DR (although the DR improvement seemed minor, especially compared to what Sony did with Exmor). Kodak, Sony, and Fuji all now have RGBW sensor designs, and Sony is even starting to patent non-square pixels (they have patents for triangular and hexagonal pixels now, and supposedly one of these pixel designs is going to be used in their forthcoming 54MP FF sensor).
I also can't imagine that it would cause a loss in resolution. I mean, the crux of any Bayer demosaicing algorithm is interpolating the intersections between every (overlapping) set of 2x2 pixels. Because sensor pixels are reused for multiple output pixels, there is an inherent blur radius, but it is extremely small, and it wouldn't grow or shrink if one of the pixel colors changed. You would still be interpolating the same basic amount of information within the same radius. I remember there being a small improvement in resolution with my 7D between LR 3 and 4, and things seem a bit crisper again moving from LR 4 to 5. I suspect any supposed loss in resolution with the 7D was due to the novelty of Adobe's early support for the 7D, not anything related to having two slightly different colors for the green pixels. The quality with which LR renders my 7D images only seems to get better with each subsequent version, so as Adobe optimizes their demosaicing implementation, any inherent error is clearly diminishing.
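The point about the interpolation radius can be sketched with the simplest possible demosaic. This is a toy bilinear fill of the green channel only (assuming an RGGB layout; real converters like ACR use far more sophisticated, edge-directed methods): each missing green is the mean of its four measured green neighbours, so the footprint is just one pixel.

```python
import numpy as np

# Toy bilinear green interpolation for an RGGB mosaic. Zero here marks
# a "missing" (non-green) site -- fine for a demo, fragile for real data.
def interpolate_green(mosaic):
    h, w = mosaic.shape
    green = np.zeros_like(mosaic, dtype=float)
    green[0::2, 1::2] = mosaic[0::2, 1::2]   # G1 sites (even rows)
    green[1::2, 0::2] = mosaic[1::2, 0::2]   # G2 sites (odd rows)
    padded = np.pad(green, 1)                 # zero-pad the borders
    mask = np.pad((green > 0).astype(float), 1)
    for y in range(h):
        for x in range(w):
            if green[y, x] == 0:              # R or B site: fill from neighbours
                py, px = y + 1, x + 1
                vals = (padded[py - 1, px] + padded[py + 1, px]
                        + padded[py, px - 1] + padded[py, px + 1])
                n = (mask[py - 1, px] + mask[py + 1, px]
                     + mask[py, px - 1] + mask[py, px + 1])
                green[y, x] = vals / n
    return green

# A flat green field passes through unchanged: interpolation only
# costs anything at edges, and then only within a 1-pixel radius.
flat = np.zeros((4, 4))
flat[0::2, 1::2] = 100.0
flat[1::2, 0::2] = 100.0
print(interpolate_green(flat))
```

Note that nothing in this kernel cares whether G1 and G2 came through slightly different filters; the split greens change the values being averaged, not the size of the neighbourhood, which is the argument being made above.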
BTW, there is no way anything Adobe has ever done could possibly "knock 1-2MP worth of resolution" off the 7D. The most basic demosaicing algorithm will produce noisy, mazed, and stair-stepped results. Better demosaicing algorithms factor in information from a greater area of pixels as necessary (i.e. in order to avoid maze artifacts); however, the amount of blur they introduce is fractional... not even close to diminishing resolution by another two megapixels beyond the Bayer design itself. You can clearly see this when comparing DPP to ACR/LR with something like a strand of hair. DPP will produce a fairly jagged result; ACR/LR produce a very clean result. Based on the sample below, ACR is actually sharper and resolves even finer detail:
Additionally, the output resolution of my 7D + EF 600/4 L II is WAY, WAY, WAY better than my 7D + 100-400 @ 400. The difference that a good lens makes to the 7D's resolving power would completely overwhelm any perceived difference the ACR/LR demosaicing algorithm makes. That's entirely in line with theory as well: component blurs add in quadrature, so output resolution is approximated by combining each component's resolution in reciprocal quadrature, and the weakest component dominates. With the 600/4 II, the 7D produces exceptionally sharp results, and is able to very finely delineate detail that otherwise appears quite soft with the likes of the 100-400 L, 300/4 L, 70-200 L, etc. I mean, just look at the detail resolved on these birds (all 7D, all processed with Lightroom):
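For anyone who wants numbers, the quadrature rule of thumb is a one-liner. The figures below are made up purely for illustration (a hypothetical sensor limit plus a sharp prime and a soft zoom), not measured values for any of the lenses named above.

```python
import math

# Rule of thumb: component blurs add in quadrature, so system resolution
# is the reciprocal root-sum-square of the component resolutions.
def system_resolution(*component_lp_mm):
    """Combine component resolutions (in lp/mm) into a system figure."""
    return 1.0 / math.sqrt(sum(1.0 / r ** 2 for r in component_lp_mm))

sensor = 78.0        # hypothetical sensor limit, lp/mm
sharp_lens = 90.0    # hypothetical 600/4-class prime
soft_lens = 45.0     # hypothetical zoom at the long end

print(system_resolution(sensor, sharp_lens))
print(system_resolution(sensor, soft_lens))
```

Running those two cases shows the behaviour described in the post: pairing the sensor with the soft lens drags the combined figure well below either component, while the sharp lens costs comparatively little, so the lens swing swamps any sub-pixel demosaicing differences.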