There is no specific luminance channel directly off the sensor. It's RGB photosites, and the native ratio on all the Canon sensors is 4-2-2. So big deal that mRAW is 4-2-2; so is RAW, no?
Each photosite gives us one 14-bit measurement of one of the RGB channels. All of this data, with minimal processing, is essentially the raw data file.
mRAW and sRAW just use fewer sites. And just like full RAW, they are demosaiced by DPP or whatever raw software you use.
Correct. There is no luminance channel on the sensor. However, every single pixel on the sensor can be read to determine the luminance level at that pixel, regardless of its color channel. Three pixels (a red, a green, and a blue), or even a full 2x2 RGGB quartet, can be read to produce a single luminance value (y).
That said...Is the difference between an encoded format and a raw format unclear?
Raw...UNENCODED. ZERO processing.
m/sRAW...ENCODED. Processed. Interpolated. Demosaiced. Not RAW.
Since the picture doesn't seem to be clear to anyone who still thinks mRAW and sRAW are RAW formats rather than demosaiced (processed) and encoded formats, here is what happens when generating either one:
- Sensels are all read.
- Luminance channel data (Y) is produced from the raw values read from every sensel, regardless of its color.
- Chrominance channel data (Cr, Cb) is produced from the raw values of every other sensel, combined with the corresponding Y component.
- The luminance and chrominance channel data is assembled and compressed with lossless compression.
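As a rough illustration of those four steps, here is a toy Python sketch for a single row of sensels. This is an assumption-laden sketch, not Canon's actual encoder: the luma weights are the ones quoted later in this post, `zlib` merely stands in for Canon's real (undocumented) lossless codec, and full (r, g, b) triples stand in for the interpolated per-sensel values.

```python
import struct
import zlib

def encode_row(pixels):
    """Toy m/sRAW-style encode of one row.

    pixels: list of (r, g, b) tuples, each value 14-bit (0..16383).
    """
    packed = bytearray()
    for i, (r, g, b) in enumerate(pixels):
        y = 0.296 * r + 0.592 * g + 0.114 * b       # luma from every sensel
        packed += struct.pack(">H", round(y))       # step 2: Y for every pixel
        if i % 2 == 0:                              # step 3: chroma on every other pixel
            packed += struct.pack(">h", round(r - y))   # Cr
            packed += struct.pack(">h", round(b - y))   # Cb
    return zlib.compress(bytes(packed))             # step 4: lossless compression
```

Note what "lossless" buys you here: `zlib.decompress` recovers the packed Y/Cr/Cb stream bit for bit, but nothing in the file can recover the original sensel values, because the weighting already happened before compression.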
The format and structure of this data is the same as for JPEG images, with the primary differences being lossless compression and higher bit depth (14 bits vs. 8 bits). Outside of those two differences, mRAW and sRAW could be considered a better form of JPEG rather than a smaller form of RAW. The very word RAW should be a clue here...RAW...unprocessed, uncooked, unmodified, natural and untainted in every way. As in raw meat, straight off the bone. Raw meat isn't yet a meal...it has to be cooked. To continue the analogy, m/sRAW would be like meat that is mid-preparation...it's been sliced and diced, seasoned, marinated, and just needs some heat to become edible.
Neither mRAW nor sRAW actually contains original sensel data off of the sensor. Each value encoded in the output image, be it luminance or chrominance data, has been processed. The author of the article I linked, in his attempt to be accurate, prefers not to call the Y channel an actual "luminance" channel, as it's based on linear rather than gamma-corrected data. So he calls it luma, which has long been a shorthand way of referring to a linear luminance component. Similarly, in the same attempt to be accurate, he calls the Cr and Cb channels "chroma" rather than chrominance, for the same reason. (In the world of CIE and standards-based color and transforms, luminance and chrominance must be appropriately weighted components to be real or "true.")
To generate a single Y (luma) value, three sensels (red, green, and blue) must be read and weighted:
y = (0.296 * r) + (0.592 * g) + (0.114 * b)
No sensels are skipped in the production of y values, so we're not losing any information there. Similarly, to produce Cr and Cb (chroma) values, every other (every even) sensel in a row is read and processed against the corresponding luma value:
cr = r - y
cb = b - y
Green pixels are not read into their own distinct color channel, as green is a byproduct of combining y (luma, which is based on red, blue, AND green sensels) with Cr and Cb (which are the differences between a red sensel and the luma value, and a blue sensel and the luma value). Each y, cb, and cr triple is then stored with 14-bit precision in the actual image file (which is still ultimately a .CR2 container file; it just contains something entirely different than a normal full RAW .CR2). The final output data in an m/sRAW file has very little to do with a two-dimensional matrix of R/G/B/G sensels on a CMOS die the way a true RAW file does. It is a transformation of RAW data into something entirely different, more reminiscent of the pre-digital analog TV signals sent over airwaves and cable into people's homes.
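The green-is-a-byproduct point can be checked numerically. Here is a small sketch using the weights quoted in this post (note these are close to, but not exactly, the standard BT.601 coefficients of 0.299/0.587/0.114):

```python
# Luma weights as quoted in this post (the article's values are assumed here;
# standard BT.601 luma uses 0.299/0.587/0.114).
KR, KG, KB = 0.296, 0.592, 0.114

def rgb_to_ycc(r, g, b):
    y = KR * r + KG * g + KB * b
    return y, r - y, b - y           # (y, cr, cb)

def ycc_to_rgb(y, cr, cb):
    r = y + cr                       # because cr = r - y
    b = y + cb                       # because cb = b - y
    g = (y - KR * r - KB * b) / KG   # green falls out of the luma equation
    return r, g, b

# Round-trip a sample 14-bit pixel; recovers (approximately) the
# original (5000, 9000, 3000) up to floating-point rounding.
r, g, b = ycc_to_rgb(*rgb_to_ycc(5000, 9000, 3000))
```

Inverting the transform does recover green, which is exactly why the encoder doesn't need to store it, but only after the weighted mixing has already been baked in.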
It doesn't really matter if you think the processing described above is "minimal" or not...it's still a transformation, and a fairly radical one. You don't have the original untainted source information to regenerate luminance and chrominance with appropriate gamma weighting for the particular device you actually work your images on (which may be 2.2, or possibly 1.8). Luminance information is ALREADY weighted...it's got a 0.296 weight for the red channel, a 0.592 weight for the green channel, and a 0.114 weight for the blue channel. That's baked. It's in, it's done. Burned off the sensor, and now it sits between you and the actual RAW data that would have given you a richer editing experience. The blue and red channels are also already baked, since they are the difference between a red or blue sensel read and the corresponding y value for those same pixels...which was weighted.
The entire point of RAW is to get the data from the camera to the computer BEFORE such processing occurs...before ANY processing of ANY KIND occurs...since it's effectively THAT processing that a RAW editor like ACR, Lightroom, Aperture, or one of the open source tools does. And, to be blunt, those tools do a far better job, with more advanced algorithms (requiring more horsepower than a camera's image processor has) that produce better images. Having shot mRAW for about two weeks solid after I first got my 7D, I was rather dismayed to see a variety of "demosaicing" artifacts (or YCC encoding artifacts, for those who want to be more accurate) baked into my images: funky color fringing that had nothing to do with chromatic aberration, and odd aliasing along round edges that doesn't occur with more advanced demosaicing algorithms like AHD. This is just a warning.
I'm trying to be honest with those who expect 100% RAW capability out of mRAW or sRAW, on the assumption that you can push exposure, white balance, and noise removal around to the same degree as with an actual true RAW.
As someone who spent a fair amount of time experimenting with mRAW, I was EXTREMELY dismayed by the limitations and encoding artifacts. Both mRAW and sRAW produce far smaller files that import faster, load faster, and operate faster in LR's develop module. But there ARE tradeoffs...tradeoffs you should be aware of and take into account when choosing your image mode. Neither mRAW nor sRAW is a true raw format. They are effectively super-JPEG: 14 bits with lossless compression. The deeper bit depth gives you about the same post-process editing latitude you might get from a 16-bit TIFF image.