mistaspeedy said:
I also don't think the processor makes any difference to the RAW images at all. The processor and JPG processing algorithms do make a difference to the quality of JPG images, but those are not what is being compared in the video. (RAW images are being tested in the video.)
'Raw' is processed too.
Basically, all the sensor does is collect the light; it's the processor which interprets it.
Yes, this applies to 'raw'. No, it's not just for .jpgs. There is a huge, huge difference between the processor within a camera and a 'processor' as in software converting to a different format. They may frequently be called the same thing, but they are entirely different.
'Raw' files are not actually truly 'raw' data. To create the 'raw' image file the processor needs to interpret the light collected by the sensor.
The way I've found people easily understand this in the past is to liken it to food production, so bear with me a second.
Think about basic food ingredients such as flour or sugar, which you might use to make a cake. When you buy them they seem to be in a pretty basic form—'raw', you could say—but you know that before you were able to buy that flour and sugar it first had to go through many stages of production. It had to grow, be harvested, cleaned up, processed and packaged up. Only after the ingredients have already gone through all that can you buy them and then turn them into a cake.
You can think of the camera's processor—remember, that's the physical processor driving everything, not the software—as being that intermediate stage between food first being grown and you getting to cook with it. When you open your 'raw' image and start to 'process' it, that's the equivalent of you starting to bake that cake; the base ingredients you're working with were already harvested (sensor) and cleaned and packaged up (processor).
Another way to think of it is like a solar panel. The panel—that's the camera sensor—can collect the light, but it requires much more—the processor—to actually turn that sunlight into energy, which then can be used to power whatever you want. (That last bit is your post-processing.)
So, what is the camera's processor responsible for? It takes the light readings the sensor reports and puts them all in order. Noise is mostly down to the processor as it tries to interpret variations in the under-stimulated sensor. The processor decides what data can be safely discarded (yes, even with lossless 'raw', some data is lost) when multiple neighbouring pixels report exactly the same information. (This is the basis of what we commonly refer to as dynamic range.) When you see colour banding or specific noise patterns, that's usually down to the processor.
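To make the "interpretation" point concrete, here's a toy sketch in Python. It is nothing like any real camera's pipeline—the numbers and function are made up for illustration—but it shows the core idea: behind an RGGB Bayer filter, each photosite records only one brightness value, and full-colour pixels only exist after something decides how to combine neighbours.

```python
# Hypothetical 4x4 sensor readout: ONE number per photosite.
# The filter layout alternates rows of R G R G and G B G B.
readout = [
    [10, 20, 12, 22],   # R G R G
    [30, 40, 32, 42],   # G B G B
    [11, 21, 13, 23],   # R G R G
    [31, 41, 33, 43],   # G B G B
]

def demosaic_2x2(readout):
    """Naively merge each RGGB quad into one RGB pixel, averaging
    the two green sites -- just one of many possible ways to
    'interpret' the same raw numbers."""
    rgb = []
    for y in range(0, len(readout), 2):
        row = []
        for x in range(0, len(readout[0]), 2):
            r = readout[y][x]
            g = (readout[y][x + 1] + readout[y + 1][x]) / 2
            b = readout[y + 1][x + 1]
            row.append((r, g, b))
        rgb.append(row)
    return rgb

print(demosaic_2x2(readout))
```

Swap in a different merging rule and you get a different-looking image from identical sensor data—which is exactly the kind of decision that is already baked in by the time you open a 'raw' file.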
If you look at Fuji cameras, you can see how important the processor is compared to the sensor. Their 'A' cameras use a Bayer sensor while their other 'X' cameras use a unique 'X-Trans' array. Despite the sensor's pixels being in a different order, though, all their cameras end up with the same look to the raw files because their processors are the same. A lot of people think DxO don't measure Fuji cameras because of the X-Trans array, but DxO can't measure the X-A Bayer camera either... because it's really the Fuji processor that is getting in the way.
This is also why Nikon and Sony cameras have varied in image quality despite using the same sensors. They use the same sensor, but not the same processor. As a result, the basic interpretation of colour, contrast, brightness, and noise is different.
Canon users of a decade or so ago, especially portrait photographers, will remember how big a deal it was when DIGIC II came along, and then again when DIGIC II was replaced, and how colours and contrast in Canon cameras changed so much in that period even though the sensors themselves mostly hadn't changed (especially the APS-C sensors). There are still some people who swear by DIGIC II cameras (and the first generation of Fuji processors, for that matter) as producing the best files for portraiture.
This is all done to the 'raw'. You are never getting truly untouched 'raw' data. If you did, you wouldn't be able to open it. Every 'raw' file, from every manufacturer, has to have been processed by the camera's processor—again, this is the physical chip we're talking about, not the software .jpg conversion—before you can open it.
And before anybody asks "well, why don't they give me the option to have the file unprocessed, and why doesn't someone develop a way to open that truly raw data?", the answer is quite simple: imagine buying a lens which never allowed you to focus it, ever. Yeah, now you see why you don't want truly raw data.
Final word on the subject: it would really help ease confusion if people got used to being more specific with their terminology. When you're talking about editing a raw file, be sure to call it "post-processing", not just "processing". Don't call Photoshop a "processor", for example; "raw processor" or "post-processor" are fine.
There can be up to five different stages of 'processing' from the time the shutter is pressed to the time the file is finished, and you typically use at least two physical processors in that process—so you can see how the process of using the correct names for each processor used in the process will make the process of discussing processes and processors less confusing.
Yeah, try and get your heads around that one.
Mikehit said:
You are splitting hairs - the question is whether the D850 as a camera is better than the 5DIV as a camera. People use 'sensor' as shorthand because it is the easiest thing to talk about, even though the sensor only provides the raw material. Yes, a manufacturer could screw up a great sensor with a poor processor, but what is really important is whether you can get good photos from it.
When you want to understand what these parts can do, so you can start to anticipate what future cameras may be like, it's very important to make the distinction.
E.g.
The 7D3 may well use the same processor—in fact it will likely have two of them—but a different sensor. Knowing which part is responsible for which aspects of the 5D4's image quality can help us estimate the nature of the 7D3's image quality, which helps inform early adopters. Conversely, if you ascribe everything simply to the sensor, you've learnt nothing about the next camera.