jrista said:
You're comparing general-purpose processors to special-purpose DSPs designed to handle, in hardware, the specific processing needs of a specific camera, or a small set of cameras. The two, a general-purpose ARM and a DIGIC DSP, are NOT directly comparable. The clock rate of a DIGIC may be "abysmally slow", however its IPC is extremely high compared to the ARM's.
I'm aware of the difference between normal CPUs and dedicated DSPs. I'm also aware that modern 64-bit ARM CPUs have vector engines and graphics cores that are fast enough to almost certainly make that DSP hardware completely unnecessary. The reason you use DSPs is that the CPUs can't handle the processing. The CPU in an iPhone 5S is faster than a single-processor 2 GHz G5 Mac from just a few years ago, and Canon RAW image rendering seems to be pretty close to instant on an iPhone 5 using just the CPU, as far as I can tell from my Safari experiments. So I would expect that a CPU comparable to the one in the 5S ought to be able to handle a DSLR's image processing without breaking a sweat. Granted, converting to JPEG takes extra work, but not that much extra work.
You should be able to get by with a small amount of dedicated hardware to control the ADC sweep across the CMOS part and shove the data into a small chunk of dual-port RAM so the CPU can then copy it into normal RAM using NEON instructions. Mind you, I could be wrong—I'd have to actually write the code before I could say with absolute certainty—but I'm pretty sure we're either past the point where those DSPs are unnecessary or at least rapidly approaching it.
BTW, I'm not sure what you mean by "IPC". To me, that means interprocessor communication, which isn't relevant here. Do you mean IOPS?
jrista said:
Regarding memory, I wouldn't say that it's just the memory that consumes power...because the IPC of the DIGIC chips is high, they ARE doing a LOT of work, regardless of the clock rate. Despite that, the primary power consumer is unlikely to be either the memory or the DSP. Moving physical components requires more power...flapping a mirror @ 12fps, moving large focus groups in lenses, those are going to consume more power. If you're shooting action, those things are going to consume a lot more power. With tiny transistors these days, it's easy to build low-power electronics...but the force required to move a physical object will always be the same.
35mm cameras used to run for months on a tiny button cell. So I'd expect that, compared with the CPU, LCD panel, and RAM, the mechanical bits should pretty much be lost in the noise, power-wise (though that may not be true with mirror lock-up—not sure). Then again, people took fewer shots in those days, so maybe that's not a fair comparison.
jrista said:
I don't think "throwing an ARM at the problem" is a solution. The IPC of an ARM is low; they are GENERAL purpose processors, so they will require far more cycles to perform the kind of image processing necessary to handle the information coming off the sensor. A specially-designed DSP that has the necessary logic built into the hardware will perform image processing a lot faster for less power, as it's a SPECIAL purpose device. That's why we have GPUs in our computers...they are specially designed to tackle the problem of pixel processing more efficiently than a CPU ever could.
It doesn't really matter how many cycles the processing takes. What matters is the clock time and, to a lesser extent, the power consumption. If the general-purpose CPUs can handle the processing in the required time, it makes a lot more sense to use those rather than custom DSP hardware, because in the downtime between photos, you can repurpose that extra CPU power for other useful tasks, unlike DSP hardware, which is pretty much a one-trick pony.