Are there any software development types here who can give the masses an idea of how different the code would have to be to handle the 4K video stream? The general thought seems to be that it's easy/free to 'turn on' this feature, but I'm guessing there is more to it than that.
It may in fact involve doing _less_ rather than more in the pipeline. If they use the native resolution of the sensor for their 4K (which they do; the 4K frame is simply a crop at native resolution), you don't need a downsampling step at all. The rest of the processing can be the same pipeline as for stills, as long as the processors can handle the throughput without power or heat problems. The codec at the end has more data to chew through, but it's an industry-standard codec and that's what they're built for. Either way, the 1DC proves the stock 1DX hardware is capable of the feat.
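To make that concrete, here's a minimal C sketch contrasting the two readout paths. This is not Canon's actual code; the sensor dimensions and the 2x2-average downsample are my assumptions for illustration. The point is that the 4K crop is just a per-row copy, while scaling the full sensor to 1080p touches four input pixels (plus arithmetic) for every output pixel:

[code]
/* Illustrative sketch only -- frame sizes and the 2x2 average
 * are assumptions, not Canon's actual pipeline. */
#include <stdint.h>
#include <string.h>

#define SENSOR_W 5184   /* assumed full sensor width  */
#define SENSOR_H 3456   /* assumed full sensor height */
#define CROP_W   4096   /* 4K crop: 1:1 pixels, no resampling */
#define CROP_H   2160

/* 4K path: copy a native-resolution window -- one memcpy per row. */
void crop_4k(const uint16_t *sensor, uint16_t *out, int x0, int y0)
{
    for (int y = 0; y < CROP_H; y++)
        memcpy(out + y * CROP_W,
               sensor + (y0 + y) * SENSOR_W + x0,
               CROP_W * sizeof(uint16_t));
}

/* 1080p path: read the whole frame and average 2x2 blocks --
 * four reads plus arithmetic per output pixel. */
void downsample_2x2(const uint16_t *sensor, uint16_t *out,
                    int out_w, int out_h)
{
    for (int y = 0; y < out_h; y++)
        for (int x = 0; x < out_w; x++) {
            const uint16_t *p = sensor + (2 * y) * SENSOR_W + 2 * x;
            out[y * out_w + x] =
                (p[0] + p[1] + p[SENSOR_W] + p[SENSOR_W + 1]) / 4;
        }
}
[/code]

So per frame, the 4K crop actually does less per-pixel work than the 1080p scale-down; the extra cost is purely in moving and encoding more pixels downstream.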
If that is the case, then the 1DX is quite literally a _crippled_ version of the 1DC: resolution-destroying code is inserted into the firmware of the 1DX that is left out of the 1DC. That might not be terribly hard to hack; I can think of two approaches offhand.
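Purely as an illustration of what such a gate could look like (everything below is invented -- the body IDs, names, and logic are hypothetical, not reverse-engineered from any Canon firmware), the crippling could be as small as one comparison:

[code]
/* Hypothetical firmware fragment -- made-up identifiers, for
 * illustration only. Shows how a single conditional could gate
 * 4K on otherwise identical hardware. */
#include <stdbool.h>

enum body_id { BODY_1DX = 0x327, BODY_1DC = 0x328 };  /* invented IDs */

struct video_mode { int width, height, fps; };

static const struct video_mode MODE_4K   = { 4096, 2160, 24 };
static const struct video_mode MODE_1080 = { 1920, 1080, 30 };

/* A hack could, for example, patch this compare to always pass,
 * or spoof the body ID the check reads. */
const struct video_mode *select_mode(enum body_id body, bool want_4k)
{
    if (want_4k && body == BODY_1DC)  /* the "crippling" is one compare */
        return &MODE_4K;
    return &MODE_1080;
}
[/code]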