dilbert said:
Jordan23 said:
dilbert said:
The Samsung is also saving 12-bit files during continuous shooting vs. the full 14 bits in single-shot mode.
Looks like the Canon 7D2 is just over half the speed of the Samsung NX1 on that basis, which is an improvement over the plain 420MP/sec vs. 200MP/sec comparison, where the Canon camera is under half the speed of the Samsung NX1.
Not quite; you have to take the actual file sizes into account.
File sizes are just what's output to the storage card. That's the final output stage of the pipeline.
Internally, the camera needs to process the entire output of the sensor, so the Samsung needs to digest 5Gb/s (or 420MP/sec) of data even if it writes out less.
Generally speaking that (the bolded part) is correct; however, from what I gather about Samsung's NX1 design, they actually switch the readout to 12-bit conversion at the ADC. They do that SO THAT they can achieve 15fps.
First, modern sensors always have a border of masked and inactive pixels. The total pixel count that is read off the sensor is 31 million. So, assuming 31MP at 14 bits and 15fps:
((31,000,000 * 14) / 8) * 15 = 813,750,000 bytes/sec
To read out 31 megapixels fifteen times per second, they would actually need a data throughput of 814MB/s minimum. There is additional overhead for metadata and whatnot, so I'd say a safe throughput is 820MB/s. However, if we assume that the data coming off the ADC is actually 12 bits:
((31,000,000 * 12) / 8) * 15 = 697,500,000 bytes/sec
The Samsung needs about 700MB/s throughput to achieve 15fps at 12-bit readout. I don't really know if it's that much harder to make a DSP that can process 820MB/s vs. 700MB/s, but either way, the processor is faster than the DIGIC 5+, which I calculated as having 250MB/s data throughput per chip (500MB/s total). That 500MB/s is quite a bit more than the 7D II even needs in order to read images out at 10fps:
((21,000,000 * 14) / 8) * 10 = 367,500,000 bytes/sec
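If you want to check my arithmetic, here's a quick Python sketch of the same three calculations; the pixel counts, bit depths, and frame rates are just the figures used above:

```python
def readout_throughput(pixels, bit_depth, fps):
    """Bytes per second needed to read the full sensor at a given bit depth and frame rate."""
    return pixels * bit_depth / 8 * fps

print(readout_throughput(31_000_000, 14, 15))  # 813,750,000 -> ~814MB/s (NX1, 14-bit)
print(readout_throughput(31_000_000, 12, 15))  # 697,500,000 -> ~700MB/s (NX1, 12-bit)
print(readout_throughput(21_000_000, 14, 10))  # 367,500,000 -> ~368MB/s (7D II, 14-bit)
```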
So the 7D II's dual DIGIC 6 chips don't even need to work as hard (assuming they aren't doing extra noise reduction work and whatnot, which we know they are) as the dual DIGIC 5+ in the 1D X. If we just assume that the total required throughput of the 7D II is 400MB/s with overhead, each DIGIC 6 is only processing 200MB/s, which is less than a DIGIC 5+. I don't know how much additional processing power the DIGIC 6 actually has, but let's assume it's double. That would mean a pair of DIGIC 6's process at 800MB/s, which is still short of the 820MB/s necessary to read out 15 frames per second in full 14-bit quality. (That's probably not even accurate, since this is an image processing pipeline: the input rate from the ADCs would still probably be at most 200MB/s per DIGIC 6; the chip just does more work "per cycle," for lack of a better term, as information moves through the various stages of the pipeline.)
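To put that guesswork in one place, here's the same back-of-the-envelope math as a sketch; none of these processor figures are Canon specs, they're my assumptions from above:

```python
digic5_plus_per_chip = 250e6               # bytes/sec, my earlier DIGIC 5+ estimate
canon_7d2_need       = 400e6               # assumed 7D II total need, with overhead
digic6_input_rate    = canon_7d2_need / 2  # 200MB/s per chip when two chips split the load
digic6_assumed       = digic6_input_rate * 2  # assume each DIGIC 6 does double the work
pair_throughput      = digic6_assumed * 2     # two chips in the 7D II
print(pair_throughput / 1e6, "MB/s vs. the ~820MB/s needed for 14-bit at 15fps")
# -> 800.0 MB/s, still short
```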
So, it's not surprising to me that Samsung opted to drop the bit depth at the high frame rate. It allowed them to get away with a DSP (I don't know if the NX1 uses two) that operates at a lower overall throughput, which probably saved on cost. If Samsung assumed that most 15fps shooting would be done at ISO 400 and up, then the use of 12 bits represents no loss in dynamic range, tonal levels, or color fidelity, so the savings were probably worth it, even if it costs them a few sales because some photographers need 15fps with full 14-bit data at ISO 100 or 200.
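To illustrate that last point, here's a toy calculation. The 13-stop base-ISO dynamic range below is an assumed figure for illustration, not a measured NX1 spec, and DR falling about one stop per ISO doubling is just the usual rule of thumb:

```python
import math

base_dr_stops = 13.0  # assumed engineering DR at ISO 100 (illustrative, not measured)
for iso in (100, 200, 400, 800):
    dr = base_dr_stops - math.log2(iso / 100)  # ~1 stop lost per ISO doubling
    bits_needed = math.ceil(dr)  # whole stops the ADC has to encode
    print(f"ISO {iso}: ~{dr:.0f} stops of DR -> a {bits_needed}-bit ADC suffices")
# By ISO 400 the required depth falls to 11 bits or less, so 12-bit readout
# loses nothing; at ISO 100 it would clip about a stop.
```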