5D Mark IV will probably get compressed 4K resolution.
My point is that the transfer rate does not change. It is kind of like Ethernet: packets are sent at the same speed... it's how many packets per second that gives you the throughput. A fast memory card and a slow memory card both transfer data at the same block speed. A faster memory card is ready sooner with the next block, and that's what makes it faster.
Yes, but the transfer rate and the throughput rate are different things. If we assume that they are using block transfers, the transfer rate is 2 GB/sec. That rate remains the same no matter what the read or write speed of the card is....
For example, if the read rate of the card is 200 MB/sec, the data is transferred in pulses at 2 GB/sec... the line is only active 10 percent of the time. Get a 500 MB/sec card and the data is still transferred in pulses at 2 GB/sec, but the pulses are more frequent and the line is now active 25 percent of the time.
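To make that arithmetic concrete, here is a tiny Python sketch of the duty-cycle idea. The 2 GB/s burst rate and the two card speeds are simply the figures from the example above, not measured numbers:

```python
# Illustrative only: how long the line is actually busy when data moves in
# fixed-rate bursts. Burst rate and card speeds are the example figures above.
BURST_RATE_MB_S = 2000  # assumed block-transfer burst rate (2 GB/s)

def line_active_fraction(card_read_rate_mb_s):
    """Fraction of the time the line carries data for a card of a given speed."""
    return card_read_rate_mb_s / BURST_RATE_MB_S

for rate in (200, 500):
    print(f"{rate} MB/s card -> line active {line_active_fraction(rate):.0%} of the time")
# 200 MB/s card -> line active 10% of the time
# 500 MB/s card -> line active 25% of the time
```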
Is the transfer synchronous (8 bits per byte and no framing), where 16 Mbits/sec = 2 Mbytes per second?
Is it asynchronous (8 bits per byte plus 2 framing bits), where 16 Mbits/sec = 1.6 Mbytes per second?
Is it asynchronous block (256 bytes of 8 bits per byte plus 8 bits of framing), where 16 Mbits/sec ≈ 1.99 Mbytes per second?
Based on industry practice it appears that they are quoting raw throughput.
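If it helps, the three cases work out like this. This is just a sketch of the framing-overhead arithmetic; the 16 Mbit/s line rate, block size, and framing-bit counts are the assumptions stated in the questions, not anything from a card spec:

```python
# Payload throughput under different framing schemes, using the example figures above.
RAW_BITRATE = 16_000_000  # 16 Mbit/s raw line rate

def effective_mb_per_s(data_bits, total_bits):
    """Payload throughput in Mbytes/s given data bits per total bits on the wire."""
    return RAW_BITRATE * (data_bits / total_bits) / 8 / 1_000_000

print(effective_mb_per_s(8, 8))        # synchronous, no framing          -> 2.0
print(effective_mb_per_s(8, 10))       # async, 2 framing bits per byte   -> 1.6
print(effective_mb_per_s(2048, 2056))  # 256-byte block + 8 framing bits  -> ~1.99
```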
XQD and CFast are based on PCIe and SATA technologies respectively. As such, they are restricted by the same limitations: SATA rev 3.0 peaks at 600 MB/s and PCIe rev 3.0 peaks at 800 MB/s.
You're being a little imprecise there. PCIe peaks at 985 MB/s of bidirectional bandwidth per lane. However, PCIe allows you to aggregate (bond) up to 32 lanes. An x32 PCIe bus, therefore, maxes out at almost 16 gigabytes per second in each direction. Mind you, XQD currently provides only a single lane, but you could trivially turn it into a much faster standard just by throwing enough additional pins at the problem (four extra pins per lane, ignoring any ground pins that might be required to prevent crosstalk).
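For anyone who wants to play with the lane math, here is a rough sketch. The 985 MB/s figure is the one quoted in the post above (read here as combined both-direction bandwidth, which is how the "almost 16 GB/s" x32 number works out); none of it comes from the XQD spec itself:

```python
# Lane-bonding arithmetic from the post above; figures are the poster's, not a spec.
PER_LANE_BIDIRECTIONAL_MB_S = 985
PER_LANE_EACH_WAY_MB_S = PER_LANE_BIDIRECTIONAL_MB_S / 2

def per_direction_gb_s(lanes):
    """Aggregate bandwidth in each direction when bonding the given number of lanes."""
    return lanes * PER_LANE_EACH_WAY_MB_S / 1000

print(per_direction_gb_s(1))   # single lane, as XQD provides today: ~0.49 GB/s each way
print(per_direction_gb_s(32))  # x32: ~15.8 GB/s each way ("almost 16 GB/s")
```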
For a data card standard, unless I'm missing something, you could easily do away with all but three of the first 22 pins in the PCIe standard (the two SMBUS pins and one 3.3V rail). The next 14 would probably be required, though perhaps not all of the grounds. So you're at about 17 pins for the first lane, and possibly fewer. If you then add more lanes using the same ground-opposite-data scheme that PCIe connectors use, add 8 pins per additional lane.
So if you used the same 50-pin connector that CF cards use, for example, you ought to be able to do 4x PCIe with nine pins to spare (assuming that you either require everything to do 4x or require the mode to be negotiated over the SMBUS instead of using detect pins). If you use those nine pins as detect pins in some particularly smart way, you might even be able to achieve backwards compatibility with CF in both directions....
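Putting the pin estimates from the last two posts into one back-of-the-envelope calculation (every count here is the poster's estimate, not anything taken from the PCIe or CF specifications):

```python
# Hypothetical pin budget for a PCIe-based card in a CF-sized connector.
FIRST_LANE_PINS = 17      # SMBUS pins + 3.3V rail + the pins for the first lane
PINS_PER_EXTRA_LANE = 8   # data pins plus grounds, ground-opposite-data layout
CF_CONNECTOR_PINS = 50    # pins on the CompactFlash connector

def pins_required(lanes):
    """Estimated pins needed for a card carrying the given number of PCIe lanes."""
    return FIRST_LANE_PINS + (lanes - 1) * PINS_PER_EXTRA_LANE

lanes = 4
spare = CF_CONNECTOR_PINS - pins_required(lanes)
print(f"x{lanes}: {pins_required(lanes)} pins used, {spare} to spare")
# x4: 41 pins used, 9 to spare
```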