9VIII said:
Actually there's a global shortage of Flash memory right now, and HDD's have been the same price since 2010 because everyone knows it's practically a dead technology, 10TB is probably the biggest mechanical Hard Drive that will ever be on the market (at least without using stripped recording which drastically reduces write performance).
Assuming you mean striped recording: it's factually untrue that striped volumes reduce write performance. In fact, they dramatically increase it, and almost every serious server uses striped data sets for exactly that reason. The easiest way to make either mechanical or solid-state drives faster is to buy a bunch of them and split the write work between them.
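To make the splitting concrete, here's a toy sketch of RAID 0-style striping (the names and stripe size are invented for illustration): each fixed-size stripe unit goes to the next drive in round-robin order, so with N drives each one only has to absorb roughly 1/N of the data, and they can all write in parallel.

```python
# Toy RAID 0-style striping sketch. STRIPE_UNIT is absurdly small
# (real arrays use e.g. 64KB-512KB chunks) so the example is readable.
STRIPE_UNIT = 4  # bytes per stripe unit

def stripe_write(data: bytes, num_drives: int):
    """Split `data` into per-drive buffers, round-robin by stripe unit."""
    drives = [bytearray() for _ in range(num_drives)]
    for i in range(0, len(data), STRIPE_UNIT):
        chunk = data[i:i + STRIPE_UNIT]
        drives[(i // STRIPE_UNIT) % num_drives].extend(chunk)
    return [bytes(d) for d in drives]

# 16 bytes across 2 drives: each drive ends up writing only half the data.
print(stripe_write(b"AAAABBBBCCCCDDDD", 2))  # [b'AAAACCCC', b'BBBBDDDD']
```

Since the per-drive writes happen concurrently on real hardware, aggregate write throughput scales roughly with the number of drives (minus controller overhead).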
There are massively expensive RAID setups that make 1000mm primes look like rounding errors, and buildings full of them that cost tens or hundreds of millions of dollars to build -- and a lot of them use mechanical storage for a lot of storage tasks, Microsoft's Azure and Amazon's AWS datacenters among them. Some of us also choose mechanical storage in the cloud for non-performance reasons, like better geo-replication options.
In addition, many people who run on-premises production servers choose mechanical over solid-state drives. They're superior in two ways. First, when they die, they tend to die slowly, so you can typically recover the data easily, whereas SSDs die in an epic and sudden way that's difficult, expensive, or impossible to recover data from. Second, unless you get extremely expensive SSDs, very expensive RAID cards, or both, you can't TRIM them while they're in a striped set -- which makes all those cheap retail drives just short of useless.
And finally, the benefits are limited for a lot of server workloads. For email or database servers, for example, nearly everything you access regularly gets cached anyway, so those blazingly fast random read times are still there, but users barely notice them in practice, because the data they want is usually sitting in much faster RAM.
A strategy is often to combine both, and put most of your storage on mechanical storage arrays, but use solid state as a temporary/scratch drive, or for data that you can predict will require extremely random reads that don't cache well.
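That tiering strategy can be sketched as a simple placement rule (the tier names and criteria here are invented, not any particular product's logic): bulk data defaults to the mechanical array, and only data that is both random-read-heavy and a poor fit for caching gets promoted to the SSD tier.

```python
# Hedged sketch of a tiered-placement rule, assuming just two tiers.
def pick_tier(random_read_heavy: bool, caches_well: bool) -> str:
    """Route data that caching can't help onto SSD; everything else to HDD."""
    if random_read_heavy and not caches_well:
        return "ssd-scratch"
    return "hdd-array"

print(pick_tier(False, True))   # hdd-array  (bulk, sequential, cacheable)
print(pick_tier(True, False))   # ssd-scratch (random reads, cache-hostile)
```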
By the way, mechanical drives have improved dramatically since 2010. You can get excellent 2.5" models now, which are king in the server world. The reason price per TB hasn't fallen on mechanical drives has less to do with technology than with market forces: there just isn't enough demand for those 12TB drives to bring prices down. Giant drives also introduce other problems, like how long they take to back up, and how hard you cry if you lose everything.
There have also been massive improvements in SSD technology since 2010, much of it benefiting retail PCs in the tablet and laptop segments. M.2 SSDs are a big step up in both performance and form factor for drives that don't need to be swapped often, controllers have gotten a lot better, and MLC durability has improved a lot. Basically, mid-range drives today are great, whereas a decade ago they were slightly lacking.
Not that any of this has much to do with the 7D3.