I have never bought drives with consecutive serial numbers; I do check that they are from different batches (they often have different firmware versions as well), but I do generally buy them from the same manufacturer. My drives are powered up 24/7, on a UPS and protected from spikes and brownouts. The RAID 5 arrays can withstand 2 drives failing and keep on going. My current set of six 2TB drives is reaching 5 years old with no failures. I'll likely be replacing them with either 6TB drives or SSDs. The NAS before this one lived 5 years before the power supply died, but the disks were not affected. Even before that, I had a 4-drive RAID array of 150GB drives with no failures; before that, 4 x 72GB drives; before that, 4 x 36GB; and before that, a 4-drive RAID array of 13GB disks (1990's), all with no failures. All of my CD's made during the late 80's to late 90's rotted away, but I still have that old data on my hard drives.
I think Mt. Spokane is pretty sharp and has some great skill and experience with storage arrays. In short, he gets it, he knows what he's doing, and he is covering all the bases. I also think that he is incredibly lucky. RAID 5 has diminished in popularity because it is so fragile and tolerates only ONE drive failure in the array. RAID 6 (among several other types) is becoming a more common parity array type because it can tolerate 2 simultaneous drive failures in a given array. RAID 5 performance is fair at best; RAID 6 performance is poor. Rebuilds after a drive failure in either array can take days, weeks, or even months. For this reason, RAID 1+0 (10) is also popular when performance is more important. I won't go into the endless details of RAID here, but I will say that unless you are ready to face a steep learning curve and a lot of stress and expense, don't implement a RAID array for yourself any more complicated than a RAID 1 mirror. It's just not worth it, IMHO. Not only do you need to understand the technology, you need to understand the hardware and how to operate it. And once you venture past a simple RAID 1 mirror array, the hardware is critical for performance, acceptable reliability, and even the possibility of recovery.
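To make the "only ONE drive failure" point concrete, here is a toy Python sketch of how RAID 5-style single parity works (this is an illustration of the XOR principle, not any real controller's implementation): the parity block is the XOR of the data blocks, so any one lost block can be rebuilt from the survivors, but two lost blocks cannot.

```python
# Toy model of RAID 5-style single parity: parity = XOR of all data blocks.
# Losing ONE block is recoverable; losing two is not (only one equation).
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three hypothetical data "drives"
parity = xor_blocks(data)            # the parity "drive"

# Simulate losing drive 1: rebuild it from the surviving drives plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]            # recovered intact
```

RAID 6 survives a second failure by storing a second, independent syndrome (Reed-Solomon style) alongside the XOR parity, which is also why its write performance is worse.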
In my experience, drives fail in an unpredictable way, and I'm amazed that Mt. Spokane has had such a long timeline with no failures. So while that is wonderful for him, I think it's a bit outside the expected norm. I also predict that once he installs 6TB drives, he will probably see his first failure(s). It seems that as drive capacity/density has increased over the last few years, so has the failure rate (or at least the likelihood of data loss). And just because the drives haven't failed doesn't mean there can't be data corruption or loss. Backups still must be maintained, and as you might expect, the more complicated the array, the easier it is for errors to creep in. Which is why you need more expensive controllers, etc., for any array more complex than RAID 1.

BUT - we are digressing. This thread is about BACKUP.
A lot of ideas have been tossed around. I stated my thoughts above, and I'll repeat that if you put your faith in writable dye-based media, you had better test it every year or two, because there is a definite history of this type of media failing after a few years. As for what kind of drive to use, I won't argue about which drives are best. All I know is that we all have a LOT of data to back up, in the TB range. So external hard drives, whatever kind you prefer, are about all that is affordable and fast enough to get the job done. And while 'The Cloud' is big, it is very slow and out of your control. (Hello, Mr. Dotcom?)
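If you do want to test your media every year or two, one simple way (a hypothetical sketch, not a specific tool recommendation) is to record checksums of everything on the backup volume once, then re-hash the media on each anniversary and compare. Anything that no longer matches has rotted or gone missing:

```python
# Sketch of a yearly backup-media check: snapshot SHA-256 checksums once,
# then re-verify the same volume later and report files that changed/rotted.
import hashlib
import pathlib

def checksum(path, chunk=1 << 20):
    """SHA-256 of a file, read in 1 MiB chunks so large files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def snapshot(root):
    """Map every file under root (relative path) to its checksum."""
    root = pathlib.Path(root)
    return {str(p.relative_to(root)): checksum(p)
            for p in root.rglob("*") if p.is_file()}

def verify(root, manifest):
    """Return the files whose checksum no longer matches the manifest."""
    current = snapshot(root)
    return sorted(name for name, digest in manifest.items()
                  if current.get(name) != digest)
```

Save the manifest somewhere other than the media being tested, or a failing disc will happily take its own evidence down with it.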