
Best Methods For Long Term File Storage ??


And-Rew:

--- Quote from: Mt Spokane Photography on November 15, 2012, 12:47:36 PM ---
--- Quote from: dirtcastle on November 14, 2012, 08:23:09 PM ---@ MSP
I'm hoping that someone steps up and creates a storage medium that is reliable. It's certainly possible, but only available to the technically astute, and who knows if anyone could read the media in 50 years. It's not happening because no one sees a market, or maybe there is no good known technical solution (I doubt that).

--- End quote ---

Never gonna happen these days, when it's all about making money.

Why would someone develop a storage medium that lasts for decades? They'd be out of business after a few years, once everyone had bought one of these long-term storage devices.

It would be like buying a car that doesn't rust and comfortably does 200mpg around town and 300mpg on a run. Not gonna happen any time soon...
--- End quote ---

Ellen Schmidtee:

--- Quote from: tron on November 15, 2012, 10:26:56 AM ---
--- Quote from: Ellen Schmidtee on November 15, 2012, 10:18:48 AM ---I rent a locker in my gym, and keep a hard disk there. I go there often enough to rotate it.

--- End quote ---
That way you keep fit too ;D

--- End quote ---

I rented a locker because the monthly fee for the gym + locker is a lot cheaper than a bank safe deposit box, and the gym has longer opening hours than a bank, e.g. the bank is closed on weekends.

Then the gym owner told me the lockers are for people who actually exercise, so I started doing that as well. Life isn't easy...

thedge:
And to answer the OP's question...

To me, as an IT guy with data-backup OCD, the current best solution for longer-term file storage is ZFS. It's the only solution I know of that is reliable and has integrated file checksumming to catch bit rot, corruption, memory errors, etc. Yes, you can run third-party programs to do checksumming on any file system, but ZFS does it on every file access, in addition to a full scan whenever you want.
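To show what that DIY route looks like, here's a minimal Python sketch of third-party checksumming on an ordinary filesystem (the /photos path and the manifest filename are placeholders, not a recommendation). It records a SHA-256 hash for every file and re-hashes them later to flag anything that has silently changed; ZFS effectively does this bookkeeping for you on every read, plus on demand via a scrub.

--- Code: ---
#!/usr/bin/env python3
"""Minimal sketch: build and verify a SHA-256 manifest to spot bit rot on any
filesystem. The photo directory and manifest filename are placeholders."""
import hashlib
import json
from pathlib import Path

PHOTO_DIR = Path("/photos")        # assumed location of the files to protect
MANIFEST = Path("checksums.json")  # assumed manifest filename

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MiB chunks so big RAW files don't get loaded into RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest() -> None:
    """Record a checksum for every file under PHOTO_DIR."""
    sums = {str(p): sha256_of(p) for p in PHOTO_DIR.rglob("*") if p.is_file()}
    MANIFEST.write_text(json.dumps(sums, indent=2))

def verify_manifest() -> None:
    """Re-hash every recorded file and report anything changed or missing."""
    sums = json.loads(MANIFEST.read_text())
    for name, old_hash in sums.items():
        p = Path(name)
        if not p.exists():
            print(f"MISSING  {name}")
        elif sha256_of(p) != old_hash:
            print(f"CORRUPT  {name}")

if __name__ == "__main__":
    build_manifest()   # run once after copying the files
    verify_manifest()  # run periodically, e.g. from cron
--- End code ---

The point isn't the script itself; it's that ZFS gives you the same protection with zero bookkeeping, and when there's redundancy in the pool it can also repair the bad copy instead of just reporting it.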

Significant benefits of ZFS:
-File checksumming to check for corruption
-Scrubbing your drives: a complete read of every single file, checking each against its checksum for corruption (see the small scripting sketch after this list)
-Hardware agnostic and "port agnostic": a pool of ZFS drives can be moved to any other computer that can read ZFS, plugged in and read, without worrying about the order of the drives in the SATA ports (as with most hardware RAID solutions) or cross-vendor compatibility (e.g. an Areca RAID array can't be read on an LSI card). With ZFS you only worry about the pool version, which is easy to manage (backwards compatible, and the pool version can be upgraded)
-Immediate detection of failing hard drives, cables, interfaces, etc; in my experience it flags a failing drive far earlier than most other solutions
-Resilvering a pool (aka rebuilding an array) only touches the data actually stored, not the whole capacity, so a ZFS pool with 20TB capacity but only 2TB of data only has to rebuild that 2TB when a drive fails, whereas hardware RAID rebuilds the full 20TB, including 18TB of nothing
-Copy-on-write, which (among many other awesome things) enables snapshots of the data, fantastic for backups (oops, deleted that file, but it can be copied back out of the snapshot that was automatically taken 15 minutes ago, or the one from an hour ago, two hours ago, a day ago, a week ago, a month ago, or a year ago)
-The filesystem is always consistent: there is no filesystem corruption if the power cord is yanked out or some other sudden failure occurs
-ARC and L2ARC (Adaptive Replacement Cache and its second-level counterpart on SSD) caching, which means a file read from ZFS gets kept in the server's RAM (ARC) or on a designated SSD (L2ARC) for faster access in the future

I could go on, but those are the juiciest ones.
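For a concrete feel of how the scrubbing and snapshot points above play out, here's a minimal Python sketch that just shells out to the standard zpool/zfs tools. The pool name "tank", the dataset "tank/photos", and the retention count are assumptions for the example, not a prescription:

--- Code: ---
#!/usr/bin/env python3
"""Minimal sketch: timestamped snapshots plus a scrub, using the stock
zpool/zfs command-line tools. Pool and dataset names are assumed."""
import subprocess
from datetime import datetime

POOL = "tank"            # assumed pool name
DATASET = "tank/photos"  # assumed dataset name
KEEP = 30                # number of automatic snapshots to keep

def run(*args: str) -> str:
    """Run a zfs/zpool command and return its stdout."""
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

def take_snapshot() -> None:
    """Create a timestamped snapshot, e.g. tank/photos@auto-20121115-1200."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M")
    run("zfs", "snapshot", f"{DATASET}@auto-{stamp}")

def prune_snapshots() -> None:
    """Destroy the oldest automatic snapshots beyond the KEEP limit."""
    names = run("zfs", "list", "-t", "snapshot", "-o", "name", "-H").splitlines()
    auto = sorted(n for n in names if n.startswith(f"{DATASET}@auto-"))
    for old in auto[:-KEEP]:
        run("zfs", "destroy", old)

def scrub_and_report() -> None:
    """Kick off a scrub and print the pool's short health summary."""
    run("zpool", "scrub", POOL)
    print(run("zpool", "status", "-x", POOL))

if __name__ == "__main__":
    take_snapshot()
    prune_snapshots()
    scrub_and_report()  # the scrub itself runs in the background for hours
--- End code ---

Cron something like this every 15 minutes for the snapshots and once a week for the scrub, and you get roughly the "copy it back out of the snapshot from an hour ago" behaviour described above.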

I have all my images and my Lightroom catalog stored on my NAS, which is running OpenSolaris and ZFS. Yes, there is a performance hit from having the catalog over the network, but ZFS's ARC and L2ARC mitigate it quite a bit. If I power the server off (which clears out the ARC), power it back up, and then open my catalog, there is a noticeable slowdown as it is read from the disks. After that it gets faster, because the parts of the catalog that are read regularly are sitting in the server's RAM. Same with previews: it's slow the first time a folder's previews are loaded, then fast after that, even after Lightroom is closed and reopened, since they are still in the ARC.

ZFS is not perfect: it does need some computer knowledge to make use of it. Its other downside is that it is not well suited to running on crappy old hardware. It needs a 64-bit CPU and 4GB of RAM to start, and more RAM is better. It won't work well on that old Celeron in the basement, for example. But barring that, it's pretty fantastic.

For the curious, the specs on my ZFS NAS are:
Intel Core i3-2100 3.1GHz
16GB ECC RAM
Supermicro X9SCL motherboard
Three M1015 cards, each with 8 SATA ports (yes, 24 ports total)
2x 16GB SSDs as mirrored boot drives (aka, RAID 1)
4x 300GB Raptor 10,000RPM drives in striped mirror (aka, RAID 10)
2x 50GB SSD as write cache drives
1x 120GB SSD as L2ARC read cache
8x 500GB Seagates in RAIDZ2 (aka, RAID 6)
8x 1TB WD Scorpios in RAIDZ2 (aka, RAID 6)

Yes, a lot of disks. The four 300GB Raptors and two 50GB SSDs are storage for my VMware server. The eight 500GB drives store my photos, Lightroom catalog, previews, documents, email, backups, etc. The eight 1TB Scorpios I bought as cheap refurbs to store my many TB of downloaded TV shows and such, until drive prices come down and I can upgrade to 8-10 3TB drives.
