John Paul Wohlscheid over at It's FOSS takes a look at the ZFS file system and its capabilities. He mainly covers OpenZFS, the fork created after Oracle bought Sun Microsystems and halted open development of Solaris, ZFS's original home. It features pooled storage with RAID-like capabilities, copy-on-write with snapshots, data integrity verification and automatic repair, and it can handle files up to 16 exabytes in size, with file systems of up to 256 quadrillion zettabytes, should you have enough electricity to pull that off. Because ZFS was developed under the CDDL, a license deliberately incompatible with the GPL, it cannot be integrated directly into the Linux kernel. However, several distros work around that and provide packages for it. It has been part of FreeBSD since 2008.
(Score: 2) by mechanicjay on Wednesday September 12 2018, @04:43AM
Where I work now, we use ZFS extensively for central file servers as well as data volumes on any server we care about.
Single-file restores are easy as pie with the snapshots. When replacing hardware, using zfs receive to restore the entire datastore back into place is a thing of magic.
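That restore workflow can be sketched with standard OpenZFS send/receive commands; the pool name, dataset name, and hostname below are hypothetical, and both machines are assumed to already have a pool called "tank":

```shell
# Snapshot the dataset on the machine being retired
zfs snapshot tank/data@migrate

# Stream the full snapshot to the replacement host over SSH
# and write it into place there (-F rolls back/overwrites as needed)
zfs send tank/data@migrate | ssh newhost zfs receive -F tank/data
```

Because snapshots are atomic and checksummed end to end, the received copy is a bit-for-bit match of the source at snapshot time.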
I recently put together a series of shell scripts to do a near-real-time sync of data across machines. I wanted to eliminate single points of failure in a load-balanced server environment, so I'm taking a snapshot on a "master" node once a minute and sending it out to the secondary nodes. Each system then gets to be fully independent. It's a thing of beauty. It beats the pants off hokey solutions like glusterfs and is faster than calling back to a central NFS store.
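A minimal sketch of that kind of per-minute incremental replication, run from cron on the master node. The dataset and host names are made up, and it assumes passwordless SSH plus an initial full zfs send already received on each secondary:

```shell
#!/bin/sh
# Hypothetical dataset and secondary hosts
DATASET=tank/web
SECONDARIES="node2 node3"

# Find the most recent existing snapshot to use as the incremental base
PREV=$(zfs list -t snapshot -o name -s creation -H "$DATASET" | tail -1)

# Take a new timestamped snapshot
NOW="$DATASET@sync-$(date +%Y%m%d%H%M%S)"
zfs snapshot "$NOW"

for host in $SECONDARIES; do
    # Incremental send: only blocks changed since the previous snapshot
    # are streamed; -F on the receiver rolls it forward to match
    zfs send -i "$PREV" "$NOW" | ssh "$host" zfs receive -F "$DATASET"
done
```

Old snapshots would need periodic pruning on both ends (e.g. with zfs destroy), which is omitted here for brevity.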
Basically I've become a huge fan and kind of agree that it's the only file system that matters if you care about your data, because it's robust and easy to use.
My VMS box beat up your Windows box.