
posted by CoolHand on Tuesday September 11 2018, @03:29PM
from the zed-eff-ess-or-zee-eff-ess dept.

John Paul Wohlscheid over at It's FOSS takes a look at the ZFS file system and its capabilities. He mainly covers OpenZFS, the fork that continued development after Oracle bought Sun Microsystems and discontinued OpenSolaris, the original home of ZFS. ZFS offers pooled storage with RAID-like capabilities, copy-on-write with snapshots, and data integrity verification with automatic repair, and it can handle files up to 16 exabytes in size, with file systems of up to 256 quadrillion zettabytes, should you have enough electricity to pull that off. Because it started development under a license deliberately incompatible with the GPL (the CDDL), ZFS cannot be integrated directly into the Linux kernel. However, several distros work around that and provide packages for it. It has also been available in FreeBSD since 2008.
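
A minimal sketch of a few of those features in action; the pool name (tank) and device names are placeholders, not taken from the article:

    zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde  # pooled storage with single-parity redundancy
    zfs create tank/data                                          # file systems are carved out of the pool
    zfs snapshot tank/data@before-upgrade                         # copy-on-write snapshot, effectively instant
    zpool scrub tank                                              # verify checksums, repair from redundancy
    zpool status tank                                             # report scrub progress and any errors found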


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by pendorbound (2688) on Wednesday September 12 2018, @03:48PM (#733665) Homepage

    Most of the articles I've read (including that one) on growing ZFS pools are a little myopic about one really important detail. It's a valid criticism that expanding ZFS pools costs more money than LVM or similar, since you need to replace all the devices. Too often that's generalized to "you can't expand zpools," which is incorrect.

    Say you've got a 4x 1TB RAID5 array with one drive of redundancy (ZFS calls that RAIDz1). That gives you 3TB usable and can survive one drive failure. On LVM and similar non-ZFS volume managers, you could add another 1TB drive, restripe, and end up with 4TB usable and still one drive of redundancy on a 5x 1TB RAID5. ZFS won't support that. 100% correct.
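
    For concreteness, that starting point looks roughly like this with the zpool tools (pool and device names are made up for illustration):

        # Hypothetical 4x 1TB RAIDz1 pool.
        zpool create tank raidz1 sda sdb sdc sdd
        zpool list tank   # raw pool size, parity space included
        zfs list tank     # usable space: roughly 3TB for this layout
        # There's no command here (per the limitation described above) analogous to
        # mdadm --grow or lvextend that adds a fifth disk to the existing RAIDz1 vdev.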

    Here's the scenario most articles miss:

    If you buy four new 2TB drives, you can replace each of the 1TB drives in turn with a 2TB drive. You offline one, replace the hardware, tell ZFS to replace the missing 1TB device with the new 2TB device, and ZFS will resilver the pool to the new device. Note you still only have 3TB usable at this point, not 4TB like you might expect. So you do the same in turn with the other three devices, removing, replacing, and waiting for resilver to complete for each. At the end of the resilver on the final device in the VDEV, you'll suddenly see your pool size has grown to 6TB usable. ZFS can use the additional storage provided you expand ALL of the underlying devices in a VDEV.
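
    A rough sketch of that replace-and-resilver loop, with pool and device names invented for the example; note that the extra space only appears at the end if the pool's autoexpand property is on (or you run zpool online -e on the replaced devices):

        zpool set autoexpand=on tank   # so the pool grows once every member has been upsized
        # Repeat for each of the four drives, one at a time:
        zpool offline tank sda         # take the old 1TB drive out of service
        # ...physically swap the 1TB drive for the 2TB drive...
        zpool replace tank sda         # resilver onto the new device in the same slot
        zpool status tank              # wait for the resilver to finish before the next drive
        # If autoexpand was off, zpool online -e tank <device> claims the space afterwards.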

    The drawbacks are that you DO need to replace all of the drives in the vdev, not just bolt on some new ones. That costs some $$$. Personally, that's the way I've ALWAYS done upgrades, as I usually have one ailing drive and don't want to leave its littermates around to possibly succumb to a similar disease shortly after replacing the first. It also potentially takes a long time to do all that incremental resilvering, assuming the pool was nearly full (which it probably was, or why else do this?). When you're done, you have the four original drives freed up. If you have the ports and want the storage, you can add them back as a new VDEV and either append it to the existing pool or create a new pool.
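
    If you do re-use the freed drives, the two options mentioned look roughly like this (device and pool names again made up):

        # Append the four old 1TB drives as a second RAIDz1 vdev in the same pool...
        zpool add tank raidz1 sda sdb sdc sdd
        # ...or keep them as a separate pool instead.
        zpool create scratch raidz1 sda sdb sdc sdd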

    You could also accomplish the same thing by adding all the new devices, creating a new pool, and zfs send/recv'ing the data over. The benefit of the resilver dance is that the pool is online the entire time. You don't have to shut down services, export/re-import the pool to rename it, or deal with changed mount point names. Assuming your motherboard, SATA/SAS controller, backplane, etc. can handle the hotswap, you can usually do all of that with zero host downtime.
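
    For comparison, the send/recv route would look something like this, with pool and snapshot names invented for the sketch; this is where the rename and downtime steps come from:

        # Build a pool on the new drives, then replicate everything to it.
        zpool create newtank raidz1 sde sdf sdg sdh
        zfs snapshot -r tank@migrate
        zfs send -R tank@migrate | zfs recv -F newtank
        # Swap the pools: export both, re-import the new one under the old name.
        zpool export tank
        zpool export newtank
        zpool import newtank tank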
