
posted by CoolHand on Tuesday September 11 2018, @03:29PM
from the zed-eff-ess-or-zee-eff-ess dept.

John Paul Wohlscheid over at It's FOSS takes a look at the ZFS file system and its capabilities. He mainly covers OpenZFS, the fork created after Oracle bought Sun Microsystems and shut down open development of Solaris, ZFS's original home. ZFS features pooled storage with RAID-like capabilities, copy-on-write snapshots, data integrity verification with automatic repair, and support for files up to 16 exabytes in size, with file systems of up to 256 quadrillion zettabytes, should you have enough electricity to pull that off. Because ZFS was developed under a license (the CDDL) deliberately incompatible with the GPL, it cannot be integrated directly into the Linux kernel. However, several distros work around that and provide packages for it. It has also been ported to FreeBSD, where it has shipped since 2008.
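
For a sense of how those features look in practice, here is a minimal sketch of everyday ZFS administration, assuming an installed OpenZFS; the pool name "tank" and the device paths are hypothetical:

    # Pooled storage: build a mirrored (RAID-like) pool from two disks.
    zpool create tank mirror /dev/sda /dev/sdb
    # File systems are carved out of the pool on demand.
    zfs create tank/home
    # Copy-on-write makes snapshots near-instant and cheap.
    zfs snapshot tank/home@before-upgrade
    # A scrub verifies every block's checksum and repairs bad copies from the mirror.
    zpool scrub tank
    zpool status -v tank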


Original Submission

 
  • (Score: 2) by pendorbound (2688) on Tuesday September 11 2018, @07:58PM (#733281)

    It's life & death critical for dedupe(*), but you're still going to want much more RAM than normal for ZFS. ZFS' ARC doesn't integrate (exactly) with Linux's normal file system caching. I've seen significant performance increases for fileserver and light database workloads by dedicating large chunks of RAM (16GB out of 96GB on the box) exclusively for ARC. It'll *work* without that, but ZFS is noticeably slower than other filesystems if it doesn't have enough ARC space available. Particularly with partial-block updates, having the rest of the block in ARC means ZFS doesn't have to go to disk to calculate the block checksum before writing out the new copy-on-write block. Running with insufficient ARC causes ZFS to frequently have to read an entire block in from disk before it can write an updated copy out, even if it was only changing one byte.
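
    (On OpenZFS-on-Linux, one way to give ARC a known amount of RAM is the zfs_arc_max module parameter; this is only a sketch, and the 16 GiB figure just mirrors the example above. Note it sets a ceiling rather than a hard reservation.)

        # Persist across reboots: put this line in /etc/modprobe.d/zfs.conf
        # options zfs zfs_arc_max=17179869184    (value in bytes; 16 GiB here)
        # Or adjust at runtime without a reboot:
        echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
        # Watch the current ARC size ("size") and hit rates:
        cat /proc/spl/kstat/zfs/arcstats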

    (*) Source: Once tried to enable dedupe on a pool with nowhere near enough RAM. It took over 96 hours to import the pool after a system crash, as ZFS rescanned the entire pool to rebuild duplicate-block reference counts before it was satisfied the pool was clean. Had to zfs send/receive everything to a new pool to flush out the dedupe setting and get a usable system.
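
    (The send/receive migration described above looks roughly like this; the pool, dataset, and snapshot names are hypothetical. A plain send, without -R or -p, does not carry dataset properties over, which is what leaves the dedup setting behind.)

        # Snapshot the old, dedup-afflicted dataset and stream it to a clean pool.
        zfs snapshot oldpool/data@migrate
        zfs send oldpool/data@migrate | zfs receive newpool/data
        # Belt and braces: make sure dedup stays off on the new pool.
        zfs set dedup=off newpool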
