posted by CoolHand on Tuesday September 11 2018, @03:29PM   Printer-friendly
from the zed-eff-ess-or-zee-eff-ess dept.

John Paul Wohlscheid over at It's FOSS takes a look at the ZFS file system and its capabilities. He mainly covers OpenZFS, the fork created after Oracle bought Sun Microsystems and stopped releasing the source code of Solaris, the operating system where ZFS originated. ZFS features pooled storage with RAID-like capabilities, copy-on-write with snapshots, and data-integrity verification with automatic repair. It can handle files up to 16 exabytes in size, with file systems of up to 256 quadrillion zettabytes, should you have enough electricity to pull that off. Because ZFS is licensed under the CDDL, which is deliberately incompatible with the GPL, it cannot be integrated directly into the Linux kernel; several distros work around that and provide packages for it. It has also shipped with FreeBSD since 2008.
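
For readers who have never touched it, day-to-day administration runs through two commands, zpool and zfs. A minimal sketch (the pool name "tank" and the device paths are placeholders; it assumes OpenZFS is already installed):

    # Create a mirrored pool from two disks (RAID1-like, self-healing on read)
    sudo zpool create tank mirror /dev/sda /dev/sdb
    # Take an instant copy-on-write snapshot of the pool's root dataset
    sudo zfs snapshot tank@before-upgrade
    # Verify every block's checksum and repair bad copies from the mirror
    sudo zpool scrub tank
    # Roll the dataset back to the snapshot if something went wrong
    sudo zfs rollback tank@before-upgrade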


Original Submission

 
  • (Score: 2) by BananaPhone on Tuesday September 11 2018, @08:55PM (2 children)

    by BananaPhone (2488) on Tuesday September 11 2018, @08:55PM (#733306)

    I really want a NAS with per-file checksum + auto-repair.

    QNAP => ZFS, needs ECC memory + $$ hardware
    Synology => BTRFS
    FreeNAS => ZFS (and never will do BTRFS)
    RockStor => BTRFS

    Can you expand on the fly with any of these?
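
    (For what it's worth, ZFS itself can grow a pool on the fly by adding another vdev; whether a given NAS vendor exposes that is another matter. A sketch, with placeholder pool and device names:)

        # Add a second mirror vdev to an existing pool; capacity grows immediately
        sudo zpool add tank mirror /dev/sdc /dev/sdd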

  • (Score: 0) by Anonymous Coward on Tuesday September 11 2018, @10:17PM

    by Anonymous Coward on Tuesday September 11 2018, @10:17PM (#733365)

    ZFS DOES NOT need ECC memory.

  • (Score: 2, Informative) by DECbot on Wednesday September 12 2018, @02:40AM

    by DECbot (832) on Wednesday September 12 2018, @02:40AM (#733454) Journal

    The hidden cost of ZFS is that a pool's capacity is always limited by the size of its smallest disk. My setup: 4 disks, two 1TB and two 2TB, in a Z1 configuration (think RAID5). My pool is limited to around 3TB until I replace every disk with a 2TB one, after which I will have around 6TB. Not too bad for my homebrew setup, but when you look at upgrading an enterprise setup with a dozen 6TB WD Red disks, it gets expensive, because there is no bump in your storage space until every disk has been replaced.
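
    The disk-by-disk upgrade, if I have it right, looks roughly like this (a sketch with placeholder pool and device names; it assumes the pool has enough redundancy left to resilver from):

        # Let the pool grow automatically once every member disk is larger
        sudo zpool set autoexpand=on tank
        # Swap one old disk for a bigger one; repeat for each disk in turn
        sudo zpool replace tank /dev/sda /dev/sde
        # Wait for the resilver to finish before swapping the next disk
        sudo zpool status tank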
     
    Also, there is no on-the-fly conversion from Z1 to Z2 or Z2 to Z1. You have to transfer your data to a different ZFS pool, destroy your old pool, recreate it with the new layout, and send the data back.
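
    That migration dance, as I understand it, is built on ZFS send/receive (a sketch; "oldpool" and "newpool" are placeholders, and it assumes you have somewhere to park the data in the meantime):

        # Snapshot every dataset in the old pool recursively
        sudo zfs snapshot -r oldpool@migrate
        # Stream the whole dataset hierarchy into the new pool
        sudo zfs send -R oldpool@migrate | sudo zfs recv -F newpool
        # Only after verifying the copy: destroy the old pool and rebuild it
        sudo zpool destroy oldpool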
     
    Excuse me if I got some of the terminology wrong. I'm just a hobbyist trying to learn a new thing. Point out my mistakes so we can all learn.

    --
    cats~$ sudo chown -R us /home/base