John Paul Wohlscheid over at It's FOSS takes a look at the ZFS file system and its capabilities. He mainly covers OpenZFS, the fork created after Oracle bought Sun Microsystems and discontinued open development of OpenSolaris, the original home of ZFS. It features pooled storage with RAID-like capabilities, copy-on-write with snapshots, data integrity verification and automatic repair, and it can handle files up to 16 exbibytes in size, with pools of up to 256 quadrillion zebibytes, should you have enough electricity to pull that off. Because it is licensed under the CDDL, which is incompatible with the GPL, ZFS cannot be directly integrated into the Linux kernel. However, several distros work around that and provide packages for it. It has also been part of FreeBSD since 2008.
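The features the article lists map to a handful of commands. A minimal sketch of creating a mirrored pool, snapshotting it, and running an integrity check (the device names and the pool name "tank" are hypothetical; these require root and the ZFS modules installed):

```shell
# Create a pool named "tank" from two mirrored disks (pooled storage
# with RAID-like redundancy; device names are examples only).
zpool create tank mirror /dev/sda /dev/sdb

# Copy-on-write snapshots are nearly instant and initially occupy no
# extra space; only blocks changed afterward are duplicated.
zfs snapshot tank@before-upgrade

# Walk every block, verify it against its checksum, and repair any
# corrupt copy from the healthy side of the mirror.
zpool scrub tank
zpool status tank    # shows scrub progress and per-device error counts
```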
(Score: 2) by hendrikboom on Tuesday September 11 2018, @09:33PM
I found the following:
https://www.reddit.com/r/DataHoarder/comments/5u3385/linus_tech_tips_unboxes_1_pb_of_seagate/ddrngar/ [reddit.com]
And also https://www.reddit.com/r/DataHoarder/comments/5u3385/linus_tech_tips_unboxes_1_pb_of_seagate/ddrh5iv/ [reddit.com]
So I now wonder what the *real* limits are on home-scale systems. In particular, suppose I have only a few terabytes of storage, a machine with only half a gigabyte of RAM, and nothing more bandwidth-intensive to serve than streaming (compressed) video over a network to a laptop.
What I like about ZFS is its extreme resistance to data corruption. That's essential for long-term storage. My alternative seems to be btrfs. Currently I'm using ext4 on software-mirrored RAID, which can't detect silent data corruption at all: neither ext4 nor md checksums file data.
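The difference is visible in how each stack checks itself. A hedged sketch of the contrast ("md0" and "tank" are hypothetical names; both commands need root):

```shell
# md RAID can compare the two mirror halves, but with no per-block
# checksums it cannot tell which half holds the corrupt copy:
echo check > /sys/block/md0/md/sync_action
cat /sys/block/md0/md/mismatch_cnt

# ZFS checksums every block, so a scrub knows which copy is bad and
# silently repairs it from the good one:
zpool scrub tank
zpool status -v tank   # lists read/write/checksum errors per device
```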
-- hendrik