John Paul Wohlscheid over at It's FOSS takes a look at the ZFS file system and its capabilities. He mainly covers OpenZFS, the fork created after Oracle bought Sun Microsystems and shut down the OpenSolaris project, the original home of ZFS. ZFS features pooled storage with RAID-like capabilities, copy-on-write with snapshots, data-integrity verification with automatic repair, and it can handle files up to 16 exabytes in size, with storage pools of up to 256 quadrillion zettabytes, should you have enough electricity to pull that off. Because it was developed under a license (the CDDL) deliberately incompatible with the GPL, ZFS cannot be integrated directly into the Linux kernel. However, several distros work around that and provide packages for it. It has been ported to FreeBSD since 2008.
(Score: 0) by Anonymous Coward on Tuesday September 11 2018, @11:42PM (1 child)
I started playing with btrfs a few months back. While I can't think of a compelling use case for subvolumes at the moment, I am using it as the filesystem that holds my backups. After all my machines have backed up, I create a read-only snapshot of the state. I'm using bees to deduplicate the filesystem.
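The post-backup read-only snapshot can be taken with a single command; a minimal sketch, assuming the backup data lives in a btrfs subvolume (the paths here are hypothetical):

```shell
# Hypothetical layout: backups are written into the subvolume
# /mnt/backup/current. The -r flag makes the snapshot read-only;
# snapshots are cheap because btrfs is copy-on-write.
btrfs subvolume snapshot -r /mnt/backup/current \
    "/mnt/backup/snapshots/$(date +%Y-%m-%d)"

# List the subvolumes/snapshots on the filesystem to verify.
btrfs subvolume list /mnt/backup
```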
What I've seen: deduplication saves a fair amount of space on system backups (it finds copies of the GPL as separate files, for example, and inside ISOs). Compression may not save as much as it costs in CPU time. Quotas are available, but with quotas enabled, bees caused extreme slowdowns in the btrfs kernel module due to locking. Read-only snapshots can be turned into read-write copies on demand, but I suspect that a high number of snapshots will slow btrfs down. Bees also isn't currently optimized to recognize identical extent identifiers across snapshots, so it has to traverse all snapshots to find duplicate extents that may already have been deduplicated elsewhere.
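Turning a read-only snapshot into a writable copy is itself just another snapshot operation; a sketch with hypothetical paths:

```shell
# Snapshotting a read-only snapshot WITHOUT -r yields a writable
# copy. Thanks to copy-on-write, only blocks you subsequently
# modify consume new space.
btrfs subvolume snapshot /mnt/backup/snapshots/2018-09-01 \
    /mnt/backup/restore-work
```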
(Score: 2) by Subsentient on Wednesday September 12 2018, @04:28AM
I use snapshots to back up my OS if I'm worried about something messing it up. As for compression, it depends on the algorithm. The new zstd compression gives similar ratios to zlib but is just *so much* faster; it needs kernel 4.14 or above, though. I compress all my btrfs filesystems with zstd, and I've found it makes a significant positive difference.
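Enabling zstd on btrfs is a mount option; a minimal sketch, assuming a hypothetical device and mount point:

```shell
# Mount a btrfs filesystem with zstd compression (kernel >= 4.14).
mount -o compress=zstd /dev/sdb1 /mnt/data

# To make it permanent, use the same option in /etc/fstab, e.g.:
#   UUID=...  /mnt/data  btrfs  compress=zstd  0 0

# The option only affects newly written data; existing files can be
# rewritten with the new compression via a recursive defragment:
btrfs filesystem defragment -r -czstd /mnt/data
```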
As for performance, I haven't noticed a negative impact since switching to btrfs for my root FS and home directory. Benchmarks still say it's worse than ext4, but that said, I often end up missing deduplication, compression, snapshots, etc. when I'm working on an ext4 system.
Just don't mess with the RAID stuff yet. Even the RAID1/RAID0 support is buggy at best. It won't corrupt your data, but it'll probably tell you you have no free space.
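When free-space reporting looks wrong, btrfs's own tools show how space is actually allocated across data, metadata, and system chunks, which plain `df` can't; a sketch against a hypothetical mount point:

```shell
# Generic df only sees the filesystem's own (sometimes misleading)
# free-space estimate.
df -h /mnt/data

# btrfs-specific view: allocated vs. used per chunk type, which is
# where "no free space" despite unused disk usually shows up.
btrfs filesystem df /mnt/data
btrfs filesystem usage /mnt/data
```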
Btrfs isn't yet at the level of ZFS, but considering it's included in the mainline kernel and works pretty well, I prefer it. It's also much lighter on RAM than ZFS.
"It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti