
posted by cmn32480 on Sunday August 07 2016, @02:38PM   Printer-friendly
from the how-bad-could-it-really-be dept.

The nice feller over at Phoronix brings us this handy bit of info:

It turns out the RAID5 and RAID6 code for the Btrfs file-system's built-in RAID support is faulty, and users should not be making use of it if they care about their data.

There has been a mailing list thread since the end of July about Btrfs scrub recalculating the wrong parity in RAID5. The wrong parity and the resulting unrecoverable errors have been confirmed by multiple parties. The Btrfs RAID 5/6 code has been described as "more or less fatally flawed, and a full scrap and rewrite to an entirely different raid56 mode on-disk format may be necessary to fix it. And what's even clearer is that people /really/ shouldn't be using raid56 mode for anything but testing with throw-away data, at this point. Anything else is simply irresponsible."

Just as well I haven't gotten around to trying it then.
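For anyone who wants to see the failure mode for themselves on scratch disks, a minimal btrfs raid5 setup and scrub looks roughly like the sketch below; the device names and mount point are placeholders, and this is strictly for throw-away test data:

    # create a three-device btrfs filesystem with RAID5 for both data and metadata
    # (scratch devices only; /dev/sdb, /dev/sdc and /dev/sdd are placeholders)
    mkfs.btrfs -d raid5 -m raid5 /dev/sdb /dev/sdc /dev/sdd
    mount /dev/sdb /mnt/test

    # run a scrub; the bug under discussion is that scrub can recompute and
    # write back incorrect parity after detecting a checksum error
    btrfs scrub start /mnt/test
    btrfs scrub status /mnt/test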


Original Submission

 
  • (Score: 2) by rleigh on Sunday August 07 2016, @06:23PM

    I've had bad experiences with it since the start. I wrote up some of the issues I had in this related thread: https://news.ycombinator.com/item?id=12232907#12233154 These horrific problems are by no means isolated instances. Even if the RAID1 code is now fixed, I've lost all trust in it. There's just too much stuff which is fundamentally broken, and that's just not acceptable in a filesystem. I'm simply not prepared to lose any more data, or any more time to downtime, because of it. I had high hopes for it, but it's turned into a seriously bad joke. Too many times people have told me, "oh, you need to upgrade to the latest kernel for $fix". How many times do you consider it acceptable for me to lose my data? Sorry, but it's not ready for production use, and it never has been.

    Over the last 2.5 years, I've been using ZFS on FreeBSD. What an absolute revelation and joy to use after 15 years of mdraid and LVM (and Btrfs). I wish I'd discovered it years before; I've got systemd to thank for that, and I'm genuinely happy that it gave me the push to test the waters outside the (increasingly insular) Linux sphere.

    But ZFS is getting much better supported on Linux as well. With Ubuntu 16.04, it's possible to boot directly to a root filesystem on ZFS, with /boot on ZFS. It's still a little rough--not supported directly by the installer--but all the pieces are there in GRUB, the initramfs, the init scripts etc. With a little pain and a few tries and failures, I got it booting directly with EFI and GRUB2. The only missing piece to get this generally usable is an option in the installer like you have with FreeBSD, and then it will be a piece of cake to get up and running.
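    In rough outline, the manual setup goes something like the sketch below; the pool and dataset names are only examples, the exact options will vary, and this skips the partitioning and chroot steps entirely:

    # create the pool on an existing partition; rpool and /dev/sda2 are examples
    zpool create -o ashift=12 -O mountpoint=none -O compression=lz4 rpool /dev/sda2

    # root dataset hierarchy; canmount=noauto so the initramfs mounts it explicitly
    zfs create -o mountpoint=none rpool/ROOT
    zfs create -o mountpoint=/ -o canmount=noauto rpool/ROOT/default
    zpool set bootfs=rpool/ROOT/default rpool

    # inside the installed system: ZFS initramfs hooks, then regenerate the boot bits
    apt install zfs-initramfs
    update-initramfs -u
    update-grub
    grub-install    # arguments differ between BIOS and EFI setups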

    To be fair though, this isn't all the fault of the Btrfs developers. The number of uninformed fanboys parroting how great it was and how we should all be using it belied the reality that those of us who tested it heavily for years discovered at our cost. Not so long ago the slightest criticism or caution was jumped upon in some quarters as though it were some sort of betrayal. No, it was simple common sense borne out of actual, informed real-world experience with it! Blind faith in it won't make it magically reliable and stop it toasting all your data! I think that these people did a great disservice to anyone who followed their advice, particularly if they suffered data loss.

    For anyone interested in trying out ZFS on Linux as a rootfs, here is my current dataset layout. Note it also includes a zvol as the swap device; a sketch of how such a zvol can be set up follows the listing.

    % lsb_release -cr
    Release: 16.04
    Codename: xenial
    % sudo zfs list
    NAME                    USED  AVAIL  REFER  MOUNTPOINT
    fdata                   134G   315G    96K  /fdata
    fdata/old-root-backup  8.85G   315G  8.85G  /fdata/old-root-backup
    fdata/rleigh            156M   315G    96K  /fdata/rleigh
    fdata/rleigh/clion      156M   315G   156M  /fdata/rleigh/clion
    fdata/schroot           201M   315G    96K  /fdata/schroot
    fdata/schroot/sid       200M   315G   200M  /fdata/schroot/sid
    fdata/vmware            125G   315G   125G  /fdata/vmware
    rpool                  21.9G  85.7G    96K  none
    rpool/ROOT             7.53G  85.7G    96K  none
    rpool/ROOT/default     7.53G  85.7G  7.25G  /
    rpool/home              308K  85.7G    96K  none
    rpool/home/root         212K  85.7G   132K  /root
    rpool/opt              1.74G  85.7G   475M  /opt
    rpool/opt/steam        1.27G  85.7G  1.27G  /opt/steam
    rpool/swap             8.50G  93.7G   510M  -
    rpool/var              4.06G  85.7G    96K  none
    rpool/var/cache        4.06G  85.7G  4.02G  /var/cache
    rpool/var/log          3.06M  85.7G  2.95M  /var/log
    rpool/var/spool         168K  85.7G   104K  /var/spool
    rpool/var/tmp           200K  85.7G   128K  /var/tmp
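
    A swap zvol like the rpool/swap entry above can be created along these lines; the size and property values here are illustrative rather than the exact commands used:

    # fixed-size zvol for swap, with block size matched to the system page size
    zfs create -V 8G -b $(getconf PAGESIZE) \
        -o compression=zle -o sync=always \
        -o primarycache=metadata -o secondarycache=none \
        rpool/swap

    # format and enable it like any other swap device
    mkswap -f /dev/zvol/rpool/swap
    swapon /dev/zvol/rpool/swap

    An entry in /etc/fstab pointing at /dev/zvol/rpool/swap brings it back after a reboot.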
