XFS got a bad reputation for a couple of reasons. The first was that XFS relied heavily on delayed allocation to avoid fragmentation and so liked to keep things in the buffer cache for a really long time. If you didn't do a graceful shutdown you'd end up with a lot of files containing nothing but zeroes. I think this is largely fixed (but doing so cost some performance, which made XFS less attractive). The second was that the initial Linux port of XFS was really buggy and you were pretty much guaranteed data loss with it. It took a few years to stabilise, and by that time other filesystems had journalling, which was the main reason for using it.
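If you're on XFS (or any filesystem with aggressive delayed allocation) and care about a file surviving a crash, the fix is to fsync it yourself rather than trust the buffer cache. A minimal sketch with stock coreutils (paths are made up; `sync -d FILE` needs coreutils 8.24 or newer):

```shell
# Write a file, then flush its data out of the page cache explicitly,
# so a crash or power loss can't leave it as a run of zeroes.
echo "important data" > /tmp/demo.txt
sync -d /tmp/demo.txt            # -d: sync this file's data only

# dd can do the same at write time with conv=fsync:
printf 'more data\n' | dd of=/tmp/demo2.txt conv=fsync status=none
```

Applications do the equivalent with fsync(2) after write(2); the coreutils commands above are just the shell-level version of that discipline.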
(Score: 2) by Valkor on Sunday February 05 2017, @11:09PM
An elder nerd advised me to use XFS on my current array, and I have since had exactly zero issues with it. Even when a drive started to fail, it recovered gracefully. The documentation is easy to read and the tools are easy to use (as far as filesystems go). It is important to note that so far I'm only trusting it with ephemeral data and nothing too important, but after this eight-year experiment I am wholly convinced to use XFS on the next array, unless I come into some sweet hardware that would make switching to ZFS or Btrfs worthwhile.
(Score: 0) by Anonymous Coward on Wednesday February 08 2017, @05:34PM
I have XFS on some large drives because, after formatting, it showed more available space than the ext (ext3, I guess) filesystem did. Whether that is a fake advantage, simply a matter of how much space gets preallocated for metadata, I dunno. Anyway, aside from the fsck + badblocks options not working, I have never had problems with it.
(Score: 3, Informative) by rleigh on Thursday February 09 2017, @11:12PM
XFS is a good filesystem, but it's not perfect. At work, we lost an entire OpenStack cluster just before Christmas due to loss of the XFS storage. It was likely a transient disk or memory hardware error, but it proved to be completely unrecoverable, even with the XFS tools.
Red Hat seems to be falling back to XFS + LVM in the absence of Btrfs being anywhere remotely near production readiness. But XFS doesn't go much beyond metadata journalling; it's still very much a filesystem of the '90s, albeit a good one. It doesn't do data journalling, it doesn't do block-level hashing/checksumming, and it can't self-heal or scrub itself. There is zero protection from data errors.
This is an area where there's a good bit of cognitive dissonance going on at the moment. The harsh truth of the matter is that Linux doesn't have a top-notch native filesystem *at all* right now. You can use ZFS if you are able to use third-party modules, and at work we use expensive IBM GPFS. But while Linux has a huge number of filesystems provided natively, they are all, for one reason or another, crap in different ways.
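Until something like Btrfs or ZFS is an option, the missing data checksumming can be approximated in userspace. A rough sketch using sha256sum manifests (paths are illustrative; this detects silent corruption but, unlike ZFS, cannot heal it):

```shell
# Build a manifest of checksums for everything under a data directory.
mkdir -p /tmp/xfsdata
echo "payload" > /tmp/xfsdata/file1
find /tmp/xfsdata -type f -print0 | xargs -0 sha256sum > /tmp/manifest.sha256

# Later (e.g. from cron): re-check every file against the manifest.
# A non-zero exit status means a file changed or rotted underneath you.
sha256sum --check --quiet /tmp/manifest.sha256
```

It's a poor man's scrub, and it only works for data you aren't actively rewriting, but it's better than the zero protection the comment describes.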
(Score: 2) by linuxrocks123 on Wednesday February 15 2017, @03:32AM
I've been trying out NILFS2 on a new system. So far it isn't crap, and its auto-snapshot capability has already saved me from an rm -r I later wanted to undo.
Not that I would describe ext4 or XFS as crap, though.
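For anyone curious, recovering from an unwanted rm -r on NILFS2 looks roughly like this, using the nilfs-utils tools (device name and checkpoint number are illustrative, and this needs root):

```shell
# List recent checkpoints on the NILFS2 device:
lscp /dev/sdb1

# Promote checkpoint 2048 (one taken before the rm) to a persistent
# snapshot so the garbage collector won't reclaim it:
chcp ss /dev/sdb1 2048

# Mount the snapshot read-only alongside the live filesystem:
mkdir -p /mnt/snap
mount -t nilfs2 -r -o cp=2048 /dev/sdb1 /mnt/snap

# Copy back whatever rm -r took with it, then clean up:
cp -a /mnt/snap/home/me/lost-dir /home/me/
umount /mnt/snap
chcp cp /dev/sdb1 2048   # demote the snapshot back to a plain checkpoint
```

Because NILFS2 takes checkpoints continuously, there is usually one from just before the mistake, which is what makes this kind of undo possible at all.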
(Score: 0) by Anonymous Coward on Wednesday February 15 2017, @03:53AM
One nasty thing about XFS is that it still can't be resized smaller. See http://xfs.org/index.php/XFS_FAQ#Q:_Is_there_a_way_to_make_a_XFS_filesystem_larger_or_smaller.3F [xfs.org] and http://xfs.org/index.php/Shrinking_Support [xfs.org]
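The only way to "shrink" today is the dump/recreate/restore dance described in that FAQ. Roughly (device names and paths are made up, and this requires taking the filesystem offline):

```shell
# Back up the filesystem's contents with xfsdump:
xfsdump -f /backup/home.dump /home

# Recreate the filesystem on a smaller volume, then restore into it:
umount /home
mkfs.xfs -f /dev/vg0/home_smaller
mount /dev/vg0/home_smaller /home
xfsrestore -f /backup/home.dump /home

# Growing, by contrast, is a single online command:
# xfs_growfs /home
```

That asymmetry (online grow, offline shrink-by-rebuild) is the practical cost the comment is complaining about.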
(Score: 2) by TheRaven on Friday February 17 2017, @06:24PM
sudo mod me up