
posted by Fnord666 on Sunday July 16 2017, @11:42PM
from the it's-all-ones-and-zeroes dept.

Stephen Foskett has written a detailed post about why he considers ZFS the Best Filesystem (For Now...). He starts out:

ZFS should have been great, but I kind of hate it: ZFS seems to be trapped in the past, before it was sidelined as the cool storage project of choice; it's inflexible; it lacks modern flash integration; and it's not directly supported by most operating systems. But I put all my valuable data on ZFS because it simply offers the best level of data protection in a small office/home office (SOHO) environment. Here's why.

It's been a long road to get to where it is, and there have been many hindrances, including software patents and malicious licensing.


Original Submission

Related Stories

ZFS Versus RAID: Eight Ironwolf Disks, Two Filesystems, One Winner 28 comments

ZFS versus RAID: Eight Ironwolf disks, two filesystems, one winner:

This has been a long while in the making—it's test results time. To truly understand the fundamentals of computer storage, it's important to explore the impact of various conventional RAID (Redundant Array of Inexpensive Disks) topologies on performance. It's also important to understand what ZFS is and how it works. But at some point, people (particularly computer enthusiasts on the Internet) want numbers.

First, a quick note: This testing, naturally, builds on those fundamentals. We're going to draw heavily on lessons learned as we explore ZFS topologies here. If you aren't yet entirely solid on the difference between pools and vdevs or what ashift and recordsize mean, we strongly recommend you revisit those explainers before diving into testing and results.

And although everybody loves to see raw numbers, we urge an additional focus on how these figures relate to one another. All of our charts relate the performance of ZFS pool topologies at sizes from two to eight disks to the performance of a single disk. If you change the model of disk, your raw numbers will change accordingly—but for the most part, their relation to a single disk's performance will not.

[It is a long — and detailed — read with quite a few examples and their performance outcomes. Read the 2nd link above to get started and then continue with this story's linked article.--martyb]
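[For readers who want to try the topology terms mentioned in the excerpt, here is a minimal sketch of creating a pool with an explicit ashift and recordsize. The pool, dataset, and device names are illustrative, not taken from the article; adapt them to your own hardware.--ed.]

```shell
# Create a mirrored pool from two disks, forcing 4 KiB sectors (ashift=12).
# Device names here are hypothetical.
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# recordsize is a per-dataset property; 1 MiB suits large sequential files.
zfs create -o recordsize=1M tank/media

# Inspect the layout: the pool contains a single mirror vdev.
zpool status tank
```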

Previously:
(2018-09-11) What is ZFS? Why are People Crazy About it?
(2017-07-16) ZFS Is the Best Filesystem (For Now)
(2017-06-24) Playing with ZFS (on Linux) Encryption
(2016-02-18) ZFS is Coming to Ubuntu LTS 16.04
(2016-01-13) The 'Hidden' Cost of Using ZFS for Your Home NAS


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 2) by Snotnose on Monday July 17 2017, @12:21AM (14 children)

    by Snotnose (1623) on Monday July 17 2017, @12:21AM (#540087)

    Hasn't been updated for a few years, and outside of a couple of glitches it's been rock solid for me.

    / this should be good.

    --
    When the dust settled America realized it was saved by a porn star.
    • (Score: 5, Funny) by linuxrocks123 on Monday July 17 2017, @12:49AM (6 children)

      by linuxrocks123 (2557) on Monday July 17 2017, @12:49AM (#540100) Journal

      ReiserFS and Reiser4 were definitely killer filesystems, really murdered the competition.

      • (Score: -1, Troll) by Anonymous Coward on Monday July 17 2017, @01:39AM

        by Anonymous Coward on Monday July 17 2017, @01:39AM (#540114)

        Reiser's wife was probably a Grade A Cunt.

        I'm not saying she deserved to be strangled to death, but... in the immortal words of Chris Rock: I understand.

      • (Score: 3, Funny) by TheLink on Monday July 17 2017, @03:52AM

        by TheLink (332) on Monday July 17 2017, @03:52AM (#540165) Journal
        The big problem with ReiserFS: Vendor lock-in.

    • (Score: 3, Insightful) by KiloByte on Monday July 17 2017, @01:54AM (6 children)

      by KiloByte (375) on Monday July 17 2017, @01:54AM (#540121)

      Reiser3 killed all my data twice within a month. Every single file above 4KB (not sure what the exact cut-off was) turned to garbage. And that was before I learned about backups (it takes an intelligent person only about 30 data loss events to start getting serious about backups). This was many, many years ago, but with ReiserFS' pace of maintenance, I bet it'd quickly do that again.

      I've also seen how often disks lie about data being kosher while silently corrupting it. This means I'm not touching any silentdatalossfs such as ext4 any closer than I can shake a shit-covered stick at. I stick with btrfs which has its flaws but at least detects (and in the right setup repairs) such unreported corruption.
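      [The detection described here works because the checksum is stored apart from the data it covers, in the parent block pointer, so a disk that silently corrupts a block cannot also "fix" the checksum. A toy Python sketch of the idea, not actual ZFS or btrfs code:--ed.]

```python
import hashlib

def write_block(store, key, data):
    # Keep the checksum alongside but logically "outside" the block,
    # the way ZFS keeps checksums in parent block pointers.
    store[key] = (data, hashlib.sha256(data).hexdigest())

def read_block(store, key):
    data, expected = store[key]
    # Every read re-verifies the block against its stored checksum.
    if hashlib.sha256(data).hexdigest() != expected:
        raise IOError("silent corruption detected in block %r" % key)
    return data

store = {}
write_block(store, "a", b"hello")
assert read_block(store, "a") == b"hello"

# Simulate a lying disk: the data changes, the checksum does not.
store["a"] = (b"hellp", store["a"][1])
try:
    read_block(store, "a")
except IOError:
    print("corruption caught")
```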

      ZFS is the only other filesystem with data checksums, but 1. its integration with Linux stinks (thanks Sun and Oracle!), especially if you deal with -rc or -next kernels, 2. it takes ridiculous amounts of RAM, I use too many small ARM machines for that.

      --
      Ceterum censeo systemd esse delendam.
      • (Score: 3, Informative) by bzipitidoo on Monday July 17 2017, @05:43AM

        by bzipitidoo (4388) on Monday July 17 2017, @05:43AM (#540193) Journal

        What doused some of my enthusiasm for btrfs was its slow performance on some operations. In particular, I read that it was very slow at sync operations, which Firefox performs rather often. The developers worked on that problem and greatly improved sync, but it still isn't close to sync performance on ext4.

        I learned xfs can be very slow at deleting a large directory tree, such as the Linux kernel source code. It was taking 5 minutes to do that, while ext4 was nearly instantaneous. Turned out the default block size in xfs was just about the worst setting possible for the particular SAS controller and drives in that server. To fix the problem, I copied everything to another server, reformatted with xfs settings better suited to the hardware, then copied everything back.

        Used Reiser3 for a few years, and the only trouble I had with it was that performance rapidly degraded when disk usage climbed above 92%. Seems it should have been okay up to 98% or 99%.

        Mostly, I want the file system not to have hugely slow and laggy performance on a few fairly routine operations. A file system that works great on everything else but shows extraordinarily bad performance on one thing, such as sync or rm -rf /bigtree, is disturbing. So I've been sticking with the ext file systems.

      • (Score: 0) by Anonymous Coward on Monday July 17 2017, @07:03AM (1 child)

        by Anonymous Coward on Monday July 17 2017, @07:03AM (#540209)

        Never had anything like that with reiserfs3 that wasn't directly attributable to dying hardware. The extX family, however, has been a liability from day one. Their recommendation of btrfs as the successor when they stopped developing ext4 was the main driver of my move to zfs.

        • (Score: 2) by pendorbound on Monday July 17 2017, @01:56PM

          by pendorbound (2688) on Monday July 17 2017, @01:56PM (#540299) Homepage

          My Reiser3 dataloss was related to hardware (some monkey knocked the power off a chain of drives) (Where's my banana???), but Reiser took what should have been a few tense hours of fsck'ing and turned it into complete array loss. Reiser3's fsck is dangerously bad at what it's supposed to do.

          Since then, ZFS, decent UPS, never fracking around in live machines' innards, and a semi-reasonable backup strategy (ZFS send to secondary box that's usually offline), and life's been good...
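          [The "ZFS send to secondary box" strategy mentioned here looks roughly like this; pool, dataset, snapshot, and host names are hypothetical.--ed.]

```shell
# Take a read-only snapshot of the dataset to be backed up.
zfs snapshot tank/data@nightly-2017-07-17

# Full send to a (normally offline) secondary machine over ssh.
zfs send tank/data@nightly-2017-07-17 | ssh backuphost zfs receive backup/data

# Later runs only need the delta between two snapshots (-i = incremental).
zfs send -i tank/data@nightly-2017-07-17 tank/data@nightly-2017-07-18 \
    | ssh backuphost zfs receive backup/data
```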

      • (Score: 0) by Anonymous Coward on Monday July 17 2017, @02:07PM (1 child)

        by Anonymous Coward on Monday July 17 2017, @02:07PM (#540306)

        its integration with Linux stinks (thanks Sun and Oracle!)

        IIRC the creators themselves were the ones who didn't want their code sliding into the Linux kernel, so they chose the CDDL.

        • (Score: 0) by Anonymous Coward on Wednesday July 19 2017, @01:27PM

          by Anonymous Coward on Wednesday July 19 2017, @01:27PM (#541411)

          That's exactly what he was referring to. Those wankers.

      • (Score: 2) by linuxrocks123 on Tuesday July 18 2017, @10:49AM

        by linuxrocks123 (2557) on Tuesday July 18 2017, @10:49AM (#540893) Journal

        I've had good luck with NILFS2 recently. Also, I've only ever had serious data corruption once with ext4, and that was when somebody took the SD card out of a Raspberry Pi while it was running ... then made it worse by putting it back in while it was still running :(

  • (Score: 2) by Arik on Monday July 17 2017, @01:29AM (4 children)

    by Arik (4543) on Monday July 17 2017, @01:29AM (#540111) Journal
    It's actually crazy that they get away with selling RAM that lacks ECC, so that makes perfect sense.
    --
    If laughter is the best medicine, who are the best doctors?
    • (Score: 0) by Anonymous Coward on Monday July 17 2017, @01:43AM (1 child)

      by Anonymous Coward on Monday July 17 2017, @01:43AM (#540115)

      You know, the thing is that most people's computing is pretty fucking worthless; for most people, it doesn't matter at all that a bit gets flipped—most people have absolutely nothing of value to say.

      • (Score: -1, Troll) by Anonymous Coward on Monday July 17 2017, @02:00AM

        by Anonymous Coward on Monday July 17 2017, @02:00AM (#540122)

        Nigger, njgger, what's the difference?

    • (Score: -1, Spam) by Anonymous Coward on Monday July 17 2017, @12:16PM

      by Anonymous Coward on Monday July 17 2017, @12:16PM (#540268)

      I like eating walrus faces, raw.

    • (Score: 0) by Anonymous Coward on Monday July 17 2017, @05:21PM

      by Anonymous Coward on Monday July 17 2017, @05:21PM (#540410)

      I agree that ECC RAM *should* be on a server, or on a device that runs a lot of services intended to be accessed by clients.

      It used to be all of my computers had parity memory -- then non-parity came out and it wasn't just cheaper... the parity ram costs went up while the new ram costs dropped below those of the original parity ram.

      Now, it is very hard to get high speed, high density, ecc ram at a price that is reasonable. Reasonable is in the eye of the purchaser and system operator I guess, but often it's not cheap and it is not often one can point at an issue and say "yes this would have been prevented if you had ECC ram".

      Instead, errors are found in existing ram when the system crashes, if they are found proactively at all. Few people take their production systems down to run memtest86 just for the sake of it.

      Silent corruption of data often has a few potential causes, with memory only being one of them.

      A good approach is to do read only caching on non-ecc file systems... (or get lead foil shielding for those that use deferred writes!)
