The 'Hidden' Cost of Using ZFS for Your Home NAS
posted by cmn32480 on Thursday January 14 2016, @02:26AM   Printer-friendly
from the just-use-jbod dept.

We discussed this topic back in December 2015, so this is perhaps a continuation:

Many home NAS builders consider using ZFS for their file system. But there is a caveat with ZFS that people should be aware of.

Although ZFS is free software, implementing ZFS is not free. The key issue is that expanding capacity with ZFS is more expensive than with legacy RAID solutions.

With ZFS, you either have to buy all storage you expect to need upfront, or you will be wasting a few hard drives on redundancy you don't need.

This fact is often overlooked, but it's very important when you are planning your build.

Other software RAID solutions, like Linux MDADM, let you grow an existing RAID array one disk at a time. This is also true for many hardware-based RAID solutions. This is ideal for home users, because you can expand as you need.

ZFS does not allow this!

To understand why using ZFS may cost you extra money, we will dig a little bit into ZFS itself.
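
[Ed. note: a minimal sketch of the difference, with hypothetical device names. MDADM can grow an existing array by a single disk; ZFS (at the time of writing) has no equivalent operation for a RAIDZ vdev:]

    # mdadm: grow a 4-disk RAID 5 to 5 disks in place
    mdadm --add /dev/md0 /dev/sde
    mdadm --grow /dev/md0 --raid-devices=5

    # ZFS: there is no analogous command; trying to attach a single
    # disk to a raidz1 vdev simply fails with an error
    zpool attach tank raidz1-0 /dev/sde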


Original Submission

Related Stories

Stepping into the World of NAS 45 comments

Stepping into the world of NAS

After many years of accumulating family photos, videos, and other digital files, I decided to purchase a NAS to centralize my storage needs. I researched many different brands and prices. In the end I purchased a QNAP TVS-871 and populated it with three WD 4TB Red NAS drives in a RAID 5 configuration to start. It may be overkill for a home user such as myself, but I felt that it gave me the most bang for the buck, and allows me plenty of room to grow and learn. Hopefully, it will last me for many years to come. Yes, this is not being used as a backup, and I do have an off-site backup plan.

I do realize that many of you, who are certainly more tech savvy than myself, have more than likely built a home-brew NAS. This was simply the easiest way for a NAS noob such as myself to have something as close to plug-n-play as I could get. So my questions for the community:

1. Any general advice or tips that a NAS noob should know?

2. How do you manage your multimedia files? Are there any particular programs or folder structures you recommend for managing these files for easy viewing?

3. Do you have any other recommendations, thoughts, or experiences you wish to share with others who may be thinking of getting a NAS for home or small office use?

Re: Stepping into the world of NAS

Don't use RAID5, because large quantities of data can be silently corrupted.


[Editor's Note: For those unfamiliar with RAID, this primer from Adaptec is a very detailed description of the different RAID levels, their pros and cons, as well as use cases.]

Original Submission #1
Original Submission #2

ZFS Versus RAID: Eight Ironwolf Disks, Two Filesystems, One Winner 28 comments

ZFS versus RAID: Eight Ironwolf disks, two filesystems, one winner:

This has been a long while in the making—it's test results time. To truly understand the fundamentals of computer storage, it's important to explore the impact of various conventional RAID (Redundant Array of Inexpensive Disks) topologies on performance. It's also important to understand what ZFS is and how it works. But at some point, people (particularly computer enthusiasts on the Internet) want numbers.

First, a quick note: This testing, naturally, builds on those fundamentals. We're going to draw heavily on lessons learned as we explore ZFS topologies here. If you aren't yet entirely solid on the difference between pools and vdevs or what ashift and recordsize mean, we strongly recommend you revisit those explainers before diving into testing and results.

And although everybody loves to see raw numbers, we urge an additional focus on how these figures relate to one another. All of our charts relate the performance of ZFS pool topologies at sizes from two to eight disks to the performance of a single disk. If you change the model of disk, your raw numbers will change accordingly—but for the most part, their relation to a single disk's performance will not.

[It is a long — and detailed — read with quite a few examples and their performance outcomes. Read the 2nd link above to get started and then continue with this story's linked article.--martyb]

Previously:
(2018-09-11) What is ZFS? Why are People Crazy About it?
(2017-07-16) ZFS Is the Best Filesystem (For Now)
(2017-06-24) Playing with ZFS (on Linux) Encryption
(2016-02-18) ZFS is Coming to Ubuntu LTS 16.04
(2016-01-13) The 'Hidden' Cost of Using ZFS for Your Home NAS


Original Submission

  • (Score: 4, Insightful) by vux984 on Thursday January 14 2016, @02:40AM

    by vux984 (5045) on Thursday January 14 2016, @02:40AM (#289362)

    Assuming your home NAS is also backed up somewhere, you can back up, add a single drive, rebuild your ZFS file system, and then restore to it. Admittedly, past a certain threshold this can be pretty time consuming. But if you can afford the downtime (and you probably can on a home NAS) then it's an option.

    You can also discard some of that crap you downloaded 5 years ago and still haven't gotten around to watching. ;-) Most home NAS setups beyond a certain threshold seem more about enabling hoarders than doing anything useful.

    • (Score: 2, Interesting) by frojack on Thursday January 14 2016, @02:54AM

      by frojack (1554) on Thursday January 14 2016, @02:54AM (#289365) Journal

      Why couldn't you set up your ZFS NAS (using whatever level of RAID-ish redundancy you prefer), then lay an LVM on top of it, and add drive-sets to the LVM as you need them?

      Redundancy built at the drive level, storage aggregation built at the LVM level.

      --
      No, you are mistaken. I've always had this sig.
      • (Score: 1, Insightful) by Anonymous Coward on Thursday January 14 2016, @03:55AM

        by Anonymous Coward on Thursday January 14 2016, @03:55AM (#289377)

        ZFS *has* an LVM.

        Only an idiot would configure a storage array with a single zpool.

      • (Score: 3, Informative) by rleigh on Thursday January 14 2016, @10:02AM

        by rleigh (4887) on Thursday January 14 2016, @10:02AM (#289432) Homepage

        Eh, ZFS *is* a logical volume manager, and vastly more capable than Linux-LVM at that. Vdevs are broadly equivalent to LVM PVs. ZPOOLs are equivalent to LVM VGs. ZFS datasets and ZVOLs are equivalent to LVs, but without the allocation and size constraints LVM imposes. The most obvious difference is that ZFS snapshots are much more flexible--they don't require manual size allocation (and don't cease to function when the allocation is used up!). Once you've learned the tools, it's a vastly improved way to administer storage.

        (And I say this as someone who used LVM on mdraid for over a decade.)
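
        To make the mapping concrete, a rough sketch of the equivalents (pool and dataset names invented for illustration):

            # create a pool: roughly pvcreate + vgcreate in one step
            zpool create tank mirror /dev/ada1 /dev/ada2

            # create a dataset: roughly lvcreate + mkfs + mount, with no size allocation
            zfs create tank/home

            # create a ZVOL: a block device, roughly an LV used raw
            zfs create -V 10G tank/vm-disk0

            # snapshot with no pre-allocated space, unlike LVM snapshots
            zfs snapshot tank/home@before-upgrade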

    • (Score: 3, Interesting) by Snotnose on Thursday January 14 2016, @03:25AM

      by Snotnose (1623) on Thursday January 14 2016, @03:25AM (#289370)

      You can also discard some of that crap you downloaded 5 years ago and still haven't gotten around to watching. ;-)

      Bite yo tongue heathen! I still plan to watch the final episode of MASH. Just give me time man.

      That said, I get where you're coming from. When I get a new machine I always create a directory "old_computer" that is a copy of my old computer minus the old OS. Figure I went from a 10M disc, to a 200M disc, to a 1.5G disc, then to a 200G disc, and now I've got 1 terabyte on the laptop, 500 meg on the gaming box (hey, it's a few years old), and 6 TB for NAS, of which about 100 GB is used and 90% of that are MP3s.

      I do usually go through the "old" stuff within a year and save off the stuff I think is important, but I still never delete the old stuff.

      --
      When the dust settled America realized it was saved by a porn star.
      • (Score: 2) by fishybell on Thursday January 14 2016, @04:43AM

        by fishybell (3156) on Thursday January 14 2016, @04:43AM (#289384)

        I still plan to watch the final episode of MASH

        <spoiler alert>It's not a chicken.</spoiler alert>

        • (Score: 0) by Anonymous Coward on Thursday January 14 2016, @09:22AM

          by Anonymous Coward on Thursday January 14 2016, @09:22AM (#289425)

          oh, you mean snape kills gandalf with rosebud, right?

          • (Score: 3, Funny) by Phoenix666 on Thursday January 14 2016, @01:03PM

            by Phoenix666 (552) on Thursday January 14 2016, @01:03PM (#289464) Journal

            Hawkeye is Radar's father, and Hot Lips is his sister.

            --
            Washington DC delenda est.
    • (Score: 2) by PartTimeZombie on Thursday January 14 2016, @10:26PM

      by PartTimeZombie (4827) on Thursday January 14 2016, @10:26PM (#289689)

      Most home NAS setups beyond a certain threshold seem more about enabling hoarders than doing anything useful.

      My home NAS has 1 TB of storage, which I considered boosting a few months ago when I installed a new version of the distro I use.
      Then I thought about it. I use about 500GB or so, and have done for about the last 10 years. As we watch things they tend to get deleted. The family music collection does grow, but slowly, and the various user folders with the kids' homework and such are tiny (in the 30, 40, 50 MB sort of range).

      Why am I spending money on getting massive multi-terabyte disks when what we've got is enough?

      • (Score: 2) by hendrikboom on Thursday January 14 2016, @11:44PM

        by hendrikboom (1125) Subscriber Badge on Thursday January 14 2016, @11:44PM (#289717) Homepage Journal

        Why am I spending money on getting massive multi-terabyte disks when what we've got is enough?

        Maybe for redundancy? I usually use an mdadm RAID for redundancy. The drives are not the same size, but the partitions that I contribute to the RAID are (this leaves me with a lot of temporary non-redundant space should I need it). When the smaller drive reaches capacity, I just replace it, and then have space to enlarge the array. Most of the time I have more disk space than I am using, but I only ever have to replace one of my physical drives at a time. I used to buy the largest drive I could afford, and it would last me years. Now I buy at the point where the price/capacity curve seems to become linear.

        Occasionally I have to replace one because it fails, rather than because it is full. That's when I'm pleased to have the RAID.
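
        A sketch of that layout with made-up device names: two unequal drives, equal-sized partitions paired into the mirror, the leftover space kept as non-redundant scratch:

            # sda is the smaller drive; sda1 and sdb1 are the same size
            mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
            mkfs.ext4 /dev/sdb2          # leftover space on the bigger drive

            # when the smaller drive fills up or fails: swap it out, then grow
            mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
            mdadm --add /dev/md0 /dev/sdc1   # partition on the new, bigger drive
            mdadm --grow /dev/md0 --size=max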

  • (Score: 3, Interesting) by Anonymous Coward on Thursday January 14 2016, @03:52AM

    by Anonymous Coward on Thursday January 14 2016, @03:52AM (#289374)

    Anybody here tried out one of those non-standard RAID setups like unRAID or SnapRAID?

    They seem ideal for read-mostly media storage like TV shows, movies, music, etc. To over-simplify, they work by treating each disk as a stand-alone disk and then using a separate parity drive for all the disks in the "raid." You can mix and match disks of arbitrary sizes, as long as the parity drive is as large as the largest data disk. You don't get the performance benefits of striping, but for media that's not necessary. You do get the benefit of each disk working fine on its own, and you only need to spin up the one disk you are accessing rather than all of the drives in the raid set.

    I'm thinking about building a home NAS primarily for media, and ZFS seems like overkill, with lots of complexity that provides negative value for my intended usage.
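
    For the curious, a minimal SnapRAID configuration along those lines might look like this (paths invented; one parity disk covering three mixed-size data disks):

        # /etc/snapraid.conf
        parity /mnt/parity1/snapraid.parity      # parity disk: must be the largest
        content /var/snapraid/snapraid.content   # array metadata
        content /mnt/disk1/snapraid.content      # keep extra copies on data disks
        data d1 /mnt/disk1
        data d2 /mnt/disk2
        data d3 /mnt/disk3

    Parity is then computed on demand with "snapraid sync" rather than on every write, which is exactly the trade-off that suits read-mostly media.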

    • (Score: 2) by opinionated_science on Thursday January 14 2016, @10:29PM

      by opinionated_science (4031) on Thursday January 14 2016, @10:29PM (#289690)

      No. No vendor lock-in for storage. MDADM->ZFS->BTRFS.

      The point about ZFS is that it is rock solid NOW. BTRFS will get there in, say, 3 years?

      For now MDADM is bulletproof for RAID1/6, but without the block checksums you get from ZFS/BTRFS it's a risky proposition with disks >2TB.

      Hence, you should really have 2 NAS boxes, so you can gracefully do maintenance on one while upgrading the other. This should be every 2-3 years.

      And with flash drives becoming cheaper and huge, we might finally have tape on the ropes....

  • (Score: 2) by Whoever on Thursday January 14 2016, @03:55AM

    by Whoever (4524) on Thursday January 14 2016, @03:55AM (#289375) Journal

    random I/O performance is not a concern for 99% of the people building a home NAS.

    IMHO, for a home user, if they don't care about random I/O performance, then they don't care about performance of their filesystem at all.

    I might concede that for people who do lots of video editing, they may be concerned with streaming I/O performance. But for most people, they are reading and writing lots of small files, which is effectively random I/O.

    • (Score: 0) by Anonymous Coward on Thursday January 14 2016, @04:38AM

      by Anonymous Coward on Thursday January 14 2016, @04:38AM (#289383)

      Not if they are using the NAS to serve video.

      • (Score: 3, Insightful) by Whoever on Thursday January 14 2016, @05:25AM

        by Whoever (4524) on Thursday January 14 2016, @05:25AM (#289389) Journal

        Not if they are using the NAS to serve video.

        Serving video shouldn't come within an order of magnitude of taxing a modern NAS.

        • (Score: 2) by VanderDecken on Thursday January 14 2016, @08:21AM

          by VanderDecken (5216) on Thursday January 14 2016, @08:21AM (#289414)

          I have one deployed in an environment where 3D 4k video is being edited. Yes, we can easily saturate a 10Gb link, and the disks aren't exactly idle.

          Yes, ZFS with RAIDZ2 keeps up, but it's in the realm where tuning may be needed.

          --
          The two most common elements in the universe are hydrogen and stupidity.
        • (Score: 0) by Anonymous Coward on Friday January 15 2016, @12:05AM

          by Anonymous Coward on Friday January 15 2016, @12:05AM (#289724)

          So you admit that a NAS that serves video, a common application, does not require high random I/O performance.

  • (Score: 0) by Anonymous Coward on Thursday January 14 2016, @06:47AM

    by Anonymous Coward on Thursday January 14 2016, @06:47AM (#289400)

    A 25? year old NSLU2 hacked with Unslung firmware. I did the CPU overclock hack too, which only required cutting a resistor path. It serves my media files and torrents; it even cleans out my cat's litter box. Not bad for the $25 I paid for it.

  • (Score: 5, Insightful) by bradley13 on Thursday January 14 2016, @07:19AM

    by bradley13 (3053) on Thursday January 14 2016, @07:19AM (#289402) Homepage Journal

    I'm not sure this is a big issue. Hardware gets old anyway, and wants replacing.

    I set up our NAS 6 or 7 years ago. At the time, I put on plenty of storage, and it's still only 25% full. The NAS has run perfectly this entire time, no failures at all, but the NAS and all of the hard drives are old, and failures at some point are inevitable.

    When I replace it, I won't be adding a drive here or there, I will replace everything. With larger disks that will suffice for the coming years. Disk space is so cheap that it's not worth futzing around.

    The one thing I regret/dislike about ZFS is that it is not supported by the commercial NAS providers. As I understand it, this is for licensing reasons - it's still annoying, because I am perfectly happy with a commercial NAS solution, but I would very much like to move to ZFS instead of standard RAID.

    --
    Everyone is somebody else's weirdo.
    • (Score: 2) by julian on Thursday January 14 2016, @07:48AM

      by julian (6003) Subscriber Badge on Thursday January 14 2016, @07:48AM (#289408)

      If I could use ZFS on Linux I would, but I've not been happy with BSD.

      I built a NAS a year ago and initially tried FreeNAS with ZFS. It could reliably push ~800 mbit/s transferring files to my wired desktop on the same switch. I didn't have any problems with performance, but I hated FreeNAS/BSD as an OS. Everything seemed needlessly verbose, recondite, and complicated. It took way too long to get everything working. I'm used to the Linux way of doing things, true, but the FreeNAS interface is horrible. I just want to turn on file sharing and a bittorrent server, don't make me manually create directories, "volumes", users, groups, set permissions, set up Jails, and ssh into the server to edit config files. Most of this should be abstracted away unless the user needs that level of control.

      Eventually I ditched FreeNAS and switched to Debian 8 with XFS on RAID1. Performance was the same and everything was much simpler. I easily got transmission-daemon up and running, and the server has been working flawlessly ever since. Haven't had any issues with systemd either.

      I shouldn't disparage BSD entirely. It CAN be used to make an amazing system. I converted an old dual core Pentium Dell into a pfSense router and it's rock solid. No complaints.

      • (Score: 3, Disagree) by Francis on Thursday January 14 2016, @09:46AM

        by Francis (5544) on Thursday January 14 2016, @09:46AM (#289430)

        If you're doing any sort of Sysadmin work, even as a hobby, and you think that *BSD is complicated, you need to really rethink what you're doing before your entire network gets pwned.

        *BSD doesn't sugar coat things the way that Linux does by default, but most of the software I'm running on FreeBSD is the same as I'd be running on Linux, so I'm not sure how you get off suggesting that *BSD is worse. What's more, you don't have that systemd infection to worry about.

        • (Score: 2) by rleigh on Thursday January 14 2016, @10:13AM

          by rleigh (4887) on Thursday January 14 2016, @10:13AM (#289437) Homepage

          I'm someone who migrated from Linux to FreeBSD, including on a NAS. I didn't use FreeNAS, I used stock FreeBSD10 (.0, now .2), installed on ZFS, and set up NFSv4 and Samba (from ports) exports.

          After 15 years of using Linux pretty much exclusively, there was a (small) learning curve. Different tools for partitioning disks (gpart, gnop, geom), different way to configure init (rc.conf), etc. FreeBSD today is no harder than Linux of a few years ago. It's not *harder*, it's just *slightly different*. Once you've learned those minor differences, it's perfectly usable. And other than the initial setup and how the system boots, everything laid on top of that--the actual tools and programs you use on a day-to-day basis are absolutely identical to those you use on Linux, since they are exactly the same ones.

        • (Score: 4, Insightful) by janrinok on Thursday January 14 2016, @10:17AM

          by janrinok (52) Subscriber Badge on Thursday January 14 2016, @10:17AM (#289438) Journal

          Any computer can be managed at the lowest possible level, but that doesn't mean that it should be.

          Many competent home users are quite content to use Linux because it doesn't make one go back to the lowest level, and nor should it. Do you operate your cell/mobile by poking bits and bytes? Why not? You could cut out loads of software by doing so - but your phone would be next to useless compared with all of the other devices on offer. The computer is a tool: it shouldn't demand that you understand machine code or scripting or low level languages. It shouldn't expect you to manually carry out tasks that don't require human intervention. Nor should a NAS expect you to set things up manually by creating partitions, LVMs etc, when there are perfectly good software tools for doing that job.

          If you want to work at a low level then fine, your computer shouldn't stop you from doing that either, but please don't suggest that it is the only way that computers should be operated. I used the SN comment editor to write this comment - I did not use some archaic method of inputting data such as punched cards or paper tape, although I have used both back in the day.

          I also run Linux without systemd, with all the security updates guaranteed until 2019. Ubuntu 14.04 is an LTS release, with support until that date, but there are others. Even Debian lets me revert to the older init system should I wish to do so. And on those systems where I do run systemd - so that I can understand how it works and find its strengths and weaknesses for myself, without following all the 'advice' being pushed out on the 'net - I have found that it functions fine, with no particular problems and certainly not the end of the world that was prophesied. Plus it gives me several capabilities that are not available with the older init system. Suggesting that 'systemd' is an argument to convince people to stop using Linux is quite wrong, IMHO.

          • (Score: 2) by bradley13 on Thursday January 14 2016, @04:58PM

            by bradley13 (3053) on Thursday January 14 2016, @04:58PM (#289551) Homepage Journal

            Did you reply to the wrong comment? I don't understand your response at all... My post basically says: why mess with adding individual disks, just replace the whole NAS. Which would seem to be anything but "low level", and certainly has nothing to do with specific Linux distros, systemd, etc...

            --
            Everyone is somebody else's weirdo.
            • (Score: 2) by janrinok on Friday January 15 2016, @10:04AM

              by janrinok (52) Subscriber Badge on Friday January 15 2016, @10:04AM (#289822) Journal

              No - I replied to Francis (5544) who was suggesting that 'BSD doesn't sugar coat things the way that Linux does'. My view is that forcing users to go back to the command line or refusing to use many of the latest software tools is not something we should be advocating. It's fine for those that want to use it, but it shouldn't be necessary in this day and age.

              The threading on my display seems to show the hierarchy of comments correctly, as does pressing the 'Parent' button on my comment. Perhaps you have found a software bug?

              I have no argument with your suggestion that replacing all of the drives at the same time would be a good idea.

              • (Score: 1) by Francis on Saturday January 16 2016, @02:38PM

                by Francis (5544) on Saturday January 16 2016, @02:38PM (#290277)

                Sugar coating is where most of the problems come from in the first place. I write my own scripts to handle things like rotating my ZFS snapshots (a sketch follows below). I don't really need the scripts, but they do make the process quicker and more efficient.

                The problem though is that when you're using a GUI it's really easy not to understand what you're doing and why things went south. I get that it's become cool to be computer illiterate, but the bottom line here is that anybody who cares about efficiency or reliability is going to be using the CLI for most things anyways. It's just so much faster. The main exceptions are things where you need to work with graphics; those are usually best handled graphically.
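
                As an illustration of the snapshot rotation mentioned above, a hypothetical sketch (dataset name and retention count invented; GNU userland assumed):

                    #!/bin/sh
                    # keep the newest 14 daily snapshots of tank/data, destroy the rest
                    DATASET=tank/data
                    KEEP=14

                    zfs snapshot "$DATASET@daily-$(date +%Y%m%d)"

                    # list oldest first; drop the newest $KEEP from the kill list
                    zfs list -t snapshot -o name -s creation -H \
                      | grep "^$DATASET@daily-" \
                      | head -n -"$KEEP" \
                      | xargs -r -n1 zfs destroy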

                • (Score: 2) by janrinok on Sunday January 17 2016, @03:23PM

                  by janrinok (52) Subscriber Badge on Sunday January 17 2016, @03:23PM (#290747) Journal

                  So you do use a cli on your 'phone?

                  You could easily type a phone number on a cli and then speak to whom you contacted - but I bet that you don't do that, even though not too many years ago we all did effectively just that. Ask yourself why not. I'm also pretty sure that you will come to the same conclusion that I have - it just makes more sense to use a graphical interface.

                  I agree that there are times when the cli is a great advantage. However, I still contend that those times are becoming more and more rare as time goes by. And, apart from perhaps servers and a few other niche areas, after another decade or two many people will not believe that we ever used the cli at all. Which is exactly as it should be. How many people write software in assembly language nowadays?

                  • (Score: 1) by Francis on Sunday January 17 2016, @03:37PM

                    by Francis (5544) on Sunday January 17 2016, @03:37PM (#290753)

                    That's absolutely ridiculous. Using a CLI to enter a phone number isn't any different from using a GUI and you know perfectly well that's not what I was talking about. There's no substantial difference between typing "call 555-1234" and typing "555-1234" into a text box and pressing a call button.

                    If we're talking about cell phones, we've already given up all hope of being productive and efficient as few phones come with an actual keyboard anymore. And the UI is generally not set up for efficiency anyways.

                    • (Score: 2) by janrinok on Sunday January 17 2016, @08:48PM

                      by janrinok (52) Subscriber Badge on Sunday January 17 2016, @08:48PM (#290864) Journal

                      The point that I am making - apparently not very well - is that we should endeavour to move away from the command line for most every-day users of computers. A computer should be a tool that can be used by the largest number of people possible, not the province of a gifted few who have mastered the command line.

                      TFS is discussing the installation of a home NAS - and setting one up should be within the abilities of those that want to use a NAS in the home environment. Advocating the use of the cli to achieve such a thing is not making it within the abilities of the majority of computer users. We, as programmers and system developers, should be providing the tools that enable such things to be done as easily as possible by the majority of potential users.

                      If, as you mentioned, the problems tend to begin when a GUI is used then we, the people who write the GUI software, are responsible. We need to up our game.

          • (Score: 2) by Marand on Thursday January 14 2016, @07:52PM

            by Marand (1081) on Thursday January 14 2016, @07:52PM (#289631) Journal

            Any computer can be managed at the lowest possible level, but that doesn't mean that it should be.

            Many competent home users are quite content to use Linux because it doesn't make one go back to the lowest level, and nor should it. Do you operate your cell/mobile by poking bits and bytes? Why not? You could cut out loads of software by doing so - but your phone would be next to useless compared with all of the other devices on offer. The computer is a tool: it shouldn't demand that you understand machine code or scripting or low level languages. It shouldn't expect you to manually carry out tasks that don't require human intervention. Nor should a NAS expect you to set things up manually by creating partitions, LVMs etc, when there are perfectly good software tools for doing that job.

            This is something that's worth repeating, because it seems like so few of us understand it. Computers are supposed to make our lives and work easier, but we as a group seem to revel in doing things in the least convenient ways possible, contorting our own workflows and thought processes until they match the way the computer works, putting the extra difficulty on ourselves rather than making the computer work better for us. When someone uses, or suggests using, a simpler or higher-level tool, we sneer at them and make vague remarks about "performance" to justify the elitism.

            It happens with OSes, with people often ignoring other (dis)advantages to belittle someone for using an "easier" option. There are plenty of reasons a user might choose (or avoid) OS X, Windows, Linux, or the BSDs without bitching about a system "sugar coating" things and making it easier. Even within the same OS, users of the different BSD and Linux distributions sneer at each other the same way; for example, Gentoo and Arch users tend to be insufferable pricks in discussions about other distros, and for what? Because they actively made their computers less useful by adding complexity and potential downtimes? (Note that I'm not saying there are no good uses for those distros, just that I think most people use them for all the wrong reasons. Specifically, many seem to only use them for the +1 smugness bonus.)

            If that isn't insane enough, we even sneer at people for what text editor they use. Sure, there's the emacs vs. vi thing, but that pales to the self-righteousness and belittling remarks from both camps the moment someone pipes in that they like kate / notepad / notepad++ / nano / etc. And all of this pales to the ultimate battleground of self-righteous back-patting and insults: programming languages. If your language of choice dares implement syntactic sugar or otherwise even look too "high level", you're an infidel that should be sacrificed upon the C/C++ altar of unnecessary optimisation. Again, there are good arguments for/against various languages, but "it's too easy" shouldn't be one.

            Why did we, as a group, decide to go this route? Is it about job security, or are we that desperate to feel superior to someone that we actively make our lives worse just to say we're smarter or better than someone for using an easier tool? If a Ruby script on an Ubuntu install is all that the person needs to make something useful to them, we should encourage that instead of mocking them for not writing their own kernel in assembly, bootstrapping a C compiler on the custom OS, and then writing the tool they needed in C.

      • (Score: 2) by RedGreen on Thursday January 14 2016, @03:48PM

        by RedGreen (888) on Thursday January 14 2016, @03:48PM (#289522)

        "If I could use ZFS on Linux I would but I've not been happy with BSD."

        Never tried that hard, then, if you could not be bothered entering "zfs on linux debian" into Google, which returns this as the first hit.

        http://zfsonlinux.org/debian.html [zfsonlinux.org]

        apt-cache policy debian-zfs
        debian-zfs:
            Installed: 7~wheezy
            Candidate: 7~wheezy
            Version table:
          *** 7~wheezy 0
                      500 http://archive.zfsonlinux.org/debian/ wheezy/main amd64 Packages
                        100 /var/lib/dpkg/status

        cat /etc/debian_version
        7.9

        Works great for my personal seedbox. I would agree that FreeNAS is junk; file transfers were just plain painful on this same hardware when I gave it a shot.

        --
        "I modded down, down, down, and the flames went higher." -- Sven Olsen
      • (Score: 1) by fubari on Thursday January 14 2016, @06:36PM

        by fubari (4551) on Thursday January 14 2016, @06:36PM (#289602)

        ... needlessly verbose, recondite, and complicated.

        +1 Irony :-)

        context:

        I didn't have any problems with performance, but I hated FreeNAS/BSD as an OS. Everything seemed needlessly verbose, recondite, and complicated. It took way too long to get everything working.

    • (Score: 2) by VanderDecken on Thursday January 14 2016, @08:08AM

      by VanderDecken (5216) on Thursday January 14 2016, @08:08AM (#289413)

      iXsystems is a commercial NAS provider that supports ZFS. They even have smaller models suited to home use.

      --
      The two most common elements in the universe are hydrogen and stupidity.
      • (Score: 3, Interesting) by TheRaven on Thursday January 14 2016, @12:12PM

        by TheRaven (270) on Thursday January 14 2016, @12:12PM (#289455) Journal
        And they posted some very entertaining rebuttals when TFA was originally circulated a little while ago. TL;DR: the author of TFA is completely clueless, and has deleted all of the comments on the blog post from people who actually know what they're talking about. It's quite embarrassing for Soylent to have posted this crap.
        --
        sudo mod me up
        • (Score: 2) by janrinok on Friday January 15 2016, @10:17AM

          by janrinok (52) Subscriber Badge on Friday January 15 2016, @10:17AM (#289824) Journal

          It's quite embarrassing for Soylent to have posted this crap.

          TheRaven - you make a valid point, but if all the useful comments have been deleted from the other site that you mentioned, then perhaps discussing them here (and not subsequently deleting them!) will help educate others who might not be as well informed in this particular subject. I'm sure that I am not alone in finding that I learn something new every day from this site - and that something might be blindingly obvious to another member who has significant expertise in the subject under discussion.

  • (Score: 4, Informative) by rleigh on Thursday January 14 2016, @09:49AM

    by rleigh (4887) on Thursday January 14 2016, @09:49AM (#289431) Homepage

    Simply use RAID10, and add pairs of HDDs to the pool as needed. For a home NAS, that's likely perfectly sufficient. And you can also grow capacity within each vdev as you swap out drives over time. It's even recommended over RAIDZn for big systems, though I can't find the reference offhand. That's due to resilver performance and time: a mirror resilver has low impact on the system and the other disks in the array, and, counterintuitively, the window of vulnerability during the resilver is smaller than for RAIDZn because of the speed, despite there being fewer copies of the data.
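
    In zpool terms that growth path is just (pool and device names invented):

        # start with one mirrored pair
        zpool create tank mirror /dev/ada1 /dev/ada2

        # later, when more space is needed: add a second pair, and the
        # pool stripes across both mirrors (i.e. RAID10)
        zpool add tank mirror /dev/ada3 /dev/ada4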

    • (Score: 3, Informative) by ThePhilips on Thursday January 14 2016, @11:13AM

      by ThePhilips (5677) on Thursday January 14 2016, @11:13AM (#289443)

      Simply use RAID10, and add pairs of HDDs to the pool as needed.

      Not at home. Pair disks into RAID-1s, and then add them to a JBOD. Because you can expand a JBOD, and you cannot expand a RAID-0. That IMO is more important at home.

      [RAID-10 is] even recommended over RAIDZn for big systems, though I can't find the reference offhand.

      RAID-10 is recommended because it is primitive, better supported overall and actually can survive a crash of multiple drives (if the failed drives are in different stripes). People also believe that it has better performance, but IMO it stems from the fact RAID-1/RAID-10 is primitive and better supported. (I have seen at least one NAS which had full RAID-5 hardware support (including the accelerated parity generation/checking) and it was faster than RAID-10 (as it should be: you read/write from/to 3 drives in parallel, not 2, thus read/write 50% more data at once). But that was rather exception.)

      (Disclosure: I am NOT a storage expert. I worked for a company on a piece of software which required storage. We officially recommended (and supported only) RAID-10 over RAID-5 and friends, mostly because we had seen relatively many problems with RAID-5 and the like. E.g. one of the customers, against our recommendation, had used RAID-5, and they had to reboot that NAS every 3 months because of a memory leak which the vendor couldn't/wouldn't fix. The same storage system for Oracle DBs (Oracle mandates RAID-10) had no problems whatsoever.)

      • (Score: 3, Informative) by TheRaven on Thursday January 14 2016, @12:09PM

        by TheRaven (270) on Thursday January 14 2016, @12:09PM (#289454) Journal

        Not at home. Do pair disks in RAID-1s, and then add them to JBOD. Because you can expand JBOD, and you cannot expand the RAID-0.

        That's true with classical RAID, but not true with ZFS. You can add disks to a striped set and new data will be spread over them gradually.

        --
        sudo mod me up
  • (Score: 0) by Anonymous Coward on Thursday January 14 2016, @02:49PM

    by Anonymous Coward on Thursday January 14 2016, @02:49PM (#289503)

    Where is the smarts? In the HDD or the computer? The simplest way is to leave the smarts in the HDD. It means you can disconnect it and plug it into some other computer and just use it.

    If the smarts are in the computer, you might need to connect all the HDDs in the array, in the same order, AND transfer the "smarts" in the form of a program and/or config file, so as to be able to move it around to another computer.

    Since storage requirements are starting to overtake single-HDD sizes, and more than one HDD is required, I would suggest that the HDD manufacturers start to sell solutions that are actually two or three HDDs but with one connector.

    So, one big box from Western D. or S.gate, just like a single HDD but with 20 or 30 TB capacity and only ONE SIMPLE SATA connector. Since there are no single HDDs in the 20 TB range, the box would have hidden away and irreplaceable 3 x 5 TB drives or so inside.

    Thx. I'll buy two :)

    • (Score: 0) by Anonymous Coward on Thursday January 14 2016, @10:32PM

      by Anonymous Coward on Thursday January 14 2016, @10:32PM (#289692)

      I can't quite get my head around how having the smarts in the hard drive is a good idea. Beyond your rough outline, how exactly do you envisage the smarts for a RAID array built into a drive working? I'm sure it is possible, but the drive manufacturers would either make it cheap and unreliable, or expensive, and either way it would still be less flexible than a software-based solution.

      From what you described wanting, it sounds like you either want a hardware RAID controller or an appliance NAS box. I don't think this article is for you.

      • (Score: 1) by Francis on Saturday January 16 2016, @02:46PM

        by Francis (5544) on Saturday January 16 2016, @02:46PM (#290279)

        The GP is a troll. I remember years ago looking into hardware versus software RAID.

        Hardware RAID oftentimes comes with vendor lock-in that can leave you unable to access the data if the controller fails. And even if it doesn't fail, it can make it a real pain to migrate. For example, if you're moving a system with a PCI RAID controller to one that only has PCIe slots, it's going to be a huge hassle. With software RAID, that's not likely to be an issue as long as the OS is supported on both sides.

        I've been using ZFS for a while now and it works really well. Linux's LVM also seems to work well in this regard, but ZFS brings a lot more.

  • (Score: 4, Informative) by pendorbound on Thursday January 14 2016, @03:06PM

    by pendorbound (2688) on Thursday January 14 2016, @03:06PM (#289511) Homepage

    You absolutely can add storage incrementally to ZFS pools. TFA explains the limitations in a bit more detail (but still makes them sound way scarier than they are), but this summary completely misses the details. There's one operation MDADM allows that ZFS doesn't: you can't just throw another drive into an existing RAID pool. You can't turn a four-drive ZFS RAIDz into a five-drive ZFS RAIDz and get more storage. You *can* turn a four-drive RAIDz1 into a five-drive RAIDz2 to get more redundancy, but *not* more storage.

    As for what you can do, you have two options for adding storage to a zpool: autoexpand and just adding more RAIDz vdev's.

    The ZFS autoexpand attribute has been available since Solaris 10 was released in 2005 (http://docs.oracle.com/cd/E19253-01/819-5461/githb/index.html). To the best of my knowledge, it's been present in the ZFS on Linux and OpenZFS ports since their initial release.

    This article describes the procedure for increasing a pool's size using autoexpand: https://jsosic.wordpress.com/2013/01/01/expanding-zfs-zpool-raid/. [wordpress.com] You have to replace each device in the RAIDz one at a time, allowing them to resilver in between. Once all the devices are replaced with larger capacity devices, the pool will expand to use the new space.

    You can also add new vdevs to an existing storage pool, meaning you can add a new group of several RAIDz drives to an existing pool, and the storage will all be usable under the same pool & mount hierarchy.

    You can replace four 1TB drives in a RAIDz1 (3TB actual storage) with four 3TB drives, still in RAIDz1 to yield 9TB actual. You can also leave the four 1TB's spinning and install new drives along side them (assuming you've got room in the chassis) to get 4x1TB-RAIDz1 + 4x3TB-RAIDz1 for 12TB actual storage, all under the same pool.

    It is true you end up growing in chunks, which inevitably leaves "wasted" space until you get around to using it. I'd say if you're not okay with that, you might not understand what a NAS is. If you're planning on adding or replacing drives on a very frequent basis "as you need it," your odds of an "oops" and data loss get crazy. Set it up, forget it, and only mess with it (infrequently!) when you need more storage or have to replace a faulted drive.
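
    A condensed sketch of both options (device names invented):

        # Option 1: autoexpand -- swap each 1TB disk for a 3TB disk, one at a time
        zpool set autoexpand=on tank
        zpool replace tank /dev/sda /dev/sde   # wait for the resilver, then repeat
        zpool status tank                      # pool grows once all four are replaced

        # Option 2: add a second RAIDz vdev alongside the first
        zpool add tank raidz /dev/sdi /dev/sdj /dev/sdk /dev/sdl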

  • (Score: 0) by Anonymous Coward on Thursday January 14 2016, @05:16PM

    by Anonymous Coward on Thursday January 14 2016, @05:16PM (#289560)

    Yet the article does not mention one area that is really important for reliability and that adds expense to the motherboard and RAM: ZFS's dependence on ECC RAM.
    Having bit errors in RAM during a scrub can be catastrophic to your data, as the scrub tries to "correct" the data on disk based on the bit errors in RAM.

  • (Score: 2) by VLM on Thursday January 14 2016, @11:10PM

    by VLM (445) on Thursday January 14 2016, @11:10PM (#289709)

    I thought I was hard core because I had 6 TB of spinning rust and a TB of SSD at home, but the linked article talks about home amateur NAS where his old system had 18 TB and his new system has 71 TB. And that's "amateur just fooling around at home". OK then. So at $300 per SSD TB, assuming he mirrors, that's 142 TB at $300 per TB, i.e. $43,000 of drives "for amateur fooling around". I'm pretty well off and even I'd kind of blink my eyes at that. I guess a serious home NAS guy would have 10 to 100 times as much storage at home. Um, sure.

    I can't find more than a couple TB of compressed stuff worth watching. I guess if you went audiophile snob and insisted on every trek episode ever made in uncompressed 4K... but watchable SD to low quality HD is only 100 gigs or so.

    I was actually shocked the whole article wasn't about ZFS being memory hungry when you de-dupe. At least that was an issue back in the old days. I don't de-dupe, so it's no issue for me.

    Perhaps the linked author means amateur in that he only has one system. I have three small systems and I more or less load balance. This is quite handy. My video collection for mythtv is on spinning rust, but my network-wide home dirs are served over NFS from lightning fast (yet "small") SSD. SSD is huge for desktop use... you only have to run modded minecraft once on SSD and you'll never tolerate spinning rust performance again.