posted by martyb on Sunday June 25 2017, @03:23AM   Printer-friendly
from the rot13++ dept.

A blog has a walkthrough of using ZFS encryption on Linux:

In order to have a simple way to play with the new features of ZFS, it makes sense to have a safe "sandbox". You could pick an old computer, but in my case I decided to use a VM. It is tempting to use Docker, but that won't work because we need a special kernel module to be able to use the zfs tools.

For the setup, I've decided to use VirtualBox and Arch Linux, since those are the tools I'm most familiar with. And modifying the zfs-dkms package to build from the branch that hosts the encryption PR is really simple.

[...] Finally we are able to enjoy native encryption in ZFS on Linux. This is a feature that was long overdue. The good thing is that this new implementation fixes a few of the problems that the original one had, especially around key management. It is not binary compatible with the original implementation, which is fine in most cases, and it is still not ready to be used in production, but so far I really like what I see.

If you want to follow progress, you can watch the current PR in the official git repo of the project. If everything keeps going well, I would hope for this feature to land in version 0.7.1.
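
For anyone who wants a quick taste of what the walkthrough ends up doing, the commands look roughly like this (pool and dataset names are placeholders, and the exact defaults may still change before the PR is merged):

    # create a throwaway pool on a spare disk inside the VM (destructive!)
    zpool create tank /dev/sdb

    # create a dataset encrypted with a passphrase; "encryption=on" picks the
    # default cipher, and "keylocation=prompt" asks for the passphrase
    zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt tank/secret

    # after an export/import or a reboot, load the key before mounting
    zfs load-key tank/secret
    zfs mount tank/secret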


Original Submission

 
  • (Score: 2) by fnj on Monday June 26 2017, @05:33AM (5 children)

    by fnj (1654) on Monday June 26 2017, @05:33AM (#531142)

    Incorrect. 1 GB per TB is the minimum for use without dedup. If you have dedup enabled, you need at least 5 GB per TB. But it's not a straight line. If you only have 0.1 TB, it's NOT going to work with 0.1 GB. Basically you need 4GB to get ZFS off the ground; then you add 1 GB per TB on top of that. It won't even work acceptably at all on ia32 because of virtual memory mapping limitations. It's got to be 64 bit.

    It doesn't do anybody any good to spread false information.

  • (Score: 2) by rleigh on Monday June 26 2017, @09:24AM (4 children)

    by rleigh (4887) on Monday June 26 2017, @09:24AM (#531219) Homepage

    Umm... it's not false information. I've just finished reading two ZFS books ("FreeBSD Mastery: ZFS" and "FreeBSD Mastery: Advanced ZFS"; both also applicable to other operating systems, for anyone wanting to know more about ZFS), I had read quite widely about the memory requirements before that, and I've been using it for several years myself. They are all quite clear that it's entirely possible to run ZFS on low-memory systems; I run FreeBSD on ZFS in low-memory virtual machines at work without any problems whatsoever.

    The large memory requirements are a myth which continues to propagate like some bizarre meme. Deduplication requires large amounts of memory for its in-memory copy of the on-disc deduplication tables; that scales linearly with disc size (well, with the total number of allocated unique blocks) and leads to the aforementioned rule of thumb. It does not apply at all to the ARC and related in-memory caches, which can be shrunk dramatically, and you can additionally tune individual datasets and zvols to adjust block prefetching, commit intervals and other parameters.
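
    To give a concrete example of shrinking the ARC (the 1 GiB figure is purely illustrative, not a recommendation):

        # ZFS on Linux: cap the ARC via a module parameter, persistently...
        echo "options zfs zfs_arc_max=1073741824" > /etc/modprobe.d/zfs.conf
        # ...or at runtime
        echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max

        # FreeBSD: the equivalent tunable, set in /boot/loader.conf
        # vfs.zfs.arc_max="1G"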

    It's true that running in very little memory requires sacrificing some performance, and that some optimisations are only enabled when a certain amount of memory is available, but this does not mean it's unusable. The cache sizes are not really related to the /quantity/ of storage, but to the data access patterns of your workload, and these can vary wildly between specific cases. ZFS can be tuned for throughput or for interactive response on a per-dataset basis, and this also affects the amount of cache required. Caching can also be completely disabled for datasets which won't benefit from it.
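
    A few of the per-dataset knobs I have in mind (dataset names are just examples):

        # keep only metadata in the ARC for a dataset of large, streamed files
        zfs set primarycache=metadata tank/media
        # disable caching entirely for data that will never be re-read
        zfs set primarycache=none tank/scratch
        # favour throughput over latency for bulk writes
        zfs set logbias=throughput tank/backups
        # match the record size to an application's I/O size
        zfs set recordsize=16K tank/db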

    I have no idea where the 32-bit comments came from, because I didn't refer to that at all.

    • (Score: 2) by LoRdTAW on Monday June 26 2017, @12:48PM (3 children)

      by LoRdTAW (3755) on Monday June 26 2017, @12:48PM (#531280) Journal

      You know your file system is massively over complicated when you need to read two fucking books on how to correctly use it.

      • (Score: 2) by rleigh on Monday June 26 2017, @04:43PM (2 children)

        by rleigh (4887) on Monday June 26 2017, @04:43PM (#531388) Homepage

        Ha ha. It's not quite that bad. I'd been using it for 2½ years before getting those books. They have some useful guidance and best practices, and I picked up on some neat tools I hadn't used before, but they are not essential reading unless you want to manage a humongous storage array with hundreds of discs, where the more advanced disc management stuff would come into its own.

        ZFS isn't just a file system, though; it also does RAID, volume management, block devices, replication and delegated administration, and you can do all sorts of configuration and performance tuning of individual datasets. There's enough functionality that a couple of small books are a worthwhile addition to your library if you want to use it properly. I think that's justifiable; if you wanted to use RAID+LVM+xfs/ext4 instead, there would be a good amount of complexity in setting up and maintaining that as well.
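
        To give a flavour of what I mean, these are all plain ZFS commands (pool, dataset, device and user names are only placeholders):

            # RAID and volume management: a mirrored pool in one command
            zpool create tank mirror /dev/ada1 /dev/ada2
            # block devices: a 20 GB zvol carved out of the pool
            zfs create -V 20G tank/vmdisk
            # replication: ship a snapshot to another pool or host
            zfs snapshot tank/home@today
            zfs send tank/home@today | ssh backup zfs receive backup/home
            # delegated administration: let a user manage their own snapshots
            zfs allow alice snapshot,send,hold tank/home/alice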

        • (Score: 2) by fnj on Monday June 26 2017, @08:58PM (1 child)

          by fnj (1654) on Monday June 26 2017, @08:58PM (#531535)

          You don't have a single goddam clue what you are talking about.

          • (Score: 2) by rleigh on Monday June 26 2017, @11:00PM

            by rleigh (4887) on Monday June 26 2017, @11:00PM (#531615) Homepage

            Then maybe you would care to point out exactly what I said which was wrong, with some references to the documentation, please?