
SoylentNews is people

posted by martyb on Sunday June 25 2017, @03:23AM   Printer-friendly
from the rot13++ dept.

A blog has a walkthrough of using ZFS encryption on Linux:

In order to have a simple way to play with the new features of ZFS, it makes sense to have a safe "sandbox". You can pick an old computer, but in my case I decided to use a VM. It is tempting to use Docker, but that won't work, because we need a special kernel module to be able to use the zfs tools.

For the setup, I've decided to use VirtualBox and Arch Linux, since those are the tools I'm most familiar with. And modifying the zfs-dkms package to build from the branch that hosts the encryption PR is really simple.
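As a hypothetical sketch of that modification (the fork URL and branch name below are placeholders, not the actual PR source), the Arch workflow might look like:

```shell
# Fetch the zfs-dkms packaging from the AUR.
git clone https://aur.archlinux.org/zfs-dkms.git
cd zfs-dkms

# Edit the PKGBUILD so its source= entry points at the fork and branch
# carrying the encryption PR (placeholder values shown):
#   source=("git+https://github.com/CONTRIBUTOR/zfs.git#branch=ENCRYPTION_BRANCH")

# Rebuild and install the modified package; dkms then builds the
# kernel module against the running kernel.
makepkg -si
```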

[...] Finally we are able to enjoy encryption in ZFS natively on Linux. This is a feature that was long overdue. The good thing is that this new implementation improves on a few of the problems the original one had, especially around key management. It is not binary compatible, which is fine in most cases, and it is still not ready to be used in production, but so far I really like what I see.

If you want to follow progress, you can watch the current PR in the official git repo of the project. If everything keeps going well, I would hope for this feature to land in version 0.7.1.
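To see what the feature looks like in such a sandbox, here is a rough sketch of creating an encrypted dataset, assuming the property names proposed in the encryption PR (the syntax may still change before release):

```shell
# Back the pool with a scratch file, so no real disk is at risk.
truncate -s 1G /tmp/zfs-sandbox.img
sudo zpool create sandbox /tmp/zfs-sandbox.img

# Create an encrypted dataset; keyformat=passphrase prompts interactively.
sudo zfs create -o encryption=on \
                -o keyformat=passphrase \
                sandbox/secret

# Confirm the dataset is encrypted and its key is loaded.
sudo zfs get encryption,keystatus sandbox/secret
```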


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by rleigh on Monday June 26 2017, @10:33AM

    by rleigh (4887) on Monday June 26 2017, @10:33AM (#531246) Homepage

    I'd suggest looking at the books I posted in my other reply, and at the existing guidance out there on the net. 4GB is a reasonable minimum; file-level prefetching is disabled on systems with less memory, but I have as little as 2GB in some of my virtual machines. The more you have, the bigger its caches can be, and you can instrument ZFS to determine what the cache utilisation is on your system and whether it's wasted or too small. None of the documentation or books I've read to date has suggested that a certain amount of memory per TB of storage is a requirement, unless you're using deduplication. There's quite a lot of advice and guidance suggesting it, but I have not seen a technical rationale to back it up (if there is one, I'd be interested to see it).
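    For the instrumentation mentioned above, one approach on ZFS on Linux (assuming the kstat interface under /proc is available, as it is on ZoL builds) is to read the ARC statistics directly:

```shell
# Dump a few headline ARC counters from the kernel's kstat interface.
awk '$1 ~ /^(size|c_max|hits|misses)$/ { print $1, $3 }' \
    /proc/spl/kstat/zfs/arcstats

# A rough cache hit ratio computed from the same counters.
awk '$1 == "hits"   { h = $3 }
     $1 == "misses" { m = $3 }
     END { printf "ARC hit ratio: %.1f%%\n", 100 * h / (h + m) }' \
    /proc/spl/kstat/zfs/arcstats
```

    A low hit ratio or a `size` far below `c_max` is the kind of signal that tells you whether the cache is too small or going to waste.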
