

posted by martyb on Thursday February 04 2016, @03:38PM   Printer-friendly
from the predicting-the-future-is-hard dept.

For the last 10 years my company (not an IT company) has built about 2000 Linux machines, all based on a common preseed file and Ubuntu Server, which installs a home-grown auditing tool, basic configuration, and an internal apt repository of about 200 home-spun and other useful debs. About 1000 of these machines are still on the network somewhere in the world (none in Antarctica, at least no permanent installs). We have zero professional Linux administrators, just people who dabble.

However, in addition to those we have at least two dozen RPM-based machines, which lack auditing or management; they just get installed and forgotten about. These are typically for horrible purposes like running Oracle, but we also have a fair number of Xen nodes. I was going to build a yum repository for these machines in hopes of tying in some of the tools we have (things like auto-registering into Nagios).
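For reference, the yum side of that plan is only a few lines of config. A hypothetical sketch of an internal repo definition (the hostname and key path are made up, not from the submitter's setup):

```ini
# /etc/yum.repos.d/internal.repo -- hypothetical internal mirror
[internal]
name=Internal tools
baseurl=http://repo.example.internal/el7/$basearch
enabled=1
gpgcheck=1
gpgkey=http://repo.example.internal/RPM-GPG-KEY-internal
```

Drop that file on each machine (or push it from the preseed-equivalent kickstart) and `yum install` will see the internal packages alongside the vendor ones.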

Being an old fart, I consider "yum" to be quite new compared with apt, so I was surprised to see that it is being replaced with DNF. There's also a push from the youngling developers to run everything in Docker on something like Red Hat Atomic or Ubuntu Core, which are Red Hat and Ubuntu in name only and lack any traditional package tools.

Given that the hipster millennial agile cupcakes are the future, is there a future in old-fashioned RPMs or debs distributed by yum/dnf and apt, or will the future be "snappy"? How have you managed to cope with the move to a containerised environment? Or do you think it's all a fad and we'll swiftly move back to traditional metal-OS-application (rather than metal-OS-container-VM-OS-container-OS-application)?


What package manager(s) do fellow Soylentils use? What shortcomings have you encountered?

Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 4, Informative) by Marand on Friday February 05 2016, @03:11AM

    by Marand (1081) on Friday February 05 2016, @03:11AM (#299277) Journal

    I came here to mention Nix, but since you did I'll instead give you an upmod and add some information about what you said.

    As to whether it actually gets adopted, I'm leaning toward "no", for the same reason Vim is more popular than Emacs despite the latter's technical superiority.

    It's also not going to get adopted because of "I already know [foo]" and "boohoo, it doesn't conform to the Filesystem Hierarchy Standard" (man hier). The latter has already been cited as a reason why Debian will never even consider Nix or its concepts for improving Debian's package management, though that same argument probably won't stop Debian from consolidating /usr/bin and /bin the way other distros have started doing.

    In the case of Nix, the FHS is ignored out of necessity: it's what makes the "functional" part of "functional package manager" possible.

    Nix is a functional package manager. If you're familiar with strict functional programming like Haskell, this should make sense intuitively. All software packages are well defined, with isolated build environments. If you install package foo 1.2, Nix pulls the same source code and the same dependencies every time, builds them the same way, with the same flags, in a clean environment every time, and installs the result in an isolated location every time.

    The point is, installing package foo 1.2 will give you the same thing every time, irrespective of installed dependencies, architecture or distribution. I can for example run Nix and install packages on my Arch computer and they will run identically to the same packages installed on NixOS (a Linux distro built around Nix) on a MacBook.

    To expand on this point, especially for people not as familiar with functional programming, an important part of FP is function purity [wikipedia.org] and referential transparency [wikipedia.org]: a function should, internally, have no side effects and should always give the same result if given the same input.

    Nix [nixos.org] applies this idea to package management. Nix functions are recipes of what pieces of software are required to create or run a new piece of software, so that you can always reliably create the desired package on any system. Unlike traditional package management where updating one program will require updating libs that will in turn also require updating unrelated software, Nix packages install what they need and different lib versions will live together happily in a store (/nix/store) under different hash IDs.
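    To make that concrete, here is a hypothetical sketch of a Nix recipe for the "foo 1.2" example above (the package name, URL, and hash are placeholders, not a real package):

```nix
# Hypothetical recipe for "foo 1.2": every input is pinned, so every
# build of this expression produces the same output in /nix/store.
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  name = "foo-1.2";
  src = pkgs.fetchurl {
    url = "https://example.org/foo-1.2.tar.gz";  # fixed source location
    # Content hash (placeholder): the fetched tarball must match it,
    # so the build can never silently pick up different source.
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
  buildInputs = [ pkgs.zlib ];  # exact dependencies, not whatever the host happens to have
}
```

    Because the inputs (source, dependencies, flags) fully determine the output, the result lands in the store under a hash of those inputs, which is why different versions of the same lib can coexist.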

    Okay, that sounds nice and all, but the next question is what good is it to me?

    This setup has a few interesting benefits:

        1. You can have multiple versions of software installed more easily and switch between them as desired. Multiple copies of Wine, Firefox, Krita, gcc, etc. can be installed and changed simply.

        2. Nix maintains package "generations", allowing you to cleanly revert from an update. Hypothetical example: remember the horrible KDE 3 to 4 transition? With Nix, a user could have updated to KDE 4.0, found it unusable, and then reverted to the previous generation safely. Systems like dpkg/apt are fine for upgrades but not as good at reverting changes, especially in bulk.

        3. You can create your own Nix functions that encompass an entire work environment you want. Like, say, you're working on a project and you want a specific compiler version, a set of libs, Emacs, a specific set of elisp files for emacs, and your debugger of choice. You can describe that in a Nix function and have a one-stop install for it.

        4. Continuing from #3, because of the functional purity ideal, you should be able to take that Nix function and use it on any system with Nix installed to accurately and safely re-create that environment. You could deploy a standard kit of tools on multiple machines by creating a function and installing it on each.

        5. Strange, but kind of cool: Nix has a REPL that you can use to interact with the package manager and test creation of functions and whatnot.

        6. More clearly useful, it also has nix-shell, which lets you create a temporary shell environment with any packages you specify. Like, say, you want to test a new httpd, or try a different browser and see if you like it. You run a nix-shell session with the package you want and get a temporary sandbox, just like if you'd installed it for real. Try out the new software, see if you like it, and then when you exit the shell it's gone.

        7. Per-user packages. It's possible for different users on a system to have different sets of packages and versions, so you could use the latest Firefox but someone else could stick to the Firefox ESR without conflict.

        8. Finally, it can co-exist with other package management. Install it on Arch, install it on Debian, install it on OS X; doesn't matter, it doesn't mess with the FHS so you can run it alongside whatever your OS provides already. Use it solely for build environments or software testing or getting newer versions of things than what your OS provides.
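    Point 3 in practice is usually a shell.nix file in the project directory. A hedged sketch (the package names are illustrative; check your nixpkgs version for exact attribute names):

```nix
# shell.nix: a hypothetical one-stop work environment.
# Running `nix-shell` in this directory drops you into a shell
# with exactly these tools available, on any machine with Nix.
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  name = "project-dev-env";
  buildInputs = [
    pkgs.gcc      # a specific compiler
    pkgs.gdb      # debugger of choice
    pkgs.emacs    # editor
    pkgs.zlib     # project libs
  ];
}
```

    Point 2's rollbacks are similarly short in practice: `nix-env --list-generations` shows the available targets, `nix-env --rollback` returns to the previous generation, and `nix-env --switch-generation N` jumps to a specific one.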

    ---

    Sounds great, but of course there are negatives to the concept as well. The most obvious one is that it hides installed packages (libs, software, whatever) away inside /nix/store/ and makes heavy use of symlinks and $PATH to make things work. This has the side effect that you can't use a traditional script shebang (e.g. #!/usr/bin/foo), because foo most likely won't live in /usr/bin. You have to use the '#!/usr/bin/env foo' style instead.
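    The env-style shebang is easy to demonstrate, and works on any system, Nix or not (this sketch uses bash, but the same applies to any interpreter):

```shell
# A script that hardcodes an interpreter path breaks when the
# interpreter lives in /nix/store instead of /usr/bin; resolving
# it through env uses $PATH instead, which Nix manipulates.
cat > /tmp/nix-demo.sh <<'EOF'
#!/usr/bin/env bash
echo "resolved bash via PATH: $(command -v bash)"
EOF
chmod +x /tmp/nix-demo.sh
/tmp/nix-demo.sh
```

    On a Nix-managed system the printed path would point into /nix/store rather than /usr/bin, but the script itself never has to change.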

    The other negative is that this setup uses more space and does not automatically purge old software. (If it did, you'd have trouble with rollbacks.) That means Nix has to perform garbage collection, removing store paths once no existing generation references them. It also means you have to manage removal of old generations, either automatically or by hand.
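    The housekeeping amounts to a couple of commands (a transcript-style sketch using the classic nix-env tooling):

```shell
$ nix-env --list-generations          # see which rollback targets exist
$ nix-env --delete-generations old    # drop everything but the current generation
$ nix-collect-garbage                 # remove store paths nothing references
$ nix-collect-garbage -d              # or: delete old generations and collect in one step
```

    Nothing a generation still references is ever collected, so rollback targets stay intact until you explicitly delete them.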

    Still, those negatives don't seem so bad (to me, at least) compared to the benefits. If that's the same for anybody else, the next question is how to get it. Most likely you'll want to grab Nix itself [nixos.org] for use on existing systems, but there's also a standalone distribution [nixos.org] that uses Nix as its foundation and seems to be popular among people who use Nix. There is also Guix [gnu.org], a GNU project that provides a distribution based on Nix's design, but using Guile Scheme instead of Nix's own language to describe packages.

    Probably the best way to start is to either try using it alongside your normal distro, or try NixOS inside a VM. Though if you're an OS X user, Nix is probably a no-brainer, since OS X users already likely deal with macports/homebrew/whatever else, and it fills the same niche in a cleaner way.
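    Trying it alongside an existing distro is a short transcript (sketch; the installer URL is the one nixos.org documents, and GNU hello serves as the traditional sanity check):

```shell
$ curl https://nixos.org/nix/install | sh   # single-user install into /nix
$ . ~/.nix-profile/etc/profile.d/nix.sh     # put the nix tools on $PATH
$ nix-env -i hello                          # install GNU hello as a smoke test
$ hello
Hello, world!
```

    Since everything lands under /nix and ~/.nix-profile, removing it again is just deleting those directories; the host's own package manager is never touched.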

    Sorry this got so long, but it's a different way of managing packages, so it's hard to describe concisely. It may never be "the future" because of inertia and distros having no desire to implement it, but

  • (Score: 3, Informative) by Marand on Friday February 05 2016, @03:16AM

    by Marand (1081) on Friday February 05 2016, @03:16AM (#299279) Journal

    Damn it, bumped submit early. To finish the last sentence:

    It may never be "the future" because of inertia and distros having no desire to implement it, but it's an interesting design that seems to offer benefits similar to the ongoing Docker craze, without turning every program into a big program+lib bundle and recreating the problems of static linking all over again.