
Meta
posted by NCommander on Tuesday February 07 2017, @11:45AM
from the insert-systemd-rant-here dept.

So, in previous posts, I've talked about the fact that SoylentNews currently runs on Ubuntu 14.04 plus a single CentOS 6 box. The sysops have been somewhat deadlocked on what we should do going forward for our underlying operating system, and I am hoping to get community advice. The "obvious" choice is simply to do-release-upgrade to Ubuntu 16.04. We've done in-place upgrades before without major issue, and I'm relatively certain we could upgrade without breaking the world. However, from my personal experience, 16.04 introduces systemd into the stack, and it is not easily removable. Furthermore, at least in my personal experience, working with journalctl and such has caused me considerable headaches, which I detailed in a comment a while ago.

Discounting systemd itself, I've also found that Ubuntu 16.04 seems less "polished", for want of a better word. I've had to do considerably more fiddling and tweaking to get it to work as a server distro than I did with previous releases, and I've had weird issues with LDAP. The same was true when I worked with recent versions of Debian. As such, there's a general feeling among the sysops that it's time to go somewhere else.

Below the fold are the options as we see them, and I hope the community can provide some interesting insight or guidance.

Right now, we have about three years before security updates for 14.04 stop and we are absolutely forced to migrate or upgrade. However, we're already hitting pain due to outdated software; I managed to briefly hose the DNS setup over the weekend trying to deploy CAA records for SN because our version of BIND is outdated. When TLS 1.3 gets standardized, we're going to have a similar problem with our frontend load balancers. As such, I want to get a plan in place for migration so we can start upgrading over the next year instead of panicking and having to do something at the last moment.
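
For the curious, deploying a CAA record is conceptually simple; the pain was purely our BIND version. A rough sketch of the workflow is below -- the CA named, the zone file path, and the issuer policy are illustrative assumptions, not our actual configuration:

    # Hypothetical zone-file line (the CA named here is an assumption, not SN's actual issuer):
    #   soylentnews.org.  IN  CAA  0 issue "letsencrypt.org"
    # Sanity-check the zone, reload it, then confirm the record resolves:
    named-checkzone soylentnews.org /etc/bind/db.soylentnews.org   # path is illustrative
    rndc reload soylentnews.org
    dig +short soylentnews.org CAA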

The SN Software Stack

As with any discussion of a server operating system, knowing our workloads is an important consideration. In short, this is what we run for SN, and the software we have to support:

  • nginx - load balancing/SSL termination (see the sketch after this list)
  • Apache 2.2 + mod_perl - rehash (we run it with a separate instance of Apache and Perl, and not the system copy)
  • MySQL Cluster for production
  • MySQL standard for secondary services
  • Kerberos + Hesiod - single sign-on/authentication
  • Postfix+Squirrelmail - ... mail
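
As referenced in the first bullet, the nginx layer amounts to TLS termination and load balancing in front of the rehash/Apache instances. A rough sketch of that role is below; the file name, backend addresses, and certificate paths are illustrative, not our production configuration:

    # /etc/nginx/conf.d/rehash.conf -- hypothetical example only
    upstream rehash_backend {
        server 192.168.0.11:80;   # hypothetical rehash/Apache node
        server 192.168.0.12:80;   # hypothetical second node
    }
    server {
        listen 443 ssl;
        server_name soylentnews.org;
        ssl_certificate     /etc/ssl/sn/fullchain.pem;    # paths are assumptions
        ssl_certificate_key /etc/ssl/sn/privkey.pem;
        location / {
            proxy_pass http://rehash_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }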

In addition, we use mandatory access controls (AppArmor) to limit what critical processes can access and help harden security. We'd like to maintain support for this feature on whatever we migrate to, either by continuing with AppArmor, switching to SELinux, or using jails/zones if we switch operating systems entirely.
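
For illustration, the day-to-day confinement workflow looks roughly like this (the nginx profile name is a stand-in; the exact set of profiles we ship is not shown here):

    aa-status                                            # list loaded profiles and whether they enforce or complain
    aa-complain /etc/apparmor.d/usr.sbin.nginx           # log-only mode while testing a profile change
    aa-enforce  /etc/apparmor.d/usr.sbin.nginx           # switch the profile back to enforcing
    apparmor_parser -r /etc/apparmor.d/usr.sbin.nginx    # reload the profile after editing it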

The Options

Right now, we've floated a few options, but we're willing to hear more.

A non-systemd Linux distro

The first choice is simply to migrate to a distribution where systemd is not present or is completely optional. As of this writing, Arch Linux, Gentoo, and Slackware are three such options. Our requirements for a Linux distribution are a good record of updates and security support, as I don't wish to be upgrading the system to a new release once a week.

Release-based distributions

I'm aware of the Devuan project, and at first glance it would seem like an obvious choice; "Debian without systemd" is the de facto tagline. However, I have concerns about the long-term suitability of the distribution, as well as its intentional choice to replace much of the time-tested Debian infrastructure, such as the testing archive, with a git-powered Jenkins instance. Another option would be Slackware, but Slackware has given no indication that it won't adopt systemd, and it is historically very weak at in-place upgrading and package management in general. Most of the other distributions on without-systemd.org are either LiveCDs or very small minority distros that I would be hesitant to bet the farm on.

Rolling-release distributions

On the other side of the coin, an option favored by at least some of the staff is to migrate to Gentoo or Arch, which are rolling-release. For those unaware, a rolling-release distribution basically always has the latest version of everything; security updates are handled, for the most part, simply by updating to the latest upstream package. I'm not a huge fan of this option, as we depend on self-built software, and it's not unheard of for "emerge world" to break things during upgrades due to feature changes and such. It would essentially require us to manually check release notes and cross our fingers every time we did a major upgrade. We could reduce some of this pain by migrating all our infrastructure into ebuilds so that at least it would get rebuilt as part of upgrading, but I'm very, very hesitant about this option as a whole, especially for multiple machines.
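
To make the maintenance burden concrete, a world update on a rolling-release box looks roughly like the following (standard Portage commands, Gentoo shown; the point is that every run needs a human reviewing the change set):

    emerge --sync                                      # pull the latest ebuild tree
    emerge --pretend --update --deep --newuse @world   # review what would change before committing to it
    emerge --update --deep --newuse @world             # actually upgrade
    dispatch-conf                                      # hand-merge any protected config file changes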

Switch to FreeBSD/illumos/Other

Another way we could handle the problem is simply to jump off the Linux ship entirely. From a personal perspective, I'm not exactly thrilled with the way Linux as a collective whole has gone over the last several years, and I see the situation only getting worse with time. As an additional benefit, switching off Linux gives us the possibility of using real containers and ZFS, which would allow us to further isolate components of the stack and give us the option to roll back if an upgrade ever goes wrong; something that is difficult to impossible with most Linux distributions. As such, I've been favoring this option personally, though I'm not sold enough to make the jump. Two major options attract me:
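
The rollback story is the part that appeals to me most. On a ZFS root it is roughly this simple (the dataset name follows FreeBSD's default layout and is illustrative):

    zfs snapshot -r zroot/ROOT/default@pre-upgrade   # checkpoint the system before upgrading
    # ...run the upgrade; if it goes sideways:
    zfs rollback -r zroot/ROOT/default@pre-upgrade   # return to the checkpoint (discards newer snapshots)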

FreeBSD

FreeBSD has been around a long time, and it has both considerable developer support and support for a lot of features we'd like, such as ZFS, jails, and a sane upstream. FreeBSD is split into two components: the base system, which is what constitutes a release, and the ports collection, which is add-on software. The two can be upgraded (somewhat) independently of each other, so we wouldn't have as much pain with outdated server components. We'd also have the ability to easily create jails for things like rehash, MySQL, and so on, and isolate these components from each other in a way that's more iron-clad than AppArmor or SELinux.
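
As a sketch of what that isolation looks like in practice, a throwaway jail can be stood up with nothing but jail(8) and a populated directory tree (the name, path, hostname, and address below are illustrative, not a finished configuration):

    # Create and start a hypothetical jail for an isolated MySQL instance:
    jail -c name=mysql path=/usr/jails/mysql host.hostname=mysql.sn.local \
         ip4.addr=10.0.0.10 exec.start="/bin/sh /etc/rc" mount.devfs
    jls                    # list running jails
    jexec mysql /bin/sh    # drop into the jail for administration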

illumos

illumos is descended from OpenSolaris, forked after Oracle closed up the source code for Solaris 11. Development has continued on it (at a, granted, slower pace). Being the originator of ZFS, it has first-class support for it, as well as zones, which are functionally equivalent to FreeBSD jails. illumos also has SMF, which is essentially advanced service management and tracking without all the baggage and tendrils systemd spreads throughout the stack. Zones can also be branded to run Linux binaries to some extent, so we could handle migrating the core system by simply installing illumos, restoring a backup into a branded zone, and then decommissioning that zone piecemeal. As an upgrade path, this is fairly attractive. If we migrate to illumos, we'll use either the SmartOS distribution or OpenIndiana.
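
For a feel of the tooling, creating and inspecting a zone on an OpenIndiana-style system looks roughly like this (the zone name and paths are illustrative; SmartOS manages zones through vmadm and JSON manifests instead):

    zonecfg -z rehash 'create; set zonepath=/zones/rehash; set autoboot=true'
    zoneadm -z rehash install
    zoneadm -z rehash boot
    zlogin rehash svcs -xv    # SMF: show any services in the zone that are failing, and why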

Final Notes

Right now, we're basically on the fence with all options, so hopefully the community can provide their own input, or suggest other options we're not aware of. I look forward to your comments below!

~ NCommander

 
  • (Score: 2) by martyb (76) Subscriber Badge on Tuesday February 07 2017, @11:47PM (#464371) Journal

    I appreciate the feedback. I see I wasn't clear in what I was suggesting... too many VMs in the mix!

    We have several Linode instances, each of which is, in reality, a VM (via KVM on the bare metal). What I am curious about is whether or not we could host multiple VMs of our own within a single Linode.

    Background info: at one point, in exchange for converting from Xen to KVM, Linode offered us twice the RAM. Mind you, we were running 'okay' on the RAM we already had. Let's take a concrete example. We have a Linode VM, hydrogen, which now has 8 GB of RAM where it once had only 4 GB.

    Another way to look at it is that we now have an extra 4 GB of RAM on hydrogen.

    What keeps us from running several VMs of our own within the 8GB RAM on hydrogen? In other words, can't we run (our own) VMs in our Linode VM?

    So, conceivably, could we not host two (almost) 4GB VMs in our 8 GB Linode? We could have a 4 GB hydrogen (on our existing Ubuntu) and a new one, let's call it deuterium, running on, say, FreeBSD.

    --
    Wit is intellect, dancing.
  • (Score: 2) by VLM (445) Subscriber Badge on Wednesday February 08 2017, @02:06PM (#464526)

    What keeps us from running several VMs of our own within the 8GB RAM on hydrogen? In other words, can't we run (our own) VMs in our Linode VM?

    Nothing, really, although for networking reasons you're gonna have to pay for another IPv4 address or play NAT games.

    I have no direct KVM experience, but I have a lot of experience with other systems, and from what I can tell, if you want to run KVM in KVM you need CPU feature level support enabling nested vmx. The good news is that's a boot-time argument, no problemo, IF you own the bare metal hardware. The bad news is it sounds unlikely (although possible) that the Linode guys boot with nested KVM enabled. All I can find on the topic of nested vmx is caker himself (the main dude at Linode) a couple of years ago specifically saying that CPU feature was not enabled in their initial Xen to KVM conversion project (... at least way back then) and that they might revisit that decision someday. So probably not, although I wouldn't be shocked if it has since been changed but left undocumented.

    I've been a Linode customer longer than SN has existed; I logged in, and I forget where I am (I think I'm in the Dallas data center?), and ran x86info --flags | grep vmx and got nothing. So at least on my host, probably in Dallas, nested vmx hasn't been enabled.
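
    For what it's worth, on a stock Linux kernel the same sort of check can be done without x86info (the kvm_intel path assumes an Intel host; AMD uses kvm_amd and the svm flag):

        grep -c -w vmx /proc/cpuinfo                    # non-zero means the vmx flag is exposed to this machine
        cat /sys/module/kvm_intel/parameters/nested     # on the host: Y/1 if nested virtualization is enabled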

    SN people being famous internet stars and having groupies and such: if I asked caker to enable vmx on my host as a special request, it's probably not happening, but if the SN people asked formally, well, who knows...

    Of course, people have been doing virtualization for a heck of a long time without KVM or nested vmx support. LXC, which is mostly just jails for Linux (or was, in the old days, anyway), runs on anything that boots Linux for all practical purposes (no 386 or anything, but I had some old 2000s-decade 1U servers that worked fine with LXC).
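
    For reference, the basic LXC workflow on a distribution kernel is just a few commands (the container name and image below are arbitrary examples), and it needs no hardware virtualization support since it's all namespaces and cgroups on the shared kernel:

        lxc-create -n web -t download -- -d ubuntu -r trusty -a amd64   # fetch a prebuilt root filesystem
        lxc-start -n web -d                                             # start the container in the background
        lxc-attach -n web                                               # get a shell inside it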

    Personally I think for April Fools you guys should install original Unix V7 on something like the simh emulator and make it your DNS server or some other component of the architecture. OR set up a Hercules emulation box and run Debian-S390 on it. That, being an immense hack and a waste of cycles, would also be a cool use of virtualization, although probably not even remotely what you're asking for LOL.

    • (Score: 2) by martyb (76) Subscriber Badge on Thursday February 09 2017, @11:07PM (#465305) Journal

      Thanks for the thoughtful reply!

      I had not considered hardware support for virtualization. Sure, everything could be emulated, but then there would be a [potentially major] performance hit. I guess it depends on how heavily loaded the existing system's processors are. If they are generally mostly idle, then it would seem to be a real win. But that is a big "IF".

      Thinking back, I'm amazed I didn't think of the vmx hardware assist being important. When I was working at IBM testing VM/SP HPO in the early '80s, there were a number of test scenarios we would run. The usual case was a bunch of VMs running on the bare metal. Then there were the "second level" VMs -- a user could run VM in a VM. (This was not entirely uncommon - we were doing that all the time as we were working on a new release. At the time, all source code was provided to the customer, too. Thus, many shops did their own customization and would run 2nd-level VMs to test them out, too.) There were several specific code paths in place to provide optimizations for that.

      And then there was the case of running a VM in a VM in a VM on the bare metal -- a third-level VM. This would allow you, while running your VM to test how well your new VM could support a VM running in it. Yes, we did a bit of that, too. And it was great when it worked! And it generally DID work, when we were done testing it. But woe unto ye who had to debug what happened when something went wrong! Single stepping through each assembler statement on the VM closest to the bare metal and watching all the things percolating up and back and through all the optimization paths, in hexadecimal, was "interesting". =)

      Again, thanks for the reply!

      --
      Wit is intellect, dancing.
      • (Score: 2) by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Friday February 10 2017, @07:48AM (#465422) Homepage Journal

        Hardware virtualization assist mostly exists because x86 is a nightmare to virtualize; it's *considerably* less of a problem on other architectures. Alpha, for instance, has only a single instruction that requires supervisor mode, so you can simply trap and emulate I/O accesses and that one instruction. VMware was made famous because, up until that point, virtualizing x86 at reasonable speed was considered borderline impossible. The original releases of Xen required modified domU software, which allowed the OS to run in Ring 1 and avoid direct I/O accesses, to work around this problem until vmx became a thing.

        --
        Still always moving
      • (Score: 3, Interesting) by VLM (445) Subscriber Badge on Sunday February 12 2017, @01:03PM (#466129)

        a user could run VM in a VM. (This was not entirely uncommon - we were doing that all the time

        I have a relative who was a sysprog in the '80s at a major manufacturer, and they did this "prehistoric Docker" type of thing. IT is cyclical, not linear, and virtualization and immutable deployments and such are actually very old, not new. Everything we do today will be reinvented in 2060 or something and branded as totally new.

        Like many big projects, the blind-men-and-the-elephant effect occurs, and the explanation I got for why they ran VMs in VMs was fuzzy, but it boiled down to this: a panic solution after a merger worked so well in operational practice that they simply continued to do all business that way.

        The financial services company I worked at in the '90s also spent a lot of time doing mergers, and they were not into that strategy for whatever reason. Maybe they were better at mergers, I dunno.

        • (Score: 2) by martyb (76) Subscriber Badge on Monday February 13 2017, @01:21AM (#466398) Journal

          Like many big projects, the blind-men-and-the-elephant effect occurs, and the explanation I got for why they ran VMs in VMs was fuzzy, but it boiled down to this: a panic solution after a merger worked so well in operational practice that they simply continued to do all business that way.

          Necessity is the mother of invention, and in this case it seems that a quick-and-dirty hack ended up working so well that it became the de facto way of doing things -- I don't know why, but something about that just gives me a nice warm feeling about human ingenuity!

          I wonder if an AI could ever have come up with THAT solution? Was that a stroke of human brilliance? Or, would a simple enumeration of all of the possibilities with the appropriate risk assessments necessarily have come up with this solution.... or maybe something even better?

          Anyway, thanks for the reply -- much appreciated!

          --
          Wit is intellect, dancing.