posted by NCommander on Tuesday February 07 2017, @11:45AM   Printer-friendly
from the insert-systemd-rant-here dept.

So, in previous posts, I've talked about the fact that SoylentNews currently runs on Ubuntu 14.04 plus a single CentOS 6 box. Right now, the sysops have been somewhat deadlocked on what we should do going forward for our underlying operating system, and I am hoping to get community advice. The "obvious" choice is simply to do-release-upgrade to Ubuntu 16.04. We've done in-place upgrades before without major issue, and I'm relatively certain we could upgrade without breaking the world. However, 16.04 introduces systemd into the stack, and it is not easily removable. Furthermore, at least in my personal experience, working with journalctl and such has caused me considerable headaches, which I detailed in a comment a while ago.

Discounting systemd itself, I've also found that Ubuntu 16.04 seems less "polished", for want of a better word. I've had to do considerably more fiddling and tweaking to get it to work as a server distro than with previous releases, and I've hit weird issues with LDAP. The same was true when I worked with recent versions of Debian. As such, there's been a general feeling among the sysops that it's time to go somewhere else.

Below the fold are the options as we see them; I hope the community can provide some interesting insight or guidance.

Right now, we have about three years before security updates for 14.04 stop and we are absolutely forced to migrate or upgrade. However, we're already feeling pain due to outdated software; I managed to briefly hose the DNS setup over the weekend trying to deploy CAA records for SN, because our version of BIND is outdated. When TLS 1.3 gets standardized, we're going to have a similar problem with our frontend load balancers. As such, I want to get a migration plan in place so we can start upgrading over the next year instead of panicking and having to do something at the last moment.
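For reference, CAA records themselves are trivial; it's only an outdated BIND that can't serve them. A sketch of what goes into a zone file (domain and CA are placeholders, not our actual records; serving CAA natively requires a reasonably recent BIND):

```
; CAA records restrict which CAs may issue certificates for the zone.
; "0" is the flags field; "issue" authorizes a CA, and "iodef" gives an
; address for violation reports. Names here are placeholders.
example.com.    3600    IN  CAA 0 issue "letsencrypt.org"
example.com.    3600    IN  CAA 0 iodef "mailto:hostmaster@example.com"
```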

The SN Software Stack

As with any discussion of server operating systems, knowing our workloads is an important consideration. In short, this is what we use for SN, and the software we have to support:

  • nginx - Loadbalancing/SSL Termination
  • Apache 2.2 + mod_perl - rehash (we run it with a separate instance of Apache and Perl, and not the system copy)
  • MySQL Cluster for production
  • MySQL standard for secondary services
  • Kerberos + Hesiod - single-signon/authentication
  • Postfix+Squirrelmail - ... mail

In addition, we use mandatory application controls (AppArmor) to limit the amount of stuff a given process can access for critical services to try and help harden security. We'd like to maintain support for this feature to whatever we migrate, either continuing with AppArmor, switching to SELinux, or using jails/zones if we switch operating systems entirely.
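To illustrate the sort of confinement we mean, here's a minimal AppArmor profile sketch (the daemon path and file locations are hypothetical, not our actual policy):

```
# /etc/apparmor.d/usr.local.bin.exampled -- hypothetical service profile
#include <tunables/global>

/usr/local/bin/exampled {
  #include <abstractions/base>

  # Allow only TCP networking
  network inet stream,

  # Read-only configuration, writable logs; everything else is denied
  /etc/exampled/** r,
  /var/log/exampled/*.log w,
}
```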

The Options

Right now, we've floated a few options, but we're willing to hear more.

A non-systemd Linux distro

The first choice is simply to migrate to a distribution where systemd is not present or is completely optional. As of this writing, Arch Linux, Gentoo, and Slackware are three such options. Our requirements for a Linux distribution are a good record of updates and security support, as I don't wish to be upgrading the system to a new release once a week.

Release-based distributions

I'm aware of the Devuan project, and at first glance it would seem like an obvious choice; "Debian without systemd" is the de-facto tagline. However, I've got concerns about the long-term suitability of the distribution, as well as its intentional choice to replace much of the time-tested Debian infrastructure, such as the testing archive, with a git-powered Jenkins instance in its place. Another option would be Slackware, but Slackware has made no indication that it won't adopt systemd, and it is historically very weak at in-place upgrading and package management in general. Most of the other distributions are either LiveCDs or very small minority distros that I would be hesitant to bet the farm on.

Rolling-release distributions

On the other side of the coin, an option favored by at least some of the staff is to migrate to Gentoo or Arch, which are rolling-release. For those unaware, a rolling-release distribution basically always has the latest version of everything, and security updates are for the most part handled simply by updating to the latest upstream package. I'm not a huge fan of this option: we're dependent on self-built software, and it's not unheard of for "emerge world" to break things during upgrades due to feature changes and such. It would essentially require us to manually check release notes and cross our fingers every time we did a major upgrade. We could reduce some of this pain by migrating all our infrastructure to ebuilds so that it would at least get rebuilt as part of upgrading, but I'm very, very hesitant about this option as a whole, especially for multiple machines.
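For the curious, wrapping a self-built package in an ebuild is not a lot of boilerplate per package; a minimal skeleton might look like this (package name, URI, and dependencies are made up for illustration):

```
# mywebapp-1.0.ebuild -- hypothetical skeleton, not a real package
EAPI=6

DESCRIPTION="Self-built web application, rebuilt automatically on world upgrades"
HOMEPAGE="https://example.com/mywebapp"
SRC_URI="https://example.com/dist/${P}.tar.gz"

LICENSE="GPL-2"
SLOT="0"
KEYWORDS="amd64"

# Changes to these dependencies would trigger a rebuild of this package
DEPEND="dev-lang/perl"
RDEPEND="${DEPEND}"
```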

Switch to FreeBSD/illumos/Other

Another way we could handle the problem is simply to jump off the Linux ship entirely. From a personal perspective, I'm not exactly thrilled with the direction Linux as a collective whole has taken for several years, and I see the situation only getting worse with time. As an additional benefit, switching off Linux gives us the possibility of using real containers and ZFS, which would allow us to further isolate components of the stack and give us the option to roll back a blocked upgrade if ever necessary; something that is difficult to impossible with most Linux distributions. As such, I've been favoring this option personally, though I'm not sold enough to make the jump. Two major options attract me:


FreeBSD

FreeBSD has been around a long time and has both considerable developer support and support for a lot of features we'd like, such as ZFS, jails, and a sane upstream. FreeBSD is split into two components: the base system, which is what constitutes a release, and the ports collection, which is add-on software. The two can be upgraded (somewhat) independently of each other, so we won't have as much pain with outdated server components. We'd also have the ability to easily create jails for things like rehash, MySQL, and so on, and to isolate these components from each other in a way that's more iron-clad than AppArmor or SELinux.
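For a sense of how little configuration a jail takes, here's a sketch of an /etc/jail.conf entry (hostname, path, and address are placeholders, not our real layout):

```
# /etc/jail.conf -- hypothetical entry for a rehash jail
rehash {
    host.hostname = "rehash.example.net";
    path = "/usr/jails/rehash";
    ip4.addr = 192.0.2.10;
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

It would then be a matter of `service jail start rehash` on the host and `jexec rehash` to get a shell inside it.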


illumos

illumos is descended from OpenSolaris, forked after Oracle closed up the source code for Solaris 11. Development has continued on it (at a, granted, slower pace). Being the originator of ZFS, it has first-class support for it, as well as zones, which are functionally equivalent to FreeBSD jails. illumos also has SMF, which is essentially advanced service management and tracking without all the baggage systemd creates and its tendrils throughout the stack. Zones can also be branded to run Linux binaries to some extent, so we could handle migrating the core system by simply installing illumos, restoring a backup into a branded zone, and then piecemeal decommissioning said zone. As such, as an upgrade choice, this is fairly attractive. If we migrate to illumos, we'll use either the SmartOS distribution or OpenIndiana.
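As a rough sketch of what the branded-zone route could look like (zone name, paths, NIC, and install source are placeholders, and the exact brand names and install mechanics vary between illumos distributions):

```
# Hypothetical zonecfg/zoneadm session on an illumos host
zonecfg -z rehash
zonecfg:rehash> create
zonecfg:rehash> set zonepath=/zones/rehash
zonecfg:rehash> set brand=lx          # Linux-branded zone
zonecfg:rehash> add net
zonecfg:rehash:net> set physical=e1000g0
zonecfg:rehash:net> end
zonecfg:rehash> commit
zonecfg:rehash> exit

# Install the zone from a backup tarball of the old system, then boot it
zoneadm -z rehash install -s /backups/rehash-rootfs.tar.gz
zoneadm -z rehash boot
```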

Final Notes

Right now, we're basically on the fence with all options, so hopefully the community can provide their own input, or suggest other options we're not aware of. I look forward to your comments below!

~ NCommander

  • (Score: 3, Informative) by Creaky on Tuesday February 07 2017, @02:44PM

    by Creaky (6492) on Tuesday February 07 2017, @02:44PM (#464066)

    Ok, this posting got me off the lurker fence.

    The following views developed from dirty coal-face experience over the years with high-visibility, high-load websites.

    What makes a good web site hosting OS comes down to how easy it is for the poor administrator to:

    * Install, blow away, recover and upgrade.
    * Secure out of (or close to) the box.
    * Sane network and system defaults out of the box.
    * Sensible file system layout.
    * Support modern application software stacks.
    * Good reliable 3rd party package management or build infrastructure.
    * Package repositories kept up to date.
    * Deterministic booting and general operating behaviors. (This includes time scheduling)
    * Easy to modify or tune where required.
    * Ability to run in Local VM, Cloud Infrastructure and Physical Hardware.
    * Integration with deployment software such as Ansible or SaltStack.
    * Longevity and likely to remain available for next 10 years.

    I would personally choose FreeBSD. It best meets the above "needs", has a very significant history, and is running high-volume websites successfully. FreeBSD jails and FreeBSD's type-2 hypervisor bhyve offer great separation. The PF firewall is excellent. The network stack and the UFS and ZFS file systems are very battle-tested and proven. The ports tree and port-building infrastructure (poudriere with portshaker) provide your own local repository for production application package deployment. I personally have lots of experience with FreeBSD on high-volume, high-visibility web sites and consider it viable. Linux binaries can be run under FreeBSD (why?), or use bhyve to create a Linux VM whilst migrating.

    Everyone will have a Linux distribution opinion so I will talk about the other options.

    Solaris in its day was great. The network stack, scheduler, and ZFS were, and still are, leaps and bounds beyond any Linux offering. Talking about the x86 versions: SmartOS is a cloud OS only, so using it requires re-thinking how server state and storage are done. This is a big architectural change from running Linux and is not for the inexperienced. illumos/OpenIndiana is good and is most like traditional Solaris.

    However, all Solaris-derived systems suffer significantly on third-party packages. illumos and SmartOS use pkgsrc, the NetBSD ports package system, and it is a pain in the backside to update and maintain. Software in the package system is often out of date and updates are irregular. Not what is desired in a public-facing OS and application stack.

    Comparing Solaris to FreeBSD, OpenZFS brings ZFS parity between the operating systems, and Solaris zones are now taken care of by FreeBSD jails and especially bhyve. So: install FreeBSD, create a Linux bhyve virtual machine, restore a backup into the VM, and then piecemeal decommission said VM. FreeBSD thus meets all that illumos brings to the table. Both FreeBSD and the Solaris-derived systems will bring greater stability to the table than any Linux distribution.

    On a final note, why use BIND? Its history of poor programming practices (see the never-ending list of security advisories) should banish it. Try nsd for name serving and unbound for name resolution.
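    For scale, minimal configurations for both are only a few lines each (addresses and zone name below are placeholders):

```
# nsd.conf -- authoritative-only name serving (hypothetical zone)
server:
    ip-address: 192.0.2.53

zone:
    name: "example.com"
    zonefile: "example.com.zone"
```

```
# unbound.conf -- local recursive resolution
server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
```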

  • (Score: 2) by NCommander on Tuesday February 07 2017, @03:57PM

    by NCommander (2) Subscriber Badge <> on Tuesday February 07 2017, @03:57PM (#464106) Homepage Journal

    Our BIND instance isn't public; I'm well versed in its "quirks". We simply maintain the zone file in it and it does DNSSEC signing, and then the zone gets punted off to Linode by AXFR. Inline signing makes DNSSEC absolutely trivial (unless you bork up a config file). The internal li694-22 zone is also hosted on BIND, but again, it's not world-accessible, so I'm not hugely concerned about its security.
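    For anyone curious, the named.conf stanza for inline signing plus outbound AXFR is short; a sketch with a placeholder zone and secondary address (ours differ):

```
// Hypothetical zone stanza; BIND 9.9+ syntax
zone "example.com" {
    type master;
    file "master/example.com.zone";   // unsigned zone, edited by hand
    key-directory "keys";             // DNSSEC keys live here
    inline-signing yes;               // BIND maintains the signed copy...
    auto-dnssec maintain;             // ...and re-signs it automatically
    also-notify { 192.0.2.53; };      // push NOTIFYs to the secondary
    allow-transfer { 192.0.2.53; };   // permit AXFR to the secondary
};
```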

    One of the bigger advantages of BIND for us is that it still supports classes, so we could move our Hesiod configuration from IN to HS if we wanted, for separation reasons.

    Still always moving
  • (Score: 2) by NCommander on Tuesday February 07 2017, @03:59PM

    by NCommander (2) Subscriber Badge <> on Tuesday February 07 2017, @03:59PM (#464109) Homepage Journal

    I'm well familiar with pkgsrc and its deficits with upgrading. Unfortunately. Depending on how you set it up, though, it's relatively easy to copy the list of installed packages, re-install them on a new pkgsrc copy, and then punt over the etc/var folders. Not an ideal setup, but definitely manageable once every three months.

    Still always moving