So, in previous posts I've talked about the fact that SoylentNews currently runs on Ubuntu 14.04 plus a single CentOS 6 box. The sysops have been somewhat deadlocked on what we should do going forward for our underlying operating system, and I'm hoping to get community advice. Right now, the "obvious" choice is simply to do-release-upgrade to Ubuntu 16.04. We've done in-place upgrades before without major issue, and I'm relatively certain we could upgrade without breaking the world. However, 16.04 introduces systemd into the stack, and it is not easily removable. Furthermore, at least in my personal experience, working with journalctl and such has caused me considerable headaches, which I detailed in a comment a while ago.
Discounting systemd itself, I've also found that Ubuntu 16.04 seems less "polished", for want of a better word. I've had to do considerably more fiddling and tweaking to get it to work as a server distro than with previous releases, and I've hit weird issues with LDAP. The same was true when I worked with recent versions of Debian. As such, there's a general feeling among the sysops that it's time to go somewhere else.
Below the fold are the options as we see them, and I hope the community can provide some interesting insight or guidance.
Right now, we have about three years before security updates for 14.04 stop and we are absolutely forced to migrate or upgrade. However, we're already hitting pain due to outdated software; I managed to briefly hose the DNS setup over the weekend trying to deploy CAA records for SN because our version of BIND is outdated. When TLS 1.3 gets standardized, we're going to have a similar problem with our frontend load balancers. As such, I want to get a plan in place for migration so we can start upgrading over the next year instead of panicking and having to do something at the last moment.
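(For the record, the CAA record itself is trivial; the pain was entirely our BIND being too old to know the record type. In zone-file syntax it's a one-liner, and the CA name below is an example rather than necessarily what we deployed:)

    soylentnews.org.    IN  CAA 0 issue "letsencrypt.org"

Older BINDs can publish the same data via the RFC 3597 unknown-type syntax (TYPE257 plus raw hex), but that's exactly the sort of workaround we'd rather stop needing.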
As with any discussion of server operating systems, knowing what our workloads are is an important consideration. In short, this is what we use for SN, and the software we have to support.
In addition, we use mandatory access controls (AppArmor) to limit what a given process can access, to help harden security for critical services. We'd like to maintain support for this feature on whatever we migrate to, either by continuing with AppArmor, switching to SELinux, or using jails/zones if we switch operating systems entirely.
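To make that concrete, here's a minimal sketch of the sort of AppArmor profile we rely on; the paths and the mysqld target are illustrative rather than lifted from our actual config:

    #include <tunables/global>

    /usr/sbin/mysqld {
      #include <abstractions/base>
      #include <abstractions/nameservice>

      # allow TCP networking, the datadir, and logs; everything else is denied
      network inet stream,
      /var/lib/mysql/ r,
      /var/lib/mysql/** rwk,
      /var/log/mysql/*.log w,
    }

That default-deny property is the thing we want to keep, whatever OS we land on.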
Right now, we've floated a few options, but we're willing to hear more.
The first choice is simply to migrate to a distribution where systemd is not present or is completely optional. As of writing, Arch Linux, Gentoo, and Slackware are three such options. Our requirements for a Linux distribution are a good record of updates and security support, as I don't wish to be upgrading the system to a new release once a week.
I'm aware of the Devuan project, and at first glance it would seem like an obvious choice; "Debian without systemd" is the de facto tagline. However, I've got concerns about the long-term suitability of the distribution, as well as its intentional choice to replace much of the time-tested Debian infrastructure, such as the testing archive, with a git-powered Jenkins instance in its place. Another option would be Slackware, but Slackware has given no indication that it won't adopt systemd, and it is historically very weak at in-place upgrades and package management in general. Most of the other distributions listed on without-systemd.org are either LiveCDs or very small minority distros that I would be hesitant to bet the farm on.
On the other side of the coin, an option favored by at least some of the staff is to migrate to Gentoo or Arch, which are rolling-release. For those unaware, a rolling-release distribution basically always has the latest version of everything; security updates are handled, for the most part, simply by updating to the latest upstream package. I'm not a huge fan of this option, as we depend on self-built software, and it's not unheard of for "emerge world" to break things during upgrades due to feature changes and such. It would essentially require us to manually check release notes and cross our fingers every time we did a major upgrade. We could reduce some of this pain by migrating all our infrastructure into ebuilds so that at least it would get rebuilt as part of upgrading, but I'm very, very hesitant about this option as a whole, especially across multiple machines.
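(For the unfamiliar: "migrating our infrastructure into ebuilds" means wrapping each piece of self-built software in a portage recipe roughly like the hypothetical sketch below, so it gets rebuilt automatically during world upgrades. Names, URLs, and dependencies here are invented for illustration:)

    # rehash-17.02.ebuild -- hypothetical sketch, not a working recipe
    EAPI=6

    DESCRIPTION="Fork of slashcode powering SoylentNews"
    HOMEPAGE="https://soylentnews.org"
    SRC_URI="https://example.org/rehash-${PV}.tar.gz"
    LICENSE="GPL-2"
    SLOT="0"
    KEYWORDS="amd64"

    DEPEND="dev-lang/perl dev-db/mariadb"
    RDEPEND="${DEPEND}"

    src_install() {
        # real install logic would go here
        insinto /srv/rehash
        doins -r .
    }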
Another way we could handle the problem is simply to jump off the Linux ship entirely. From a personal perspective, I'm not exactly thrilled with the way Linux as a collective whole has gone for several years, and I see the situation only getting worse with time. As an additional benefit, switching off Linux gives us the possibility of using real containers and ZFS, which would allow us to further isolate components of the stack, and give us the option to roll back a botched upgrade if ever necessary; something that is difficult to impossible with most Linux distributions. As such, I've been favoring this option personally, though I'm not sold enough to make the jump. Two major options attract me:
FreeBSD has been around a long time, and it has both considerable developer support and support for a lot of features we'd like, such as ZFS, jails, and a sane upstream. FreeBSD is split into two components: the base system, which is what constitutes a release, and the ports collection, which is add-on software. Both can be upgraded (somewhat) independently of each other, so we won't have as much pain with outdated server components. We'd also have the ability to easily create jails for things like rehash, MySQL, and so on, and isolate these components from each other in a way that's more iron-clad than AppArmor or SELinux.
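(As a rough sketch of how cheap that isolation is to declare, a per-service entry in FreeBSD's /etc/jail.conf looks something like the following; the hostname and address are invented:)

    mysql {
        path = "/jails/mysql";
        host.hostname = "mysql.sn.internal";
        ip4.addr = 192.168.0.10;
        exec.start = "/bin/sh /etc/rc";
        exec.stop = "/bin/sh /etc/rc.shutdown";
        mount.devfs;
    }

Then "service jail start mysql" brings it up, and the processes inside can't see anything outside their own filesystem tree and address.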
illumos is descended from OpenSolaris, forked after Oracle closed up the source code for Solaris 11. Development has continued on it (at a, granted, slower pace). Being the originator of ZFS, it has first-class support for it, as well as zones, which are functionally equivalent to FreeBSD jails. illumos also has SMF, which is essentially advanced service management and tracking without all the baggage and tendrils systemd spreads throughout the stack. Zones can also be branded to run Linux binaries to some extent, so we could handle migrating the core system over by simply installing illumos, restoring a backup into a branded zone, and then piecemeal decommissioning said zone. As such, as an upgrade choice, this is fairly attractive. If we migrate to illumos, we'll use either the SmartOS distribution or OpenIndiana.
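(A taste of SMF from the admin's chair, using stock service names that may differ from what we'd actually run:)

    svcs -a | grep http                             # list service instances and their states
    svcadm enable -r svc:/network/http:apache24     # enable a service plus its dependencies
    svcs -xv                                        # explain why something is offline, with log paths

The restarter tracks dependencies and failures for you, which is most of what people actually wanted out of systemd.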
Right now, we're basically on the fence with all options, so hopefully the community can provide their own input, or suggest other options we're not aware of. I look forward to your comments below!
~ NCommander
(Score: 2, Disagree) by Subsentient on Tuesday February 07 2017, @12:05PM
If you want a system without systemd, just install Debian and apt-get remove systemd. You can install sysvinit straight from the repos and it works fine.
"It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
(Score: 2) by butthurt on Tuesday February 07 2017, @12:34PM
For good measure, one can put
Package: systemd
Pin: release *
Pin-Priority: -1000
in /etc/apt/preferences to keep the systemd package from coming back.
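To confirm the pin took, something like:

    apt-cache policy systemd

should then report the package as uninstallable (priority -1000, nothing installed).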
(Score: 5, Informative) by NCommander on Tuesday February 07 2017, @12:35PM
Doesn't work nearly as well as you'd think. Debian has hard dependency bindings, and anything that wants an SD socket will pull systemd in without an explicit pin to block it. I honestly wouldn't be surprised if we've got a bunch of packages that only install unit files now and don't bother with sysvinit/upstart at all, which means you install, and nothing happens. If I thought using Debian without systemd was viable, we'd have done a cross-grade from Ubuntu 14.04 to Debian 8.
Still always moving
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @04:49PM
I've no problem removing systemd from the herd of NAS boxes I manage.
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @06:13PM
Works fine for me, and has for a while. You're just assuming things won't work, which might be true for future releases of Debian but is not for this one.
(Score: 0) by Anonymous Coward on Wednesday February 08 2017, @03:32AM
At work we pin systemd to -1 in an apt preference file (pushed out via puppet), and it works fine on all our jessie hosts. A little concerned about stretch and beyond, though.
(Score: 4, Informative) by canopic jug on Tuesday February 07 2017, @12:41PM
You haven't been able to uninstall systemd from Debian for many years now.
Server-side, Devuan GNU/Linux is quite good. It really is a drop-in replacement. But if you don't trust it yet and really want plain Debian, then there is Debian GNU/kFreeBSD. Only the kernel and a few kernel-related userland tools are different; nothing that should be noticeable except for PF vs iptables -- unless you run into a hardware compatibility issue.
I think the main question is how interested you are in ZFS. There is some ZFS action in both Devuan GNU/Linux and Debian GNU/kFreeBSD, but the real ZFS action is happening on plain FreeBSD. I myself don't enjoy FreeBSD so much and, again, there is the hardware support question. FreeBSD is not as organized as OpenBSD, but OpenBSD means committing to doing serious OS upgrades at least every 12 months, unless you shell out for the M:Tier options. Some releases of FreeBSD have a long support cycle [freebsd.org] and I think 11 is good until 2022.
Money is not free speech. Elections should not be auctions.
(Score: 3, Insightful) by NCommander on Tuesday February 07 2017, @12:54PM
My problem with Debian/kFreeBSD is they changed out the entire userland to GNU+glibc. This causes a hilarious amount of breakage for anything that hasn't been patched to recognize that (uname == FreeBSD) != (libc == FreeBSD). We have to compile a lot of CPAN modules for rehash, and I can see that just blowing up in hilarious ways if we tried it on kFreeBSD; I have a better chance of getting it to work on HURD.
It also failed to meet release qualifications for jessie, and as such was only released to the side, not as an official Debian release (similar to etch-m68k).
Still always moving
(Score: 2) by canopic jug on Tuesday February 07 2017, @01:03PM
It also failed to meet release qualifications for jessie, and as such was only released to the side, not as an official Debian release (similar to etch-m68k).
That was mostly due to systemd though.
How deep into the OS do you need to delve? I got the impression that you were relying on Perl 5, shell scripts, and some pre-packaged databases. Most of that is pretty remote from the kernel.
If FreeBSD were as easy to work with as OpenBSD, then it'd be the hands-down winner. However, once it's set up it is rather low maintenance. Additionally, FreeBSD has jails and ZFS, even if its PF is out of date.
Money is not free speech. Elections should not be auctions.
(Score: 2) by NCommander on Tuesday February 07 2017, @01:31PM
The original notes for Slashcode had a fairly large caveat section on running on BSD due to some of the underlying Perl modules taking a shit. They did note it ran perfectly fine on Solaris. As far as I know, we cleaned out most of that breakage when we migrated the entire stack forward in 2015. For shits and giggles, for April Fools last year, I tried to set up rehash on Hurd, but failed about halfway through when it fell over due to missing Perl modules. I've since "improved" the installation instructions so I can test it in a VM and fix breakage relatively easily if it comes to that.
Still always moving
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @02:22PM
I am not surprised.
I have the same problem compiling software that uses assembler or specifies a CPU architecture, ever since putting a 64-bit kernel on my 32-bit Slackware install. The build scripts check uname, notice a 64-bit CPU, and try to compile a 64-bit version, resulting in an error from the 32-bit compiler.
Asking GCC for the architecture is one line of shell script, and gives all the information (i686-Linux in my case) in one go.
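For anyone who wants the actual one-liner (this flag has been in every GCC I've touched):

    gcc -dumpmachine    # prints the target triplet, e.g. i686-pc-linux-gnu

It reflects what the compiler will build for, not what the kernel happens to be running.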
(Score: 2) by schad on Tuesday February 07 2017, @01:06PM
If I were the only one affected by the decision, I'd switch to FreeBSD. I love Solaris, but it's over and done with. FreeBSD is the next best thing.
If others were affected too, I'd seriously consider whether my philosophical objection to systemd is actually worth the practical efforts of avoiding it. This is why, at work, we're going to CentOS 7. None of us are happy about it, but using a niche OS like FreeBSD means that we're committing ourselves to 100% ownership of every aspect of these servers' lives for as long as we have them. That's something we very much don't want to do -- we've got this entire huge IT department with follow-the-sun 24/7/365 coverage; why not use it? -- so we suck it up and deal.
It's weird to me that SVR4 is actually turning out to be the loser of the Unix wars. The proprietary Unixes are basically deprecated even by their owners now, and Linux is doing everything it can to strip away its Unix-clone origins. Meanwhile, the BSDs just keep doing their thing... while slowly absorbing the interesting bits of the last (and first!) SVR4 derivative. Even five years ago, I don't think I would have seen this coming.
(Score: 3, Insightful) by canopic jug on Tuesday February 07 2017, @01:13PM
FreeBSD hasn't been niche for ages. It runs Netflix, is on the PS4, used to run HotMail during its growth phase, ran the now-defunct Yahoo, runs WhatsApp, is one of the systems used by Verisign, is used in Juniper gear, and runs Experts-Exchange (mind the dash). It's just a bit weird to set up, and you'll need to read its handbook to make that possible, though it does have a really good handbook.
Money is not free speech. Elections should not be auctions.
(Score: 2) by Pino P on Tuesday February 07 2017, @02:19PM
I thought that because of the paywall, the user base abandoned Expert S-ex Change in favor of Stack Overflow and the rest of the Stack Exchange network, which runs Windows Server [stackexchange.com].
(Score: 2) by canopic jug on Tuesday February 07 2017, @02:24PM
I've never even looked at it, but do recall the noise about the hyphen. I guess that was a while ago. However, they do have a testimonial up about FreeBSD [freebsdfoundation.org].
Money is not free speech. Elections should not be auctions.
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @03:27PM
Well, without the hyphen they would be inaccessible from quite a few places with stupid filters. I guess they learned that the hard way.
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @08:06PM
that explains why the rules/mods are so old school douchetastic.
(Score: 2) by TheGratefulNet on Tuesday February 07 2017, @02:57PM
I worked at juniper back in 2000 or so and their whole eng dept ran on freebsd (desktop, mail, etc). the router itself ran freebsd!
at that time, other networking companies I was at were using netbsd (for power-pc or some other non-intel chip).
now, linux is all the rage, but linux is not as stable as it once was (oddly enough). I'd go with bsd.
"It is now safe to switch off your computer."
(Score: 2, Disagree) by schad on Wednesday February 08 2017, @12:56AM
When you can rattle off a near-complete list of every major deployment, present or recent past, you're kind of making my case for me.
I love FreeBSD. I like it better than Linux, and I always have. It fits better with my sense for how computers ought to operate. But... I'm sick of fighting with Linux people. They know only one way: The Linux Way. They won't learn FreeBSD not because they can't, but because they don't want to. They get pissy at having to type "netstat -r" instead of "route" (never mind that the former works in Linux just fine). They bitch about "ps -ef" not working the way they expect. And God help you if they have to write a shell script. They'll come to you in a blind fury about how /bin/sh isn't bash, and what kind of idiotic system doesn't even have bash, and so they installed it from "this 'ports' thing, which by the way took forever," but it went into /usr/local/bin/bash instead of replacing /bin/sh like it should, so they made /bin/sh a symlink to /usr/local/bin/bash and now the system won't even boot, and what kind of idiot designed this shit, and why the fuck can't we just use Linux which just works and you don't have to fight with it all the time?
(You may be able to tell that I've been in this situation with a coworker.)
(Score: 5, Informative) by TheRaven on Wednesday February 08 2017, @01:18AM
sudo mod me up
(Score: 2) by NCommander on Tuesday February 07 2017, @01:14PM
If it was just me, we'd probably have upgraded already. Our CentOS 6 box has driven me mad since go-live, but it's still here.
Still always moving
(Score: 3, Insightful) by VLM on Tuesday February 07 2017, @02:32PM
None of us are happy about it, but using a niche OS like FreeBSD means that we're committing ourselves to 100% ownership of every aspect of these servers' lives for as long as we have them. That's something we very much don't want to do -- we've got this entire huge IT department with follow-the-sun 24/7/365 coverage; why not use it? -- so we suck it up and deal.
I've found in practice at "giant Fortune 50" megacorps over a couple decades of linux and now freebsd, that unless you're doing something really niche, there is no niche support required, so if you're doing something weird no one on the planet can help you as much as yourself, and if you're not doing something weird its so easy to help yourself it doesn't matter. Secondly, people who want to work with you, will work with you, and people who don't want to work with you, will not work with you, and the mere topic of OS is more or less irrelevant because if "Mordac the Preventer" in IT land wants to stop you, merely using a supported OS isn't going to provide much armor against him. Or if you have and are using a big club to get past Mordac the Preventer then using your choice of OS is mere icing on the cake.
From a telecom background, isolation and demarcation are the key points. Never tell IT that you're running freebsd and its slow and they need to help you debug freebsd, only tell them the problem is NAS share named wtf-1234 measured multiple times only has 70 K/sec of measured bandwidth which seems a bit low unless the NAS is running on a 360K 5.25 floppy drive. Make sure no one discusses what you're doing with 70 K/sec of bandwidth or your astrological sign or your OS version or any of that, they're just optimistically going to fix the NAS or the network or whatever.
(Score: 2) by Yog-Yogguth on Wednesday February 08 2017, @03:07AM
Very good advice, except the very last part, which I'm not sure I understood, because sometimes wild geese will lead you to the answer/solution, especially if it's a really hard problem way above your "paygrade".
But I still think I didn't get what you meant to say in your last sentence.
Bite harder Ouroboros, bite! tails.boum.org/ linux USB CD secure desktop IRC *crypt tor (not endorsements (XKeyScore))
(Score: 2) by VLM on Wednesday February 08 2017, @02:52PM
I've worked "with" Mordac the Preventer on a couple occasions over the past decades and say you want a dude to add a DNS record and dude wants to play Quake or WoW or pokemon go or Facebook instead, then rather than spending 60 seconds adding your DNS A record you'll get enormous feedback asking if that device has a barcode in the new inventory system or has it been added to the centralized ping monitoring system. You know how 3rd world roadblocks are just random garbage piled up to stop people from getting what they want? "So you want a DNS A record, just to verify does that new windows box have a corporate install of Avast anti virus on it and who paid for that license?" and the F-ing idiot doesn't realize that IP address in the DNS request is the management port of a Cisco ethernet switch or its an arduino ethernet shield or its a SCADA controller that runs VXworks. I've actually had conversations like this.
I've found over the decades that the most personally rewarding way to blow past a human obstacle like that is to flame him and his boss until the smell of his burning flesh makes him repent, but the fastest way to actually accomplish the goal is just to politely, persistently focus on the task until it's clear to Mordac the Preventer that the easiest way to get back to playing WoW is to just add the A record. "Why don't I stay on hold until I verify this is done." "I'll call you back in ten to see how it's going." "Your boss told me it wouldn't be a problem as long as I filled out the request form correctly." Tempting as the alternative is to discuss something more in the style of "The best in life is to crush your enemies, see them driven before you, and hear the lamentations of their women".
The only thing less effective is going evangelical on the poor MCSE; the last thing he wants to hear is both a request for a simple DNS "A" record PLUS a free rant about the superiority of unix over windows and, BTW, emacs being superior to visual studio (admittedly all true, but irrelevant to the topic...)
If you know Mordac the Preventer is just going to throw up roadblocks make sure he obtains nothing extra from you to pile up on the roadblock.
It's actually kinda fun blowing past those folks once you get good at it. "The more you tighten your grip, the more star systems slip thru your fingers...."
(Score: 2) by TheGratefulNet on Tuesday February 07 2017, @02:54PM
freebsd is my vote, too.
production headless server that runs ip stuff?
freebsd.
not linux.
only issue: most admins know linux better than bsd. but that's not a big problem, it's still unix.
"It is now safe to switch off your computer."
(Score: 1) by animal on Tuesday February 07 2017, @04:35PM
You can install Debian wheezy, which is systemd-free. Prior to upgrading to jessie, there are steps that can be taken to keep sysvinit and skip the whole systemd install-and-remove dance, using apt pinning.
https://www.debian.org/releases/jessie/amd64/release-notes/ch-information.en.html#systemd-upgrade-default-init-system [debian.org] I didn't want systemd either; I recently upgraded to jessie after I was shown that link on irc.debian.org. The people there are VERY helpful.
(Score: 2) by butthurt on Tuesday February 07 2017, @08:49PM
You haven't been able to uninstall systemd from Debian for many years now.
It worked for me.
(Score: 2) by butthurt on Wednesday February 08 2017, @12:44PM
You haven't been able to uninstall systemd from Debian for many years now.
I've been able to do so, apart from leaving the libsystemd0 package installed; it hasn't caused problems.
(Score: 2, Disagree) by Lester on Tuesday February 07 2017, @12:47PM
For good or bad, systemd is in Debian (and Linux) to stay. In the medium and long term, software sticking with non-systemd will eventually become unmaintained, no matter what Devuan or others do.
By the way, I don't like systemd.
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @12:48PM
They are currently using upstart, so I doubt they want to go back to sysv.
(Score: 2) by VLM on Tuesday February 07 2017, @03:57PM
We need a mod named "I wish" or similar.
(Score: 2, Funny) by Anonymous Coward on Tuesday February 07 2017, @12:19PM
I mean, obviously. Am I right?
(Score: 2) by bzipitidoo on Tuesday February 07 2017, @10:26PM
Couldn't be more wrong! The obvious solution is to make your own. Gentoo is for wimps! Real sysadmins build from scratch. While Linux From Scratch is a good starting point, I'd get away from Linux entirely, and FreeBSD as well. After all, they're monolithic kernels. Illumos?? How could you even consider a Slowaris fork?
Use a microkernel based OS such as Minix 3. Sure, Minix needs lots of work, but that's what makes it fun, right?
(Score: 2) by TheRaven on Wednesday February 08 2017, @01:21AM
sudo mod me up
(Score: 2) by fido_dogstoyevsky on Tuesday February 07 2017, @12:25PM
Any particular reasons for not going with Slackware (if sticking with Linux) or OpenBSD (if jumping to *BSD)?
It's NOT a conspiracy... it's a plot.
(Score: 2) by NCommander on Tuesday February 07 2017, @12:40PM
My problem with Slack is that its package management system isn't so much braindead as stuck in 1990. Since it doesn't do any advanced tracking of files or config handling, it can (and has) clobbered configuration files on system-wide upgrades. The situation MIGHT have gotten better since the last time I ran Slack, though. That, in and of itself, wouldn't be a show stopper, but I haven't seen much commitment that Slack won't adopt systemd at some point. If we stay with a Linux distribution, though, there's a good chance we'll jump to Slack.
If systemd gets its head out of its ass and stops sucking constantly, like PulseAudio (eventually) did, I might not care, but there's little indication they plan to backtrack from much of the braindamage I outlined before.
Still always moving
(Score: 2) by Thexalon on Tuesday February 07 2017, @01:27PM
I'm using Slack right now, and I'm fairly sure they've modified their installer scripts to not clobber config files.
While I agree there's no explicit promise to never use systemd, they've also shown no signs of doing it, and being one of the most old-school-Unixy distros I'd be surprised if they did.
The only thing that stops a bad guy with a compiler is a good guy with a compiler.
(Score: 3, Interesting) by linuxrocks123 on Tuesday February 07 2017, @08:26PM
Second Thexalon. I would in fact be suspicious of Slackware if they did say they would _NEVER_ adopt SystemD. Slackware is an ambitious project, like any serious distro, and Slackware does not define itself in terms of what it won't do. If SystemD evolves into a project that fits with Slackware's vision, Slackware will probably adopt it. That said, here's the most official word on SystemD I've found thus far: http://alien.slackbook.org/blog/pulseaudio-comes-to-slackware-current-beta/ [slackbook.org]
Regarding package management, this is what I use: https://software.jaos.org/ [jaos.org] I don't think it's ever clobbered my config files, and you should be using NILFS2 anyway.
More on that link from earlier: Slackware recently adopted PulseAudio. Obviously not relevant for your use case, but that was relevant to mine. I decided I didn't want PulseAudio, so I took it out. Slackware adopted PulseAudio in a way such that taking it out was fairly easy and, with a few ALSA config file changes, everything kept working. Slackware's philosophy in general is to let the user decide what he wants and to facilitate the user's choices. If Slackware ever does adopt SystemD, I'm sure it will still be possible to remove it and have everything keep working if you want to.
I've used Slackware for many years and it's never let me down. If you want to move, and you don't like SystemD, and you don't like Devuan, and you don't like rolling releases, it's definitely your best choice, IMO. Really your only choice among the Linux distros, as you've noticed.
Regarding moving OSes, moving to a BSD or Solaris system would probably be a lot more trouble in terms of learning curve than moving Linux distros. Why make work for yourself? And, regarding ZFS, I'd suggest having a serious look at the Linux NILFS2 filesystem. If all you want is rollbacks, NILFS2 gives that to you in spades.
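For anyone who hasn't played with NILFS2, the rollback workflow with nilfs-utils goes roughly like this (device and checkpoint number invented):

    lscp /dev/sdb1                                      # list the checkpoints NILFS2 creates continuously
    chcp ss /dev/sdb1 2048                              # promote checkpoint 2048 to a permanent snapshot
    mount -t nilfs2 -r -o cp=2048 /dev/sdb1 /mnt/snap   # browse the filesystem as it was

No pre-planning required, since the checkpoints are made as you go.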
I agree that some of Linux upstream has gone batty in recent years, but it's mostly been high-level graphics toolkits and SystemD that have become havens for battyness. The kernel upstream is as sane as ever, as is the upstream for glibc, and as is the upstream for XFCE. Linux has a bigger community than the BSDs, so it has more room for batty subcommunities. But there are a lot of non-batty parts to the Linux community as well, and you can use those to create a system at least as solid as any BSD.
TLDR: go with Slackware.
(Score: 2) by NCommander on Wednesday February 08 2017, @06:55AM
My problem is that I've long considered the kernel itself to be rather batty as well, though not to the extent of some of the other stuff in the stack. I've never liked doing Linux kernel development, and I feel that stepping into LKML is a good way to get shot.
Still always moving
(Score: 2) by The Mighty Buzzard on Tuesday February 07 2017, @12:41PM
Package management on Slackware. It's a bloody pain to admin.
My rights don't end where your fear begins.
(Score: 2) by pe1rxq on Tuesday February 07 2017, @06:20PM
That may depend on your expectations and personal preferences.
Every time I used a different distro I was reminded how Slackware just did what I expected (not much) and did it well.
- Header files are just part of the package, none of this crazy 'dev' stuff
- Upgrade to a new library without breaking users of an older version? Just install the new one and leave the old ones. Assuming your library's author wasn't smoking crack while choosing version numbers, it just works.
(Score: 2) by NCommander on Tuesday February 07 2017, @04:11PM
Oh, second follow-up: OpenBSD was ruled out because it lacks both a MAC framework like AppArmor/SELinux and jails-like functionality such as FreeBSD and illumos offer.
Still always moving
(Score: 2) by Subsentient on Wednesday February 08 2017, @04:02AM
Are you sure of that? OpenBSD's name is built entirely upon the idea of security. I'd find it hard to believe they're that unsophisticated.
"It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
(Score: 2) by RedBear on Wednesday February 08 2017, @07:41AM
Yes, he's sure. Believe it. To oversimplify the *BSD world, the relatively few differing variants of BSD have very different focuses:
NetBSD: Portability between different CPU architectures.
FreeBSD: Server/network efficiency and stability under extreme network loads.
OpenBSD: Security above all else. Period.
The thing with security is that it's extremely difficult to have verifiable security with an overly complex codebase that changes too quickly. So OpenBSD likes to keep things simple, and change slowly. Very slowly. They leave out features that other operating systems have had for years if they don't consider them important. Performance, new features, new hardware support, new software support; everything takes a back seat to security and stability. All code in the base system has been through a comprehensive security audit, which took several years. So OpenBSD is a great choice for something like a firewall box; not so much for a high-traffic database/web server cluster. FreeBSD is the proper choice for this situation.
¯\_ʕ◔.◔ʔ_/¯ LOL. I dunno. I'm just a bear.
... Peace out. Got bear stuff to do. 彡ʕ⌐■.■ʔ
(Score: 2) by Dr Spin on Wednesday February 08 2017, @07:42AM
I think you are wrong about OpenBSD - just because it does not use the word "jails" does not mean the concept is not there.
Try OpenBSD on Oracle T series hardware if you want high throughput and isolation of components (databases in different domains, and server in another different domain, and hardware separated too).
You get reliability and stability, as well as security. If you have a UPS, two-year uptimes are easy.
Just don't touch any Oracle software (it's highly infectious).
Warning: Opening your mouth may invalidate your brain!
(Score: 3, Insightful) by MadTinfoilHatter on Tuesday February 07 2017, @12:30PM
Personally I've been using Gentoo on the desktop for a long time, and it's been remarkably stable, as long as you don't use the ~${arch} packages. The biggest issues tend to arise when there is a major update of some key graphical components, e.g. KDE, or Xorg. This would of course not be a problem on a server.
Using Debian without systemd (which was mentioned) has the problem of non-systemd users effectively being second-class citizens with that distro, so I wouldn't recommend it. Also, since systemd digs its tentacles so deeply into everything, you'll still need to deal with some systemd brain-damage even if you don't let it act as the init system. Devuan would clearly be the better path if what one wants is Debian sans poetterix.
(Score: 2) by NCommander on Tuesday February 07 2017, @12:44PM
My biggest problem with Gentoo is that you occasionally need to emerge world when there's a low-level ABI bump in portage; maybe once every few months. That can take hours or days to finish depending on the number of ports installed. Multiply that by a bunch of machines and life gets painful, since each will likely update to a slightly different version of Gentoo unless I set up a portage mirror internally and use it to keep everything in lock-step.
As far as rolling releases go, though, it's probably the top choice; I prefer it much more than Arch.
Still always moving
(Score: 2) by tynin on Tuesday February 07 2017, @03:49PM
It becomes less of a hassle if you set up a PXE-booted environment and then clone a base OS image from the latest Gentoo emerge. Then it's just a matter of pointing your PXE server at the new image and rebooting the servers into it. It means maintaining a single OS image, and all of the servers are a reboot away from being a "clean", updated install again.
(Score: 2) by NCommander on Tuesday February 07 2017, @04:13PM
Because almost all our machines run different configurations and software loads, we don't do ansible/puppet/StackScript deployments. The one exception is the frontend boxes. In the entire time SN has been up, we've only ever done a single nuke-and-pave, and that was when the original web frontend (hydrogen) developed weird issues and was blown away and recopied from fluorine.
Still always moving
(Score: 2) by tynin on Thursday February 09 2017, @07:59PM
Thanks for the explanation!
(Score: 0) by Anonymous Coward on Wednesday February 08 2017, @04:52AM
I just got through rebuilding a system with it and libc++abi+libunwind as the ABI backend. Works fine, and according to the libcxx website it is also possible to compile against it with GCC (no specific version mentioned, although it may be 4.8+).
It requires setting up environment flags for clang, but outside some corner cases it seems to work just fine (notable failures were openscenegraph 3.5.x, due to the addition of templates that broke on libc++, and some weird build issues over memory types in recent Firefox versions). Outside of that, it seems to compile slightly faster than libstdc++, hasn't shown any of the annoying cross-version compiler errors that tend to crop up with libstdc++, and with a bit of work could be combined with the hardened musl profile on Gentoo to provide a significantly cleaner and less finicky userspace, assuming your build dependencies compile cleanly against it (the number of assumptions in the average Linux-focused software project nowadays is obscene.)
(Score: 2) by Techwolf on Wednesday February 08 2017, @10:37PM
I have been using Gentoo for many years. One thing I have noticed is that portage has vastly improved over the past years. It's rare to b0rk a system with emerge -avuD world. One reason is a newer portage feature that preserves libs when upgrading them: if a package is linked against the old lib, portage will mark it preserved and will not delete it until all packages that depend on it are rebuilt against the new lib.
I use a chroot to build the packages and then emerge -k on the main system. This allows me to emerge -e world while still using the desktop. Another advantage is that when a package fails to build, I still have a desktop to browse the Gentoo forums and bugs to see if someone else posted a fix/workaround.
To speed things up, I bought my first SSD and mounted the chroot directory on it. I used btrfs on it to test it out. After a power failure (the UPS died), I started to get emerge failures that did not happen on the main system. Digging in with some btrfs tools, I discovered the filesystem had a problem; trying to fix it only made it unmountable. I had to nuke it and start over with ext4 and a backup. So btrfs is a failure for me and I will stick to ext4.
My recommendation is Gentoo if you have a spare server/VM to build on and emerge -k to production. Otherwise, FreeBSD all the way. :-)
(Score: 2) by The Mighty Buzzard on Tuesday February 07 2017, @12:45PM
Me, I'm currently digging on Calculate. Essentially Gentoo with a well stocked binary overlay. Allows me to tweak the packages I want tweaked for speed with USE/make/env flags and get binaries of the ones I fail to give a shit about. Plus setup is way, way easier.
My rights don't end where your fear begins.
(Score: 2) by NCommander on Tuesday February 07 2017, @12:56PM
If we do jump onto the Gentoo ship, we'll probably set up a build master on neon, since the only thing that box is doing is running a backup DB node. That way we can emerge world once, then publish all those binaries built with our USE flags for everything else downstream to grab.
Alternatively, we could use oxygen, as it has a lot of spare disk space ...
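The mechanics would be roughly this (binhost URL invented, and it assumes the build master serves its package directory over HTTP):

    # on the build master (neon), in /etc/portage/make.conf:
    FEATURES="buildpkg"    # save a binary package for everything emerged

    # on every other box:
    PORTAGE_BINHOST="http://neon.sn.internal/packages"

    # upgrades everywhere else then become:
    emerge --usepkg --update --deep @world

That way only one machine pays the compile cost and everything stays on identical versions.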
Still always moving
(Score: 0) by Anonymous Coward on Thursday February 09 2017, @11:59PM
(Score: 2) by The Mighty Buzzard on Saturday February 11 2017, @01:35AM
Actually, oxygen is dead full, to the point that we've been making partial and nonexistent backups for who knows how long. I freed up a bunch of space by getting rid of "temporary" full DB backups on several boxes that had been getting backed up as part of our usual backup routine. We'll see how much is left tomorrow after the scheduled backup.
My rights don't end where your fear begins.
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @02:05PM
I've come to love Gentoo after jumping away from Debian.
Ignore the naysayers telling you that you don't get any performance benefit from recompiling from source, because that's not the point.
The USE flags really make Gentoo stand out from traditional binary distros. You can use them to cut out all the unnecessary features.
Hate systemd? Use -systemd
Hate pulseaudio? Use -pulseaudio
Want to disable only certain GTK3* features? x11-libs/gtk+:3 -introspection
It gets complicated, but it is pretty well documented in portage(5). Check out equery(1) too.
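For example, per-package flags live in /etc/portage/package.use, and equery shows what you ended up with:

    # /etc/portage/package.use
    x11-libs/gtk+:3 -introspection

    # then inspect the result:
    equery uses x11-libs/gtk+    # lists enabled/disabled USE flags for a package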
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @05:36PM
(Score: 0) by Anonymous Coward on Wednesday February 08 2017, @04:57AM
I don't remember exactly what it is supposed to do, but it provides standardized ways to poke at certain glib/gtk/etc objects in a consistent manner.
That said, it has been a huge PITA for me in the past since some packages only worked with one particular version of glib or introspection or what have you thanks to the gimp/gnome projects not bothering to fucking change the version number and allow slotted includes/libraries when forcefully deprecating features actual software uses.
(Score: 4, Funny) by GreatAuntAnesthesia on Tuesday February 07 2017, @12:36PM
Beautiful operating system...
Failing that, I think I have an old C64 somewhere you could use.
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @01:09PM
No BeOS love?
(Score: 2) by NCommander on Tuesday February 07 2017, @01:23PM
Needs more multitasking and threading support.
Still always moving
(Score: 2) by bd on Tuesday February 07 2017, @05:45PM
What about Multics, then?
(Score: 2) by TheRaven on Wednesday February 08 2017, @01:23AM
sudo mod me up
(Score: 1) by DeVilla on Sunday February 12 2017, @12:54AM
You're not taking this seriously. The C64 was a nice micro, but in this case the extra memory in the C128 alone would be worth it.
(Score: 3, Interesting) by aim on Tuesday February 07 2017, @12:43PM
Disclaimer: I'm currently in the process of migrating my outside-facing server VM from an older Debian server to FreeBSD.
There are quite a few differences, i.e. you need to get acquainted with doing things a bit differently. I have to say, though, that the new server "feels" way more responsive (no, I have no numbers to back that up), but then the stack is more modern too (e.g. apache 2.4 rather than 2.2, postgres rather than mysql). I'm not sure that using a different Linux distribution would have been easier than a *BSD, given their specificities regarding management (e.g. yum on RedHat derivatives, yast on SuSE ones).
My alternative would have been to move to Devuan, where I'd most probably just feel at home.
And yes, systemd is one of the factors that had me change, even if it didn't bite me yet. I do run a current Kubuntu on my laptop, but that could be reinstalled painlessly in case of issues - my data reside on my internal server (older Ubuntu), RAID-5ed and backed up. If all goes well with this FreeBSD VM, I'll migrate the internal server (NAS, DLNA, VM host etc.) to a self-configured self-maintained FreeBSD-based NAS box too.
(Score: 2) by NCommander on Tuesday February 07 2017, @12:47PM
I've used almost every major distro at some point or another, with the exception of SuSE and Mandriva (I used Mandrake when that was still a thing). That's mostly true for most of the staff. The learning curve isn't horrid if we jump ship to another distro; the Linux->BSD/Solaris one is higher because things like ifconfig change.
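A trivial example of the divergence (interface names being whatever your platform hands you):

    ip addr add 192.0.2.10/24 dev eth0                       # Linux (iproute2)
    ifconfig em0 inet 192.0.2.10 netmask 255.255.255.0       # FreeBSD
    ipadm create-addr -T static -a 192.0.2.10/24 net0/v4     # illumos

None of it is hard; it's just a hundred little reflexes that need retraining.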
Still always moving
(Score: 1) by aim on Tuesday February 07 2017, @01:15PM
I've also used many Linux distros, starting with Slackware in 1995, as well as some of the proprietary Unix systems. Sure, the BSD commands tend to be a bit different from the GNU variants, but that needn't be blocking, just a matter of getting used to. There's also a reason why some of the big online platforms run *BSD.
As for Solaris, I'd consider it pretty much as dead as Digital Unix, HP-UX etc. at this point - nobody wants to pay for Larry's next yacht [recently read: Oracle doesn't have customers, they only have hostages]. And I say this as someone whose day job is partly to maintain a pretty dead mainframe OS (no, not IBM), the other part being mainly on SLES.
(Score: 3, Informative) by NCommander on Tuesday February 07 2017, @01:22PM
illumos has the advantage of not buying Larry's next yacht; it's mostly supported by Joyent.
Still always moving
(Score: 2) by JoeMerchant on Tuesday February 07 2017, @01:14PM
Has anyone floated the idea of setting up a testbed system (parallel to the live) and testing a rolling release OS in there before taking updates live?
More resources? Yes. More reliable? Yes. More work? At first. It pays off the first time you upgrade and something blows up in your face.
Technically, you could also run a testbed with your current update every couple of years scheme, too, but it's harder to imagine the payoff.
🌻🌻 [google.com]
(Score: 2) by NCommander on Tuesday February 07 2017, @01:21PM
Our plan would probably be to use our current development box or one of the disused nodes as an "emerge world" slave and generate binary packages. Once lithium is upgraded and works, we can deploy those packages elsewhere as needed. I'm not the biggest fan of this setup, but it's at least semi-viable ...
Still always moving
(Score: 2) by VLM on Tuesday February 07 2017, @02:05PM
That would work, but there's no need to keep it running 24x7 or even keep it around permanently. At work we have a massive private cloud, whatever, OK, a big openstack and a big NAS, and in a fit of insanity "they" gave me admin rights, so I spin up a new clone and run experiments on it, and if the experiment works then ansible applies it to production. At work humans don't touch production other than for information gathering and troubleshooting; only ansible touches production.
In the old days you had to really fight to get DEV and TEST boxes, the thing I like the most about "the cloud" is not needing to go thru management and purchasing just to test a software upgrade. I just spin up a clone, stick it in TEST network land or DEV, and have at it. My bros in MVS land were doing that in the 80s of course at my first mainframe job. Eventually the whole world catches up to VM and MVS, it just takes awhile. I can't wait until we're writing web applications as CGI files in COBOL.
That operations model would probably work for SN; from memory SN uses linode, and although linode charges per ip address used, it would probably be worth the cost to always have a spare image and spare ip addrs for testing.
You know something cool about freebsd? It has ZFS. The biggest F-up in the world can be fixed with just a "zfs rollback something", assuming you snapshotted before you started. Of course the second biggest F-up in the world is cloning or snapshotting and never deleting it after the upgrade. I can neither confirm nor deny that I've wasted a lot of NAS space on stupid clone/snapshot tricks. If you thought bad Perl was write-only, imagine trying to figure out WTF you were doing with some snapshot from six months ago that you found wasting space.
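For the uninitiated, the whole ceremony is two commands (dataset name invented):

    zfs snapshot zroot/var/db/mysql@pre-upgrade   # before you touch anything
    zfs rollback zroot/var/db/mysql@pre-upgrade   # when the upgrade eats itself

plus a zfs destroy of the snapshot afterwards, which is the part everyone forgets.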
(Score: 2) by NCommander on Tuesday February 07 2017, @04:16PM
Cobol on Wheelchair [github.com] - for all your dynamic COBOL needs. There's also a nodeJS-COBOL binding available. And ZFS snapshots are amazingsauce :)
Still always moving
(Score: 1, Interesting) by Anonymous Coward on Tuesday February 07 2017, @01:19PM
Like you, I've found Ubuntu 16.x sorely disappointing in terms of stability and work needed to get things going.
Now, I've plugged Alpine Linux on this site before, and as a regular user of it I'm happy to do so again here. It's small, gets out of your way, no systemd, prepatched with grsec, and if you're familiar with Gentoo like I am, it's easy to get used to since it uses OpenRC. Uses its own "apk" tool for package management, which in terms of normal usage really isn't any different from apt/yum. I'm currently migrating all my single-task Ubuntu and CentOS VMs over to Alpine, and I find it works great when you have VMs dedicated to specific tasks/applications. That might be one thing you need to test for this site if you don't or can't run different VMs for each workload, since it sounds from your summary like you run multiple services on the same VM. Nothing inherently wrong with that per se; it's just that Alpine is so ridiculously lightweight compared to traditional distros that it's simpler to really break applications down into their individual components and give each its own VM.
Also watch out that Alpine is based on musl libc and defaults to busybox with the ash shell (not bash), in case that causes any problems with applications or scripts. And I've not checked whether all your listed apps are in Alpine's repos - being a smaller distro, it has a smaller selection of packages than the usual suspects.
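For reference, the day-to-day apk workflow maps almost one-to-one onto apt:

    apk update        # refresh package indexes, like apt-get update
    apk add nginx     # install a package, like apt-get install
    apk search perl   # search the repositories
    apk upgrade       # upgrade everything installed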
(Score: 2) by NCommander on Tuesday February 07 2017, @01:27PM
We used to run a dedicated VM per task, but we ended up consolidating onto fewer nodes due to cost. Running htop on helium is an exercise in both hilarity and depression.
Still always moving
(Score: 2) by martyb on Tuesday February 07 2017, @01:59PM
IIRC, we consolidated VMs back when we were running on Xen. When we converted to KVM we got double the memory. I don't know how much CPU, I/O, and storage pressure we currently have, but maybe it would be worthwhile to revisit that decision?
Also, I've seen many reports of people just spinning up new VMs and, as needed, moving them around to different servers. I'm curious why we don't run more than one VM per server?
Background: I worked at IBM back in the early 80's testing their VM operating system. It was commonplace on those systems to give each user their own VM in which they would do their work. I personally saw a single mainframe running in excess of 1000 users, each of whom had their own VM. Granted that was back when "green screens" ruled -- no GUIs then! Then again, that mainframe with 1K users, had "only" 32 MB of main memory and 256 MB of extended memory (Not a typo - megabytes - my cell phone has 2 GB of RAM). We've come quite a ways since then.
Wit is intellect, dancing.
(Score: 2) by VLM on Tuesday February 07 2017, @02:21PM
I'm curious why we don't run more than one VM per server?
I am also a customer of linode, and at least at the small-user level they charge per ipv4 address. I don't know how charging works at the rarefied level of giant SN, but perhaps an interesting hack on their "bill per VM" model would be running ipv6-only (which is free... right?), where only the web boxes and maybe a management gateway get an ipv4 address, such that linode is fooled into thinking you have fewer billable VMs.
The other problem is linode hard allocates ram and extra ram is quite costly. I'd rather have 1 box with 4 GB than 4 boxes with 1 GB, most of the time.
At work we have an unimaginably immense "private cloud" and there is no bill and my quota is quite lax mostly to prevent brain farts (whoops I mean 4096 MB of RAM, not 4096 GB of RAM now please don't deplete the entire system for my little typo)
(Score: 2) by NCommander on Tuesday February 07 2017, @03:50PM
I think we're grandfathered in from when they gave one free IPv4 address per Linode, as we don't get hit with an additional charge for them. We haven't spun up a new instance in a long time, so I dunno if that changed. We still need v4 connectivity for talking to the outside world, but maybe we could give up some of the public IPs for cost reasons.
Still always moving
(Score: 2) by martyb on Tuesday February 07 2017, @11:47PM
I appreciate the feedback. I see I wasn't clear in what I was suggesting... too many VMs in the mix!
We have several Linode instances, each of which is, in reality, a VM (via KVM on the bare metal). What I am curious about is whether or not we could host multiple VMs of our own within a single Linode.
Background info. At one point, in exchange for converting from Xen to KVM, Linode offered us twice the RAM. Mind you we were running 'okay' on the RAM we had had. Let's take a concrete example. We have a Linode VM, hydrogen which now has 8GB of RAM where it once had only 4 GB.
Another way to look at it is that we now have an extra 4 GB of RAM on hydrogen.
What keeps us from running several VMs of our own within the 8GB RAM on hydrogen? In other words, can't we run (our own) VMs in our Linode VM?
So, conceivably, could we not host two (almost) 4GB VMs in our 8 GB Linode? We could have a 4 GB hydrogen (on our existing Ubuntu) and a new one, let's call it deuterium, running on, say, FreeBSD.
Wit is intellect, dancing.
(Score: 2) by VLM on Wednesday February 08 2017, @02:06PM
What keeps us from running several VMs of our own within the 8GB RAM on hydrogen? In other words, can't we run (our own) VMs in our Linode VM?
Nothing, really, although for networking reasons you're gonna have to pay for another ipv4 address or play NAT games.
I have no direct KVM experience, but I have a lot of experience with other systems, and from what I can tell, if you want to run KVM in KVM you need CPU feature-level support enabling nested vmx. The good news is that's a boot-time argument, no problemo, IF you own the bare-metal hardware. The bad news is it sounds unlikely (although possible) that the linode guys boot with nested KVM enabled. All I can find on the topic of nested vmx is caker himself (the main dude at linode) a couple years ago specifically saying that CPU feature was not enabled in their initial Xen-to-KVM conversion project (... at least way back then) and that they might revisit that decision someday. So probably not, although I wouldn't be shocked if it has been changed but undocumented.
I've been a linode customer for longer than SN has existed; I logged in, and I forget where I am (I think the Dallas data center?), and ran x86info --flags | grep vmx and got nothing. So at least on my host, probably in Dallas, nested vmx hasn't been enabled.
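Same check without x86info, for anyone following along at home (vmx is the Intel flag; AMD boxes advertise svm instead):

    grep -Eom1 'vmx|svm' /proc/cpuinfo    # prints the flag if hardware virt is exposed to the guest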
SN people being famous internet stars and having groupies and such, if I asked caker to enable vmx on my host as a special request its probably not happening, but if the SN people asked formally, well, who knows...
Of course people have been doing virtualization for a heck of a long time without KVM or nested vmx support. LXC, which is mostly just jails for linux (or was, in the old days, anyway), runs on anything that boots linux for all practical purposes (no 386 or anything, but I had some old 2000s-decade 1U servers that worked fine with LXC).
Personally I think for april fools you guys should install original unix v7 on something like the simh emulator and make it your DNS server or some other component of the architecture. OR set up a Hercules emulation box and run Debian-S390 on it. That being an immense hack and waste of cycles, would also be cool use of virtualization, although probably not even remotely what you're asking for LOL.
(Score: 2) by martyb on Thursday February 09 2017, @11:07PM
Thanks for the thoughtful reply!
I had not considered hardware support for virtualization. Sure, everything could be emulated, but then there would be a [potentially major] performance hit. I guess it depends on how heavily loaded the existing system's processors are. If they are generally mostly idle, then it would seem to be a real win. But that is a big "IF".
Thinking back, I'm amazed I didn't think of the vmx hardware assist being important. When I was working at IBM testing VM/SP HPO in the early 80's there were a number of test scenarios we would run. The usual case of a bunch of VMs running on the bare metal. Then there were the "second level" VMs -- a user could run VM in a VM. (This was not entirely uncommon - we were doing that all the time as we were working on a new release. At the time, all source code was provided to the customer, too. Thus, many shops did their own customization and would run 2nd-level VMs to test them out, too.) There were several specific code paths in place to provide optimizations for that.
And then there was the case of running a VM in a VM in a VM on the bare metal -- a third-level VM. This would allow you, while running your VM to test how well your new VM could support a VM running in it. Yes, we did a bit of that, too. And it was great when it worked! And it generally DID work, when we were done testing it. But woe unto ye who had to debug what happened when something went wrong! Single stepping through each assembler statement on the VM closest to the bare metal and watching all the things percolating up and back and through all the optimization paths, in hexadecimal, was "interesting". =)
Again, thanks for the reply!
Wit is intellect, dancing.
(Score: 2) by NCommander on Friday February 10 2017, @07:48AM
Hardware virtualization assist mostly exists because x86 is a nightmare to virtualize; it's *considerably* less of a problem on other architectures. Alpha, for instance, has only a single instruction that requires supervisor mode, so you can simply trap I/O accesses and emulate them, plus that one instruction. VMware was made famous because, up until that point, virtualizing x86 at reasonable speed was considered borderline impossible. The original releases of Xen required modified domU software, which allowed the OS to run in ring 1 and avoid direct I/O accesses, until vmx became a thing.
Still always moving
(Score: 3, Interesting) by VLM on Sunday February 12 2017, @01:03PM
a user could run VM in a VM. (This was not entirely uncommon - we were doing that all the time
I have a relative who was a sysprog in the 80s at a major manufacturer and they did this "prehistoric Docker" type of thing. IT is cyclical not linear and virtualization and immutable deployments and stuff is actually very old, not new. Everything we do today will be reinvented in 2060 or something and branded as totally new.
Like many big projects, the blind-men-and-elephant effect occurs, and the explanation I got for why they ran VMs in VMs was fuzzy, but it boiled down to a panic solution after a merger working so well in operational practice that they simply continued to do all business that way.
The financial services company I worked at in the 90s also spent a lot of time doing mergers, and they were not into that strategy for whatever reason. Maybe they were better at mergers, I donno.
(Score: 2) by martyb on Monday February 13 2017, @01:21AM
Necessity is the mother of invention, and in this case it seems that a quick-and-dirty hack ended up working so well that it became the defacto way of doing things -- I don't know why, but something about that just gives me a nice warm feeling about human ingenuity!
I wonder if an AI could ever have come up with THAT solution? Was that a stroke of human brilliance? Or, would a simple enumeration of all of the possibilities with the appropriate risk assessments necessarily have come up with this solution.... or maybe something even better?
Anyway, thanks for the reply -- much appreciated!
Wit is intellect, dancing.
(Score: 3, Funny) by Pino P on Wednesday February 08 2017, @02:15PM
Uses its own "apk" tool for package management
Nothing to do with Android apps or a certain proponent of /etc/hosts as an anti-malware tool, I presume.
(Score: 4, Insightful) by Anonymous Coward on Tuesday February 07 2017, @01:34PM
Disclaimer: I used to be a FreeBSD admin.
I have a project which I will be bringing online soon: an application with a web interface that potentially thousands of concurrent users will use. I have administered FreeBSD, Red Hat, and Windows. I feel your pain. My primary computer currently runs Ubuntu 16.04. I would not use a systemd OS in a production environment for love or money. My work shells out big bucks to keep Red Hat up. For my own projects, I can't afford to deal with it.
Windows is a non-starter. So many reasons.
Redhat is out of the question.
So, after a decade away I am going back to FreeBSD. Stable. Reliable. Excellent history. Good community. Secure. Runs modern software.
If I could go with a stable reliable Linux distro I would. I really like Linux, more than FreeBSD.
Dovean (heh) is not mature enough.
Just my two cents. Do what is best for you.
(Score: 2) by The Mighty Buzzard on Tuesday February 07 2017, @02:01PM
Try Calculate in a VM for a bit. It's a binary overlay on top of Gentoo, plus a few non-standard tweaks of their own that every distro seems to feel the need to put in. Fortunately, that includes a very much easier install method, and it doesn't include systemd. I'm really digging the hell out of it so far.
My rights don't end where your fear begins.
(Score: 2) by coolgopher on Tuesday February 07 2017, @02:09PM
I used FreeBSD for a ~decade, before $work made it more convenient to move to Linux (Debian/Ubuntu). These days I'm on Devuan, but if it wasn't for work, I'd probably be back on FreeBSD. Having a nice, solid, small base OS instead of 90+ interdependent packages is nice, and the ports collection rocked. Want something installed with slightly unusual options? Sure, just build it with the (menu-config'd) options you need. Or, if you're happy with the defaults, just pull down the pre-built package. It was really the best of both worlds in my opinion.
Solaris had a very solid feel to it, but it never "felt" as "friendly" as FreeBSD to me. Too much XML, for one. Lacked good packages/ports, and using the NetBSD pkg setup was somewhat painful, though it worked. Haven't touched Solaris since Sol10 though, things might have improved.
(Score: 2) by TheRaven on Wednesday February 08 2017, @01:27AM
sudo mod me up
(Score: 2) by VLM on Tuesday February 07 2017, @01:50PM
If you stay on Linode, there's a nice, extremely long, hyper-detailed document explaining the Linode install process, and everything is "as you'd expect" with two exceptions: shut off Lassie (Linode's auto-reboot watchdog) temporarily or she'll drive you crazy during the rebooting process (woof woof), and you have to convince FreeBSD to boot with a serial console so the Linode admin tools can talk to the console correctly (no big deal). Other than that, it looks about as exciting as setting up under OpenStack was at work. OpenStack setup was quite boring, I assure you, not even worthy of comment.
As far as FreeBSD criticisms go, the coordination between packages is a little weaker than in some Linuxes. As an example of the hilarity: a couple of weeks/months ago there was an exploit for MySQL 5.6 that "couldn't be patched" or some damn thing (or maybe it was 5.5, who cares), and it would appear in the daily report emails from my DB servers. That was seen as OK because 5.7 was in the system and you could install it -- but the standard for every MySQL client in FreeBSD was 5.6. So you could install 5.7, which was patched and secure, but nothing native to FreeBSD could talk to it; or you could install 5.6, which worked with everything but spat out a security warning every day (or was it week?). In my case it was an intranet, internal-use-only system at work and internal use only at home, and you'd have to be pretty crazy to open up the MySQL port to "the internet", so I didn't sweat it and ran 5.6 -- although someone trying to sell DBaaS to the general public would likely WTF.
As for jails, you can do that by hand (tedious), or write your own scripts (error-prone), or try the pre-alpha-ish support for Docker (maybe it's improved since I heard about it; probably it has), or try the CBSD suite, which works perfectly, seems straightforward, and has been extremely reliable for me -- although apparently it is only used and loved by the developer in Russia, myself, and quite possibly no one else on the planet. It's really nice, really ignored, and gets no love at all on the internet. Too bad. (A minimal by-hand setup is sketched below.)
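(For reference, the by-hand route is not much config these days; a minimal /etc/jail.conf sketch, with the jail name, hostname, and address as placeholders:)
$ cat >> /etc/jail.conf <<'EOF'
web {
    path = /jails/web;                      # jail root, populated from base.txz
    host.hostname = web.example.com;        # placeholder
    ip4.addr = 192.0.2.10;                  # placeholder
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
EOF
$ sysrc jail_enable=YES
$ service jail start web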
I think given your situation (public internet site) you'd like PF. My interactions with PF have been somewhat faster, simpler, and more logical than old-days ipchains or iptables. I've never heard anyone complain about PF.
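(For flavor, a minimal pf.conf for a public web box looks something like this -- the interface name and port list are illustrative, not a recommendation:)
$ cat > /etc/pf.conf <<'EOF'
ext_if = "vtnet0"          # illustrative interface name
set skip on lo0
block in all               # default deny inbound
pass out all keep state
pass in on $ext_if proto tcp to port { 22 80 443 } keep state
EOF
$ pfctl -nf /etc/pf.conf   # -n: parse and validate only
$ pfctl -f /etc/pf.conf    # load the ruleset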
There are sort of three parts: the core system, as you mention; /usr/ports, manually compiling, which is a slow PITA but highly versatile; and pkg-ng, which is essentially apt-get for FreeBSD. I mean literally: where you'd type apt-get update, you type pkg update instead, etc. Mixing hand-compiled stuff from /usr/ports with pkg-ng packages can be... unusually exciting, in the Gentoo sense. I mean, if you're doing something weird and have to compile your own, then do it, but usually you won't have to, and things can get weird otherwise (um, do I upgrade xyz in /usr/ports or via pkg, and what if I get it "wrong" and do the opposite, or...).
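(The mapping really is that direct; assuming a current pkg-ng, roughly:)
$ pkg update                       # apt-get update
$ pkg upgrade                      # apt-get upgrade
$ pkg install nginx                # apt-get install nginx
$ pkg info                         # dpkg -l, more or less
$ pkg which /usr/local/bin/nginx   # dpkg -S equivalent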
You didn't mention your config management system. For like a decade I used Puppet, and it was so much fun having it randomly crash after days or weeks of operation on Debian; it was the only unreliable part of the system, so I know it wasn't hardware. It's theoretically pull, but in practice it's debug-and-push constantly -- what a PITA. Also, on an "internal only" system I greatly enjoyed (sarcasm) running multiple authentication systems: Kerberos, ssh keys, "real" SSL for the web, and Puppet-only fake SSL. What an ungodly PITA it is to keep Puppet's SSL running. Which is why I scrapped it all and eventually settled on Ansible. So, the topic of this paragraph: dealing with Ansible on FreeBSD. First, your hosts files will be full of lines like "some_hostname ansible_python_interpreter=/usr/local/bin/python" or "legacy_linux_hostname ansible_python_interpreter=/usr/bin/python". Then, in your roles and tasks, you're gonna have lots of stuff of the form "when: ansible_distribution == "FreeBSD"", or VERY similar. Unless someone else has a better idea.
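(Concretely, something like this -- the hostnames and the nginx example are placeholders; pkgng and apt are the stock Ansible modules of that era:)
$ cat >> hosts <<'EOF'
bsdbox.example.com    ansible_python_interpreter=/usr/local/bin/python
linuxbox.example.com  ansible_python_interpreter=/usr/bin/python
EOF
$ cat >> roles/common/tasks/main.yml <<'EOF'
- name: install nginx (FreeBSD)
  pkgng: name=nginx state=present
  when: ansible_distribution == "FreeBSD"
- name: install nginx (Debian/Ubuntu)
  apt: name=nginx state=present
  when: ansible_distribution in ["Debian", "Ubuntu"]
EOF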
Just like my Linux boxes, the out-of-the-box ssh config is not that strong. Just like my Linux boxes, fixing it is pretty uneventful, WITH the sole exception of yet another path issue. If you copy an sshd config from Ubuntu, you'll see sftp-server lives under /usr/lib, while on FreeBSD it lives in /usr/libexec. And you'll be like "WTF, I don't use SFTP so I don't give a F", but it turns out ssh won't work AT ALL (or maybe it was just scp that failed, or just console; regardless, it was Fed up) unless sshd can find the referenced module. So you end up with two sshd configs, one for legacy Linux and one for FreeBSD, differing hopefully only in paths and nothing else. You really need to replace ssh_config and sshd_config on all your Linux or FreeBSD boxes anyway. Seriously, don't some still permit MD5 MACs and "PermitRootLogin yes" and dumb stuff like that, which sounded good in 1993 but not so much in 2017?
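(The offending line, for reference; the Ubuntu path shown is the usual one there, so check your own config rather than trusting either of these:)
$ grep -i '^subsystem' /etc/ssh/sshd_config
# Ubuntu/Debian, typically:  Subsystem sftp /usr/lib/openssh/sftp-server
# FreeBSD:                   Subsystem sftp /usr/libexec/sftp-server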
Another fun almost-flamewar, almost as much fun as emacs vs vi, is dealing with 3rd-party or homemade scripts written for bash on Linux. Linux-only noobs write lines like #!/bin/bash, but under FreeBSD bash lives in /usr/local/bin. So it's an unholy flamewar over whether root should say "fuck it" and symlink /usr/local/bin/bash to /bin/bash on each machine (a simple Ansible config, to be sure), or whether the script should be forcibly converted permanently to FreeBSD with a line like #!/usr/local/bin/bash, or whether the script author is a script kiddie and should get a bug filed, because the script should be starting with #!/usr/bin/env bash, which works PERFECTLY cross-platform.
ZFS is awesome. AFS on FreeBSD was not so awesome; I ran that for 10, 15, maybe 20 years, I'd have to think about it, but the big FreeBSD conversion made me dump it -- kernel crashes under heavy load are not funny. ZFS survived the crashes and didn't care.
I'm not mentioning stuff that just works, which is pretty much everything else.
(Score: 2) by NCommander on Tuesday February 07 2017, @04:02PM
I hate iptables with the passion of a thousand suns, and am somewhat horrified that I know it was better than the ipchains it replaced :/. We use Ubuntu's UFW to front-end the firewall, which took it from *horrid* to relatively manageable. Our firewall rules are actually fairly simple overall, so as long as something hides the utter braindamage, it's usable. I'd be fine with plain old ipfw.
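(UFW really is about this simple; a hypothetical minimal ruleset for a web frontend:)
$ ufw default deny incoming
$ ufw default allow outgoing
$ ufw allow 22/tcp         # ssh
$ ufw allow 80,443/tcp     # http/https
$ ufw enable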
Still always moving
(Score: 2) by mrpg on Tuesday February 07 2017, @07:03PM
If anyone here uses Debian:
"After years of heavy development, the nftables framework is ready to use in production environments. You are encouraged to migrate from iptables to nftables. "
https://wiki.debian.org/nftables [debian.org]
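(For anyone curious what migrating buys you, a minimal nftables inbound policy looks roughly like this; the ports are illustrative:)
$ nft add table inet filter
$ nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
$ nft add rule inet filter input iif lo accept
$ nft add rule inet filter input ct state established,related accept
$ nft add rule inet filter input tcp dport '{ 22, 80, 443 }' accept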
(Score: 3, Funny) by TheGratefulNet on Tuesday February 07 2017, @11:43PM
I tried switching from iptables to little bobby tables, but somehow, my system was not usable after I tried it.
(lol)
"It is now safe to switch off your computer."
(Score: 2) by VLM on Wednesday February 08 2017, @02:54PM
You can find nftables in jessie-backports
Ah, that's why I was like "what is that".
Nice dual-stack ipv4/v6.
I see it's not infected with systemd, but it is systemd-compatible, in that people have made unit files that'll run it from systemd.
If it's truly superior, the *BSDs will just "steal" it, so I look forward to that possibility. I don't mind PF though.
(Score: 0) by Anonymous Coward on Friday February 10 2017, @07:55AM
FreeBSD has a superior firewall to pf already: ipfw.
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @11:09PM
Would you mind providing a short statement on the major problem points of iptables, from your point of view?
I primarily just do personal stuff with iptables, and while I can get things to work, I have noticed that it seems clunky to manage from a configuration perspective. (I just use tons of commented-out lines in my config file/script and add/remove comment marks manually, since I'm paranoid, lazy, and like the feeling that I know exactly what I just did, because I just looked directly at the only iptables config file in use.)
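(One common way to tame the configuration side, sketched here with illustrative rules: keep a single rules file and load it atomically with iptables-restore, rather than running individual iptables commands from a script.)
$ cat > /etc/iptables.rules <<'EOF'
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
EOF
$ iptables-restore < /etc/iptables.rules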
(Score: 2) by The Mighty Buzzard on Saturday February 11 2017, @01:44AM
Actually, we don't anymore. Audioguy hand rolled us an iptables firewall after several hours of us arguing over rule specifics. Details should be on the tech wiki.
My rights don't end where your fear begins.
(Score: 2) by NCommander on Tuesday February 07 2017, @04:06PM
Second follow-up: we don't use centralized configuration management, with the exception of production, where we rsync the /srv/soylentnews.org folder, which has all the server software and configs in one package.
With the exception of Hesiod+Kerberos, everything is running different software.
Still always moving
(Score: 2) by TheRaven on Wednesday February 08 2017, @01:36AM
but the standard for every client of mysql in freebsd was 5.6 so you could install 5.7 which was patched and secure but nothing native to freebsd can talk to it, or you can install 5.6 which works with everything but outputs a security warning every day
That sucks, and is something that you should poke ports-secteam about if you see it. That said, there's a third option: setting up Poudriere (which is very easy) and running your own package build with the default MySQL version set to 5.7. It takes about 5 minutes to set up, and then the build time obviously depends on the machine's performance. On my AMD E-350, building the installed package set takes a few days; on the beefy machines at work we can build the entire ports collection in under 24 hours.
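(The knob in question is the ports tree's DEFAULT_VERSIONS variable; for a Poudriere build, it goes in the make.conf Poudriere reads, something like:)
$ echo 'DEFAULT_VERSIONS+= mysql=5.7' >> /usr/local/etc/poudriere.d/make.conf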
As for jails you can do that by hand (tedious) or write your own scripts (error prone) or try the pre-alpha-ish support for docker (maybe its improved since I heard about it; probably it has) or try the CBSD suite which works perfectly and seems straightforward and has been extremely reliable for me although apparently is only used and loved by the developer in Russia and myself and quite possibly no one else on the planet.
I've not come across CBSD, but it looks pretty good. iocage [readthedocs.io] has replaced ezjail as the popular tool for jail management (unlike ezjail, it's ZFS only, but that does mean that when you are using ZFS it is better positioned to take advantage of all of the shiny features than ezjail).
There's sort of three parts, the core system as you mention, /usr/ports manually compiling which is a slow PITA but highly versatile, and pkg-ng which is essentially apt-get for freebsd. I mean literally where you type apt-get update you type pkg update instead, etc. Mixing hand compiled stuff from /usr/ports with pkgng packages can be ... unusually exciting in the Gentoo sense
End users should never build anything from /usr/ports (do we even install it anymore?). If you want to build your own packages, then use Poudriere. It will handle all of the dependencies for you, builds in a jail, and ensures that packages have all of their dependencies rebuilt. You can then just specify the generated package repo in your config and get packages from there if they exist, or from upstream otherwise.
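(A minimal Poudriere run, with the jail name and package list as placeholders:)
$ poudriere jail -c -j rel110 -v 11.0-RELEASE   # create a build jail
$ poudriere ports -c -p default                 # fetch a ports tree
$ echo databases/mysql57-server > pkglist
$ poudriere bulk -j rel110 -p default -f pkglist
# then point pkg(8) at the repository it builds under /usr/local/poudriere/data/packages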
sudo mod me up
(Score: 5, Insightful) by shipofgold on Tuesday February 07 2017, @02:06PM
I jumped from Fedora to Devuan on my home server/router and won't look back.
The issue with Devuan is that there are plenty of people that would like to try it, but until it achieves critical mass many will be hesitant. As long as many are hesitant it will never achieve critical mass.
(Score: 5, Informative) by VLM on Tuesday February 07 2017, @02:13PM
However, I've got concerns about the long-term suitability of the distribution
On boxes I couldn't move to FreeBSD immediately, I Devuan'd them. I converted my fourth-to-last box last weekend, and it was the most boring and reliable thing ever.
Unbelievably, I still have three Debian boxes that use deb-multimedia, and I haven't taken the plunge on them yet. Conversion projects seem to follow some kind of exponential half-life, where half the boxes are converted every six months, but the last boxes take literally years.
I'm just saying the conversion from Debian Jessie or Debian Wheezy to Devuan Jessie is, oh I donno, like two hours max, painless, and not a major project. Spin up some clones and give it a try; it's really quite predictable and boring. As it should be.
You might multi-step: convert to Devuan immediately, with a longer-term couple-month project to roll in new FreeBSD boxes as old boxes require upgrades or changes. Kinda the last upgrade ever to the old boxes would be to Devuan Jessie, and all future changes would involve "some other OS", perhaps FreeBSD.
(Score: 1) by Scruffy Beard 2 on Wednesday February 08 2017, @06:44AM
Sorry about the funny mod. Must have accidentally clicked on the drop-down box.
Did not know you could mod more than one post at once
(Score: 2) by VLM on Wednesday February 08 2017, @02:32PM
LOL, the exponential half-life of conversion projects is one of those IT things to chuckle about anyway. I don't feel too bad about my own laziness; I've seen some crazy stuff over the decades that just never quite gets finished.
(Score: 3, Interesting) by hendrikboom on Wednesday February 08 2017, @02:29PM
Upgrading from Debian Wheezy to Devuan Jessie was painless. I didn't even have to worry what systemd would do to me. It was as easy as any upgrade that I had done in the past between Debian stable releases, and easier than some. Except for some (it turns out after the fact completely unnecessary) precautions against what might have gone wrong, it was a complete nonevent.
(Score: 2) by VLM on Wednesday February 08 2017, @03:06PM
and easier than some
From memory, the conversion from a.out to ELF binaries in the mid-90s had more rough edges. There was also an upgrade that involved going to or from glibc (I don't remember which) that was hilarious. In both cases the distro itself was not nearly as much of a problem as locally compiled stuff, which was very exciting.
Oh, I've got another one. This wasn't a "stable" to "stable" upgrade, but I remember a kernel upgrade where there was one of those periodic "rip out all the non-free firmware" purges, and naturally some of the non-free firmware was for my ethernet card, making it impossible to download the non-free firmware over ethernet. Naturally, this being a long time ago when kernel upgrades "always just worked", I had deleted the old version, so I couldn't just reboot into the old kernel. I'm almost embarrassed to admit I used a PLIP cable to fix that. PLIP, what a blast from the past. For the kids under age 40 or so: back when computers had hardware parallel printer ports, you could network over them with a very special cable and some strange Linux drivers.
(Score: 2) by NCommander on Wednesday February 08 2017, @11:37PM
I still regularly use SLIP on embedded board bring-ups. Once I have a kernel booting to /sbin/init, it's not hard to compile in SLIP, add slattach, and get ghetto networking up and running. NFS works over it too if you're patient, which can be really handy for debugging before you have the USB controller talking.
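(For the curious, the host side is about two commands; the serial device and addresses are placeholders:)
$ slattach -p slip -s 115200 /dev/ttyUSB0 &
$ ifconfig sl0 192.168.7.1 pointopoint 192.168.7.2 up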
Still always moving
(Score: 5, Insightful) by Anonymous Coward on Tuesday February 07 2017, @02:15PM
I was a long-time Debian user. I used it exclusively at every company I worked for, from 1997 up until about 2 years ago. I have my reasons for switching to FreeBSD, and I have to say I have not looked back. I am sad for Debian and I love the project, but some things which happened in the last couple of years do not sit well with me.
I am very happy with FreeBSD. It is a very stable platform. ZFS and Boot Environments are so exceptionally great. Jails are amazing. My main job is maintaining call centers in a couple of countries/continents, and we are slowly migrating away from Debian. It took me quite some time to gain the confidence to use it as a primary OS for servers; I gradually migrated less critical servers first and went through a couple of upgrades (10.0 -> 10.1 -> 10.2 -> 10.3 -> 11.0), and it was a blast!
For everyone seriously considering a switch: FreeBSD is an excellent choice. It is sometimes not easy to put into words, but the design and philosophy are so much more coherent. It moves at a slower pace, but on my servers I prefer slow and steady evolution with a clearly defined road-map over the very hectic and sometimes stress-inducing changes to Linux every couple of upgrades.
Just my $0.02
Best
(Score: 3, Informative) by Creaky on Tuesday February 07 2017, @02:44PM
Ok, this posting got me off the lurker fence.
The following views were developed from dirty coal-face experience over the years with high-visibility, high-load websites.
What makes a good web site hosting OS comes down to how easy it is for the poor administrator to:
* Install, blow away, recover and upgrade.
* Secure out of (or close to) the box.
* Sane network and system defaults out of the box.
* Sensible file system layout.
* Support modern application software stacks.
* Good reliable 3rd party package management or build infrastructure.
* Package repositories kept up to date.
* Deterministic booting and general operating behaviors. (This includes time scheduling)
* Easy to modify or tune where required.
* Ability to run in Local VM, Cloud Infrastructure and Physical Hardware.
* Integration with deployment software such as Ansible or SaltStack.
* Longevity and likely to remain available for next 10 years.
I would personally choose FreeBSD. It best meets the above "needs", has a very significant history, and is running high-volume websites successfully. FreeBSD jails and FreeBSD's type-2 hypervisor bhyve offer great separation. The PF firewall is excellent. The network stack and the UFS and ZFS file systems are very battle-tested and proven. Ports and the port-building infrastructure (poudriere with portshaker) provide your own local repository for production application package deployment. I personally have lots of experience with FreeBSD on high-volume, high-visibility web sites and consider it viable. Linux binaries can be run under FreeBSD (why, though?), or you can use bhyve to create a Linux VM whilst migrating.
Everyone will have a Linux distribution opinion so I will talk about the other options.
Solaris in its day was great. The network stack, scheduler, and ZFS were, and still are, leaps and bounds beyond any Linux offering. Talking about the x86 versions: SmartOS is a cloud OS only, so using it requires rethinking how server state and storage are done. This is a big architectural change from running Linux and is not for the inexperienced. Illumos/OpenIndiana is good and is most like traditional Solaris.
However, all Solaris-derived systems suffer significantly on the third-party package front. Illumos and SmartOS use the NetBSD pkgsrc package system, and it is a pain in the backside to update and maintain. Software in the package system is often out of date and updates are irregular. Not what is desired in a public-facing OS and application stack.
Comparing Solaris to FreeBSD: OpenZFS brings ZFS parity between the operating systems, and Solaris zones are now taken care of by FreeBSD jails and especially bhyve. So install FreeBSD, create a Linux bhyve virtual machine, restore a backup into the VM, and then piecemeal decommission said VM. FreeBSD thus meets everything illumos brings to the table, and both FreeBSD and the Solaris-derived systems will bring greater stability to the table than any Linux distribution.
On a final note, why use BIND? Its history of poor programming practices (see the never-ending list of security advisories) should banish it. Try nsd https://www.nlnetlabs.nl/projects/nsd/ [nlnetlabs.nl] for name serving and unbound https://www.nlnetlabs.nl/projects/unbound/ [nlnetlabs.nl] for name resolution.
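(Both are refreshingly small to configure; minimal sketches, with names, paths, and addresses as placeholders:)
$ cat > /usr/local/etc/nsd/nsd.conf <<'EOF'
server:
    ip-address: 192.0.2.53
zone:
    name: "example.org"
    zonefile: "example.org.zone"
EOF
$ cat > /usr/local/etc/unbound/unbound.conf <<'EOF'
server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
EOF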
(Score: 2) by NCommander on Tuesday February 07 2017, @03:57PM
Our BIND instance isn't public; I'm well versed in its "quirks". We simply maintain the zone file in it and it does the DNSSEC signing, and then the zone gets punted off to Linode by AXFR. Inline signing makes DNSSEC absolutely trivial (unless you bork up a config file). The internal li694-22 zone is also hosted on BIND, but again, it's not world-accessible, so I'm not hugely concerned about its security.
One of the bigger advantages of BIND for us is that it still supports classes, so we could move our Hesiod configuration from IN -> HS if we wanted to, for separation reasons.
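(The named.conf incantation for the inline-signing setup is small; the zone name, paths, and the secondary's address here are placeholders:)
$ cat >> /etc/named.conf <<'EOF'
zone "example.org" {
    type master;
    file "masters/example.org.db";
    key-directory "keys";
    inline-signing yes;
    auto-dnssec maintain;
    also-notify { 192.0.2.53; };     # the AXFR secondary
    allow-transfer { 192.0.2.53; };
};
EOF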
Still always moving
(Score: 2) by NCommander on Tuesday February 07 2017, @03:59PM
I'm well familiar with pkgsrc and its deficits with upgrading. Unfortunately. Depending on how you set it up, though, it's relatively easy to record all installed packages, re-install them on a new pkgsrc copy, and then punt over the etc/var folders. Not an ideal setup, but definitely manageable once every three months.
Still always moving
(Score: 2) by iamjacksusername on Tuesday February 07 2017, @03:05PM
CentOS 6 is EOL in 2020. You have another year before you really need to start testing, and that year gives you some more time to allow a non-systemd distribution to emerge as the de facto keeper of the flame.
The other option is you can get a paid distribution. EL6 will have extended support past 2020.
The cost is not much compared to the amount of effort you will spend moving to a new platform. I do not see the physical hardware listed, but you can get RH subscriptions for 4 sockets for ~$1600/year, which includes support as well. That's pretty much peanuts when you are sitting there at 2 AM trying to figure out why your scripts are vomiting errors. Suse's offerings are similarly priced.
I personally have used production support from both Suse and RH and been pretty happy with both. It just depends on what you are comfortable with. For what it's worth, none of my production workloads have been moved to a systemd-based system; pretty much everything is RHEL 5/6 or SLES 11. I am taking a wait-and-see approach. Maybe by 2020 they will have systemd straightened out, but it has not been worth my time to deal with it yet.
(Score: 1) by mechanicjay on Tuesday February 07 2017, @10:19PM
We only have one box running CentOS/RHEL 6. It's basically the "other" box, which runs mail services, IRC, some ancillary web tools, etc. The problem is that the versions of everything are becoming quite long in the tooth, which is fine if your software is static or you only ever use system-supplied packages. But once you need to upgrade your webmail software, and you need to upgrade PHP in order to accomplish that, you start down a horrid maintenance rabbit hole. We're already on the cusp of breaking everything on this box every time we touch it because of this.
While we still have a while left on our Ubuntu boxes, which run the core site, this one is in really bad shape, and honestly it will probably make a decent testing ground for some of us admins to work in a new environment.
My VMS box beat up your Windows box.
(Score: 2) by cmn32480 on Tuesday February 07 2017, @10:26PM
The bigger issue with the paid option is the budget for this site. It is... rather thin. We are currently covering our operating costs, and not much else.
"It's a dog eat dog world, and I'm wearing Milkbone underwear" - Norm Peterson
(Score: 2) by iamjacksusername on Tuesday February 07 2017, @11:03PM
That makes sense. When it's all a labor of love, every dollar counts.
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @03:41PM
TRS-DOS 3
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @08:48PM
After all these years, my brain still flips the letters to "TSR" once it sees the word "DOS."
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @09:44PM
You misspelled Apple DOS 3.1 with 13 sector 113.75K 5 1/4" floppies. Here's a cheap server (with manuals!) you can run it on: http://www.ebay.com/itm/Bell-and-Howell-Apple-II-Plus-A2S1048B-Darth-Vader-version-/192095173467 [ebay.com]
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @04:20PM
The most rational solution is probably to do like most people: cope with this damn systemd. Sure it is a pain, sure it is an annoying change after years of stability on a rock-stable Linux distro like Debian, but it is here to stay whether you like it or not.
AFAIK, Slackware is probably the only rock-stable distro with a non-systemd option. Unless you have a proper pre-production server with pre-production testing, I would not even touch the idea of using a less-tested, rolling-release OS like Arch. Those are for people who have automated pre-production tests (not CI, you know).
FreeBSD, while neat and all, is not always as reliable as people think. I had awful issues with driver regressions on a high-end network card years ago, and it was a huge pain in the butt. I found no workaround and had to remove FreeBSD and install OpenBSD instead.
Man, how can one let such a regression happen in such an important piece of code?
(Score: 3, Insightful) by NCommander on Tuesday February 07 2017, @05:02PM
Go try using Ubuntu 16.04, then get back to me. My experience has shown me Debian 8 more or less fell into the same quality tarpit due to systemd.
Still always moving
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @06:40PM
I use antiX and MX Linux, and sparingly Void Linux and Knoppix. Among these, MX Linux has a lot of systemd components, but PID 1 is clean.
So, if I can use a desktop without systemd, people can use servers without it.
I guess Lennart and RH must try harder to trap us.
(Score: 3, Informative) by mechanicjay on Tuesday February 07 2017, @11:14PM
At the day job we're currently running a couple hundred CentOS 7 boxes with ZFS on Linux, and we have very few issues.
Systemd is craptastic, but CentOS 7 seems to have made a good go of it -- we've not had any major systemd-related issues, only annoyances.
OpenSuse was an early adopter of systemd and it resulted in a few releases that were almost completely broken from a system management standpoint. Honestly, it's been really solid for me on OpenSuse in the 13.x and 42.x releases.
My VMS box beat up your Windows box.
(Score: 2) by hendrikboom on Friday February 10 2017, @03:40AM
Devuan is an easy crossgrade and avoids having to tangle with systemd. And you can still spend time surveying the landscape to decide if you want something else. It could give you some more time to make any decisions you think you still need to make.
(Score: 2) by The Mighty Buzzard on Saturday February 11 2017, @01:50AM
It's a beta init system. You do not use a beta init system on a production server. Ever.
My rights don't end where your fear begins.
(Score: 2) by canopic jug on Saturday February 11 2017, @11:38AM
Actually, systemd is in pre-alpha, given not only the instability but the inescapable fact that they are still deciding on and adding to the feature list. Alpha is after you've decided on the functionality and have it in place and working, at least as proof-of-concept. Beta comes much later, when you get those functions to work in the way they were planned to.
Money is not free speech. Elections should not be auctions.
(Score: 2) by The Mighty Buzzard on Saturday February 11 2017, @01:53PM
Very good point. I stand corrected.
My rights don't end where your fear begins.
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @04:23PM
In Gentoo, even though packages normally track the newest stable upstream versions, you can keep some at older versions by using the /etc/portage/package.mask file.
For instance:
#DRM, Pocket, Cisco binary codecs:
>=www-client/firefox-38
This means I want Firefox up-to-and-including 37.*, but nothing newer. The rest of the system is up-to-date. If there are some common dependencies between this older Firefox and other packages, the newest-common version of such dependency is used, so that both my old Firefox and all other packages work.
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @11:35PM
I suspect that the downsides to running an old version of Firefox are not something you desire to encounter. It's not just full remote-code-execution exploits in the hole-y javascript engine, but similar underlying critical vulnerabilities in core architecture.
I highly recommend looking at Pale Moon [palemoon.org] as a Firefox replacement. There is an intelligent mind behind Pale Moon development, as evidenced by the many user-accessible knobs to tweak would-be annoyances (my favorites being media.autoplay.enabled and media.autoplay.allowscripted, the latter being a Pale Moon addition that provides the behavior you'd expect from the Firefox-provided former: setting both to false in Pale Moon prevents sites like YouTube from "clicking their own play button" on videos).
Many Firefox addons work with Pale Moon, and I've found those that don't have Pale Moon-specific replacements. It is highly likely you can keep your current comfortable browser user interface and NOT have multiple wide-open doors for attackers.
(Score: 2) by goodie on Tuesday February 07 2017, @04:46PM
I don't have tons of experience with Linux, but I've always gone back to FreeBSD. At the end of the day, I have found it easy to use and very solid. I am not managing a web server serving thousands of page views per day, but my personal opinion is that FreeBSD will not feel too different from Linux while removing some of the pain points. I have set up ZFS with two or three simple commands, I've used it as a VM host and guest, and I've installed Hadoop on a small four-node cluster with no trouble other than what I created through my own fault. The ports system works really well too, IMHO.
Anyway, none of that relates to what you are doing with SN, but I guess my point is that FreeBSD is a good, solid OS from my point of view, and that it deserves more love than it has gotten over the years (although systemd helps ;-) ). I don't think the learning curve would be very steep, FWIW, and the guys at freebsdforums are very knowledgeable.
(Score: 2) by drussell on Tuesday February 07 2017, @05:24PM
FreeBSD++
(Score: 3, Interesting) by stack on Tuesday February 07 2017, @05:24PM
As an admin for a couple hundred systems set up for High Performance Computing + webservers + Docker + storage (Ceph) + a handful of desktops, and someone who has been working with Linux since the '90s -- here are my thoughts. We have had many conversations and debates internally on exactly your situation. We are a bit further along, as we have actually begun implementing a lot of our changes. We are probably only about 10-15% through our migration right now, but we are migrating.
1) BSD has its place and is awesome. But I struggle to find good admins to hire who know Linux well enough to do what we need to do. Give them BSD and things are just different enough that it is a no-go, unless I want to be on call 24x7 or spend a ton of time getting them up to speed. It is just little differences, but it seriously threw a wrench in our migration plans when so many of our scripts broke on BSD and most of my team stalled out. I really don't want to re-write everything myself and catch them up to speed. Personally, I have enough experience that when those little changes bite me, I just mutter a curse, fix it, and move on. But I can't carry the full team, and just the thought of what it would do to my large user base scares me.
2) We use a LOT of Red Hat (and Scientific Linux). We have REALLY had issues with systemd and weird issues surrounding it. We would prefer to stay on RH6, but it is incredibly annoying how fast companies have dropped support for it. For example, RStudio no longer supports RH6 for literally no good reason. We are finding it harder and harder to support application updates on RH6 because other vendors are dropping it. Even things like GitLab, which supports RH6, have dependencies for which we have to use IUS or Software Collections to get updates. Even Git (!) is so old that our user base has had issues and we have had to update/replace it internally. We had so many workarounds in place for our ownCloud that when we migrated to Nextcloud 11 we couldn't get it running right on RH6 and had to migrate it to a newer OS. While we have proper RH support and we utilize it for systemd bugs, they can be slow, and it can be painful trying to get things replicated and fixed. Sometimes it is just "well, that is a systemd problem upstream", to which my response is "Well then fix it. That is why I am paying for your support!". Don't get me wrong, they are usually pretty good, but we've had a few issues. We are /slowly/ migrating to 7. Honestly, the biggest thing about RH7 that causes cursing among myself and my team:
$ systemctl $APP stop && systemctl $APP start
admin: "!@#*&!@#!$%*$&@#!$^%^!$^!@*&$!!"
$ systemctl stop $APP && systemctl start $APP
Under init.d of course it was "service $APP start". Not only is that order ingrained into our memory and muscles, but it is a pain in the @$$ when you can't simply hit the up arrow and backspace over the last argument to run a different command. Whoever did this in systemd is a first-class @$$hole who must seriously hate admins... It is really bad right now since we are supporting a mixed environment of both init.d and systemd systems. Maybe it won't be so bad when we fully transition.
3) We are using Lubuntu 16.04 on our desktops and Ubuntu 16.04 Server on several of our systems. I have found the Ubuntu team FAR more skilled and adept at getting systemd problems fixed. We still find hardware issues from time to time, which is just freaking annoying. The systemd crowd says "Oh, it isn't systemd! It's your hardware!", to which we call BS. We have routinely proven that other, non-systemd OSes use the hardware just fine, and in many cases a different version of systemd works. Seriously, the biggest issue I have with systemd these days is that the developers all seem to be smug jerks who think they know more about my hardware setup than I do. I routinely prove them wrong. They also have the problem of declaring brand-new laptops "broken" because systemd doesn't support them properly (laptop lid-switch reporting is a common problem that systemd deals with quite poorly). I have come to have a GREAT deal of respect for the Ubuntu systemd devs, as they have always been helpful and have worked well with me to replicate and fix problems.
4) The BIGGEST problem I have with Ubuntu 16.04? It freaking updates all the damn time! I'm in an environment where I have to have security patches applied regularly, but I also can't reboot at will! I have users running jobs that may run for months at a time! Yet Ubuntu kicks out kernel updates nearly weekly. My RH and 14.04 boxes are stable and get a few updates from time to time, but the 16.04 boxes are CONSTANTLY complaining about needing to reboot for patches. Knock it off, Canonical, and stabilize 16.04 already! Jeez.
It hasn't been easy, it is slow going, and we've reported TONS of bugs so far upstream...but we believe that moving forward with 16.04 and RH7/SL7 is the best way forward at the moment.
Hope that helps.
~Stack~
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @08:53PM
I'm convinced the frequent kernel updates on 16.04 (even when there aren't security issues from upstream) are there to inconvenience people enough that they buy Canonical's live-patching service.
(Score: 2) by Magic Oddball on Tuesday February 07 2017, @05:37PM
I'm not surprised you didn't have much luck with the list at without-systemd.org — I remember trying to slog through it a few years ago after the Debian 'Jessie' upgrade borked my install beyond repair. It's a bit difficult to fix problems when systemd bugs also break access to even the lowest-level tools. :-p
I might be misunderstanding your described criteria, but it sounds to me like the often-overlooked systemd-free distro PCLinuxOS [pclinuxos.com] might be able to fulfill it.
Another Soylentil pointed me towards it a few years ago after the Debian disaster, and I have been using it steadily since then. I'd rate its stability, software availability, support, and speed of security-focused updates as being on par with or better than the major distros (Debian, Fedora, Ubuntu, OpenSUSE, etc.) at their best, before they were hit by systemd.
Either way, good luck finding a decent replacement; hopefully it won't be as painful or frustrating as it was for me.
(Score: 5, Insightful) by RedBear on Tuesday February 07 2017, @05:56PM
I've played around with a lot of non-Windows operating systems over the past 25 years. From that experience and research into all operating system platforms, I have trouble understanding why there is even a question of which direction to go with something like a server running a database-driven website. Linux may have "more stable" flavors available, but anyone who has paid any attention to the evolution of Linux should have noticed that it has spent its entire life (a quarter century now) being semi-experimental and bleeding edge. Great for people who like to tinker and get support for the latest desktop hardware and people playing with making Beowulf clusters. And now there is the Great SystemD Split, causing at least a third of even long-time Linux fans to see Linux as having lost the plot.
On the other hand, there is a community that has spent the last 30 or more years, in a very general sense, concentrating entirely on having a culture of long-term stability both in the community ideals and guiding principles, and in the software development. A community that has never spent much time and effort trying to be bleeding edge but rather on creating solid, stable, efficient code bases for many of the most important servers that need to remain stable for years while getting hammered with unimaginable amounts of traffic on the Internet. Of course this is the BSD community. Extremely impressive and popular security software such as OpenSSH and PF emanated from the OpenBSD community. Skills you may have learned using a BSD distribution 30 years ago are still applicable to FreeBSD today, and skills you learn today will still be applicable to FreeBSD or one of its variants 30 years from now. I have always seen the BSDs as a platform that it truly makes sense to spend the time to master if you have a real production server environment to run.
If you have enough faith to expect this site to still exist 30 years from now, I honestly don't get why there would even be a question of which direction to go. I'm sure the Solaris-based stuff is also magical and powerful, but think about the likelihood that any future volunteer maintainers will be familiar with it, and how stable the future development might be. For me, the best possible choice for this task is FreeBSD's stable branch. You lay out all the good reasons yourself in the quote above.
But, that's just my opinion, and as always I could be quite wrong.
¯\_ʕ◔.◔ʔ_/¯ LOL. I dunno. I'm just a bear.
... Peace out. Got bear stuff to do. 彡ʕ⌐■.■ʔ
(Score: 4, Informative) by TheRaven on Wednesday February 08 2017, @01:42AM
For me, the best possible choice for this task is FreeBSD's stable branch.
I hope this was a typo, and you meant release branch. The stable branch in FreeBSD is the middle branch in terms of stability. Unlike current (head) it is guaranteed to maintain a stable ABI (and KBI) over its lifetime, but it doesn't get any of the release testing - it's the branch from which the next minor stable release will be cut, after people who enjoy testing things have tested it. Please don't run it on production unless it's on an expendable machine (if you've built in fault tolerance at a higher level, please do run it on a few machines and report any issues though, because that's how we try to make sure bugs don't end up in the releases). In practice, it's probably fine, but it comes with absolutely no claims of reliability.
sudo mod me up
(Score: 2) by RedBear on Wednesday February 08 2017, @06:35AM
Ah, yes, I forgot FreeBSD has that odd naming where "stable" just means "it probably won't blow up too frequently". That's a bit of a confusing misnomer. Been too long since I've actually used FreeBSD in any meaningful way. I only meant that the most stable branch of FreeBSD should be used, obviously. Which I expect people like NCommander to already understand much better than I do.
¯\_ʕ◔.◔ʔ_/¯ LOL. I dunno. I'm just a bear.
... Peace out. Got bear stuff to do. 彡ʕ⌐■.■ʔ
(Score: 2) by TheRaven on Thursday February 09 2017, @12:00AM
Ah, yes, I forgot FreeBSD has that odd naming where "stable" just means "it probably won't blow up too frequently".
Stable doesn't mean anything about how frequently it will blow up. Stable means the ABI / KBI is stable. If you build a kernel module against any version of the stable branch, then it will work with any future version of the same stable branch. Similarly, any piece of userspace code compiled for a release of the stable branch will work with any future version of the stable branch.
sudo mod me up
(Score: 2) by RedBear on Thursday February 09 2017, @04:34AM
Somebody needs to take some things a little bit less seriously.
¯\_ʕ◔.◔ʔ_/¯ LOL. I dunno. I'm just a bear.
... Peace out. Got bear stuff to do. 彡ʕ⌐■.■ʔ
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @08:02PM
yes, really.
systemd: I haven't experienced any/most of the issues you mentioned in your previous comment about systemd. journald corruption? That's either old-ass versions or distro/hardware/config specific, I would guess. Every once in a while I run into something where I don't know how to get the info I want, but I'm sure it's just my ignorance, as it has been in the past. I'm not discounting others' worries about the Unix philosophy, etc., but all that's above my pay grade. For my use cases (different types of servers, firewalls, desktops, etc.) it's easy and nice 98% of the time. For the other 2%, output to syslog and use the tools/methods you're used to. Read the Arch wiki again.
systemd, part 2: or not. Whatever works.
arch: Almost completely problem-free. A somewhat competent sysadmin who pays attention to terminal output will have minimal issues. Everything I manage is Arch (for a few years now), and I'm probably much less experienced overall than the Soylent staff. Gentoo is probably cool too, but my *guess* (no real experience with Gentoo) is that you will have more problems with their ebuilds than you will with Arch packages. People think of Arch as bleeding edge, but it's really just current stable upstream versions with minimal changes. Not a big deal. Distros that bastardize the hell out of everything are way more issue-prone, IMHO.
Use the linux-grsec kernel. Arch is supposed to be adding PIE, but it's currently stalled. I think we need to get organized and pay some of the devs... Have staging servers or LXC containers if you're worried about something breaking.
bsd? I have nothing against BSD (except I don't love the license), but I think you're kidding yourself and/or taking the vast power of the GNU+Linux ecosystem for granted. BSD is fine if it meets your requirements and you want to jack with it, but chances are it's missing a few somethings you will need, and you just may not find out until you're in too deep for comfort.
good luck.
(Score: 2) by bzipitidoo on Tuesday February 07 2017, @09:28PM
When did Arch make systemd optional? I quit Arch some years ago, when they pushed systemd through their rolling update process and talked like systemd was going to be a requirement for the rest of eternity. The update to systemd was long and complicated, and somewhere towards the end it went wrong, leaving me with an unbootable mess of an installation.
One problem I ran into with systemd was that the default setting for the system logs was compressed storage. I forget which distro that was; I think it was OpenSuse. To view the most recent lines, I couldn't do "tail /var/log/messages" anymore and get results instantly; I had to wait half a minute for journalctl to decompress the current log. That was really annoying when all I wanted was to view the last few lines over and over to check whether this or that fix had resolved some issue the system was having.
(Score: 1) by TheSage on Wednesday February 08 2017, @06:03AM
https://sourceforge.net/projects/archopenrc/files/arch-openrc/ [sourceforge.net]
works for me. For more details see
http://systemd-free.org/ [systemd-free.org]
(Score: 0) by Anonymous Coward on Wednesday February 08 2017, @03:33PM
I'm not saying journalctl is perfect, but the reason journalctl was taking so long is that you were not restricting output to the current boot. That means you were opening all of the journal since the beginning of time -- especially large if you didn't set up a low size limit for the journal. If you specify the current boot with "journalctl -b" and then hit "end", you'll be at the last few lines reasonably quickly. It's all in the Arch wiki, though maybe it wasn't when you were having issues. I also didn't move to systemd until it had been in use in Arch for a little while, since I knew nothing about it except that it was an important part of the system that I didn't want to have to try to fix.
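(A few other journalctl flags cover most of the old tail workflows; these are standard options:)
$ journalctl -b          # this boot only
$ journalctl -e          # open the pager at the end
$ journalctl -n 20       # last 20 lines, like tail
$ journalctl -f          # follow, like tail -f
$ journalctl -u sshd -b  # one unit, this boot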
(Score: 2) by bzipitidoo on Wednesday February 08 2017, @07:09PM
Thanks for the tip. And yet, -b seems a hackish workaround. I vaguely recall journalctl has another flag to get the most recent messages. I did not know of the Arch wiki at the time I was wrestling with systemd, and I do remember that the maintainers of the distro had not troubled to document things. I had to find out the hard way where the system logs had been moved and how to access them: I read a lot of man pages and did a lot of Internet searches to learn that the system had been changed to systemd, that systemd was an init replacement, and then to learn of the existence of journalctl and that it was the new way to read the logs. A simple document explaining these basic facts and listing common Sys V commands with their systemd equivalents would have saved me a lot of time.
One thing that MS doesn't get is that gratuitously changing the interface is bad. I thought the Linux world was a bit more mindful of that, until this systemd fiasco. What could possibly justify breaking "tail /var/log/syslog" and "tail /var/log/messages" as ways to view the logs? They could have at least put a short text file in their place -- just have /var/log/messages contain something like "journalctl is the new command to view the logs. See the man page for details." -- and then I would have seen at least that when I tried to view the logs the old way, instead of being left to wonder why the log files were missing and what was wrong with the system. But no, they didn't even do that. The move to systemd was badly done. Now /var/log/syslog has come back in Ubuntu, at least, if it was ever removed. Even the IPv4 to IPv6 move didn't uproot the old ways: you can still type "ifconfig", you don't have to use the newer "ip addr".
(Score: 2) by AudioGuy on Tuesday February 07 2017, @08:52PM
Servers are much simpler than user machines and you really don't want a lot of extra cruft that is geared toward user machines on a server.
We always run a development machine, which is a natural place to keep a Gentoo base system that can then be rolled out to other machines as binaries (literally just rsync from the base machine to the others).
The very first build takes a while, but after that you are just updating your 'world', which is remembered.
As someone else pointed out, it is easy to do things like pin a specific version of an application (and override just about anything).
You only install what you need, with the specific features you need, so the only security updates you need to worry about are the ones for your specific apps -- and even some of those may not be needed if they affect only a feature you are not using. Gentoo stays very up to date on security.
The standard init system fixes many of the problems of the older init systems while retaining the good parts (the default init is OpenRC, not systemd).
Docs are good. Support is good. Its a major distro, not a derivative, which is also used by Google so not likely to go away, or get too weird.
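(The more formal version of this rsync workflow is Portage's own binary-package support; a sketch, with the binhost URL as a placeholder:)
# on the build box: keep binary packages of everything you merge
$ echo 'FEATURES="buildpkg"' >> /etc/portage/make.conf
$ emerge --sync && emerge -uDN @world
# on the target boxes: install from the build box's package directory
$ echo 'PORTAGE_BINHOST="http://build.example.com/packages"' >> /etc/portage/make.conf
$ emerge -uDNK @world    # -K: use binary packages only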
(Score: 3, Interesting) by mechanicjay on Tuesday February 07 2017, @10:36PM
I've been running Gentoo on my Desktop for almost 2 years now. Once per week, I do an 'emerge --sync' followed by an 'emerge --ask @world'. The only breakage I've had in this time was a bad autofs package, which was really easy to roll back to the previous working version. For me it's been as stable as OpenSuse which I run on my Laptops (and personal server). I need to rebuild my personal VPS soon, and I'm thinking of running Gentoo there as well.
That said, I am sensitive to the coordination and build-time issues involved in a large cluster of machines. It seems to me that doing builds on a 'test' box and pushing to a binary package host which all the other machines pull from is not too bad of a solution.
My VMS box beat up your Windows box.
(Score: 2) by AudioGuy on Tuesday February 07 2017, @10:52PM
I have been doing this for the last 13 years, in a situation much more difficult.
I build on a machine at home and rsync to remote machines (real hardware, not virtual) where a single error could leave a machine unbootable and me 800 miles away. So I deal with that with two bootable partitions, and various fail-safe scripts.
But at Soylent there are separate admin interfaces, so none of that sort of stuff needs to be done - it is much simpler and safer.
(Score: 2) by DonkeyChan on Tuesday February 07 2017, @09:14PM
I mean, you guys know what you're doing. The only thing I would add or adjust in your stack is to build it all into containers. Find a distro that gives you pleasant system control then don't worry about whether your stack runs on it. Focus on the OS being what you need it to be for maintenance etc and containerize the stack.
(Score: 4, Insightful) by mechanicjay on Tuesday February 07 2017, @10:39PM
I've been getting a ton of experience with Docker at my day job over the last 6 months or so, and I've been thinking about building a rehash image as a proof of concept. I've found a ton of freedom in being able to completely divorce the application from the underlying OS. I agree, something like this should be on the table as well.
My VMS box beat up your Windows box.
(Score: 2) by DonkeyChan on Tuesday February 07 2017, @11:28PM
Yeah! Whatever flavor containerization you choose to go with, being able to divorce the system from the OS like you said is an investment in future maintenance and a hedge against regression bugs.
(Score: 3, Informative) by NCommander on Wednesday February 08 2017, @06:51AM
Docker is useful for staging and separation, but not for security. If you can get root in a privileged LXC container (which is what Docker used, last I checked), you can break out. LXC for a long time had a rather horrid security rating because privileged containers were the only way to go.
Still always moving
(Score: 2) by mechanicjay on Wednesday February 08 2017, @07:32AM
Yeah, there are a whole lot of security considerations with regard to root breaking out. Docker is moving pretty quickly at this point, and this has been a huge area of focus for the last year or so. A few sane things to do: make sure the process running in the container is not running as container root, and make sure you're using uid/gid mapping, which further isolates container processes in their own uid/gid range. Combine those with, of course, never running a container with the --privileged flag, and I don't think you're any worse off than running some horrid old web application on Apache directly on the OS. That's at least my take on it, having wrapped about a half-dozen horrid old apps in Docker in the last year. YMMV.
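(The uid/gid mapping bit is the Docker daemon's userns-remap setting; a minimal sketch -- the Dockerfile USER is whatever unprivileged account your image has:)
$ cat > /etc/docker/daemon.json <<'EOF'
{ "userns-remap": "default" }
EOF
$ systemctl restart docker   # container root now maps to an unprivileged host uid
# and inside the image, drop root as well, e.g. a Dockerfile line: USER www-data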
My VMS box beat up your Windows box.
(Score: 0) by Anonymous Coward on Friday February 10 2017, @10:28AM
For what it's worth, jails on FreeBSD were designed specifically to confine root (via isolation; the original paper is titled "Jails: Confining the omnipotent root"), and with few exceptions that aren't really related to the jail implementation itself, they have not been broken out of yet -- to the point that PHK has quipped that he'd "be interested to hear from people who manage to jailbreak", as he doesn't believe it's insecure. The few exceptions (with their mitigations/fixes noted in parentheses) include modifying the inside-jail shell that the outside-jail root attaches to (can be mitigated completely by never using jail_attach and sshing into the jail instead), using symlinks to access resources outside of jails (can be mitigated by using chroot or ZFS datasets), and spoofing IP addresses (not possible in default jails, because raw socket access is not allowed; if you need socket access, VIMAGE provides a full per-jail network stack).
(Score: 2) by mrpg on Tuesday February 07 2017, @09:45PM
For webmail I switched from squirrelmail to roundcube:
https://roundcube.net/ [roundcube.net]
(Score: 2) by NCommander on Wednesday February 08 2017, @12:31PM
This actually came up before; I suggested that migration awhile ago, but most of the staff are happy with SquirrelMail.
Still always moving
(Score: 3, Interesting) by Azuma Hazuki on Tuesday February 07 2017, @09:50PM
I love Gentoo, I've run production machines on it (very well, thank you--a little savvy makes it rock-stable), but for this? FreeBSD all the way. Most of the reasons have been said already and by more articulate, experienced people than me.
I am "that girl" your mother warned you about...
(Score: 0) by Anonymous Coward on Wednesday February 08 2017, @06:23AM
I've been hosting websites for 20 years now... tried just about everything, but I've had the best luck with Gentoo and FreeBSD!
(Score: 0) by Anonymous Coward on Wednesday February 08 2017, @07:31AM
Either Debian or BSD. Roll dice.
(Score: 1) by garrulus on Wednesday February 08 2017, @11:14AM
nt
(Score: 3, Informative) by bart9h on Wednesday February 08 2017, @07:33PM
I'm using Devuan since alpha1, and I like it a lot.
Not much to say here: it's basically the Debian I was used to, but without systemd.
The vast majority of the packages are the same as Debian, they only touch what is needed to keep systemd out of it.
(Score: 1) by krait6 on Friday February 10 2017, @08:35AM
I don't blame anyone for wanting to avoid systemd; I deal with systems both with and withtout it and there are benefits and drawbacks to each. My goal in this reply is to respond to the OP's query to discuss distro choices without systemd.
A starting recommendation is "try before you commit": load various free distros into VMs and try 'em out to evaluate which ones fit your needs. I once loaded the top 25 free software distros listed at Distrowatch.com into VMs to run (simple) tests on some software and completed all that in 3 days. [Except for Gentoo: that took days to compile the GUI within the VM.]
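If it helps, here's one hedged way to spin up such a test VM with QEMU/KVM; the ISO and disk file names are placeholders:

    # Create a throwaway disk image and boot the distro's installer ISO:
    qemu-img create -f qcow2 test.qcow2 20G
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -cdrom some-distro.iso \
        -drive file=test.qcow2,format=qcow2 -boot d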
Debian: defaults to systemd since Jessie, but systemd can be removed and sysvinit put in its place and kept that way with an APT pin (a sketch follows the links below):
https://www.debian.org/releases/jessie/amd64/release-notes/ch-information.en.html#systemd-upgrade-default-init-system [debian.org]
http://without-systemd.org/wiki/index.php/How_to_remove_systemd_from_a_Debian_jessie/sid_installation [without-systemd.org]
http://people.skolelinux.org/pere/blog/How_to_stay_with_sysvinit_in_Debian_Jessie.html [skolelinux.org]
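Here's a sketch of that pin, following the skolelinux post above; verify against current Debian documentation before relying on it, since package names can shift between releases:

    # Install the sysvinit packages, then pin systemd-sysv so APT will not
    # pull it back in as the init system:
    apt-get install sysvinit-core sysvinit-utils
    cat > /etc/apt/preferences.d/local-pin-init <<'EOF'
    Package: systemd-sysv
    Pin: release o=Debian
    Pin-Priority: -1
    EOF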
Debian supports running without systemd for the time being, but there's no guarantee that this is sustainable, because several upstream projects expect systemd on the target system (KDE, GNOME, Network Manager, etc). For more info on those details, see: https://bugs.debian.org/727708 [debian.org]
The main benefit of Debian is that it supports upgrade-in-place. Few distros support this as well as Debian does.
Debian boxes can be upgraded over Tor after installing apt-transport-tor. All packages require GPG signatures from developers, and package checksums are verified as each package is installed.
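The Tor transport is a small change; a hedged sketch (it assumes a Tor daemon is running locally, and the mirror hostname is just an example):

    # Install the transport, then switch sources.list entries to tor+http:
    apt-get install apt-transport-tor
    # e.g. in /etc/apt/sources.list:
    #   deb tor+http://deb.debian.org/debian jessie main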
Distribution releases have 3-year support + 2-year "LTS" support (from another team) after that.
Debian supports AppArmor; I use it on some Debian systems, and I definitely prefer AppArmor over SELinux for a lot of reasons.
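For reference, here's a sketch of turning AppArmor on for a jessie/stretch-era Debian box (newer releases may enable it by default, so check first):

    # Install the userspace tools and the shipped profiles:
    apt-get install apparmor apparmor-profiles apparmor-utils
    # Activate it on the kernel command line, i.e. in /etc/default/grub:
    #   GRUB_CMDLINE_LINUX="... apparmor=1 security=apparmor"
    update-grub && reboot
    # After reboot, confirm which profiles are loaded:
    aa-status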
Debian now has support for ZFS as an add-on driver: (zfs-dkms): https://tracker.debian.org/pkg/zfs-linux [debian.org]
This means ZFS is usable in Debian but (AFAIK) isn't supported in the installer; it can be used otherwise. There are a couple of Debian Developers using ZFS as the root filesystem who have written about it:
http://changelog.complete.org/archives/9152-why-and-how-to-run-zfs-on-linux [complete.org]
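A hedged sketch of getting a non-root pool going (it assumes the contrib archive is enabled and matching kernel headers are available; the pool and device names are hypothetical):

    # Build the module via DKMS and load it:
    apt-get install linux-headers-$(uname -r) zfs-dkms zfsutils-linux
    modprobe zfs
    # Create a mirrored pool and a dataset on it:
    zpool create tank mirror /dev/sdb /dev/sdc
    zfs create tank/data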
Disclaimer: I'm involved in Debian development and Debian-based distros are primarily what I run.
Devuan: I haven't tried it, but I like the idea. I looked at the development model of depending on Git repos and wasn't thrilled with that. Others in the comments seem to like running it, and since it's based on Debian it's at least worth investigating; hopefully it has apt-transport-tor, GPG signatures from developers for uploads, checksums on packages that are verified on installation, etc. I haven't investigated how long releases are supported for, or how.
CentOS: 10-year release support is IMHO its distinguishing feature. It is possible to upgrade-in-place for servers, but the procedure does not look pleasant, and a "fresh install" is recommended over using it:
https://www.namhuy.net/3253/upgrade-centos-6-7.html [namhuy.net]
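For completeness, a very rough sketch of that in-place path, based on period CentOS wiki documentation; the tool and repo names may have changed since, so treat this as a pointer rather than a recipe, and test on a scratch box first:

    # Assess what will break, then run the upgrade tool against a 7.x repo:
    yum install preupgrade-assistant-contents redhat-upgrade-tool
    preupg
    redhat-upgrade-tool --network 7 \
        --instrepo=http://mirror.centos.org/centos/7/os/x86_64
    reboot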
I run a CentOS 6 system, so I know what that's like (it works, but it isn't pleasant: aging packages, plus Upstart, which is dead upstream and has some issues), and of course CentOS 7 uses systemd. *shrug*.
Arch Linux: IMHO rolling-based distros are fine for desktops, but I wouldn't want to run a rolling-based distro on a server, and friends of mine running Arch and Gentoo generally say the same.
Arch in particular has a lesser-known mechanism available to downgrade to a prior date, and this can be limited to a specified set of packages to be held back. This comes up often for a friend of mine dealing with incompatibilities between X and the proprietary Nvidia driver. For servers I really want the ability to downgrade a particular package to a particular version, and with Arch that can be done, but not as easily as I'd want.
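A sketch of that rollback mechanism via the Arch Linux Archive (archive.archlinux.org); the snapshot date, package name, and version are examples only:

    # Point pacman at a dated snapshot in /etc/pacman.d/mirrorlist:
    #   Server=https://archive.archlinux.org/repos/2017/02/01/$repo/os/$arch
    pacman -Syyuu            # sync down to that snapshot's versions
    # Hold specific packages back via /etc/pacman.conf:
    #   IgnorePkg = nvidia linux
    # Or downgrade a single package from the local cache:
    pacman -U /var/cache/pacman/pkg/nvidia-375.26-2-x86_64.pkg.tar.xz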
I (sometimes) run Arch myself, and I love the distro in a lot of ways. Arch plus the AUR has even more packages than Debian Unstable does, and that's saying a lot.
Gentoo: everyone I knew in the Linux User Groups who used to run Gentoo (including a few former Gentoo developers) has moved on and is running something else. I briefly ran it on a box back when the install started with a "stage1" compile from a live environment; I've tried it several times since then when testing things, and I have not been happy with the results. Basically, "your mileage may vary": there are Gentoo experts who can keep Gentoo systems running well, but for others it seems easy to get into trouble.
Slackware: I started with Slackware back in the '90s. Back then the recommended upgrade method was "wipe and reinstall", and that's why I was compelled to switch away from it. Today there are some mechanisms to upgrade Slackware machines, and if this distro is chosen I would recommend setting up a test system to understand the upgrade mechanism first. I don't know what security features (GPG sigs or checksums) Slackware might have, if any; last I knew, Slackware was all based on simple tarballs without any of that. Last I recall, Slackware was a minimalistic distro, so upgrading certain packages is likely to involve compiling from source.
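For what it's worth, the usual upgrade mechanism these days appears to be slackpkg; a hedged sketch (it assumes a mirror is uncommented in /etc/slackpkg/mirrors, and the release's UPGRADE.TXT should still be read first):

    # Fetch the signing key and the current package lists:
    slackpkg update gpg
    slackpkg update
    # Pull in packages new to the release, then upgrade everything else:
    slackpkg install-new
    slackpkg upgrade-all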
FreeBSD: I've run it occasionally but need (a lot) more experience with it before I think I'd have an informed opinion of it.
Good to ask this question now so that there's plenty of time for evaluation of the options.
(Score: 1) by pTamok on Sunday February 12 2017, @06:23PM
Coming to this late, and having skim-read the responses, I'll make a couple of points, one frivolous and one serious; frivolous first:
1) Why not V̶A̶X̶ O̶p̶e̶n̶ VSI OpenVMS? Long history, very stable. Which leads me on to...
2) If you are looking for experience, and possibly something to decorate your CV with, then choose what is best for that. I'm sure you can make pretty much anything work: a Beowulf cluster of Raspberry Pis, GNU/Hurd, Haiku, Inferno; the question is what interests you, and what is of most use to you. The answer to that question is likely to be different from the boring and sensible answer designed to give a secure, reliable, and high-performing site. FreeBSD might well be the best option if evaluated this way.
Whichever way you choose, thank you for persisting with the site.
(Score: 1) by trustn1 on Tuesday February 14 2017, @10:43AM
Hi,
Have you already chosen your new underlying OS?
Thanks