
posted by martyb on Tuesday October 20 2015, @11:12AM   Printer-friendly
from the wish-us-luck! dept.

Hello fellow Soylentils!

[Update:] We survived all three days of reboots without major issues. Many thanks to all who prepped the systems, prodded things along, and were on standby to deal with any unforeseen issues!

We were informed by Linode (our hosting provider) that they needed to perform some maintenance on their servers. This forces a reboot of our virtual servers which may cause the site (and other services) to be temporarily unavailable.

Here is the three-day reboot schedule along with what runs on each server:

Status  Day  Date        Time      Server     Affects
Done    Tue  2015-10-20  0200 UTC  boron      DNS, Hesiod, Kerberos, Staff Slash
Done    Tue  2015-10-20  0500 UTC  beryllium  IRC, MySQL, Postfix, Mailman, Yourls
Done    Wed  2015-10-21  0500 UTC  sodium     Primary Load Balancer
Done    Wed  2015-10-21  0500 UTC  magnesium  Backup Load Balancer
Done    Wed  2015-10-21  0700 UTC  neon       Production Back End, MySQL NDB cluster
Done    Thu  2015-10-22  0200 UTC  hydrogen   Production Front End, Varnish, MySQL, Apache, Sphinx
Done    Thu  2015-10-22  0500 UTC  helium     Production Back End, MySQL NDB, DNS, Hesiod, Kerberos
Done    Thu  2015-10-22  0900 UTC  fluorine   Production Front End, slashd, Varnish, MySQL, Apache, ipnd
Done    Thu  2015-10-22  1000 UTC  lithium    Development Server, slashd, Varnish, MySQL, Apache

We apologize in advance for any inconvenience and appreciate your understanding as we try to get things up and running following each reboot.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by VLM (445) on Tuesday October 20 2015, @12:56PM (#252269)

    At $workplace I enjoy being able to halt DB server #7 with transparent failover, clone it on the NAS, and rename the clone "test DB". I can then upgrade or otherwise F around with the test DB, and either swap it in for DB #2 in production once I trust the changes, or delete the test DB image and start over. Then I more or less do the same with puppet, then unleash the puppet.
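    The workflow above boils down to one move: prepare a full copy off to the side, experiment on the copy, then swap it into the live role in a single atomic step so rollback stays cheap. A minimal sketch of that pattern, using local directories and a symlink as hypothetical stand-ins for the NAS clone and the production pointer (none of these paths come from the comment):

    ```shell
    set -eu
    workdir="$(mktemp -d)"
    cd "$workdir"

    # Stand-in for the live DB server's data volume.
    mkdir db7
    echo "v1" > db7/version

    # "Clone it on the NAS": all experiments happen on the copy only.
    cp -r db7 db_test
    echo "v2" > db_test/version   # the upgrade, applied to the clone

    # Production initially points at the original...
    ln -sfn db7 current
    # ...and once the clone is trusted, swap it in atomically.
    ln -sfn db_test current

    cat current/version           # now serves the upgraded copy
    ```

    The `ln -sfn` replacement is the key design choice: readers of `current` see either the old target or the new one, never a half-switched state, and the untouched `db7` remains available as an instant rollback.
    
    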

    You could integrate it into one box vertically, but things get complicated when you mix multiple people doing multiple things on multiple projects and then do IP address level operations to swap test/dev/prod all around.

    At legacy companies and sites, the middlemen get in the way of virtual infrastructure just as much as they did in the physical era, but cloud-i-ness doesn't have to be as screwed up as the old days. So it's "no big deal" to spin up virtual servers as part of day-to-day operations, unless the legacy middlemen are still standing in the way and try to turn creating a simple little image into some kind of insane capex purchase project.

    A good analogy from the old days, when I worked in a dinosaur pen: purchasing more mainframe DASD was a major departmental project, but this is the era of a secretary picking up a box of blank floppy disks on the way to work, so there are very different mindsets about how you swap things around or otherwise operate.
