
posted by janrinok on Wednesday October 26 2016, @08:48AM
from the no-reboots dept.

LWN (formerly Linux Weekly News) reports

Canonical has announced the availability of a live kernel patch service for the 16.04 LTS release. "It's the best way to ensure that machines are safe at the kernel level, while guaranteeing uptime, especially for container hosts where a single machine may be running thousands of different workloads."

Up to three systems can be patched for free; the service requires a fee thereafter. There is a long FAQ about the service in this blog post; it appears to be based on the mainline live-patching functionality with some Canonical add-ons.
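
For context, enabling the service on a 16.04 machine boils down to installing the snap-packaged client and registering a token from Canonical. Here is a minimal sketch, assuming the snap-based canonical-livepatch client described in the FAQ; the token string is a placeholder for the one tied to your Ubuntu account:

```python
# Minimal sketch of enabling Canonical Livepatch on Ubuntu 16.04 LTS,
# assuming the snap-packaged client; see the FAQ linked above for the
# authoritative steps. The token is a placeholder.
import subprocess

def enable_livepatch(token: str) -> None:
    # Install the livepatch client (shipped as a snap on 16.04).
    subprocess.run(["sudo", "snap", "install", "canonical-livepatch"], check=True)
    # Register this machine; the free tier covers up to three systems.
    subprocess.run(["sudo", "canonical-livepatch", "enable", token], check=True)
    # Show which kernel fixes have been applied live, no reboot involved.
    subprocess.run(["canonical-livepatch", "status"], check=True)

if __name__ == "__main__":
    enable_livepatch("YOUR_LIVEPATCH_TOKEN")
```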

Another distro, not wanting to be left out of the recent abundance of limelight, has made some noise of its own.

Phoronix reports

KernelCare Is Another Alternative To Canonical's Ubuntu Live Kernel Patching

The folks from CloudLinux wrote in to remind us of their kernel-patching solution, which they've been offering since 2014 and believe is superior to Canonical's service. KernelCare isn't limited to Ubuntu 16.04; it also works with Ubuntu 14.04, CentOS/RHEL, Debian, and other enterprise Linux distributions.

Another big difference from Canonical's Livepatch is that KernelCare supports rollback, which Canonical doesn't appear to offer at this time. KernelCare can also handle custom patches and 32-bit systems, and the company says it plans to begin offering live-patching support for glibc, OpenSSL, and QEMU soon.

The downside, though, is that KernelCare appears to rely on some binary blobs as part of its service. Pricing ranges from $25 to $45 USD per year, depending on the number of licenses purchased.

[Details at] CloudLinux.com.


Original Submission

 
  • (Score: 3, Interesting) by VLM (445) on Wednesday October 26 2016, @02:23PM (#418991)

    Does anyone care about reboots anymore other than "solution providers" who are convincing us a problem exists which they have a very complicated way to solve?

    I'm just saying, with virtualization and clustering technology being better than ever in my entire life (at least in the PC world, although today's IT still isn't as good as MVS or VMS was in the 80s, or at least the past is viewed through rose-colored glasses), I simply couldn't care less about the rebooting problems systemd "solves" or this weird live patching thing "solves". It's just no longer relevant to my professional or home life due to technological improvements elsewhere...

    This would have kinda kicked ass in 1995 back when an entire ISP ran on a single utterly non-redundant P75 running an ancient linux (SLS? Slackware?), but it hasn't been '95 for some time now.

    I mean I can get into this game myself. ipchains was kind of a PITA in 1996-ish and I've been thinking of a way to completely change how everything works in Debian that would make ipchains really easy to use. Sure nobody gives a shit about ipchains but that never slowed down live kernel patching or systemd, so I bet I can pull this off. I hope you all don't mind I'm flipping all your OS inside out just to accomplish something that doesn't matter.

    I'm just saying, it's 20-frigging-16. When I upgrade the kernel (or any other damn thing, OS or app code) today, I have OpenStack spin up a copy of the most recent gold image, do whatever to create a new gold image with yer new kernel or new db or wtf (long OpenStack-specific process here). Then I spin up a new image with all the new stuff, tell Ansible to set it up as a new WTF, add the configured and working new image to the TEST cluster or replace the TEST cluster entirely depending on what it does, and let TEST test or whatever I need to do. Assuming all is well with the world, I spin up and Ansible up some new PROD images; depending on the product/service I add the new boxes to the cluster to see what happens, or just shut down the old PROD images and replace them with new PROD images (see the sketch after this comment). It depends enormously on the specific technology and business demands and where/how data and state are stored, etc.

    You'll note I never reboot anything. Pretty much ever. (I'll sometimes reboot a new image while it's in TEST just to see if it actually reboots, LOL, but that's just F'ing around.) It's 20-frigging-16, get with the program, yo.

    The last time I ran an official business-critical app on single bare-metal hardware was ... in the 90s, maybe? I distinctly remember, around the turn of the century, a coworker created a SPOF in our internal email system using SpamAssassin and an unfortunately configured single MySQL server, and I got to clean it up when the inevitable happened; that's the last time I had to do cleanup on an architecture like that, although the last time I ran something like that was probably more recent, maybe the early 00s? Certainly once I virtualized and NAS'ed I never ran anything BUT clustered, load-balanced systems... At home I only have one active DHCP server, but I could spin up an image pretty quick, so I dunno if that counts. I've done experiments and non-business-critical stuff in a sloppy way, but that's precisely where, once again, there are no F's to give about rebooting, precisely because it's not an important system.

    I mean, why not trash Linux and replace the latest kernel with MVS 3.8j, because it's really important in 2016 that people be able to easily use punch-card readers to control their OS? I mean, who cares about legacy users who are actually trying to "do stuff"; it's more important that a cheap hack be forced onto everyone.

    I mean, I use FreeBSD, so I encourage my competitors to continue doing stupid stuff that makes me look better, but...
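
A minimal sketch of the rolling-replacement pattern VLM describes above; every cloud and load-balancer call here is a hypothetical placeholder standing in for OpenStack, Ansible, and load-balancer specifics, not a real API:

```python
# Hypothetical sketch of a no-reboot rolling upgrade: bake a new gold image
# with the new kernel, boot fresh instances from it, swap them into the load
# balancer, and retire the old instances. `cloud` and `lb` are placeholder
# handles, not OpenStack or Ansible objects.

def roll_cluster(cloud, lb, old_instances, new_kernel):
    image = cloud.build_gold_image(kernel=new_kernel)  # placeholder: image bake
    for old in old_instances:
        new = cloud.boot_instance(image)               # fresh box with the new kernel
        cloud.configure(new)                           # placeholder: "Ansible it up"
        lb.add_backend(new)                            # new box starts taking traffic
        lb.wait_until_healthy(new)
        lb.remove_backend(old)                         # drain the old box
        cloud.delete_instance(old)                     # nothing is ever rebooted in place
```

The point of the pattern is that the kernel version rides along with the image, so "patching" is just replacing instances behind the balancer.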

  • (Score: 3, Interesting) by zugedneb (4556) on Wednesday October 26 2016, @02:40PM (#418998)

    The main problem here is that whatever becomes popular and gains momentum, in terms of user base plus becoming the industrial norm, eventually gets its main feedback loop attached to psychology rather than technology, and the organization gets infested by sociopaths...

    Enjoy your 1337 FreeBSD distro, but I give you 5 to 10 more years of joy, with the distribution being rather thin at 10...

    --
    old saying: "a troll is a window into the soul of humanity" + also: https://en.wikipedia.org/wiki/Operation_Ajax
  • (Score: 3, Insightful) by LoRdTAW (3755) on Wednesday October 26 2016, @05:00PM (#419043)

    Pretty much. With this new virtualized cloud stuff, who needs to worry about rebooting? You have multiple virtual machines with a load balancer/proxy/whatever in front handing out connections to multiple systems in the back. Just take those systems down one at a time and plug in the new patched systems. Same goes for the host system: live-migrate the VMs to a patched box, then patch and reboot. Let's not get started on systemd. Makes me laugh. Especially people blathering about how systemderp was going to speed up boot times, as if boot time is somehow a metric that matters. I suppose those people spend half their day rebooting or something.
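
A minimal sketch of the host-side step described above (drain a host by live-migrating its guests, then patch and reboot the now-empty host); the hypervisor calls are hypothetical placeholders rather than a real libvirt or cloud API:

```python
# Hypothetical sketch: patch a cluster of hypervisor hosts one at a time by
# live-migrating guests away before each reboot. Every method on `hypervisor`
# is an illustrative placeholder, not a real API.

def drain_and_patch(host, target_host, hypervisor):
    """Move every VM off `host`, then patch and reboot it."""
    for vm in hypervisor.list_vms(host):
        hypervisor.live_migrate(vm, target_host)  # guests keep running throughout
    hypervisor.patch_kernel(host)                 # e.g. install the updated kernel
    hypervisor.reboot(host)                       # only the empty host reboots
    hypervisor.wait_until_healthy(host)

def rolling_host_patch(hosts, hypervisor):
    """Patch a whole cluster host by host; workloads never stop."""
    patched = hosts[0]
    drain_and_patch(patched, hosts[1], hypervisor)  # get one patched host first
    for host in hosts[1:]:
        drain_and_patch(host, patched, hypervisor)  # move guests onto patched capacity
        patched = host
```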

    • (Score: 4, Insightful) by VLM (445) on Wednesday October 26 2016, @07:03PM (#419095)

      Just take those systems down one at a time and plug in the new patched systems.

      The biggest professional problem I have is legacy people who don't understand the game has changed.

      So in the old days you did forklift upgrades, moving entire racks of hardware in and out of data centers; you couldn't physically install both at the same time, and switching back and forth took like a full working day. I did stuff like that for decades.

      So the business people got used to a 1am maintenance notification and a complete shutdown for 7 hours until 8am, and god help me, they code it into procedures or even law.

      Now I roll systems by flipping one IP address in a load balancer at a time, and it's best if I do it during prime time to find out if it works. If it doesn't, first, why wasn't it caught in DEV or TEST, but second, I flip an IP address and we're back. And I can roll 20 systems in 5 minutes, or 5 months if you insist on a slow, methodical conversion.

      But the business people just don't understand it, although they remember getting burned back in the forklift days, so it leads to a lot of stupid behavior that would have been useful in 1986, not so much in 2016.

    • (Score: 2) by maxwell demon (1608) on Thursday October 27 2016, @06:08AM (#419302)

      If you want a quick boot, install your system on an SSD. And disable stuff you don't need (that's a good idea for security reasons anyway); a service that doesn't get started doesn't add to boot time (see the small sketch after this comment).

      --
      The Tao of math: The numbers you can count are not the real numbers.
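
A small sketch of the "disable what you don't need" advice, assuming a systemd-based machine; the unit names are examples only, and the commands are standard systemd-analyze/systemctl invocations:

```python
# Sketch: find slow boot-time services and disable the ones you don't need.
# Assumes a systemd-based system; the unit names below are examples only --
# review each one before disabling it on a real machine.
import subprocess

UNNEEDED = ["bluetooth.service", "cups.service", "ModemManager.service"]

def show_boot_cost():
    # Rank units by how long they took during the last boot.
    subprocess.run(["systemd-analyze", "blame"], check=True)

def disable_unneeded():
    for unit in UNNEEDED:
        # Stop the unit now and keep it from starting at the next boot.
        subprocess.run(["sudo", "systemctl", "disable", "--now", unit], check=True)

if __name__ == "__main__":
    show_boot_cost()
    disable_unneeded()
```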
      • (Score: 2) by LoRdTAW (3755) on Thursday October 27 2016, @12:42PM (#419376)

        This. I upgraded my Thinkpad to an SSD and a vanilla install of Linux Mint 17.2 (non-systemd) boots in about 10 seconds to the login screen. Plenty fast.