SoylentNews is people

posted by janrinok on Wednesday October 26 2016, @08:48AM   Printer-friendly
from the no-reboots dept.

LWN (formerly Linux Weekly News) reports

Canonical has announced the availability of a live kernel patch service for the 16.04 LTS release. "It's the best way to ensure that machines are safe at the kernel level, while guaranteeing uptime, especially for container hosts where a single machine may be running thousands of different workloads."

Up to three systems can be patched for free; the service requires a fee thereafter. There is a long FAQ about the service in this blog post; it appears to be based on the mainline live-patching functionality with some Canonical add-ons.
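For reference, enabling the service on a 16.04 machine looks roughly like this (sketch based on Canonical's announcement; the token placeholder stands in for the per-user token issued by the Livepatch portal):

```shell
# Livepatch is distributed as a snap; YOUR-TOKEN is a placeholder
# for the token obtained from Canonical's Livepatch portal.
sudo snap install canonical-livepatch
sudo canonical-livepatch enable YOUR-TOKEN

# Check which kernel patches have been applied live
canonical-livepatch status --verbose
```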

Another distro, not wanting to be left out of the recent abundance of limelight, has made some noise of its own.

Phoronix reports

KernelCare Is Another Alternative To Canonical's Ubuntu Live Kernel Patching

The folks from CloudLinux wrote in to remind us of their kernel patching solution, which they've been offering since 2014 and believe is superior to Canonical's service. KernelCare isn't limited to Ubuntu 16.04; it also works with Ubuntu 14.04, CentOS/RHEL, Debian, and other enterprise Linux distributions.

Another big difference from Canonical's Livepatch is that KernelCare supports rollback, which Canonical doesn't appear to offer at this time. KernelCare can also handle custom patches, offers 32-bit support, and the company plans to begin offering live patching for glibc, OpenSSL, and QEMU soon.

The downside, though, is that KernelCare appears to rely on some binary blobs as part of its service. Pricing ranges from $25 to $45 USD per year, depending on the number of licenses purchased.

[Details at] CloudLinux.com.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Insightful) by LoRdTAW on Wednesday October 26 2016, @05:00PM

    by LoRdTAW (3755) on Wednesday October 26 2016, @05:00PM (#419043) Journal

Pretty much. With this new virtualized cloud stuff, who needs to worry about rebooting? You have multiple virtual machines with a load balancer/proxy/whatever in front handing out connections to multiple systems in the back. Just take those systems down one at a time and plug in the new patched systems. Same goes for the host system: live-migrate the VMs to a patched box, then patch and reboot. Let's not get started with systemd. Makes me laugh, especially with people blathering about how systemderp was going to speed up boot times, as if boot time is somehow a metric that matters. I suppose those people spend half their day rebooting or something.

  • (Score: 4, Insightful) by VLM on Wednesday October 26 2016, @07:03PM

    by VLM (445) on Wednesday October 26 2016, @07:03PM (#419095)

    Just take those systems down one at a time and plug in the new patched systems.

    The biggest professional problem I have is legacy people who don't understand the game has changed.

So in the old days you did forklift upgrades, moving entire racks of hardware in and out of data centers; you couldn't physically install both at the same time, and switching back and forth took something like a full working day. I did stuff like that for decades.

So the business people get used to a 1am maintenance notification and a complete shutdown for seven hours until 8am, and, god help me, they code it into procedures or even law.

Now I roll systems by flipping one IP address in a load balancer at a time, and it's best if I do it during prime time to find out if it works. If it doesn't, first, why wasn't it caught in DEV or TEST, but second, I flip an IP address and we're back. And I can roll 20 systems in 5 minutes, or in 5 months if you insist on slow methodical conversion.
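The rolling pattern described above can be sketched abstractly as: drain one backend from the pool, patch it, health-check it, and put it back before touching the next one. This is a toy model for illustration only; the `LoadBalancer` class and the `patch`/`healthy` helpers are invented for this sketch, not any real load-balancer API.

```python
# Toy sketch of a rolling upgrade behind a load balancer.
# At most one backend is ever out of rotation at a time, and a
# failed health check rolls back to the old, still-working box.

class LoadBalancer:
    def __init__(self, backends):
        self.active = set(backends)

    def drain(self, backend):
        self.active.discard(backend)   # stop handing out new connections

    def restore(self, backend):
        self.active.add(backend)       # flip the "IP address" back in

def patch(backend):
    # Stand-in for the real work (apply updates, reboot, etc.)
    return backend.replace("-old", "-patched")

def healthy(backend):
    # Stand-in for a real health check against the new system
    return "patched" in backend

def rolling_upgrade(lb, backends):
    result = []
    for b in backends:
        lb.drain(b)                    # take one system down at a time
        nb = patch(b)
        if healthy(nb):
            lb.restore(nb)             # new box goes into rotation
            result.append(nb)
        else:
            lb.restore(b)              # roll back: old box still serves
            result.append(b)
    return result

lb = LoadBalancer(["web1-old", "web2-old", "web3-old"])
print(rolling_upgrade(lb, ["web1-old", "web2-old", "web3-old"]))
# ['web1-patched', 'web2-patched', 'web3-patched']
```

The point of the sketch is the invariant VLM relies on: the pool never loses more than one backend, so a bad patch costs one IP flip, not a seven-hour outage window.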

But the business people just don't understand it, although they remember getting burned back in the forklift days, so it leads to a lot of stupid behavior that would have been useful in 1986, not so much in 2016.

  • (Score: 2) by maxwell demon on Thursday October 27 2016, @06:08AM

    by maxwell demon (1608) on Thursday October 27 2016, @06:08AM (#419302) Journal

If you want a quick boot, install your system on an SSD. And disable stuff you don't need (a good idea anyway, for security reasons); a service that doesn't get started doesn't add to boot time.

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 2) by LoRdTAW on Thursday October 27 2016, @12:42PM

      by LoRdTAW (3755) on Thursday October 27 2016, @12:42PM (#419376) Journal

This. I upgraded my Thinkpad to an SSD, and a vanilla install of Linux Mint 17.2 (non-systemd) boots in about 10 seconds to the login screen. Plenty fast.