
posted by janrinok on Wednesday October 26 2016, @08:48AM
from the no-reboots dept.

LWN (formerly Linux Weekly News) reports

Canonical has announced the availability of a live kernel patch service for the 16.04 LTS release. "It's the best way to ensure that machines are safe at the kernel level, while guaranteeing uptime, especially for container hosts where a single machine may be running thousands of different workloads."

Up to three systems can be patched for free; the service requires a fee thereafter. There is a long FAQ about the service in this blog post; it appears to be based on the mainline live-patching functionality with some Canonical add-ons.
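
For reference, the mainline mechanism the service appears to build on is the kernel's livepatch (klp) API, which uses ftrace to redirect calls from a vulnerable function to a fixed replacement while the system keeps running. Below is a minimal sketch loosely modeled on the kernel's own samples/livepatch example, assuming the 4.x-era klp_register_patch()/klp_enable_patch() interface; it only illustrates the upstream mechanism, not Canonical's actual patch modules, and the cmdline_proc_show() replacement is just the sample's placeholder target.

    /* Minimal live-patch module sketch (assumes a 4.x-era klp API). */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/seq_file.h>
    #include <linux/livepatch.h>

    /* Replacement for cmdline_proc_show() in fs/proc/cmdline.c. */
    static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
    {
        seq_printf(m, "this has been live patched\n");
        return 0;
    }

    /* Map the old symbol name to the new function. */
    static struct klp_func funcs[] = {
        {
            .old_name = "cmdline_proc_show",
            .new_func = livepatch_cmdline_proc_show,
        }, { }
    };

    /* A NULL object name means the symbol lives in vmlinux itself. */
    static struct klp_object objs[] = {
        {
            .funcs = funcs,
        }, { }
    };

    static struct klp_patch patch = {
        .mod = THIS_MODULE,
        .objs = objs,
    };

    static int livepatch_init(void)
    {
        int ret;

        ret = klp_register_patch(&patch);
        if (ret)
            return ret;
        ret = klp_enable_patch(&patch);
        if (ret) {
            WARN_ON(klp_unregister_patch(&patch));
            return ret;
        }
        return 0;
    }

    static void livepatch_exit(void)
    {
        /* Disabling and unregistering reverts calls to the original code. */
        WARN_ON(klp_disable_patch(&patch));
        WARN_ON(klp_unregister_patch(&patch));
    }

    module_init(livepatch_init);
    module_exit(livepatch_exit);
    MODULE_LICENSE("GPL");
    MODULE_INFO(livepatch, "Y");

Once such a module is loaded, the patch can also be toggled from userspace via /sys/kernel/livepatch/<patch-name>/enabled, which is presumably the sort of hook a rollback feature would sit on top of.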

Another distro, not wanting to be left out of the recent abundance of limelight, has made some noise of its own.

Phoronix reports

KernelCare Is Another Alternative To Canonical's Ubuntu Live Kernel Patching

The folks from CloudLinux wrote in to remind us of their kernel patching solution, which they've been offering since 2014 and believe is a superior solution to Canonical's service. KernelCare isn't limited to just Ubuntu 16.04 but also works with Ubuntu 14.04 and other distributions such as CentOS/RHEL, Debian, and other enterprise Linux distributions.

Another big difference to Canonical's Livepatch is that KernelCare does support rollback functionality while Canonical doesn't appear to support it at this time. KernelCare can also handle custom patches, 32-bit support, and they share they plan [sic] to soon begin offering livepatching support for glibc, OpenSSL, and QEMU.

The downside though is that KernelCare appears to rely upon some binary blobs as part of its service. Pricing on KernelCare ranges from $25 to $45 USD per year depending upon the number of licenses being purchased.

[Details at] CloudLinux.com.


Original Submission

  • (Score: 3, Informative) by Geotti on Wednesday October 26 2016, @09:51AM

    by Geotti (1146) on Wednesday October 26 2016, @09:51AM (#418906) Journal

    Wait, so you want me to allow some entity that is not under my control to update my kernel? No. Fucking. Way! (binary blobs or not)

    • (Score: 5, Insightful) by Runaway1956 on Wednesday October 26 2016, @10:45AM

      by Runaway1956 (2926) Subscriber Badge on Wednesday October 26 2016, @10:45AM (#418917) Journal

      Just how literally should we take you? Should we assume that you write your own kernel, line by line, and that EVERYTHING is under your direct control?

      Computing is a world of compromises. All of us decide who to trust, and how much to trust them. All of us have been burned by misplaced trust. But, we trust SOMEONE, or we wouldn't be on computers at all. Those of us who are more paranoid tend to trust very few people, and we double check everything, compile our own binaries, and don't use any proprietary binary blobs. Those of us who are more gullible run Windows. The REAL paranoids don't trust anyone, and they are still fabbing their own CPUs, mainboards, GPUs, etc.

      Compromise. Get informed, decide who you trust, and stick with that decision until you have reason to change it.

      Personally, I "trust" nvidia for their binary blobs. It's not a blind trust; I know that nvidia has screwed the pooch a few times in its history. But, still, I use their blobs. I don't trust Microsoft, but for some strange reason I'm "trusting" Oracle and VirtualBox. Compromise.

      It might be good if we could make some kind of scale we could gauge ourselves on. I'm not totally paranoid, but I'm not the most gullible customer in the market either. On a scale of 1 to 10, with 10 being the most trusting of Microsoft sycophants, I hope that I'm closer to 2, 3, or 4 than to 10. Maybe you need to be at zero, for some reason.

      • (Score: 1, Interesting) by Anonymous Coward on Wednesday October 26 2016, @11:13AM

        by Anonymous Coward on Wednesday October 26 2016, @11:13AM (#418926)

        as a fellow paranoiac i can understand what he's saying as .. i'm not giving you or anyone a way to inject and later remove bits into my kernel. ... you know.. in an auto-update or live system that can get man-in-the-middle'd.

        and no, people past 30..40..50.. depends on intelligence and evilness... do not compromise. if you're suggesting one compromise, you have been compromised yourself. tolerating works however. ;)

        • (Score: 2) by Runaway1956 on Wednesday October 26 2016, @09:46PM

          by Runaway1956 (2926) Subscriber Badge on Wednesday October 26 2016, @09:46PM (#419168) Journal

          I do understand the problems with auto-updating. It was one of the things that turned me off of Ubuntu. I rather like the way my Gentoo runs. I ask it to check for updates. It comes up with three, or three hundred different updates. I can browse the list, and decide which, if any, I want to update. Debian is much the same - or it used to be. SystemD was leveraged in there somewhere, and the updates felt less "optional" after that.

          Auto-updating isn't really such a good idea anyway. Your MITM attack is a valid observation, quite apart from the fact that some updates just don't work as intended.

          I mentioned nvidia drivers in my last post. I've found it to be foolish to update the drivers immediately after they have been released. Wait a couple of weeks, then hit the forums to see how many people are bitching, and about what.

          When I still maintained Windows for the family, there was a service pack for WinXP. I think (almost certain) it was Service Pack 2. I grabbed it immediately and applied it. The computer went into an endless reboot cycle, and I didn't know how to break the cycle. I hit some forums and learned that SP2 tended to do that to some AMD CPUs. Had I waited a week, then hit the forums before updating, I would have saved myself a lot of bother.

        • (Score: 2) by Geotti on Thursday October 27 2016, @05:01AM

          by Geotti (1146) on Thursday October 27 2016, @05:01AM (#419287) Journal

          i'm not giving you or anyone a way to inject and later remove bits into my kernel. ... you know.. in an auto-update or live system that can get man-in-the-middle'd

          Thanks, I was sure this was blatantly clear.

    • (Score: 2, Insightful) by Anonymous Coward on Wednesday October 26 2016, @12:31PM

      by Anonymous Coward on Wednesday October 26 2016, @12:31PM (#418945)

      Your kernel package and hundreds of other packages are already coming from them. What's the difference?

      • (Score: 2) by VLM on Wednesday October 26 2016, @02:01PM

        by VLM (445) Subscriber Badge on Wednesday October 26 2016, @02:01PM (#418975)

        Assuming Debian/Ubuntu, when manually upgrading, apt-get will whine about the signature hierarchy, although you can override it.

        I have not looked at this kernel thingy to see if it's equally secured or, worst case, has something like an implied --force-yes as the default config.

    • (Score: 0) by Anonymous Coward on Wednesday October 26 2016, @07:52PM

      by Anonymous Coward on Wednesday October 26 2016, @07:52PM (#419115)

      If they were serious about security, they would update their packages to be something a tad more current. A few of the packages I rely on are 1-2 years out of date in Ubuntu. Many of them have serious security issues. *IF* they did that better than Debian, at this point they might be worth listening to.

      They went from a kick-ass distro to a kinda 'okayish' one to a 'crapish' one. The only reason I still use them is inertia and not feeling like fiddling with the stupid thing. Much like my Windows boxes.

  • (Score: 3, Insightful) by ledow on Wednesday October 26 2016, @11:31AM

    by ledow (5567) on Wednesday October 26 2016, @11:31AM (#418931) Homepage

    Ha ha ha ha ha!

    Linux Kernel, security by subscription, just buy our DLC licences to get this functionality?

    Do you have ANY IDEA of your prime customer base? They are going to throw a fit and just replicate this functionality for themselves without your interference.

    Linux kernel trampolines have been available since the days of Linux 2.0. If you're going to provide it "as a paid service", you might as well just pack up now. Maybe the Red Hat people would buy it - they have entirely different needs to general Ubuntu users, however.

    And within minutes of you issuing out a patch via this service, an equivalent one, via an equivalent free service, where I can review the code in question, will undoubtedly spring up.

    Binary blobs and DLC licensing... what a great match for a Linux distribution...

    • (Score: 2) by RamiK on Wednesday October 26 2016, @12:20PM

      by RamiK (1813) on Wednesday October 26 2016, @12:20PM (#418942)

      Maybe the Red Hat people would buy it

      That's not a maybe. Red Hat Enterprise Linux has been delivering kpatch, and SUSE has been pushing kGraft, since circa 2014. Oracle has been doing it with Ksplice since 2011, after buying it from Ksplice, Inc., which had been doing the same since 2008.

      they have entirely different needs to general Ubuntu users, however.

      Ubuntu is quite popular in cloud infrastructure, both for VMs and containers. Most developers aren't system admins and don't want to code around library versions, so after they get their project compiled and running on their Ubuntu workstations, they look for Ubuntu hosting services.

      Binary blobs and DLC licensing...

      It's what keeps Red Hat in the green. It's why there are so many Docker and AppImage projects out there. And yeah, Android...

      --
      compiling...
      • (Score: 1, Informative) by Anonymous Coward on Wednesday October 26 2016, @09:45PM

        by Anonymous Coward on Wednesday October 26 2016, @09:45PM (#419166)

        Forgetful me failed to include the story that mentioned that stuff.

        Previous: Kernel Live-Patching Moving into the Linux Kernel [soylentnews.org]

        That references other related stories as well.

        -- OriginalOwner_ [soylentnews.org]

        • (Score: 2) by RamiK on Thursday October 27 2016, @03:29AM

          by RamiK (1813) on Thursday October 27 2016, @03:29AM (#419263)

          Good to know about kexec.

          --
          compiling...
  • (Score: 3, Interesting) by VLM on Wednesday October 26 2016, @02:23PM

    by VLM (445) Subscriber Badge on Wednesday October 26 2016, @02:23PM (#418991)

    Does anyone care about reboots anymore other than "solution providers" who are convincing us a problem exists which they have a very complicated way to solve?

    I'm just saying, with virtualization and clustering technology being better (at least in the PC world, although today's IT still isn't as good as MVS or VMS was in the 80s, or at least the past is viewed through rose-colored glasses) than ever in my entire life, I simply couldn't care less about the rebooting problems systemd "solves" or this weird live patching thing "solves". It's just no longer relevant to my professional or home life due to technological improvements elsewhere...

    This would have kinda kicked ass in 1995 back when an entire ISP ran on a single utterly non-redundant P75 running an ancient linux (SLS? Slackware?), but it hasn't been '95 for some time now.

    I mean I can get into this game myself. ipchains was kind of a PITA in 1996-ish and I've been thinking of a way to completely change how everything works in Debian that would make ipchains really easy to use. Sure nobody gives a shit about ipchains but that never slowed down live kernel patching or systemd, so I bet I can pull this off. I hope you all don't mind I'm flipping all your OS inside out just to accomplish something that doesn't matter.

    I'm just saying, it's 20-frigging-16. When I upgrade the kernel (or any other damn thing, OS or app code) today, I have openstack spin up a copy of the most recent gold image and do whatever to create a new gold image with yer new kernel or new db or wtf (long openstack-specific process here). Then I spin up a new image with all the new stuff, tell ansible to set it up as a new WTF, add the configured and working new image to the TEST cluster or replace the TEST cluster entirely depending on what it does, and let TEST test or whatever I need to do. Assuming all is well with the world, I spin up and ansible up some new PROD images, and depending on the product/service I either add the new boxes to the cluster to see what happens or just shut down the old PROD images and replace them with new PROD images. It depends enormously on the specific technology and business demands and where/how data and state are stored, etc.

    You'll note I never reboot anything. Pretty much ever. (I'll sometimes reboot a new image while it's in TEST just to see if it actually reboots, LOL, but that's just Fing around.) It's 20-frigging-16, get with the program yo.

    Last time I ran an official business-critical app on single bare-metal hardware was ... in the 90s maybe? I distinctly remember, around the turn of the century, a coworker created a SPOF in our internal email system using spamassassin and an unfortunately configured single mysql server, and I got to clean it up when the inevitable happened; that's the last time I had to do cleanup on an architecture like that, although the last time I ran something like that was probably more recent, maybe early 00s? Certainly once I virtualized and NAS'ed I never ran anything BUT clustered, load-balanced systems...

    At home I only have one active DHCP server, but I could spin up an image pretty quick so I dunno if that counts. I've done experiments and non-business-critical stuff in a sloppy way, but that's precisely where, once again, no F's to give about rebooting, precisely because it's not an important system.

    I mean, why not trash linux and replace the latest kernel with MVS3.8j, because it's really important in 2016 that people be able to easily use punch card readers to control their OS. I mean, who cares about legacy users who are actually trying to "do stuff"; it's more important that a cheap hack be forced onto everyone.

    I mean I use FreeBSD so I encourage my competitors to continue to do stupid stuff making me look better, but...

    • (Score: 3, Interesting) by zugedneb on Wednesday October 26 2016, @02:40PM

      by zugedneb (4556) on Wednesday October 26 2016, @02:40PM (#418998)

      the main problem here is that whatever becomes popular, and gains momentum in terms of user base + becoming industrial norm, eventually gets the main feedback loop attached to psychology, and not technology, and the organization gets infested by sociopaths...

      enjoy ur 1337 freebsd distro, but I give u 5 to 10 more years of joy, with the distribution being rather thin at 10...

      --
      old saying: "a troll is a window into the soul of humanity" + also: https://en.wikipedia.org/wiki/Operation_Ajax
    • (Score: 3, Insightful) by LoRdTAW on Wednesday October 26 2016, @05:00PM

      by LoRdTAW (3755) on Wednesday October 26 2016, @05:00PM (#419043) Journal

      Pretty much. With this new virtualized cloud stuff, who needs to worry about rebooting? You have multiple virtual machines with a load balancer/proxy/whatever in front handing out connections to multiple systems in the back. Just take those systems down one at a time and plug in the new patched systems. Same goes for the host system: live migrate the VMs to a patched box, then patch and reboot. Let's not get started with systemd. Makes me laugh. Especially with people blathering about how systemderp was going to speed up boot times, as if boot time is somehow a metric that matters. I suppose those people spend half their day rebooting or something.

      • (Score: 4, Insightful) by VLM on Wednesday October 26 2016, @07:03PM

        by VLM (445) Subscriber Badge on Wednesday October 26 2016, @07:03PM (#419095)

        Just take those systems down one at a time and plug in the new patched systems.

        The biggest professional problem I have is legacy people who don't understand the game has changed.

        So in the old days you did forklift upgrades, moving entire racks of hardware in and out of centers; you couldn't physically install both at the same time, and switching back and forth took like a full working day. I did stuff like that for decades.

        So the business people got used to a 1am maintenance notification and a complete shutdown for 7 hours until 8am, and god help me, they code it into procedures or even law.

        Now I roll systems by flipping one IP address in a load balancer at a time, and it's best if I do it during prime time to find out if it works. If it doesn't, first, why wasn't it caught in DEV or TEST, but second, I flip an IP address and we're back. And I can roll 20 systems in 5 minutes, or 5 months if you insist on slow methodical conversion.

        But the business people just don't understand it, although they remember getting burned back in the forklift days, so it leads to a lot of stupid behavior that would have been useful in 1986, not so much in 2016.

      • (Score: 2) by maxwell demon on Thursday October 27 2016, @06:08AM

        by maxwell demon (1608) on Thursday October 27 2016, @06:08AM (#419302) Journal

        If you want a quick boot, install your system on SSD. And disable stuff you don't need (that's anyway a good idea for security reasons); a service that doesn't get started doesn't add to boot time.

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by LoRdTAW on Thursday October 27 2016, @12:42PM

          by LoRdTAW (3755) on Thursday October 27 2016, @12:42PM (#419376) Journal

          This. I upgraded my Thinkpad to an SSD, and a vanilla install of Linux Mint 17.2 (non-systemd) boots in about 10 seconds to the login screen. Plenty fast.

  • (Score: 0) by Anonymous Coward on Wednesday October 26 2016, @10:20PM

    by Anonymous Coward on Wednesday October 26 2016, @10:20PM (#419182)

    GPLv2, no fees...

    https://github.com/dynup/kpatch [github.com]