
SoylentNews is people

posted by Fnord666 on Sunday May 10 2020, @06:47PM   Printer-friendly
from the stormy-weather dept.

System administrator Chris Siebenmann has found that modern versions of systemd can cause an unmount storm during shutdown:

One of my discoveries about Ubuntu 20.04 is that my test machine can trigger the kernel's out of memory killing during shutdown. My test virtual machine has 4 GB of RAM and 1 GB of swap, but it also has 347 NFS[*] mounts, and after some investigation, what appears to be happening is that in the 20.04 version of systemd (systemd 245 plus whatever changes Ubuntu has made), systemd now seems to try to run umount for all of those filesystems all at once (which also starts a umount.nfs process for each one). On 20.04, this is apparently enough to OOM[**] my test machine.

[...] Unfortunately, so far I haven't found a way to control this in systemd. There appears to be no way to set limits on how many unmounts systemd will try to do at once (or in general how many units it will try to stop at once, even if that requires running programs). Nor can we readily modify the mount units, because all of our NFS mounts are done through shell scripts by directly calling mount; they don't exist in /etc/fstab or as actual .mount units.

[*] NFS: Network File System
[**] OOM: Out of memory.
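Since systemd offers no knob for this, one conceivable workaround is to unmount the NFS filesystems yourself, with a cap on concurrency, before systemd gets to them. The sketch below is purely illustrative: `bounded_nfs_umount`, the `UMOUNT_CMD` override, and the default cap of 8 are all invented for this example and are not part of systemd or Ubuntu.

```shell
# Hypothetical helper, not part of systemd or Ubuntu: unmount NFS mount
# points with bounded parallelism instead of all at once.
# Reads /proc/mounts-format lines on stdin; set UMOUNT_CMD=echo to dry-run.
bounded_nfs_umount() {
    max_parallel=${1:-8}   # cap on concurrent umount.nfs processes
    # Column 3 of /proc/mounts is the filesystem type, column 2 the
    # mount point; sort -r puts child mounts (/a/b) before parents (/a).
    awk '$3 == "nfs" || $3 == "nfs4" { print $2 }' \
      | sort -r \
      | xargs -r -n 1 -P "$max_parallel" "${UMOUNT_CMD:-umount}"
}
```

On a real system this would be fed `< /proc/mounts` from a late shutdown script; GNU xargs's `-P` flag is what bounds the number of umount processes running at once.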

We've been here before and there is certainly more where that came from.

Previously:
(2020) Linux Home Directory Management is About to Undergo Major Change
(2019) System Down: A systemd-journald Exploit
(2017) Savaged by Systemd
(2017) Linux systemd Gives Root Privileges to Invalid Usernames
(2016) Systemd Crashing Bug
(2015) tmux Coders Asked to Add Special Code for systemd
(2016) SystemD Mounts EFI pseudo-fs RW, Facilitates Permanently Bricking Laptops, Closes Bug Invalid
(2015) A Technical Critique of Systemd
(2014) Devuan Developers Can Be Reached Via vua@debianfork.org
(2014) Systemd-resolved Subject to Cache Poisoning


Original Submission

 
  • (Score: 2) by rleigh on Monday May 11 2020, @05:52PM (3 children)


    In the field I work in, every system requirement has to have an associated FMEA (failure modes and effects analysis), which covers the hardware and software mitigations to be taken. It's tedious, but it ensures that all of the common and not-so-common failure modes have been thoroughly explored by a whole team of people, and that appropriate mitigations have been implemented.

    Do you think the systemd developers have done this, or anything remotely like this? No, neither do I. They don't care about stuff like that.

    And yet... having deliberately placed themselves in the most safety-critical part of the system, that's exactly what they should be doing.

    Whenever you parallelise something, you've got to put an upper bound on the parallelism, and you may want to lower that bound if the system can't cope. Look at how, e.g., ZFS balances I/O: it continually monitors the available bandwidth to each device and adjusts the I/O load on them to maximise throughput, and it also cares about responsiveness. If you start e.g. a scrub, it will consume all available disk bandwidth, but it has a very clever algorithm which slowly ramps up the utilisation over several minutes and backs off quickly if any other I/O requests come in.

    There's no reason systemd couldn't do this on unmount. It doesn't need to parallelise everything; it can start slow, monitor the time each umount takes, the completion rate, and the system resources used, and back right off if things start stalling. But as mentioned elsewhere in the thread, this is a place where parallelisation is almost pointless. Sometimes the simple solution is the best solution: you can safely and reliably unmount everything in a one-line shell script, so why can't systemd do something that simple?
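    The start-slow, back-off-fast scheme described above could be sketched in a few lines of shell. This is purely illustrative: `adaptive_unmount`, `UMOUNT_CMD`, `SLOW_SECS`, and `MAX_PARALLEL` are all invented for this example, and systemd implements nothing like it.

```shell
# Hypothetical sketch of ramping concurrency up while unmounts finish
# quickly and dropping back to serial when one stalls. Reads mount points
# on stdin, one per line; set UMOUNT_CMD=echo to dry-run.
adaptive_unmount() {
    slow_secs=${SLOW_SECS:-5}    # a batch slower than this triggers backoff
    max=${MAX_PARALLEL:-16}      # hard ceiling on concurrency
    parallel=1                   # start serial
    while read -r mnt; do
        # Collect up to $parallel mount points for this round.
        batch="$mnt"
        i=1
        while [ "$i" -lt "$parallel" ] && read -r extra; do
            batch="$batch
$extra"
            i=$((i + 1))
        done
        start=$(date +%s)
        printf '%s\n' "$batch" \
          | xargs -r -n 1 -P "$parallel" "${UMOUNT_CMD:-umount}"
        elapsed=$(( $(date +%s) - start ))
        if [ "$elapsed" -ge "$slow_secs" ]; then
            parallel=1                      # fast backoff
        elif [ "$parallel" -lt "$max" ]; then
            parallel=$((parallel * 2))      # slow ramp-up
        fi
    done
}
```

    A real implementation would also track per-device completion rates and memory pressure, but even this crude batch-timing loop would avoid launching hundreds of umount.nfs processes at once.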

  • (Score: 0) by Anonymous Coward on Monday May 11 2020, @08:51PM


    They definitely didn't do anything near that. They have less than 50% function coverage and less than 38% statement coverage. I can't even imagine them trying to get anywhere near the truly required edge, branch, and condition coverage. Give me some MC/DC, fuzzing, and property testing. But no, they do none of that, and then act surprised when the bugs people predicted and warned them about crop up.

  • (Score: 0) by Anonymous Coward on Monday May 11 2020, @09:02PM


    Shell scripts are anathema to systemd. Any admin can debug one with a plain editor, or even test it interactively by copy and paste, then edit lines in shell history (a basic REPL). That's not part of the systemd vision.

    We already concluded that the IBM (and RH before them) plan is to add complexity so that only they can touch the steering wheel, while everyone else has to buy support contracts and training and "enjoy" the ride. And as you said in the past, other projects bent over, dreaming that they will still matter in the future and that everything will be roses by then, when Rome does not pay traitors (that is the part I add).

    Free Software licenses like the GPL require distribution of the "source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities". Corporations found the loophole: the license says nothing about making such a big mess that people cannot actually make use of their freedoms. In some projects the mess is so big that even small companies cannot cope with it, much less fork the project to try to bring it back to sanity.

  • (Score: 2) by driverless on Tuesday May 12 2020, @01:29AM


    Your mention of FMEA is exactly the point I was making about software that ends up costing $1,000 per line of code. I've worked on SIL 2 and 3 systems, and the amount of effort required for even a simple system is insane; no standard commercial or open-source system could be developed that way. Sure, the result is highly reliable, but only if you're prepared to invest the massive amounts of time and money that doing it that way requires.

    Anyway, as I mentioned earlier, I'm not trying to defend systemd, just pointing out that the mere fact that it's in theory possible to build something to (say) SIL 3 doesn't mean it's practical for most software.