
posted by Fnord666 on Sunday May 10 2020, @06:47PM   Printer-friendly
from the stormy-weather dept.

System administrator Chris Siebenmann has found that modern versions of systemd can cause an unmount storm during shutdowns:

One of my discoveries about Ubuntu 20.04 is that my test machine can trigger the kernel's out of memory killing during shutdown. My test virtual machine has 4 GB of RAM and 1 GB of swap, but it also has 347 NFS[*] mounts, and after some investigation, what appears to be happening is that in the 20.04 version of systemd (systemd 245 plus whatever changes Ubuntu has made), systemd now seems to try to run umount for all of those filesystems all at once (which also starts a umount.nfs process for each one). On 20.04, this is apparently enough to OOM[**] my test machine.

[...] Unfortunately, so far I haven't found a way to control this in systemd. There appears to be no way to set limits on how many unmounts systemd will try to do at once (or in general how many units it will try to stop at once, even if that requires running programs). Nor can we readily modify the mount units, because all of our NFS mounts are done through shell scripts by directly calling mount; they don't exist in /etc/fstab or as actual .mount units.

[*] NFS: Network File System
[**] OOM: Out of Memory

We've been here before and there is certainly more where that came from.

Previously:
(2020) Linux Home Directory Management is About to Undergo Major Change
(2019) System Down: A systemd-journald Exploit
(2017) Savaged by Systemd
(2017) Linux systemd Gives Root Privileges to Invalid Usernames
(2016) Systemd Crashing Bug
(2016) SystemD Mounts EFI pseudo-fs RW, Facilitates Permanently Bricking Laptops, Closes Bug Invalid
(2015) tmux Coders Asked to Add Special Code for systemd
(2015) A Technical Critique of Systemd
(2014) Devuan Developers Can Be Reached Via vua@debianfork.org
(2014) Systemd-resolved Subject to Cache Poisoning


Original Submission

 
  • (Score: 2) by tekk on Monday May 11 2020, @12:39AM (3 children)

    by tekk (5704) Subscriber Badge on Monday May 11 2020, @12:39AM (#992573)

    Other init systems are less obsessively parallel. The problem is that it tried to spawn a few hundred unmount instances at once on a system with (for these days) relatively little RAM.

  • (Score: 2) by PinkyGigglebrain on Monday May 11 2020, @12:58AM

    by PinkyGigglebrain (4458) on Monday May 11 2020, @12:58AM (#992583)

    Thank you for the reply and info.

    One more item on my list of "why systemd will NEVER be on a server I set up" reasons.

    --
    "Beware those who would deny you Knowledge, For in their hearts they dream themselves your Master."
  • (Score: 5, Informative) by rleigh on Monday May 11 2020, @12:07PM (1 child)

    by rleigh (4887) on Monday May 11 2020, @12:07PM (#992767) Homepage

    Parallelising unmounting is also (as a general rule) completely pointless. You've got to consider the I/O resulting from each umount, and also that you have to process the graph of mounts from leaf nodes up to the root. It's 100% reliable to unmount in reverse mount order, in sequence (something like 'tac /proc/mounts | while read -r dev mountpoint rest; do umount "$mountpoint"; done' — the mountpoint is the second field of each line; IIRC the old Debian initscripts did this when I last worked on them). It's usually not going to be much slower than parallelising it, since the work is I/O bound, and you're not introducing any obscure failure cases.
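
    A dry-run sketch of that reverse-order approach (the sample file and its contents below are made up for illustration, and echo stands in for the real umount):

    ```shell
    # Hypothetical dry run: walk a mounts table in reverse order and
    # "unmount" one filesystem at a time. In /proc/mounts the fields are
    # device, mountpoint, fstype, options, dump, pass -- the mountpoint
    # is the SECOND field, so read it into its own variable.
    unmount_in_reverse() {
        tac "$1" | while read -r dev mountpoint fstype rest; do
            echo "umount $mountpoint"   # swap echo for the real umount
        done
    }

    # Fake mounts table: parents are always mounted before their
    # children, so reversing the order unmounts children first.
    cat > /tmp/mounts.sample <<'EOF'
    /dev/sda1 / ext4 rw 0 0
    fileserver:/home /home nfs rw 0 0
    fileserver:/home/data /home/data nfs rw 0 0
    EOF

    unmount_in_reverse /tmp/mounts.sample
    ```

    Run as-is this just prints 'umount /home/data', then 'umount /home', then 'umount /' — the nested NFS mount goes away before its parent, which is the whole point of processing the list leaf-first.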

    You also don't strictly need to unmount. Switch each mountpoint to readonly, which will flush any pending writes, and then power off with them all mounted readonly once everything is synced.
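
    A minimal dry-run sketch of that remount-readonly idea (echo again stands in for the real commands; a real shutdown path would run this as root and then power off):

    ```shell
    # Hypothetical sketch: instead of unmounting, flip every mount
    # read-only (which flushes pending writes), then sync. Nothing is
    # forked per filesystem beyond what the loop itself prints.
    tac /proc/mounts | while read -r dev mountpoint fstype rest; do
        echo mount -o remount,ro "$mountpoint"
    done
    echo sync
    ```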

    Unix is all about having simple and composable pieces which can be assembled into larger more complex functionality. From the system calls to the userspace tools. Anyone who wants to learn should be able to understand how any part of the system functions. The old way may have been imperfect, but it was unique compared with proprietary systems in that you could see every command used to boot the system, from the initramfs all the way to bringing up the GUI. That was powerful, and empowering: anyone could tweak it to meet their needs. Declarative syntax might be superficially "simpler" but we've lost a great deal of what made Linux, Linux.

    • (Score: 4, Insightful) by TheRaven on Monday May 11 2020, @12:49PM

      by TheRaven (270) on Monday May 11 2020, @12:49PM (#992788) Journal

      Parallelising unmounting is also (as a general rule) completely pointless. You've got to consider the I/O resulting from each umount, and also that you have to process the graph of mounts from leaf nodes up to the root.

      It's worse than that. You're typically going to be hitting kernel locks when you mount or unmount a filesystem, so even if you run 300 umount processes, you'll almost certainly have them all waiting on the same in-kernel lock. You could probably improve performance by having a single umount process make all the umount system calls in sequence, rather than creating an entire new process for each filesystem.
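
      One way to approximate that without writing any C is to hand every mountpoint to a single umount(8) invocation, which then issues the umount() system calls itself; util-linux umount accepts multiple targets. A hypothetical dry-run sketch (echo shows the one command that would run):

      ```shell
      # Collect every NFS mountpoint, reversed so children come before
      # parents, and feed them all to ONE umount invocation instead of
      # forking a process per filesystem. Dry run via echo.
      nfs_mounts=$(tac /proc/mounts | awk '$3 == "nfs" || $3 == "nfs4" { print $2 }')
      echo umount $nfs_mounts
      ```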

      --
      sudo mod me up