A Debian user recently discovered that systemd prevents skipping a scheduled fsck during boot:
With init, skipping a scheduled fsck during boot was easy: you just pressed Ctrl+C. It was obvious! Today I was late for an online conference. I got home, turned on my computer, and systemd decided it was time to run fsck on my 1TB hard drive. OK, I'll just skip it, right? Well, Ctrl+C does not work, Esc does not work, nothing seems to work. I Googled for an answer on my phone but found nothing. So, is there some mysterious set of commands they came up with to skip an fsck, or is it yet another flaw?
One user chimed in with a hack to work around the flaw, but it involved specifying an argument on the kernel command line. Another user described this so-called "fix" as being "Pretty damn inconvenient and un-discoverable", while yet another pointed out that the "fix" merely prevents "systemd from running fsck in the first place", and it "does not let you cancel a systemd-initiated boot-time fsck which is already in progress."
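The argument in question is presumably systemd's documented fsck.mode= switch, appended to the kernel command line (for example by editing the boot entry in GRUB); the kernel image and root device below are placeholders:

    # fsck.mode=skip tells systemd-fsck not to run at all; as noted above,
    # it does not cancel a check already in progress. The kernel image and
    # root device here are placeholders.
    linux /vmlinuz root=/dev/sda1 ro quiet fsck.mode=skip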
Further investigation showed that this is a known bug with systemd that was first reported in mid-2011, and remains unfixed as of late December 2014. At least one other user has also fallen victim to this bug.
How could a severe bug of this nature even happen in the first place? How can it remain unfixed over three years after it was first reported?
(Score: 3, Informative) by sjames on Sunday December 21 2014, @09:14AM
In my testing, it also refuses to mount a btrfs filesystem in degraded mode. It dumps you to the emergency shell every time. When I researched the issue, I saw that the same problem exists for soft raid. Thus far, no solution has been offered.
(Score: 0) by Anonymous Coward on Sunday December 21 2014, @10:21AM
this shit just gets better and better.
(Score: 0) by Anonymous Coward on Monday December 22 2014, @12:01AM
The 'it' you're referring to would be the mount command. BTRFS is an experimental filesystem, and its error handling and recovery code is not yet fully robust. It was a conscious decision on the part of the BTRFS devs to make the user say "yes, really mount it degraded." That way, the user knows they are in dangerous territory and should probably be checking their most recent backups.
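For illustration, opting in explicitly looks something like this (the device and mount point are placeholders):

    # Explicitly say "yes, really mount it degraded"; /dev/sdb1 and
    # /mnt/data stand in for your own device and mount point.
    mount -o degraded /dev/sdb1 /mnt/data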
Absolutely, positively nothing to do with systemd.
(Score: 2) by sjames on Monday December 22 2014, @12:52AM
Sorry, wrong. I had degraded set as an option in fstab because I want it to mount even if a disk is dead, so it can email me the bad news and keep running in the meantime. Once in the emergency shell, simply running mount -a successfully mounts all btrfs subvolumes. You are a bit behind the times: btrfs has been around for several years and is moving into prime-time use. The RAID 5 and 6 modes are seriously not ready for production use, but RAID 1 mode works just fine.
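For reference, the relevant fstab entry looked something like this (the UUID is a placeholder):

    # The degraded option asks btrfs to mount even with a failed
    # member disk; the UUID here is a placeholder.
    UUID=0123abcd-...  /srv/data  btrfs  defaults,degraded  0  0

    # From the emergency shell, this mounted all btrfs subvolumes:
    mount -a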
Looking at the debug logs, btrfs never even attempts to mount the volume, despite my explicitly stating that I want it mounted even with drives missing.
When I replaced systemd with SysV init, it started working perfectly. That is rather strong proof that the problem was systemd and only systemd.