
posted by janrinok on Tuesday August 19 2014, @12:04PM
from the you-either-love-it-or-hate-it dept.

The good people over at InfoWorld have published a story outlining why they feel systemd is a disaster.

Excerpt from InfoWorld:

While systemd has succeeded in its original goals, it's not stopping there. systemd is becoming the Svchost of Linux—which I don't think most Linux folks want. You see, systemd is growing, like wildfire, well outside the bounds of enhancing the Linux boot experience. systemd wants to control most, if not all, of the fundamental functional aspects of a Linux system—from authentication to mounting shares to network configuration to syslog to cron. It wants to do so as essentially a monolithic entity that obscures what's happening behind the scenes.

 
  • (Score: 5, Insightful) by MrNemesis (1582) on Tuesday August 19 2014, @08:33PM (#83237)

    Well, the AC above has already said most of what I was going to say, but here's my slightly-differently-worded take on it.

    "Most of systemD's improvements benefit servers, not desktop users."

    An impenetrable boot process does not benefit servers. Needing /usr available after initrd does not benefit servers... unless of course having /usr over NFS is "holding it wrong". Even on slow-arse 2x7200rpm RAID1s, bootup time is about 10% of what POST time is... well, OK, 20% of POST time if this is a throwaway dev physical and you're cheaping out on softraid. Boot time on a server, as long as it's under a few minutes, is an utterly meaningless metric in my world. Even using appallingly stone-aged SysV init our virtuals boot in less than 30s, our physicals POST in 5mins... and boot for 30s.

    "Tracking service startup dependencies and ordering/waiting for things correctly."

    Maybe I'm just being stupid, but distros seem to have already made this a solved problem since forever ago. It's almost as if init had a concept of the varying levels at which things should run... I forget what it's called... jogfloors? Saunterplanes? Scamperdecks? Dash-storeys? Scurrytiers?
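
    For anyone who has genuinely never seen it: on a SysV box the ordering is nothing more exotic than symlink names (illustrative listing below; exact script names and numbers vary by distro), because init simply runs each runlevel's scripts in lexical order:

        # e.g. runlevel 3 on an RHEL-ish box: S = start, K = kill,
        # and the two-digit prefix sets the order (lower starts first)
        $ ls /etc/rc3.d/
        K50netconsole  S10network  S12rsyslog  S55sshd  S85httpd
        # so the network is up before syslog, and syslog before sshd/httpd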

    "Automatically just restarting any service that crashes."

    Like the AC, there's no way in hell you'd do this in our environment, and it's frowned on in Windows as well, other than for *really* flaky services that would generate excessive amounts of admin overhead otherwise (and already have exceptions in place for escalations). Our "important" prod boxes have some vastly expensive software on them to fulfil this purpose, our non-important boxes use common tools like monit (does the same thing but without the supposed benefit of a support contract)... but *none* of them will just restart a service without reporting it, judging impact and escalating if need be. For most important systems, if a daemon fails, one of the other servers takes over services until the duty engineer(s) can have a look at it and explain why it fell over in the first place - half the time when a service does fall over it's because of a non-transient error, so restarting will just lead to another crash and this achieves nothing but filling up your inbox and making your line manager have a minor freakout. Isolate the box and bring the service up elsewhere. Linux is one of the few OSes where that's *easy*.
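
    For the sake of concreteness, the monit version of that policy is only a few lines per service (illustrative stanza; the service name, pidfile, port and alert address are invented for the example): restart on failure, but report every event and give up rather than flap forever:

        # /etc/monit/conf.d/myapp (illustrative only; names and paths invented)
        set alert duty-engineer@example.com              # mail every event, not just restarts
        check process myapp with pidfile /var/run/myapp.pid
            start program = "/etc/init.d/myapp start"
            stop program  = "/etc/init.d/myapp stop"
            if failed port 8080 then restart             # restart on a failed health check
            if 3 restarts within 5 cycles then timeout   # stop retrying and escalate instead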

    "Improving syslog."

    There's nothing improved over bog-standard syslog that I can see. In fact, it's gone backwards. It's *less* useful on a systemd/journald system because you can't use standard tools to read it. When boot fails I don't even get to see a log.
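
    Concretely (illustrative commands; unit and log file names vary by distro): plain syslog is text, so any tool works, while journald's binary journal has to go through journalctl:

        # classic syslog: it's just text, use whatever you like
        grep sshd /var/log/syslog        # or /var/log/messages on RHEL-ish boxes
        tail -f /var/log/syslog

        # journald: /var/log/journal/ is binary, so grep/less/tail on the
        # files themselves get you nothing useful; you go via journalctl
        journalctl -u ssh.service        # messages for one unit
        journalctl -b                    # messages for the current boot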

    "Debugging a system... that has always been such a half-assed mess. Those are all things any admin would kill for."

    Well, we don't keep our systems up for 2yrs at a time for a start, we reboot at the very longest at least once a month and yet we're still somehow at over five nines for the last three years. RHEL and Debian system updates and admins tinkering with various bits of the system are still frequent enough that you're better off testing that the server comes up cleanly on a very regular basis, not when you're forced to by seventeen pending kernel updates. We've never had a "half-assed mess"; hey, if the NFS mount doesn't come up in time the boot process waits until it does, if it takes 2x longer than it usually does then the duty engineer gets a beep, and if it takes more than 10mins the duty engineer is going to get several loud phone calls from the escalation engineers. But... lo and behold... that almost never happens, and when it does it's always been due to hardware failure rather than services starting in the wrong order. Like AC said... parallel startup might solve problems on the desktop but for servers it's inconsequential when it doesn't happen and frequently problematic when it does.

    So... again... in the server space at least, what problems is systemd solving that haven't already been solved?

    --
    "To paraphrase Nietzsche, I have looked into the abyss and been sick in it."
  • (Score: 0) by Anonymous Coward on Tuesday August 19 2014, @11:38PM (#83291)

    Thank you. I'm far too busy and annoyed by the purported benefits of systemd (i.e. coprophagia: "it's good for you") to move beyond heart-felt, if sarcasm-laced, rebuttals. I hope your reasoned technical explanation of the real-world problems we face will result in the faces of a few people flushing the same color as their hat ;P