
posted by cmn32480 on Monday May 30 2016, @06:44AM   Printer-friendly
from the fed-up-with-the-UNIX-take-over dept.

The spread of systemd continues, now actively pushed by its developers onto other projects, such as tmux:

"With systemd 230 we switched to a default in which user processes started as part of a login session are terminated when the session exists (KillUserProcesses=yes).

[...] Unfortunately this means starting tmux in the usual way is not effective, because it will be killed upon logout."

It seems the methods already in use (daemon, nohup) are not good enough for them, so the handling of processes after logout has to change at their request and on their terms. They don't even engage in a discussion about the general issue; they just pop up with the "solution" already decided. And what's the "reason" all this started rolling? dbus & GNOME coders can't manage a clean logout, so it must be handled for them.

Just a "concidence" systemd came to the rescue and every other project like screen or wget will require changes too, or new shims like a nohup will need to be coded just in case you want to use with a non changed program. Users can probably burn all the now obsolete UNIX books. The systemd configuration becomes more like a fake option, as if you don't use it you run into the poorly programmed apps for the time being, and if they ever get fixed, the new policy has been forced into more targets.

Seen at lobsters 1 & 2, where some BSD people look pissed off, to put it mildly. Red Hat, please, just fork and do your own thing, leaving the rest of us in peace. Debian et al, wake up before RH-signed RPMs become a hard dependency.


Original Submission

 
  • (Score: 2) by bitstream on Monday May 30 2016, @01:42PM

    by bitstream (6144) on Monday May 30 2016, @01:42PM (#352621) Journal

    I suspect the inherent problem is packetization and, as a consequence, latency. Interprocess communication is affected by the scheduler and so on. An open file handle can usually cope with byte-by-byte usage, but a network connection would likely become a mess in such a scenario. Perhaps if the (network) connection were handled by the kernel and the pathway consisted of circuit connections, like in ATM, then it could work without serious issues.

    I simply suspect anyone trying network audio, even internally within a computer, will be fighting the basic architecture, and thus all approaches will be tainted in some way.

  • (Score: 2) by Scruffy Beard 2 on Monday May 30 2016, @04:21PM

    by Scruffy Beard 2 (6030) on Monday May 30 2016, @04:21PM (#352671)

    I believe that JACK does the network audio first; then, if you want to hear the output, you resample it for your soundcard's clock.

    The packetization would have a mostly predictable latency because the bandwidth used is predictable. As for scheduling latency, JACK wants to run with "real-time" process priority. The Debian configuration script warns that this can cause crashes if you are low on memory (I left it disabled).

    I never did get it to work across the network: multicast failed for some reason I never had the time to troubleshoot.
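    For anyone who wants to poke at the same setup, the pieces looked roughly like this from memory (a sketch only: "studio-box" is a placeholder hostname, and the flags differ a bit between jack1 and jack2):

        # realtime limits the Debian package offers to set up (/etc/security/limits.d/audio.conf)
        @audio - rtprio 95
        @audio - memlock unlimited

        # machine with the soundcard: normal ALSA backend, realtime requested
        jackd -R -d alsa -d hw:0

        # pull audio from the remote machine over netjack
        jack_netsource -H studio-box

        # on the remote machine ("studio-box"): run jackd with the network backend
        jackd -d netone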

  • (Score: 2) by sjames on Monday May 30 2016, @09:07PM

    by sjames (2882) on Monday May 30 2016, @09:07PM (#352757) Journal

    Some professional audio distribution systems use raw Ethernet packets in order to reduce overhead and latency. They also use real-time scheduling.

    If the network audio runs for a long time, you also run into problems with sample clock disagreement. You have to have at least a small buffer so you can drop or duplicate samples to sweep that under the rug, without it being audible to a human or risking underflow/overflow.
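    For a rough sense of scale (illustrative numbers only): two nominally 48 kHz cards that disagree by 100 ppm drift apart by about 48000 × 0.0001 ≈ 5 samples per second, i.e. a few hundred samples a minute, which is why the receiver has to keep nudging its buffer by dropping or duplicating samples (or resampling) if the stream runs for any length of time.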