
posted by janrinok on Thursday February 23, @06:32PM   Printer-friendly

While app development is faster and easier, security is still a concern:

In a report last year, silicon design automation outfit Synopsys found that 97 percent of codebases in 2021 contained open source, and that in four of 17 industries studied – computer hardware and chips, cybersecurity, energy and clean tech, and the Internet of Things (IoT) – open source software (OSS) was in 100 percent of audited codebases. The other verticals had open source in at least 93 percent of theirs. It can help drive efficiency, cost savings, and developer productivity.

"Open source really is everywhere," Fred Bals, senior technical writer at Synopsys, wrote in a blog post about the report.

That said, the increasing use of open source packages in application development also creates a path for threat groups that want to use the software supply chain as a backdoor to myriad targets that depend on it.

The broad use of OSS packaging in development means that often enterprises don't know exactly what's in their software. Having a lot of different hands involved increases complexity, and it's hard to know what's going on in the software supply chain. A report last year from VMware found that concerns about OSS included having to rely on a community to patch vulnerabilities, and the security risks that come with that.

Varun Badhwar, co-founder and CEO of Endor Labs – a startup working to secure OSS in app development – called it "the backbone of our critical infrastructure." But he added that developers and executives are often surprised by how much of their applications' code comes from OSS.

Badhwar noted that 95 percent of all vulnerabilities are found in "transitive dependencies" – open source code packages that are indirectly pulled into projects rather than selected by developers.
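The scale of that is easy to see in miniature. A toy sketch (all package names hypothetical) that walks a dependency graph and separates the packages a developer actually picked from the ones that ride along:

```python
# Toy dependency resolver: separate direct picks from transitive baggage.
# The graph and every package name here are made up for illustration.
DEPS = {
    "my-app":        ["web-framework", "json-lib"],   # what the developer chose
    "web-framework": ["http-core", "template-eng"],
    "http-core":     ["tls-lib", "url-parse"],
    "json-lib":      [],
    "template-eng":  [],
    "tls-lib":       [],
    "url-parse":     [],
}

def transitive_closure(root):
    """Return (direct, transitive) dependency sets for a root package."""
    direct = set(DEPS[root])
    seen, stack = set(), list(direct)
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue
        seen.add(pkg)
        stack.extend(DEPS.get(pkg, []))
    return direct, seen - direct

direct, transitive = transitive_closure("my-app")
print(sorted(direct))      # ['json-lib', 'web-framework']
print(sorted(transitive))  # ['http-core', 'template-eng', 'tls-lib', 'url-parse']
```

Even in this tiny graph the indirect pulls outnumber the direct ones two to one; real dependency trees are far more lopsided.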

[...] Developers pull the source components together and add business logic, Fox told The Register. This way, open source becomes the foundation of the software. What's changed in recent years is the general awareness of it – and not only among the well-meaning developers who are creating the software from these disparate parts.

"The attackers have figured this out as well," he said. "A big notable change over the last five or so years has been the rise of intentional malware attacks on the supply chain."

That came to the fore with the SolarWinds breach in 2020, in which miscreants linked to Russia broke into the firm's software system and slipped in malicious code. Customers who unknowingly downloaded and installed the code during the update process were then compromised. Similar attacks followed – including Kaseya and, most notably, Log4j.

The Java-based logging tool is an example of the massive consolidation of risk that comes with the broad use of popular components in software, Fox argued.

"It's a simple component way down [in the software] and it was so popular you can basically stipulate it exists in every Java application – and you would be right 99.99 percent of the time," he said. "As an attacker ... you're going to focus on those types of things. If you can figure out how to exploit it, it makes it possible to 'spray and pray' across the internet – as opposed to in the '90s, when you had to sit down and figure out how to break each bespoke web application because they all had custom code."

Enterprises have "effectively outsourced 90 percent of your development to people you don't know and can't trust. When I put it that way, it sounds scary, but that's what's been happening for ten years. We're just now grappling with the implications of it."

Log4j also highlighted another issue within the software supply chain and woke many up to how dependent they are on OSS. Even so, an estimated 29 percent of downloads of Log4j are still of the vulnerable versions.

According to analysis by Sonatype, the majority of the time that a company uses a vulnerable version of any component, a fixed version of the component is available – but they're not using it. That points to a need for more education, according to Fox. "96 percent of the problem is people keep taking the tainted food off the shelf instead of taking a cleaned-up one."
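The "tainted food on the shelf" check is mechanically simple. A sketch, with the fixed-version table reduced to a single illustrative entry:

```python
# Sketch: flag a component when the version in use predates the first
# fixed release. The table entry below is illustrative, not a CVE database.
def parse(version):
    """Turn '2.14.1' into (2, 14, 1) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

FIXED = {"log4j-core": "2.17.1"}   # first clean release (illustrative)

def is_vulnerable(name, version):
    fixed = FIXED.get(name)
    return fixed is not None and parse(version) < parse(fixed)

print(is_vulnerable("log4j-core", "2.14.1"))  # True  - still tainted
print(is_vulnerable("log4j-core", "2.17.1"))  # False - the clean one
```

Real tools layer CVE feeds and version-range matching on top, but the core comparison is exactly this.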

There is another rising threat related to OSS: the injection of malware into package repositories like GitHub, Python Package Index (PyPI), and NPM. Cybercriminals are creating malicious versions of popular code via dependency confusion and other techniques to trick developers into putting the code into their software.

They may use an underscore instead of a dash in a package's name, in hopes of confusing developers into grabbing the wrong component.
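Catching that underscore-for-dash trick before install is cheap. A sketch, assuming a simple allow-list (names hypothetical):

```python
# Sketch: normalize separators and compare against an allow-list before
# letting a package into the build. Names here are hypothetical.
ALLOWED = {"requests", "python-dateutil", "typing-extensions"}

def normalize(name):
    return name.lower().replace("_", "-")

def check(name):
    if name in ALLOWED:
        return "ok"
    if normalize(name) in ALLOWED:
        # exact name unknown, but it collapses onto a known package: typosquat?
        return f"suspicious: did you mean '{normalize(name)}'?"
    return "unknown package - vet before use"

print(check("python_dateutil"))   # suspicious: did you mean 'python-dateutil'?
print(check("requests"))          # ok
```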

"The challenge with this is that the attack happens as soon as the developer downloads that component and these downloads happen by the tools," Fox said. "It's not like they're literally going to a browser and downloading it like the old days, but they're putting it into their tool and it happens behind the scenes and it might execute this malware.

"The sophistication of the attacks is low and these malware components don't even often pretend to be a legitimate component. They don't compile. They're not going to run the test. All they do is deliver the payload. It's like a smash-and-grab."


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 5, Interesting) by krishnoid on Thursday February 23, @06:43PM

    by krishnoid (1156) on Thursday February 23, @06:43PM (#1293156)

    Perl had a tainting mode [docstore.mik.ua] that would mark data as un/safe as it moved through your Perl code (please reply with jokes below). Not the best/only way to do it, but it's a good example of how software toolchains, languages, and compile/runtime options can support defense against malicious efforts at various levels.
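For readers who never used Perl's -T switch, the idea translates easily. A minimal sketch in Python - not Perl's actual mechanism, just the flavor of it: external data starts tainted, and only an explicit validation step untaints it.

```python
# Taint-tracking sketch in the spirit of Perl's -T mode (an illustration,
# not a real security boundary: str subclassing is trivially bypassable).
import re

class Tainted(str):
    """A string whose origin is untrusted."""

def untaint(value, pattern):
    """Untaint only what an explicit regex validates, like Perl's idiom."""
    m = re.fullmatch(pattern, value)
    if not m:
        raise ValueError("refusing to untaint: input failed validation")
    return str(m.group(0))          # plain str = trusted

def run_query(arg):
    """A 'sensitive' operation that refuses tainted input."""
    if isinstance(arg, Tainted):
        raise TypeError("tainted data reached a sensitive operation")
    return f"SELECT * FROM t WHERE id = {arg}"

user_input = Tainted("42")              # came from the outside world
safe = untaint(user_input, r"\d+")      # validate, then trust
print(run_query(safe))                  # SELECT * FROM t WHERE id = 42
```

The language-level version is stronger precisely because, as the parent says, the toolchain enforces it everywhere instead of relying on each call site to remember the check.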

  • (Score: 3, Informative) by Snotnose on Thursday February 23, @08:03PM (3 children)

    by Snotnose (1623) on Thursday February 23, @08:03PM (#1293162)

When you release your software, not only should your source code be under VCS, but so should all the needed libraries, plus the libraries they pull in, recursing as needed. I've worked in some shops where the tools were also put under VCS. If you can't rebuild your software without internet access, then your code isn't really under VCS.

I suspect that when Joe Boss assigns this job to Pete the little guy, and Pete pulls in A, then B, then ..., then Q, someone might start to wonder: "why are we pulling in all this stuff? What the hell is this D anyway?"

    --
    I just passed a drug test. My dealer has some explaining to do.
    • (Score: 4, Informative) by bmimatt on Thursday February 23, @08:49PM (1 child)

      by bmimatt (5050) on Thursday February 23, @08:49PM (#1293174)

TL;DR: If developers are allowed to just grab any code from the net and integrate it into their codebase without any vetting process or oversight in place, then you have a wild west situation on your hands and a recipe for eventual disaster.

It's a governance issue - developers should not be free to pick up a random chunk of code and incorporate it into the codebase just because it makes their code 'complete/ready' faster. In any non-trivial development scenario, there should be a formal process for vetting third-party code, with formal sign-off on each external library at a given version: security, licensing, compliance, etc.

It's also an SDLC pipeline/DevSecOps issue. In a modern software development lifecycle, as code travels through various stages/environments to prod, it is built and tested via automation. If the build is successful and all tests pass (including security scans), all build artifacts should be considered 'deployable'. As such, tagged build/test artifacts should be stored somewhere (git, Artifactory, etc.) in the tested state for future deployment.

      • (Score: 3, Insightful) by Runaway1956 on Thursday February 23, @09:14PM

        by Runaway1956 (2926) Subscriber Badge on Thursday February 23, @09:14PM (#1293179) Homepage Journal

        My first impression of TFS was, "This looks like FUD."

Your post helps to clarify that impression. It isn't the OSS that is at fault; it's management, or governance as you put it, allowing developers to take shortcuts and find the easy way out. And if everyone just points their fingers at OSS, it helps to scare people away from OSS.

        So, I'm back to FUD, but now I have a better understanding of the FUD.

        --
Abortion is the number one killer of children in the United States.
    • (Score: 2) by optotronic on Friday February 24, @02:55AM

      by optotronic (4285) on Friday February 24, @02:55AM (#1293202)

      Is this possible with a build system like maven? Do you have to host your own repository that you hand fill with projects you downloaded from a public repository, vetted, and added to your VCS?

  • (Score: 5, Insightful) by rigrig on Thursday February 23, @08:27PM (19 children)

    by rigrig (5129) Subscriber Badge <soylentnews@tubul.net> on Thursday February 23, @08:27PM (#1293167) Homepage

    having to rely on a community to patch vulnerabilities

    The whole point of OSS is that you can patch them yourself, instead of having to rely on whoever controls your proprietary dependency.

    the SolarWinds breach

    "Not a great" example, Orion being closed-source...

    (I tried to ignore this obvious "Hire us for security" scaremongering report, but it bugged me too much)

    --
    No one remembers the singer.
    • (Score: 4, Interesting) by RS3 on Thursday February 23, @08:43PM (18 children)

      by RS3 (6367) on Thursday February 23, @08:43PM (#1293171)

      Adding: I like to think, hope at least, that with open-source, any errors / vulnerabilities can get much quicker attention and patched. In many cases you can get direct communication with the author/s. They generally don't have the layers of corporate process, communication, pride, lawyers, etc., to get in the way / delay patch release.

      One of the only downsides I've seen to open-source: abandonware. Not that everything needs constant updating- some things are done, complete, no fixes needed. But you may not know for sure, unless the author and hopefully at least a few others pronounce it final and complete.

      • (Score: 2) by JoeMerchant on Thursday February 23, @10:32PM (17 children)

        by JoeMerchant (3937) on Thursday February 23, @10:32PM (#1293186)

        >the only downsides I've seen to open-source

        You have never tried to work with FFmpeg source then?

        --
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
        • (Score: 2) by RS3 on Thursday February 23, @11:08PM (16 children)

          by RS3 (6367) on Thursday February 23, @11:08PM (#1293190)

          No, now you have me curious...

          ...

          https://ffmpeg.org/download.html [ffmpeg.org]

          I didn't download it (yet?).

          I'm aware that there is not one unifying "open-source" license. In fact, I've seen a wide variety of licensing rules, restrictions, conditions, etc., so I'm not sure.

          What issues / problems do you know of?

          • (Score: 4, Informative) by JoeMerchant on Friday February 24, @06:12PM (15 children)

            by JoeMerchant (3937) on Friday February 24, @06:12PM (#1293265)

            It has been 10 years, hopefully the players have matured a bit in the interim. I worked with FFmpeg extending it at the source level for various applications from 2010-2013 and the biggest challenge was the community.

            I'm not going to sugar coat it: they were a bunch of snot nosed French brats acting like they were 14 years old.

            Basically, you downloaded their pile of source and did your own thing with it, because interacting with the community was toxic for anyone who wasn't a snot nosed French brat, and appeared to be toxic for many of the "in group" as well. It was so bad that libAV forked them, and they proceeded to each innovate down their own branches but always took the time to copy from each other so neither had something the other didn't. I gladly lost track of them and how the drama was progressing in 2014 when I stopped working with video streams.

            --
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
            • (Score: 2) by RS3 on Friday February 24, @07:29PM (13 children)

              by RS3 (6367) on Friday February 24, @07:29PM (#1293273)

              Thanks. Yeah, over the years I've noticed that toxic (and I don't like that word) atmosphere seems to be expanding and deepening. Not a psychologist nor sociologist, but it seems that somehow the 'net and various websites maybe enable it, or allow the childish spoiled brats to let it out without fear of someone breaking their jaw. I have a fairly early /. login and quickly realized the atmosphere there was pretty ugly, so I stopped commenting 20 or so years ago. Very occasionally write there again because there are some good people and useful discussions, but still a lot of that crap. I wish I understood it, and I hope some psychologists are working on it because it's a sad waste of productivity. There are some brilliant people who are so abrasive that it ruins collaboration.

For sure there's a strong connection- not necessarily 100% cause and effect, but a correlation between creativity and brattiness (and I'm putting it mildly). I think we've all seen it across the board of creativity, certainly including entertainment (actors, musicians, artists, filmmakers, ...)

When I first saw your comment about FFmpeg I got mixed up- I was thinking of "divx" - I think that's it - and confused the two. I might be further confused, but I'm thinking of projects that look like open source, but turn out to be largely owned by a corporation, and then you fall into the rabbithole of what you can and can't do with the project.

I'm in a frustratingly long process of deciding on which hypervisor to use. I was really big on xen and used it for years on a machine at home, but recently learned it's largely owned by a corporation. The core is free / open, but the free tools are very limited, and they'll sell you subscriptions to the good stuff.

Another that came preinstalled on a used server was XCP-ng. Again, super slick and powerful, but you have to pay for the good stuff. I'll pass. The same happened with VMware- it was fully based on the Linux kernel, free, and they slowly incorporated proprietary code, charged for features, and of course now it's fully proprietary and expensive.

              Obvious choice is kvm, but which distro...

              • (Score: 3, Interesting) by JoeMerchant on Friday February 24, @08:34PM (6 children)

                by JoeMerchant (3937) on Friday February 24, @08:34PM (#1293276)

We have a product that processes some real-time audio. Windows is a PITA for that; company developers have been unable to avoid the occasional gaps in audio while Windows gives priority to other stuff. I'm sure it's possible, but I certainly don't want to be the developer on the hook to maintain that.

                The previous generation product used a dedicated DSP chip for the audio processing, then communicated over a bus with a "standard" Wintel setup that provided the touchscreen ui. Needless to say: expensive, PITA to maintain, end of life parts required significant engineering to replace, etc.

                This generation I managed to influence the design into a single Intel core i7 running a hypervisor so the DSP code could run on a basic Linux, realtime if necessary (it isn't), and the Windoze developers could stay comfy in their experience zone. We started with a fancy bare metal hypervisor from Switzerland I think, maybe Austria, ski country in any case, and there was going to be a license payment, but far cheaper than a DSP chip. That hypervisor preferred to work with CentOS, and generally was a pain to deal with. We compared it to Ubuntu host with Oracle VirtualBox, and VirtualBox turned out to be better for our use case, even ignoring the license cost and hassle.

                Oh, and the DSP code ported to a Raspberry Pi in an afternoon and from there to Ubuntu on the new hardware in less time than that (once the new hardware became available).

                --
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
                • (Score: 2) by RS3 on Friday February 24, @10:35PM (5 children)

                  by RS3 (6367) on Friday February 24, @10:35PM (#1293288)

                  I've never done deep development with real-time stuff (how do you define that anyway?) but being I do a bit of audio (sometimes video) stuff, it's on my radar, IE, I pay attention when I see / hear something.

                  One way I look at things: there are people who do really good real-time A/V in Windoze, so it's possible. Maybe they're doing some deep trickery? Messing with IRQ masking (risky!). I know buffering is your friend, but obviously introduces delays.

                  The DSP stuff depends on how much processing you need to do. If it's a lot of adaptive EQ, echo cancellation, etc., then yes, even with the great power of today's CPUs, the far too much going on in Windoze might kill you.

There is Windoze CE; from what little I've messed with it, they've removed most of the background tasks, and you have much more ability to customize it. There are supposed "real-time" configurations and .dlls to help with it, but that gets into specialized developers and could be big $.

                  I like what you came up with. There are Linux distros tailored to A/V / real-time stuff (easy to search for).

Of course there are several great actual real-time OSes. Not sure about running them in a VM. You wouldn't have a guaranteed latency unless you dedicate a CPU core to that VM, which is doable, but you still have RAM and the main system data bus being shared, so it's possible you'd end up with bigger latency than with dedicated bare-metal hardware. Sounds like a very cool project.

                  One of the things I do, sometimes professionally, is run live sound mix (and record occasionally). For many reasons I still like good old analog mixers, but everyone has moved to digital "because it's better". Anyway, some have dedicated OSes. One I mostly use regularly has a Windows CE OS, but you really never see Windows. You can bring it up if needed, and sometimes it needs a diagnostic run, but extremely rarely. The audio all goes through custom dedicated DSP hardware, so that if Windows crashes (it has never crashed on me) the audio will keep going, Windows reboots, and the control system picks up where it left off- no glitch / change to the audio. I'm pretty sure the control surfaces have dedicated microprocessors so you have channel control even if Windows needs to reboot. Many things in professional live show A/V: lighting controllers, camera / video processors / switchers, audio mixers (desks) have Windows CE running the UI, and dedicated micros and DSP running the actual show. It scares me to no end thinking that some medical stuff runs Windows...

                  • (Score: 2) by JoeMerchant on Friday February 24, @11:34PM (4 children)

                    by JoeMerchant (3937) on Friday February 24, @11:34PM (#1293297)

                    >Of course there are several great actual real-time OSes. Not sure about running them in a VM. You wouldn't have a guaranteed latency, unless you dedicate a CPU core to that VM

In initial development we were using the level 1, or bare metal, hypervisor just in case we needed that core separation, but it turned out that we don't.

Our Windoze app does a lot of graphing (and not very efficiently- I used to draw the same stuff on a 386 DOS system at higher frame rates than the ones they're running into issues with). The same team took over development of a similar system based on a laptop, and while it's gap-free 99.9+% of the time, it does drop out once in a while.

I'm sure that flawless audio is possible in windoze, but I'm equally sure that maintaining that performance over a 10+ year timeframe would be an endless stream of obscure maintenance headaches. When we started this project, Windows 10 - "the last windows version that will ever be released, we will just update it with patches from here on out" - was just coming out. You see how well that promise held up.

Not only did the DSP code port easily to Linux, but it has always run gap-free with no fuss, even on vanilla Ubuntu with Win10 running in a VirtualBox VM: no fussing with process priorities, nothing, it just works.

                    Our "acceptable lag" figure is 50ms, which is awfully long IMO, but that's the window we work in. I feel like I perceive the lag anywhere over 20ms. Out signal starts at the sensor tip, gets digitized, wirelessly transmitted over something like Bluetooth digital, then communicated over Ethernet to the DSP app which plays processed audio out the audio chip on a standardish PC motherboard.

                    --
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
                    • (Score: 2) by RS3 on Saturday February 25, @12:30AM (3 children)

                      by RS3 (6367) on Saturday February 25, @12:30AM (#1293301)

                      I've never worked directly with Bluetooth. Does that give you problems? Like lag, total dropouts while it tries to pair with a phone or wireless mouse somewhere? :}

I remember trying to do digital audio over USB 1.1. Theoretically the 12 Mbps would have been okay, but the overhead and maybe not enough buffering or something caused dropouts, which were 100% unacceptable when recording events, music, etc. I didn't dig into it, but FireWire at 400 Mbps did much better than USB 2.0 at 480 Mbps.
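A back-of-the-envelope check on that bandwidth claim (raw CD-quality stereo, ignoring protocol overhead):

```python
# Raw PCM bandwidth: sample rate x bit depth x channels.
def audio_mbps(sample_rate_hz, bits_per_sample, channels):
    return sample_rate_hz * bits_per_sample * channels / 1e6

# CD-quality stereo - well under USB 1.1's 12 Mbps, so the dropouts
# were down to overhead and buffering, not raw bandwidth:
print(audio_mbps(44_100, 16, 2))   # 1.4112
```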

I never did that graphics stuff, but worked where people were doing it. Especially if it's full-screen motion, there's some magic and trickery that helps a ton. One place did EEG, so you had a screenful of up to 128 (32 was common) traces, constantly scrolling. They tried doing "bit-blit", which of course didn't work. The trick is to change the screen's home point reference; when done correctly, the display scrolls smoothly. Game developers are always on top of the best ways to do that stuff and get that super fast smooth motion. I still have an older book, "The Zen of Code Optimization", and it has all kinds of cool tricks for, well, everything.

You should be able to get well under 10 ms lag, btw. I work with real-time audio systems that feed audio to musicians' ears, and the systems do under 2 ms latency. Worst-case maybe a few ms total if you have a digital mixer feeding another digital mixer for the in-ears.
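Those figures fall straight out of buffer math: per-buffer latency is just frames divided by sample rate.

```python
# Per-buffer latency of a digital audio path, in milliseconds.
def buffer_latency_ms(frames, sample_rate_hz):
    return 1000.0 * frames / sample_rate_hz

# A 64-frame buffer at 48 kHz - the low-latency in-ear-monitoring regime:
print(round(buffer_latency_ms(64, 48_000), 2))    # 1.33
# A 2048-frame buffer at 44.1 kHz - a comfortable desktop default:
print(round(buffer_latency_ms(2048, 44_100), 1))  # 46.4
```

Total path latency stacks several such buffers plus converter and transport delays, which is why "under 2 ms" takes careful engineering while 50 ms happens by default.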

                      • (Score: 3, Interesting) by JoeMerchant on Saturday February 25, @05:48PM (2 children)

                        by JoeMerchant (3937) on Saturday February 25, @05:48PM (#1293395)

                        So, we're using a Bluetooth chip, but not entirely implementing the Bluetooth protocol - for instance: our device won't show up on your smartphone for pairing.

We could probably break the lag down like this: 1) a 1-wire serial link extracts the sample from the ADC; 2) the uP packetizes that and sends it over to the radio chip; 3) the radio chip does its thing and the packet ends up in the radio chip on the other end of the wireless link; 4) the receiver uP pulls the radio packets and does some error checking / recovery / retransmission stuff (which adds up to 5 ms of lag just from having the possibility of a retry); 5) the error-corrected data is repacketized for handoff to the DSP; 6) packets are sent over Ethernet from the uP to the DSP app running in Linux; 7) the DSP app does some basic filtering, including a FIR which adds another 5-10 ms of lag (I forget, to be honest); 8) the DSP sends the filtered data, plus some "alarm tones" depending on what it hears in the data, out to the audio chip which, thankfully, directly drives an analog signal into the speakers on the console (unless we're talking about the option where it gets hopped up onto HDMI for another digital ride before becoming sound waves again.)
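For what it's worth, the FIR lag in step 7 is just the filter's group delay: (N - 1) / 2 samples for a linear-phase FIR. The tap count below is illustrative, not the actual product's.

```python
# Group delay of a linear-phase (symmetric) FIR filter, in milliseconds:
# (num_taps - 1) / 2 samples, divided by the sample rate.
def fir_delay_ms(num_taps, sample_rate_hz):
    return 1000.0 * (num_taps - 1) / 2 / sample_rate_hz

# e.g. a 481-tap filter at 48 kHz lands right in that 5 ms ballpark:
print(fir_delay_ms(481, 48_000))   # 5.0
```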

The design somewhat abuses that 50ms window they were given, and our product specialist is far from an audiophile; he's been selling these things since the first digital versions came out in the early 1990s, and to him all this wizardry is so much better than the initial versions that he just doesn't care that it could be better. If I were "king" of the product design (as I used to be for other similar products in the 1990s) we'd be under 10ms lag in the wireless system, but I'm not, and I have no desire to herd a bunch of cats into making a better product when the customer isn't asking for it.

                        --
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
                        • (Score: 2) by RS3 on Saturday February 25, @06:37PM (1 child)

                          by RS3 (6367) on Saturday February 25, @06:37PM (#1293399)

                          Cool, thanks. Some years ago I did some of that stuff- specifically packetizing data, error detection, retransmits, etc. All over Ethernet, but looking into wireless (which never happened, you know, management's "business decisions"...) I'm not super good at all the statistical math modeling for things like packet size. You know, the bigger the packet, the better the overall data rate (data to overhead ratio) versus latency. Big packets are great, but if you lose one, you're way behind. But for my project latency wasn't really an issue. RAM and buffer sizes needed to be optimized.

                          Besides Bluetooth, there's a whole world of wireless data transmission, including some that goes many miles. Good examples, and there are many more: https://www.digikey.com/en/articles/comparing-low-power-wireless-technologies [digikey.com]

                          You probably already know FIR vs. IIR: https://www.advsolned.com/difference-between-iir-and-fir-filters-a-practical-design-guide/ [advsolned.com]

                          Full disclosure: I'm an EE, very strong in analog, also started toward an MS in signal processing (FIR, IIR, FFT, DSP, ...) I'm not a shill nor champion of any specific cause other than efficiency. I'd probably implement a good analog filter, if possible, rather than the FIR, or at least significantly reduce the load and requirements of the FIR. Too many people jump onto this or that bandwagon: "digital is better!" when it is not. You might be amazed at how much horrible aliasing is happening in audio these days (because they don't do pre-sampling analog filtering, and/or sample rate too low). I hear it everywhere, including major national TV and radio broadcasts. But I digress- a good simple front-end analog filter would likely be your friend.

                          • (Score: 2) by JoeMerchant on Saturday February 25, @07:08PM

                            by JoeMerchant (3937) on Saturday February 25, @07:08PM (#1293400)

                            Yeah, I would really like to play with some mid range data links in the 5 to 10 mile range, but I just don't have the time, or paying customers.

                            Our product uses the 2.4 GHz link to eliminate a cable in our setup, it's always within 20' or less, and we really never want it to work through walls, but can accept it if it does.

                            We sort of settled on 5ms packet groups and worked forward from there, did some wild stuff like split the samples among packets so if we lose a packet we can interpolate the missing data and just lose the high frequency info in that group (95% of the value of our data is found in the bottom 25% of the spectrum). There is also some logarithmic encoding and other optimizations so we can transmit 8 channels of data.
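That split-the-samples trick is easy to sketch: even samples in one packet, odd in the other, and linear interpolation fills a lost packet's gaps. Toy numbers, two-way split only (the real scheme presumably spreads samples across more packets):

```python
# Interleave-and-interpolate sketch: losing either packet costs only the
# high-frequency detail, because the gaps can be linearly interpolated.
def interleave(samples):
    """Split a block into (even-index, odd-index) packets."""
    return samples[0::2], samples[1::2]

def recover_from_even(even):
    """Rebuild the full block when the odd-sample packet was lost."""
    out = []
    for i, s in enumerate(even):
        out.append(s)
        nxt = even[i + 1] if i + 1 < len(even) else s
        out.append((s + nxt) / 2)      # interpolate the missing odd sample
    return out

samples = [0, 10, 20, 30, 40, 50]
even, odd = interleave(samples)        # [0, 20, 40] and [10, 30, 50]
print(recover_from_even(even))         # [0, 10.0, 20, 30.0, 40, 40.0]
```

On a smooth (low-frequency) signal like this ramp, the reconstruction is nearly exact; only fast wiggles between adjacent samples are lost, which matches the "lose the high frequency info" description above.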

                            >I hear it everywhere, including major national TV and radio broadcasts.

                            I remember in the mid 2000s being shocked at how many places I heard audio with 128kbps or worse codec artifacts. Either that has gotten better or my hearing has gotten worse, probably a bit of both.
                             

                            --
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
              • (Score: 3, Informative) by ls671 on Saturday February 25, @11:05AM (5 children)

                by ls671 (891) on Saturday February 25, @11:05AM (#1293352) Homepage

                > Obvious choice is kvm, but which distro...

                Have a look at proxmox

                --
                Everything I write is lies, read between the lines.
                • (Score: 2) by RS3 on Saturday February 25, @04:35PM

                  by RS3 (6367) on Saturday February 25, @04:35PM (#1293386)

                  Thank you much. I had looked at proxmox, looks awesome, but they're based on systemd. I got into Linux to get away from such things. But otherwise it looks great.

                • (Score: 2) by RS3 on Saturday February 25, @04:42PM (3 children)

                  by RS3 (6367) on Saturday February 25, @04:42PM (#1293387)

                  I meant to add, strong contenders: artix, antix, MX/Devuan, puppy, SmartOS (?), Tiny Core, void (about to try it next).

                  My personal favorite is Slackware, which I've been using since SLS days, but this will be for a server I admin, and I would feel badly if I leave and someone else struggles with Slackware (which means, IMHO, they shouldn't be admin, but that's a different problem).

                  • (Score: 2) by ls671 on Saturday February 25, @06:22PM (2 children)

                    by ls671 (891) on Saturday February 25, @06:22PM (#1293398) Homepage

Yeah, I still use Slackware too, for my home desktop and as an admin console in a VM in proxmox with VNC.

                    Although I am not a fan either, systemd is not like the end of the world :) Proxmox works fine with it. I also mostly use plain debian with systemd for my VMs to remain as standard as possible instead of using devuan (no systemd) and potentially hit issues with some software.

                    --
                    Everything I write is lies, read between the lines.
                    • (Score: 2) by RS3 on Saturday February 25, @08:08PM

                      by RS3 (6367) on Saturday February 25, @08:08PM (#1293403)

                      systemd is obviously not the end of the world, but neither is COVID-19. I don't want either. :)

                      Someone once said something about "Lies, d%^& lies, and statistics". I hear lots of people touting systemd (not you, I mean true fanboi / champions). I'd love to see stats on how many of them are passionate hobbyists, versus it's part of their full-time job. IE, they're paid to deal with it, so they just accept it.

                      I don't have a full or even part-time job doing admin. It's a tenuous situation where if I bill the owner too much, he'll move the operation to some godaddy or other "cloud" provider. Plus I pride myself in keeping things running for absolute minimum cost and downtime.

                      My view of systemd: if and when something goes wrong, it's going to take me huge time and effort, not to mention the downtime and clients' websites offline.

                      My non-negotiable: I must be able to edit text configuration files, and they stay as I wrote them. Nobody and nothing is allowed to change them. No, I will not search for some other configuration file somewhere else that coaxes things like systemd to do what I want done, and maybe if it feels like it. :)

                      I like the Slackware in a VM on proxmox.

                    • (Score: 2) by RS3 on Saturday February 25, @08:11PM

                      by RS3 (6367) on Saturday February 25, @08:11PM (#1293404)

                      BTW, does your username have anything to do with a large supercharger on a Chevy engine? :)

            • (Score: 2) by RS3 on Friday February 24, @07:31PM

              by RS3 (6367) on Friday February 24, @07:31PM (#1293274)

              PS: they may have been 14 years old!

              Today's "Dilbert" illustrates some of it: https://assets.amuniversal.com/6e5f16f08ba1013be461005056a9545d [amuniversal.com]
