posted by jelizondo on Monday March 09, @05:10AM   Printer-friendly

Free beer is great. Securing the keg costs money:

Open source registries are in financial peril, a co-founder of an open source security foundation warned after inspecting their books. And it's not just the bandwidth costs that are killing them.

"The problem is they don't have enough money to spend on the very security features that we all desperately need to stop being a bunch of idiots and installing fu when it's malware," said Michael Winser, a co-founder of Alpha-Omega, a Linux Foundation project to help secure the open source supply chain.

Winser spoke at FOSDEM this year, in a talk we dropped in on virtually.

Trusted registries are widely treated as a key component of Software Bill of Materials (SBOM)-driven supply chain security efforts, one of the main approaches promoted for securing open source software. Rule one: Get your open source packages from a trusted source.

Yet many of these registries operate on razor-thin margins, relying on non-continuous funding from grants, donations, and in-kind resources.

Google and Microsoft kicked in an initial $5 million to launch Alpha-Omega in 2022 under the Open Source Security Foundation.

And the first thing Winser noticed when he ramped up operations was that open source registries are all dirt poor. All the major registries are facing the same issue: They're experiencing exponential growth, even though their investment in infrastructure and people remains flat.

"We're living on borrowed time," he warned.

"One of the problems that people have is they actually conflate open source software and open source infrastructure," Winser said.

Open source software itself is free to use, and its costs don't increase the more people use it. The costs of registries to hold all open source applications and libraries, however, do indeed keep increasing with greater usage.

Packages don't go away. Collections just grow larger and larger. And AI is now adding to the pile at a considerable clip.

[...] In a follow-up LinkedIn exchange after this article was posted, Winser estimated it could cost $5 million to $8 million a year to run a major registry the size of Crates.io, which gets about 125 billion downloads a year. And this number wouldn't include any substantial bandwidth and infrastructure donations (like Fastly's for Crates.io).

Adding to that bill is the growing cost of identifying malware, the proliferation of which has been amplified through the use of AI and scripts. These repositories have detected 845,000 malware packages from 2019 to January 2025 (the vast majority of those nasty packages came to npm).

[...] The good news may be that "Registries are effective monopolies. They own the name space," as Winser put it.

But as monopolies, their hold is tenuous at best, because "the cost of spinning up an alternative, crappy registry, is effectively zero," he added.

Winser went through the various ways of covering expenses, though none of them, he calculated, could fully defray the costs.

[...] Yet the costs Winser was most concerned about are not bandwidth or hosting; they are the security features needed to ensure the integrity of containers and packages.

Alpha-Omega underwrites a "distressingly" large amount of security work around registries, he said. It's distressing because if Alpha-Omega itself were to miss a funding round, a lot of registries would be screwed.

[...] Winser did not offer a solution, though he suggested the key is to convince the corporate bean counters to consider paid registries as "a normal cost of doing business and have it show up in their opex as opposed to their [open source program office] donation budget."

[...] Money is a rarely discussed aspect of open source. The software is just supposed to be like free beer, right?

Hospitals, universities, and museums are all nonprofits, yet they still charge for services. In fact it is good practice; otherwise people will abuse the system. But in open source, the idea of payment remains taboo.

Open source may indeed be like free beer, but no one enjoys their frothy lager served chock full of parasites and bacteria. So maybe we all should get used to ponying up at the bar.


Original Submission

This discussion was created by jelizondo (653) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 3, Interesting) by KritonK on Monday March 09, @06:51AM (4 children)

    by KritonK (465) on Monday March 09, @06:51AM (#1436120)

    Um, what are "Open Source Registries"? I've never heard about them, even though I use open source software all the time. I get my software from the repositories of the distributions that I use, plus a few well-known third party repositories, such as EPEL for Red Hat or Packman for OpenSUSE. Or is that a Windows thing?

    • (Score: 2, Disagree) by VLM on Monday March 09, @02:16PM (3 children)

      by VLM (445) on Monday March 09, @02:16PM (#1436162)

      "automated, (sometimes) virus scanned, FTP sites for libraries".

      Like NPM for loading up nodejs libraries, or Python's PyPI, or from back in the olden days, RubyForge, I guess.

      They didn't mention docker, but reading between the lines, docker would be included.

      The people running hypercentralized registries have an immense authoritarian streak and want to gatekeep every little detail and vet contributors for demographic membership, political affiliation, etc. Meanwhile, why not "sell" the overall ecosystem as everything being secure and virus-scanned?

      The cost of that authoritarianism is that the competitive price drops to zero while the gatekeeping itself is expensive.

      Imagine a book organization you had to apply to join: they'd scrutinize your demographics and your book to see if it's good enough, demand proof of ownership, read/scan everything you send them to make sure there's no doublePlusUngoodThink (which they forbid in their repo), then wrap their page in ads and sponsorship deals (none of which you get a cut of), and finally they'd let the general public read it, but not read too many at once or too many per week. From the general public's POV it's just another library, but from the author's side there's a lot of variation and it's not a "real" library.

      I see three ways this goes:

      1) Quite literally the public library. Why doesn't my local public library, which has a corporatized makerspace built into it, have a simple mirror server for Debian, perhaps? Well, I dunno. It probably should. As a public service, the public library should have local, super fast mirrors of all kinds of stuff: Project Gutenberg, LibriVox, FOSS software, etc. These things move slowly... are there university libraries that host software mirrors, vs. IT, vs. the CS dept hosting mirrors? It's really a "library" job, and it gives them a modern, relevant job to pursue.

      2) Copy Debian. There's no need for "a registry" to be "a major download site". Sure, you need like a 1U rackmount server to gatekeep the absolute F out of the authors (why, again?), and it needs to sign the repo with a crypto key, but once it's done, ship it and let the automated, hands-off, crypto-secured mirror network handle all the world's downloads.

      3) Scrap the DEI fundraising conduit. People are wise to the fact that only a tiny percentage of Wikipedia's funds go to supporting the wiki; mostly it's a conduit for funding political-action-committee-type groups. Most charities only spend a fraction of their revenue on their supposed goals; there are expenses like large, highly paid executive teams and boards, private corporate jets, hangouts at vacation destinations masquerading as education. It's a lot of ripoff stuff. Perhaps if they can't afford the $1.8M bandwidth bill that's mostly donated for free anyway, they could not pay the $50M/yr CEO to, um, continue to not enshittify the service by doing nothing, I guess. For an example, Coinbase uses Python; the entire bandwidth needs of the entire Python infrastructure of the world, assuming the article figures are accurate and they NEVER got another donation again, would cost about 3% of the Coinbase CEO's take-home pay. Not the company revenue, not the company profit, not total salary spend, not the total executive suite... just the CEO's annual total compensation. And that take-home pay disappears if he doesn't donate 3% of it to keep the entire planet's Python running. No, I don't think the financial problems are related to dollar value; it's more a priority issue. The people with money won't spend it, and the people getting the money are mostly wasting it and holding the "product" hostage to secure more funding.

      • (Score: 0) by Anonymous Coward on Monday March 09, @03:00PM (1 child)

        by Anonymous Coward on Monday March 09, @03:00PM (#1436168)

        One of the things VLM and I agree on.

        • (Score: 0) by Anonymous Coward on Monday March 09, @03:02PM

          by Anonymous Coward on Monday March 09, @03:02PM (#1436169)

          If you follow the money, though, they seem to be happy spending 100 million dollars for lobbyists in every state for stamping out privacy.

      • (Score: 2) by jb on Wednesday March 11, @07:39AM

        by jb (338) on Wednesday March 11, @07:39AM (#1436302)

        "automated, (sometimes) virus scanned, FTP sites for libraries".

        Hmm ... FTP and search is basically just Archie. Shouldn't be too hard to integrate something like signify(1) for attestation. So what's all the fuss about?

  • (Score: 2) by darkfeline on Monday March 09, @07:06AM (9 children)

    by darkfeline (1030) on Monday March 09, @07:06AM (#1436121) Homepage

    Once again Go's creators show their foresight. Go doesn't use a single registry monopoly, so it does not suffer from this problem.

    (inb4 Go isn't perfect. No, it's not, but it has a lot of really insightful design decisions. By contrast, a language like, say, Rust has a ton of questionable design decisions, like using a central registry.)

    --
    Join the SDF Public Access UNIX System today!
    • (Score: 0) by Anonymous Coward on Monday March 09, @12:50PM

      by Anonymous Coward on Monday March 09, @12:50PM (#1436153)

      By contrast, a language, like say Rust, has a ton of questionable design decisions, like using a central registry.

      Could be worse. Imagine it still using a single registry, but the registry is a peripheral one1!!oneone!!1!

    • (Score: 3, Insightful) by VLM on Monday March 09, @02:40PM (7 children)

      by VLM (445) on Monday March 09, @02:40PM (#1436164)

      inb4 Go isn't perfect. No it's not but it has a lot of really insightful design decisions.

      I have some Go experience, and it's a ridiculously common experience that you'll find a "pain point" in another language, then learn that golang doesn't have that problem.

      It's inaccurate to describe golang with a one-liner like "it's a better-designed C", although that's fairly accurate; it's more accurate to say that G., P., and T. spent careers running into problems in other languages, getting really pissed off each time, until finally they wrote golang solely to not have a reason to complain. As a fictional origin story, this does somehow ring true.

      It's not the coolest, most elegant, clearest, or easiest-to-learn language, but it's the one that so far has the least amount of frustration.

      It has a peculiar identity problem: "perl programmers" back in the day had shared pain points and an "in it together" attitude, and it may have been a semi-abusive community, but those shared pain points made it a community. However, golang just works, so there's no need for community healing or whatever.

      Golang is often promoted as simple/easy/clear, which it is not. Look at how you access the number of bytes in a string vs. the number of glyphs/runes, and why is this the only language in the land that calls a "glyph" a "rune"? However, golang seems to have the least overall aggravation, so people MISREMEMBER it as simple/easy/clear.

      More golang trivia about it not being simple/easy/clear, yet not aggravating: time handling. Layout-based parsing, really? Nobody seems to understand monotonic vs. wall-clock time IRL. I would confess there seems to be no consensus on how stuff like that should be handled anyway.

      Channels are done better than Erlang's for concurrency (the whole it's-like-a-file-descriptor/named-pipe vs. like-a-process thing), it's got the best networking of any language, its regexes are the only thing I've found better than the oldest days of Perl, and its type system is "enough types to help you out but not enough nonsense to aggravate you". The way it handles errors is interesting and honestly pretty chill: for non-golang peeps, it's basically an interface, and functions can return multiple values, so typically you return a result (worst case nil) and an error conforming to the error interface (again, usually nil). There's a nice scheme to pre-define messages for later i18n work, it works tolerably well with logging, it's just pretty cool.

      • (Score: 2) by JoeMerchant on Monday March 09, @10:03PM (6 children)

        by JoeMerchant (3937) on Monday March 09, @10:03PM (#1436251)

        I asked Claude about tech stack platforms for a project; its top two suggestions were Go and Python. Basically, it said that Go would yield the more elegant, smaller-memory-footprint solution (which was of essentially zero value to the project) while Python would get the solution working in about half the time... I don't like Python, but....

        --
        🌻🌻🌻🌻 [google.com]
        • (Score: 2) by VLM on Wednesday March 11, @12:03PM (5 children)

          by VLM (445) on Wednesday March 11, @12:03PM (#1436327)

          Try 'em both on a baby-sized problem.

          Really the "cost" is labor, and if you're happier/more productive in golang or in python, you should use the happy one, whichever it is. It'll probably be golang, unless there's no support for something weird, in which case python works fine.

          I find it hard to believe anything could beat golang for network pipes between APIs; that's its bread and butter. Likewise, if you're connecting to something "weird" or doing something "weird", the odds of python having an off-the-shelf library to do just that are very high.

          Interop can be a headache. A stereotype is nodejs and python; the entire node-red project uses that, for example. Using python from nodejs is at least a well-trodden path. Golang can be whipped into producing shared libraries, but it doesn't really interop like other languages (calling java from clojure is easy, while calling goroutines from nodejs is kind of "what is a goroutine?").

          When you make your baby problem, make sure to crash it hard (divide something by zero) and compare the debuggers. Delve on golang just works on the CLI or in an IDE and fits my debugging mindset pretty well; if I wrote a debugger, my wish list would look a lot like delve. pdb on python works but just doesn't do it for me; I could see others preferring it, but it seems less likely. Which one do you want to use at 2am when you get "The Call" about some mysterious problem? You wouldn't expect a compiler to have a better debugger than an interpreter, but here we are. Possibly, if you need to use the REPL to debug, it'll be easy/possible in python and not so fun in golang. The golang debugger supports threads and goroutines (mini-threads) natively, so on the other hand you wouldn't need a REPL as much in golang; then again, I don't know how one debugs an asyncio project in python, and I'd prefer not to have to figure it out while "under fire", or prefer not to have to figure it out at all.

          The linter/static-analysis type stuff is about similar quality on both (excellent). The golang memes/idioms make golang clearer and the linter's job easier.

          A lot of golang feels like learning C back in the '80s, where "huh, this sure is a simple, beautiful, easy-to-use language", at least for the first few pages of K&R, and then about midway through, here's malloc, have fun collecting your own garbage, and strings/arrays are a flaming dumpster fire, whoopsies. But in golang all that ugly is taken care of and "just works" behind the scenes. Python is like Perl and Ruby had a snake baby that works but looks weird sometimes.

          There are features in golang that noobs are told about and should not be told, sorcerer's apprentice and all that. Pretend that slices ARE arrays and ignore the array behind the curtain. There are "real arrays" behind slices, which are awesome for speed optimization if you know what you're doing, but you'll just mess stuff up if you're learning. Likewise, there's a "new" keyword in golang for structs, and because functions are first-class thingies, you can treat golang almost like OO C++, where structs are like classes and method functions really do work. However, in golang the code is cleaner if you do factory functions, like a newFunction constructor method that returns a pointer to the new struct. Better to have a single thing doing all the initialization of possibly weird default values in one place than to have random "new" all over your code initializing your new structs with who knows what. So the best way to teach arrays and the "new" keyword to new golang programmers is not to mention them until they're not new golang programmers anymore. All languages have some built-in "here's how to shoot yourself in the foot" functionality, and at least golang's just makes slow, ugly code, instead of C's fun times like strcpy buffer overflows, etc.

          • (Score: 2) by JoeMerchant on Wednesday March 11, @12:31PM (4 children)

            by JoeMerchant (3937) on Wednesday March 11, @12:31PM (#1436331)

            >Which one do you want to use at 2am when you get "The Call" about some mysterious problem?

            The debugger that "does it" for me is: no debugger. ATARI 800 BASIC didn't have a debugger, and 6502 assembly launched from BASIC surely didn't, so I learned to live without.

            Early debuggers (particularly Microsoft ones) entrenched my mindset: it's flaky, so debug it; but it runs differently in the debugger, and half the time I couldn't get things to work at all in the early debuggers because they made everything run too slow. But persevere, get it running in the debugger (when you can), now switch back to release mode: still crashes. My debugger of choice is printf and friends, i.e., logging from the release-mode production code. Log in production; that way your code runs the same everywhere, and when you need it, some level of debug info is available in the logs. When you don't need it, rotate 'em out, no sweat.

            Debuggers have improved, infinitely from the zero state they started at, and I have experienced "a-HA" moments in a debugger, but it's a whole additional layer of tooling to learn, master, and maintain knowledge of, and I'd rather solve the problem with production-ready logging, which I still consider essential even when an awesome debugging environment is available.

            Captain Ron: The best way to find out Kitty is to get her out on the ocean. If anything is going to happen, it's going to happen out there. https://en.wikiquote.org/wiki/Captain_Ron [wikiquote.org]

            --
            🌻🌻🌻🌻 [google.com]
            • (Score: 2) by VLM on Wednesday March 11, @06:04PM (3 children)

              by VLM (445) on Wednesday March 11, @06:04PM (#1436376)

              But, persevere, get it running in the debugger (when you can), now switch back to release mode - still crashes.

              My experience with golang crashes is that nil pointers are the only crash headache, and the GC is goooooood, so you are more likely to get memory leaks than a crash on nil dereference if the thing was ever created correctly to begin with; the main way to end up with a nil pointer, assuming you don't wildly outsmart the GC (how?), is to fail at initialization.

              Which is probably the source of the noob advice to never create a struct with the "new" keyword randomly all over the code: always use a constructor method to create an object (well, a struct...), because at least then there's only one place to screw up the initialization.

              Golang suffers from almost having real OO, but not exactly. You get structs with member functions, so it looks a lot like OO, but it's not full-on C++ or CLOS. Golang's got interfaces, so it kinda can do polymorphism, but not "real inheritance" like C++.

              Golang has the nilaway static analyzer to find nil objects, but I'd wonder if anyone ever made a dynamic debugger that just watches nils all day. Like an IDS, but for nils. Probably not enough of a problem to be worth inventing.

              The worst debugger problems involve concurrency, buncha threads all bumping in to each other, abandon hope all ye who enter here.

              • (Score: 2) by JoeMerchant on Wednesday March 11, @06:34PM

                by JoeMerchant (3937) on Wednesday March 11, @06:34PM (#1436385)

                > buncha threads all bumping in to each other, abandon hope all ye who enter here.

                I _really_ prefer a system of independent single-threaded cooperating processes over a big multi-threaded app, for many reasons, but a big reason is: the mystery of thread preemption is something that shouldn't have to be on your mind with every single line of code you create. There are places where many threads in one app make sense; just isolate those in their own thread-heavy playground and leave the other (usually 90%+ of the) work out in happy single-thread land.

                Yes, a process is more "heavyweight" than a thread, but these days that's just not a concern in most cases...

                --
                🌻🌻🌻🌻 [google.com]
              • (Score: 2) by JoeMerchant on Wednesday March 11, @06:49PM (1 child)

                by JoeMerchant (3937) on Wednesday March 11, @06:49PM (#1436388)

                Ha

                I was just moved to click on this https://www.amazon.com/BitPC-Open-Source-Computer-remotely-Touchscreen/dp/B0FY63H6NH/ref=pd_rhf_ee_s_bmx_gp_d_sccl_1_7/137-4850090-1078036 [amazon.com]

                to see if it claims to do what it looks like it claims to do (I think it does, but I think you also have to buy a $100 widget per terminal, though in my house that might still be worth it...)

                and, jumping right off the descriptions at me:

                Written in Golang, running on Linux, making it easy to modify the software on your JetKVM device. Want to customize the JetKVM? Simply patch the software and upload it to the JetKVM device through SSH, and you’re good to go.

                --
                🌻🌻🌻🌻 [google.com]
                • (Score: 2) by VLM on Wednesday March 11, @07:32PM

                  by VLM (445) on Wednesday March 11, @07:32PM (#1436406)

                  upload it to the JetKVM device

                  Yeah, that's another golang vs. legacy C/C++ thing: binaries are usually statically linked. Just one file and it'll run, most times.

  • (Score: 2) by Rich on Monday March 09, @12:29PM (10 children)

    by Rich (945) on Monday March 09, @12:29PM (#1436149) Journal

    I've never heard of these "registries", but the solution seems obvious to me: Just publish your source on GitLab and be happy with it. Now, if some corporation has weighed itself down with SBOM requirements, kindly offer to supply them the software straight from yourself, the original author, with appropriate digital signatures. For ten grand. You can also offer the test coverage results their required checklist demands. For another twenty grand. No deal? Tough luck.

    • (Score: 1, Troll) by c0lo on Monday March 09, @12:57PM (9 children)

      by c0lo (156) Subscriber Badge on Monday March 09, @12:57PM (#1436154) Journal

      Just publish your source on Gitlab and be happy with it.

      A courageous decision, mate, playing your open source card not only in a cloud overlord's hand, but your choice of that overlord is Microsoft. Really, what tables are you turning?

      I mean... like... yeah... I found myself speechless.

      --
      https://www.youtube.com/@ProfSteveKeen https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 4, Informative) by HiThere on Monday March 09, @01:18PM (8 children)

        by HiThere (866) on Monday March 09, @01:18PM (#1436157) Journal

        IIUC, GitHub is Microsoft, GitLab is somebody else. (My memory says they're European, but check if that's important to you.)

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
        • (Score: 2) by c0lo on Monday March 09, @01:51PM (7 children)

          by c0lo (156) Subscriber Badge on Monday March 09, @01:51PM (#1436160) Journal

          Oh, my bad, so it is.

          On other lines:
          - gitlab is a Delaware corporation (GitLab Inc.), trades on NASDAQ: GTLB
          - looks more like jumping from the frying pan into the fire: gitlab pricing [gitlab.com] is free w/ severe limitations; anything else comes w/ a price tag

          --
          https://www.youtube.com/@ProfSteveKeen https://soylentnews.org/~MichaelDavidCrawford
          • (Score: 3, Informative) by Rich on Monday March 09, @02:10PM (4 children)

            by Rich (945) on Monday March 09, @02:10PM (#1436161) Journal

            anything else comes w/ a price tag

            Fair enough. Someone will have to pay for the data center. Nice of them to have a free tier for the small open source folks. Or would you rather have it paid for by selling out your data and blasting you with advertisements? Also, once you've sold one or two of the corporate paperwork packages (each at a price above a year's full-time income in the third world), it'd be reasonable to cough up a few bucks for your servers. ;)

            • (Score: 2) by c0lo on Monday March 09, @02:20PM (3 children)

              by c0lo (156) Subscriber Badge on Monday March 09, @02:20PM (#1436163) Journal

              Or would you rather have it paid for by selling out your data and blasting you with advertisements?

              If you ask me specifically, put "self-hosted git service open source" in your favorite search engine. The only thing I need to pay for is the dedicated IP address, a service which my ISP offers.
              But then again, I'm not maintaining an open source registry.

              --
              https://www.youtube.com/@ProfSteveKeen https://soylentnews.org/~MichaelDavidCrawford
              • (Score: 2) by Rich on Monday March 09, @09:52PM (2 children)

                by Rich (945) on Monday March 09, @09:52PM (#1436248) Journal

                If you want to promote usage of your library or application or whatever, it's got to be reasonably well presented. A faceless git server behind ssh won't do. Effectively, you'd have to run something like the GitLab/GitHub web front ends yourself. I have a small net-facing Apache running, but I wouldn't trust myself to set up that whole application cluster in a secure fashion for collaborative work. And if I did, the time spent setting up and maintaining it would be (if billed to customers) more expensive than just buying into GitLab's minimal paid tier. To make things worse, my ancient DSL has like 2 Mbit/s upstream.

                Now, if their pricing were inspired by Oracle and I had to show off my wares, I'd certainly look into hosting it myself, but for now, my sole, minimal, mostly irrelevant application for the FOSS world rests well on the GitLab free tier. It is a utility for (older) KiCad files that can visually guide placement, with special sorting and footswitch advance, and export THT components into a list that Elecrow can work with. It is also well ignored by the world, because KiCad's ex-works interactive HTML BOM is infinitely better, unless one has to scratch exactly the itch I had.

                • (Score: 2) by c0lo on Wednesday March 11, @02:24AM (1 child)

                  by c0lo (156) Subscriber Badge on Wednesday March 11, @02:24AM (#1436278) Journal

                  If you want to promote usage of your library or application or whatever, it's got to be reasonably well presented.

                  If I want to work on it, possibly within a team, I'm not yet in the promotion stage.
                  This includes non-FOSS projects (as long as I can maintain the reliability of my self-hosting).

                  A faceless git server behind ssh won't do.

                  The Google results for "self-hosted git service open source" include Gitea [gitea.com], Forgejo [forgejo.org] and GitLab CE [gitlab.com].

                  The first two are well presented, Web API and UI presentation layers on top of a git server: perfect if you need even just promotion (and will take the pain of being the sysadmin of your self-hosted solution).
                  The latter is a heavyweight that includes CI/CD functionality.

                  Nobody spoke about a faceless ssh server.

                  And if I did, the time setting up and maintaining that would be (if billed to customers) more expensive than just buying into Gitlab's minimal paid tier.

                  All of them incur the cost of administration and uptime/reliability for any serious project or group of projects.
                  In the context of an "open source registry", are we talking about thousands of packages maintained by zillions of devs (like a Linux distro), or hobby projects scratching minor itches? Where on this axis of complexity are we sitting?

                  --
                  https://www.youtube.com/@ProfSteveKeen https://soylentnews.org/~MichaelDavidCrawford
                  • (Score: 2) by Rich on Wednesday March 11, @02:12PM

                    by Rich (945) on Wednesday March 11, @02:12PM (#1436345) Journal

                    I had a look at gitea: no way I'd run that monster locally and expose the attack surface to the entire world unless I absolutely had to, and then on a compartmentalized, dedicated machine on a wired-only connection. I excel at nurturing a million-line C/C++ project in a controlled sector, from multiplatform GUI issues down to the timing of motor stepping signals, but I'm not a good enough network administrator to be certain that some SaaS install is secure. I also value my time too much to deal with it. Static websites are fine, and I even run a world-facing Apache, but I already couldn't be sure that the "Let's Encrypt" package I installed wasn't trojaned. Many times, when I have seen bugs or exploits, I've taken note that I'd never have thought of that particular hole. Taking the risk also wouldn't make my DSL faster, which is simply too slow to serve git histories downstream. So, as long as complex hosting is done by someone else, I'll gladly take that, and even more so if they do it for free.

                    Also, one wants to promote a project before it's done, to lure in other developers to do some work. Far away from registries with zillions of packages (I guess they mean stuff like pip, crates, and such). If it's interesting to them, some maintainer will put it in there and pull from my upstream at GitLab (or my home server, if GitLab goes away and I have to self-host).

                    Finally, we're talking about a single, or maybe a handful of, personal projects, both technically and from a promotion standpoint at this stage:

                    > I'm doing a (free) operating system (just a hobby, won't be big and
                    > professional like gnu) for 386(486) AT clones. This has been brewing
                    > since april, and is starting to get ready. I'd like any feedback on
                    > things people like/dislike ...

          • (Score: 2) by VLM on Monday March 09, @03:07PM (1 child)

            by VLM (445) on Monday March 09, @03:07PM (#1436170)

            gitlab pricing [gitlab.com] is free w/ severe limitations, anything else comes w/ a price tag

            Kind of.

            https://gitlab.com/rluna-gitlab/gitlab-ce/-/blob/master/LICENSE [gitlab.com]

            https://docs.gitlab.com/development/licensing/ [gitlab.com]

            It appears overly complicated but the community edition boils down to MIT license for code and CC-BY-SA for the docs.

A former client ran it locally (semi-air-gapped from the internet) and I've run it at home. I wasn't involved, but apparently it integrates pretty easily with corporate Active Directory. The hardest part of any AD job is getting the Windows IT guys to give you a "bind_dn", which amounts to a limited account used to verify login data (not a gitlab thing; it's an AD architecture thing that a server needs to log in before it gets permission to log some user in, more or less). I don't recall them ever getting SSO/SAML working, but looking online it appears no worse than usual.
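For the curious, that bind_dn hookup in the Omnibus package is just a few lines in /etc/gitlab/gitlab.rb. A rough sketch follows; the hostname, DNs, and password are all hypothetical placeholders, not values from any real deployment:

```ruby
# /etc/gitlab/gitlab.rb - point GitLab's logins at Active Directory.
# All hostnames, DNs, and the password below are made-up placeholders.
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = {
  'main' => {
    'label'      => 'Corporate AD',
    'host'       => 'ad.example.com',
    'port'       => 636,
    'encryption' => 'simple_tls',
    # The limited service account the Windows IT guys have to hand over:
    'bind_dn'    => 'CN=gitlab-svc,OU=Service Accounts,DC=example,DC=com',
    'password'   => 'service-account-password',
    # AD logins usually key off sAMAccountName rather than uid:
    'uid'        => 'sAMAccountName',
    'base'       => 'OU=Users,DC=example,DC=com',
  },
}
# Afterwards: sudo gitlab-ctl reconfigure
```

GitLab only uses the bind_dn account to look users up and check credentials, which is why a heavily restricted read-only account is enough.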

It just works. Nothing exciting. The CE is missing some features that the paid version has (from memory: no automated scanning, and CI/CD worked but lacked the wildest, most elaborate build dependency features, none of which I used anyway).

It's a whole lot nicer than gitea. On the other hand, if you're not using any features beyond gitea's, just use gitea... I subjectively prefer doing branch merges with gitlab over gitea's style of making them the user's problem, LOL. Gitea works, but gitlab is easier, especially for simple merges, subjectively IMHO.

It's not a huge system: it's "a couple gigs" of Docker container(s). If you reckon your sizes in "mysql servers", it's like hosting about a dozen minimal mysql servers, so it'll take more than an old laptop but not much of a modern server, LOL. When you're not using it, it doesn't do much, so it's pretty well behaved on a cluster. It's just a container; total setup time is like fifteen minutes.
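For reference, that fifteen-minute setup is essentially the stock single-container invocation for the official gitlab/gitlab-ce image. The hostname, published ports, and host paths here are just one sensible layout, not anything canonical:

```shell
# Run GitLab CE as a single container from the official image.
# Hostname, ports, and /srv/gitlab paths are arbitrary choices; adjust to taste.
sudo docker run --detach \
  --hostname gitlab.example.com \
  --publish 443:443 --publish 80:80 --publish 2222:22 \
  --name gitlab \
  --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest
```

Mapping the container's SSH port to 2222 keeps it from colliding with the host's own sshd; the three volumes are what let you throw the container away and keep your config, logs, and repositories.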

            • (Score: 2) by c0lo on Monday March 09, @03:38PM

              by c0lo (156) Subscriber Badge on Monday March 09, @03:38PM (#1436174) Journal

              It appears overly complicated but the community edition boils down to MIT license for code and CC-BY-SA for the docs.

              Yes, self-hosted. My employer maintains one for internal use, and it works a treat.
But again, the problem for those software registries is not the technical solution that supports their community's needs; it's the cost of running it. TFS put it as:

              "One of the problems that people have is they actually conflate open source software and open source infrastructure," Winser said.

The beer is free; who pays for the keg?

              --
              https://www.youtube.com/@ProfSteveKeen https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 0) by Anonymous Coward on Monday March 09, @07:46PM (2 children)

    by Anonymous Coward on Monday March 09, @07:46PM (#1436221)

    Open source may indeed be like free beer, but no one enjoys their frothy lager served chock full of parasites and bacteria. So maybe we all should get used to ponying up at the bar.

    It was never supposed to be like free beer. It was supposed to be about freedom.

    Free software is a matter of liberty, not price. To understand the concept, you should think of "free" as in "free speech," not as in "free beer."

At some point this got corrupted, we ended up with nonsense like "open source may indeed be like free beer", and nobody gives a shit about freedom. So we inevitably end up with more and more demands placed on unpaid volunteers. The greedy technology companies making gigadollars just pile more and more demands on unpaid volunteers, like the "software bill of materials" nonsense, and are never willing to pony up more than a handful of peanuts.

At the individual level it is almost as bad; nobody is willing to pay for anything. For example, Raptor Computing Systems sells workstation motherboards which have an entirely free software stack from the BMC upwards. As far as I am aware, this platform has the only free software implementation of DDR4 RAM training in existence. All the FPGAs on the boards use free designs that can be compiled with entirely free software design tools. The onboard ethernet controller had its proprietary firmware reverse engineered, so the boards ship with fully free firmware there too.

All of the source code and tools needed to build and install the entire firmware stack are included, literally, in the fucking box. These boards are everything a free software advocate could ever have asked for. And whenever I suggest that people looking for a computer that respects their freedom buy one of these motherboards, all anyone does is complain about the price tag. So free software "supporters" keep funding companies that provide boards chock full of proprietary software, simply because they cost less.

    • (Score: 2) by Bentonite on Wednesday March 11, @03:42AM (1 child)

      by Bentonite (56146) on Wednesday March 11, @03:42AM (#1436294)

People may be willing to pay for freedom, but unfortunately most people can't afford to spend $5,352.99 USD for a board+CPU, or $6,199.99 USD for a desktop system, or $10,926.99 USD for a decent computer (last time I checked, none of them do ACPI S3 suspend) - only those who are well paid to do something deeply immoral, or have somehow won a large lottery, have that much money to spare.

A more reasonable option is a GNUbooted KGPE-D16: all free software, free DDR3 RAM init (RaptorCS originally ported the board, so it lacks ACPI S3, but GNUboot is slowly working on it). It's not too expensive, and all you need to do is mod some CPU fans to fit G34, or find some expensive G34 fans.

You can get 8, 16, or 32 AMD64 cores and up to 256GB of DDR3 ECC RAM (typically you would go with 64GB ECC).

There are a few inconveniences, like the specific setup required, no S3 suspend (which the RaptorCS computers lack too), and the integrated graphics having no EDID support - which doesn't matter for a server, and for a desktop you can just use an Nvidia GPU that works with free software via Nouveau, problem solved.

      As it's AMD64, there are no inconveniences of the different POWER9 architecture.

You can go to the #gnuboot channel on Libera to get advice on how to set up a KGPE-D16, or possibly pay someone to send you a tested board - it costs more than doing it yourself, but far less than $5000 USD.

      Or you can go for a GNUbooted laptop, which is much cheaper.

      • (Score: 0) by Anonymous Coward on Wednesday March 11, @04:46PM

        by Anonymous Coward on Wednesday March 11, @04:46PM (#1436364)

        People may be willing to pay for freedom, but unfortunately most people can't afford to spend $5,352.99 USD for a board+CPU, or $6,199.99 USD for a desktop system, or $10,926.99 USD for a decent computer

        The Blackbird bundles are a bit cheaper. Then yes, get the rest of the computer from a local shop as everything else is standard PC components. When I first got into computers this was about what an entry-level PC cost, before adjusting for inflation, and enthusiasts somehow managed to buy computers.

        If nobody is willing to pay for the boards that respect your freedom today then nobody is going to be making boards that respect your freedom tomorrow. For example, my understanding is that even the FSF uses the KGPE-D16 or similar systems for their staff and does not use any Raptor systems, even though one fully kitted Talos II could probably replace eight or more fully kitted KGPE-D16s for a small organization.

        The boards were significantly cheaper when first introduced in 2018 and people complained about the price then too. The cost of FPGAs increased massively around 2021 and never really came back down. I suspect this is reflected a lot in the price of the boards. Don't look at the price of memory right now...

        I don't know what a KGPE-D16 cost when it was still in production but it was probably a pretty expensive motherboard.

        A more reasonable option is a GNUbooted KGPE-D16 - all free software, free DDR3 RAMinit (RaptorCS originally ported the board, so it lacks ACPI S3, but GNUboot is slowly working on it) - it's not too expensive and all you need to do is mod some CPU fans to fit G34, or find some expensive G34 fans.

The problem is that leading with this option actively encourages others to trade their freedom for convenience. The KGPE-D16 boards have been out of production for more than a decade, and eventually all the boards that exist will be either in the hands of enthusiasts or in the landfill. All the companies like Vikings that previously stocked KGPE-D16 systems with libreboot preinstalled don't appear to have them anymore.

When the boards were in production, they didn't come with free software preinstalled, so buying one meant directly supporting vendors who provide proprietary software. This is necessarily at the expense of vendors who might have put in the effort to respect your freedom. This article from 2007 is, I think, still as relevant today: The Fifth Freedom [fsfla.org].

        Consider the consequences of buying hardware from a vendor that won't offer Free Software drivers for Free Software operating systems, or won't even share specifications for others to develop Free Software drivers. When you give the vendor money and marketshare, you strengthen its position. But you also divide our community, as some of us will stand firm and reject such hardware, while you give in.

        Eventually, and probably pretty soon, the Raptor POWER9 boards will stop being made too. Then what? We forever stick with the handful of Raptor boards, vintage AMD motherboards, vintage thinkpads and vintage SBCs that can be booted with free software?
