Journal by turgid

Subtitle: Angry Old Man Shakes Fist and Shouts at the Sky/Get off my Lawn

Alas, I am not young enough to know everything. Fortunately I am surrounded at work by people who are, so I am not completely lost.

We had a very confident young hotshot who left some time ago for a very well-paid job "doing AI." He knew much. He knew that Bitbucket was the way to go. And so we adopted Bitbucket, and we pay a subscription.

Bitbucket is pretty cool. It's very similar to GitLab. In a previous life I set up and ran my own GitLab server and had my own continuous integration pipeline. I really liked using it.

Now to the present. I have been doing PHB duties; then I was given emergency bug fixes on Critical Projects(TM) and all sorts of stuff; and, because reasons, I am now writing code again for Critical Projects(TM) with tight deadlines, meanwhile trying to do all sorts of other things, including teaching the young ones about C (everything's Python nowadays).

We had a crazy head of projects who was from the headless chicken school of management and some months ago I was given a fortnight to write a suite of command line utilities to process some data in a pipeline from the latest Critical Project(TM). Specifications? Requirements? A rough idea of what might be in the system? Ha! Fortunately crazy head of projects got a new job and left.

I wrote this code, in C, from scratch, on my own, and in four days flat I had three working command line utilities, written using test-driven development (TDD), with an additional layer of automated tests run by shell scripts, all at the beck and call of make. I cheated and wrote some scripts to help me write the code.

As you can imagine, these utilities are pretty small. We gave them to the new guy to finish off. Six weeks and lots of hand-holding later, I took them back to fix.

However, we have this "continuous integration" setup based on bitbucket. It's awfully like GitLab, which I used some years ago, so there are no great surprises.

Now we come to the super fun part. We build locally, not on Bitbucket's cloud, which is good. The strange thing is that since I got old, Docker has come along.

The young hotshot who knew everything decided that we needed to do all our builds in these Docker containers. Why?

A Docker container is just one of these LXC-style container things, a form of OS-level virtualisation somewhere between chroot jails and a proper VM, where the kernel presents an interface that looks like a whole system on its own (see Solaris 10 Containers). That means that you can run "arbitrary" Linux userlands (with their own hostnames and IP addresses) on top of a single kernel. Or can you? The userland does have to be compatible with the kernel the host is running, but since Linux keeps its syscall ABI stable, in practice almost any distribution's userland runs on any reasonably recent host kernel.
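
You can see the shared kernel for yourself (a trivial demo; the alpine image is just an example):

    # there is only one kernel: the container reports the host's version
    uname -r
    docker run --rm alpine uname -r    # prints the same version string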

This is a cool feature. You can have many lightweight pretend virtual hosts on a single machine without having a hypervisor. You can also use it to have a user-land environment with a specific configuration nailed down (set of tools, applications, libraries, user accounts). It might be a great way to set up a controlled build environment for software.
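
That last point is the whole theory of the thing: pin one toolchain image and every build is identical, no matter whose machine it runs on. Something like this (a sketch; the image tag and Makefile targets are illustrative):

    # run the build inside a pinned toolchain image, so every machine
    # compiles with exactly the same compiler and libraries
    docker run --rm -v "$PWD":/src -w /src gcc:12.2.0 make all test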

For the last hundred years or so anyone who knows anything about making stuff (engineering) understands that you need to eliminate as much variation in your process as possible for quality, reliability and predictability.

So here's the thing - do you think our young hotshot and his successors have this sorted out? Think again!

I needed to set up some build pipelines and I was shocked. Apparently we are using a plethora of diverse Docker containers from the public Internet for building various software. But that's OK, they're cached locally...

Never mind that this stuff has to work properly.

Everyone in the team is developing their code on a different configuration. We have people using WSL (seriously) and others running various versions of Ubuntu in VMs. So we have these build pipelines running things like Alpine (because the image is small) which may or may not be anywhere near WSL or Ubuntu versions X to Y.

It gets better. Everything we do, every piece of software we build has its own Docker container. And then it goes onto a VM which gets "spun up" in the Microsoft(R) Azure(TM) cloud.

My little command line utilities, a few hundred K each when compiled, each get built in their own Docker container. That's hundreds and hundreds and hundreds of megabytes of random junk to compile a few thousand lines of C. When I type make on my command line (in my Ubuntu VM), each one takes under a second to build against the unit tests, rebuild again, and run the automated regression tests.

The final thing that takes the cake is that I have to release these tools to another department (which they'll then put in a "pipeline" on "the cloud") and after about a year of having this amazing set-up for continuous integration, the young folk can't tell me (and they haven't figured it out yet) how to get the built binaries out of the build system.

Because the builds are done in Docker containers, the build artifacts are in the containers and the container images are deleted at the end of the build. So tell it not to delete the image? Put a step in the build script to copy the artifacts out onto a real disk volume?

"We don't know how."

There's a reason human beings haven't set foot on the Moon in over 50 years, and the way things are going our own stupidity will be the end of us. Mark my words.

  • (Score: 1, Funny) by Anonymous Coward on Friday June 02, @08:55PM (7 children)

    by Anonymous Coward on Friday June 02, @08:55PM (#1309473)

    REAL men just upload their important stuff on ftp and let the rest of the world mirror it.

    -Linus Torvalds

    • (Score: 3, Funny) by DannyB on Friday June 02, @10:27PM (6 children)

      by DannyB (5839) on Friday June 02, @10:27PM (#1309482) Journal

      All of those mirrors don't necessarily have age verification to ensure that people under 18 don't see it.

      --
      If you eat an entire cake without cutting it, you technically only had one piece.
      • (Score: 1) by khallow on Saturday June 03, @12:07AM (5 children)

        by khallow (3766) on Saturday June 03, @12:07AM (#1309490) Journal
        Good point. Better do that in a country that doesn't require age verification.
        • (Score: 1, Touché) by Anonymous Coward on Saturday June 03, @12:15AM (3 children)

          by Anonymous Coward on Saturday June 03, @12:15AM (#1309494)

          But what if the mirror also contains information about abortion and gender transition? What if a child were to access that information?

          • (Score: 0) by Anonymous Coward on Saturday June 03, @10:19AM (2 children)

            by Anonymous Coward on Saturday June 03, @10:19AM (#1309569)

            They might turn into an unemployed gay junkie commie and cost decent people a lot of money.

            • (Score: 0) by Anonymous Coward on Sunday June 04, @12:50AM (1 child)

              by Anonymous Coward on Sunday June 04, @12:50AM (#1309659)

              In the US none of that would cost anyone anything. I think you just enjoy hatred but I will refrain from speculating as to why.

              • (Score: 0) by Anonymous Coward on Sunday June 04, @08:36AM

                by Anonymous Coward on Sunday June 04, @08:36AM (#1309722)

                Woosh

        • (Score: 1, Funny) by Anonymous Coward on Saturday June 03, @06:19AM

          by Anonymous Coward on Saturday June 03, @06:19AM (#1309537)

          khallow, just disgruntled about being aged out. Teenaged libertarianism only lasts for, oh, 4 years?

  • (Score: 0, Insightful) by Anonymous Coward on Friday June 02, @11:02PM (3 children)

    by Anonymous Coward on Friday June 02, @11:02PM (#1309485)

    Welcome to late stage capitalism, where institutional knowledge and mentoring are not valued. You could use your knowledge to educate, but it sounds like you just want to lord it over your coworkers with your decades of experience and knowledge. Why not package your solutions, make them more generalized? Let others benefit from your knowledge and approach to engineering, or waste more time yelling about clouds.

    • (Score: -1, Troll) by Anonymous Coward on Saturday June 03, @12:13AM

      by Anonymous Coward on Saturday June 03, @12:13AM (#1309493)

      Yes but under capitalism, knowledge and sound engineering is just all tl;dr. The capitalists like their buzzword bullshit and rent-seeking death spiral.

      Instead we must expropriate capitalist software and relicense it under the GPL! We must form rank-and-file committees to link up with our class brothers and sisters in logistics, manufacturing, nursing, and teaching.

      Only then will we have a society based on sound engineering and human need instead of buzzword bullshit and private greed.

    • (Score: 2) by turgid on Saturday June 03, @09:08AM (1 child)

      by turgid (4318) on Saturday June 03, @09:08AM (#1309561) Journal

      I would have been delighted to help and coach them but we were all too busy firefighting for headless chicken management.

      • (Score: 1, Insightful) by Anonymous Coward on Saturday June 03, @10:48PM

        by Anonymous Coward on Saturday June 03, @10:48PM (#1309646)

        That is good. Then put the blame on the institutions creating the problem and not on the younger people. Yes, there are generational changes, and many kids are being taught to just memorize answers and not think critically, but same as it ever was. The problems are societal; specifically, education was purposefully eroded with NCLB and more local measures. Plus educators are widely disrespected as a profession: low pay, high demands, often zero respect.

  • (Score: 3, Insightful) by RS3 on Saturday June 03, @01:08AM (12 children)

    by RS3 (6367) on Saturday June 03, @01:08AM (#1309503)

    Because the builds are done in Docker containers, the build artifacts are in the containers and the container images are deleted at the end of the build. So tell it not to delete the image? Put a step in the build script to copy the artifacts out onto a real disk volume?

    Could you mount an NFS share or other network share inside the container and deposit the logs and other build artifacts in there? You'd need to automate it in the scripts, including (probably) different names for each build container.
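
    Roughly like this, off the top of my head (untested; server name and paths invented):

        # a named volume backed by the NFS share; each build gets its own name
        docker volume create --driver local \
          --opt type=nfs --opt o=addr=fileserver.example,rw \
          --opt device=:/exports/builds build-$BUILD_ID
        # anything the build writes to /out ends up on the NFS server
        docker run --rm -v build-$BUILD_ID:/out builder-image make install DESTDIR=/out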

    • (Score: 2) by turgid on Saturday June 03, @09:06AM (2 children)

      by turgid (4318) on Saturday June 03, @09:06AM (#1309559) Journal

      That's what I said. Young people don't know about NFS.

      • (Score: 1, Informative) by Anonymous Coward on Saturday June 03, @10:52PM (1 child)

        by Anonymous Coward on Saturday June 03, @10:52PM (#1309647)

        That's one of the problems with tech getting good enough that few people need to troubleshoot: even young coders get to focus purely on programming and might have zero knowledge of OS drivers or network issues beyond sending files over the internet.

        • (Score: 2) by turgid on Sunday June 04, @08:52AM

          by turgid (4318) on Sunday June 04, @08:52AM (#1309728) Journal

          One huge problem we have is that the business side of the company is obsessed with Microsoft Azure. They are pushing really hard to have everything on Azure. We have the usual Windows network with a single CIFS volume that we can mount from our PeeCees and our Linux instances. It's slow and unreliable and always short of space.

          Management has got it into their heads that doing everything on the cloud is cheaper. I spoke to our IT provider about getting some cloud storage specifically for our needs a while back, and I was shocked at how much they charged for Azure. It was on the order of ten times the price of an equivalent SATA disk per month! So it was like buying ten SATA disks a month, forever, to keep having access to the data; plus it was over the internet (slow); plus they charge extra for NFS (instead of CIFS).

          We have a couple of very pathetic old second-hand machines, running Linux, that we use as physical servers, which we begged and pleaded for. You would be utterly shocked and astounded if I told you what sort of hard disks (and how old) we were using. We're waiting for them to break. We should probably put an NFS export on one of those machines. I only found out about this build artifact problem this week, so I think it will be this week's priority to sort out.

    • (Score: 2) by turgid on Saturday June 03, @09:27AM (2 children)

      by turgid (4318) on Saturday June 03, @09:27AM (#1309562) Journal

      About 15 years ago I wrote, in bash, my own build and packaging system, and it ran on my home network. I had the ability to build in chrome jails (always 100% clean) as well as on the plain OS. It would build for various versions of Slackware, plus I had it working on Solaris Nevada. I had NFS volumes for the shared files, including the built packages.

      I was proud of it at the time. I used to have it running from a cron job and I could take the latest packages to install on my work machine the next day. I even had a cutting edge build of gcc, automatically downloaded every night, to play with.

      Then life became even more hectic. I reproduced! After a couple of years I looked at the scripts again with the benefit of hindsight and thought, "What on earth was I thinking? That's crazy." It worked, but it was very badly written. I could probably write something better now in two or three weekends.

      • (Score: 2) by turgid on Saturday June 03, @09:36AM (1 child)

        by turgid (4318) on Saturday June 03, @09:36AM (#1309564) Journal

        Chrome jails? Chroot jails. Autocorrect hasn't heard of chroot apparently, and on an Android device too!

        • (Score: 1, Funny) by Anonymous Coward on Sunday June 04, @05:22AM

          by Anonymous Coward on Sunday June 04, @05:22AM (#1309698)

          get off my lawn turgid, even with my big thumbs, first thing i do to any mobile device is turn off bloody autocorrect. Turgid old man claims to be an engineer...grumble...mumble...harrumph.

    • (Score: 2) by DannyB on Wednesday June 07, @02:55PM (5 children)

      by DannyB (5839) on Wednesday June 07, @02:55PM (#1310344) Journal

      Could you mount an NFS share or other network share inside the container and deposit the logs and other build artifacts in there?

      Can NFS be used over TLS? For that matter, can the finger protocol be used over TLS?

      Young people do not know about the finger protocol unless older people explain it to them.

      Don't use NFS. Use an "append only" mechanism. Such as an HTTP POST operation to push logs to another server that simply accumulates them. That way, a bad actor can only append data, but not removify it, whereas a good actor could win an oscar.
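
      Something in this spirit (a sketch; the endpoint is entirely hypothetical):

          # append-only log shipping: the builder can only ever POST more data
          curl -sS -X POST --data-binary @build.log \
            https://logs.example.internal/append/build-1234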

      --
      If you eat an entire cake without cutting it, you technically only had one piece.
      • (Score: 2) by RS3 on Wednesday June 07, @05:27PM (3 children)

        by RS3 (6367) on Wednesday June 07, @05:27PM (#1310372)

        I've never used it, but I believe stunnel [archlinux.org] will work for any TCP port.

        Also Red Hat's docs / how-to: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security_guide/sec-configuring_stunnel_wrapper [redhat.com]
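
        For NFS it would be something like this on the client side (untested sketch; names and ports invented, and NFSv4 assumed since it only needs the one TCP port). Run stunnel against a config like:

            ; nfs-tls.conf -- stunnel wraps the NFS TCP port in TLS
            [nfs]
            client = yes
            accept = 127.0.0.1:2049
            connect = fileserver.example:12049

        and then mount through the local end of the tunnel:

            mount -t nfs -o tcp,port=2049 127.0.0.1:/exports/builds /mnt/builds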

        The approach I usually use for these kinds of things is to limit the IP access range. So the containerized thing would only allow NFS mounts from local IP numbers, or just one IP (10.x.x.x, 192.168.x.x, etc.). If an Oscar loser gets into the host or one of the VMs, they'll wreak havoc, break statutes, statues, but it will be exciting and make for good news stories.

        Young people usually learn all about finger on their own, especially when it's against protocol.
         

        • (Score: 2) by turgid on Wednesday June 07, @09:21PM

          by turgid (4318) on Wednesday June 07, @09:21PM (#1310412) Journal

          Excellent, more new toys to play with!

        • (Score: 1, Interesting) by Anonymous Coward on Thursday June 08, @06:24AM (1 child)

          by Anonymous Coward on Thursday June 08, @06:24AM (#1310475)

          I like your idea, but there is a risk of mounting at the wrong level. The guests should be as similar as possible, which means SPMD is usually the right choice. It can be done in the guest, but then you can end up duplicating complexity unnecessarily. Also, mounting in the guest using individual shares can make it tricky to prevent the containers from clobbering each other, especially if you have to worry about one misbehaving. Doing it on the host can also simplify key/secret management to a degree. Those considerations mean that one common approach was to use a single share on the host and then map a subdirectory of it, with the proper name, to the same well-known location in each guest. However, I do believe that Docker/Moby (and some other container/VM systems) automatically push that complexity down using "automatic arguments," so that concern may be moot in their situation.

          • (Score: 1, Informative) by Anonymous Coward on Sunday June 11, @05:04AM

            by Anonymous Coward on Sunday June 11, @05:04AM (#1310954)

            You all may not know what SPMD is. In this context, it stands for Single Program Multiple Data. It is an approach to parallelism where the "programs" are all the same; only something in their inputs differs. There are a number of approaches that accomplish that. The most commonly used implementation sends a different message to each. However, most people start by changing something about the environment directly, or by building on their understanding of fork(), since those are the easiest to visualize.
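
            In shell terms it is just this sort of thing (a toy sketch):

                # SPMD: fork the same program several times, varying only the data
                for input in a.dat b.dat c.dat; do
                    ./process "$input" &    # identical program, different input
                done
                wait                        # collect all the children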

      • (Score: 0) by Anonymous Coward on Thursday June 08, @05:57AM

        by Anonymous Coward on Thursday June 08, @05:57AM (#1310474)

        Yes you can use any application protocol over TLS, which includes NFS. Some Linux and BSD kernels support it natively, even. Otherwise you have to use a terminator or tunnel. RS3 points out that the most commonly used one is stunnel but people use SSH too. Pushing it down the stack using IPSec is also an option, but isn't exactly nice to configure.

  • (Score: 1, Insightful) by Anonymous Coward on Saturday June 03, @01:11AM (6 children)

    by Anonymous Coward on Saturday June 03, @01:11AM (#1309504)

    OS Virtualization can be a good thing for you, especially since you are running on disparate systems. You can set up your pipeline with Docker images for distros equivalent to all of the machines you run your binaries on, plus theoretical ones you might run them on in the future. You can even use various techniques to make sure it works on different kernel versions too. It can be much easier to manage than lower levels of virtualization.

    And here is a hint for your problem: You are looking for Volumes you can --mount using the appropriate driver.
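
    For example (a sketch; the image name is invented):

        # bind-mount a host directory into the build container; whatever
        # the build writes to /out outlives the container
        mkdir -p artifacts
        docker run --rm \
          --mount type=bind,src="$PWD/artifacts",dst=/out \
          builder-image make install DESTDIR=/out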

    • (Score: 2) by quietus on Saturday June 03, @05:05AM (3 children)

      by quietus (6328) on Saturday June 03, @05:05AM (#1309523) Journal

      Sure it can. For testing. After you've got a working product on a single standardized system. But *not* for developing things in parallel, and only when there is a *very* good reason why you should have the same software running on disparate systems, which is becoming exceedingly rare -- when was the last time you used standalone software, instead of a subscription service?

      • (Score: 1, Insightful) by Anonymous Coward on Saturday June 03, @09:07AM (2 children)

        by Anonymous Coward on Saturday June 03, @09:07AM (#1309560)

        To address the second one first: every day. What do you think the browser you are posting that from is? Most software I use daily is standalone, without a subscription. I have a hard time coming up with anything I use that is a subscription, other than 365, even indirectly. If you are asking about daily use of subscription software, there is none.

        For the first point: you do realize we are talking about testing, right? Running things across target environments from day one helps you catch bugs early rather than late. Problems are easiest to fix early in the process, before dependencies are built on top of them. Will someone run it on Alpine/Debian/RH/musl/glibc/LibreSSL/OpenSSL/GCC/LLVM/etc.? Better make sure your software can handle it, or be OK with it not doing so. Will it break on older/newer systems? Better make sure you support the ones you need to. Further, turgid already identified that these utilities need to support multiple environments in order to prevent bugs from creeping in, especially Heisenbugs that depend on the environment, or bugs that are undiagnosable without much heavier lifting because the software "works for me." Maybe it is a difference in perspective, but I like my software well thought out and tested before I use it, not hacked together with bugfix and version kludges everywhere.

        • (Score: 2) by quietus on Saturday June 03, @04:23PM (1 child)

          by quietus (6328) on Saturday June 03, @04:23PM (#1309611) Journal

          Sigh. You are using subscription software right now: SoylentNews. Ditto for your mail account, your news sites, your music listening, your movie viewing, your gaming and so on: all subscription software, all web-based applications; and the same is true of just about any in-house company application you use. All are server-based, i.e. you can pretty much decide for yourself which platform you're going to use for development, which is mostly [externally] done by API calls anyway.

          As for your second para: maybe you missed the key line in turgid's description:

          Apparently we are using a plethora of diverse Docker containers from the public Internet for building various software.

          and

          Everyone in the team is developing their code on a different configuration. We have people using WSL (seriously) and others running various versions of Ubuntu in VMs. So we have these build pipelines running things like Alpine (because the image is small) which may or may not be anywhere near WSL or Ubuntu versions X to Y.

          • (Score: 1, Informative) by Anonymous Coward on Saturday June 03, @10:27PM

            by Anonymous Coward on Saturday June 03, @10:27PM (#1309645)

            Those are not subscription software, nor are they all web-based. But sure, let's (wrongly) boil that all down to simply being client-server architecture. Not all software is client-server, though, and some of the biggest-money and most commonly used software is not client-server. The utilities he is talking about aren't, unless you really stretch the definition. But the fact remains: when you write software for other people to use, you develop for all possible targets. That includes not only the ones you plan for it to run on but also the ones you develop on. That is the only way to avoid bugs from versions not working identically in different environments. I'll grant that they may be able to simplify all that by forcing their developers onto the same environment that matches the target environment, but that isn't an option for everyone. And short of that, if you write it and run it, then you have to test it.

    • (Score: 3, Informative) by turgid on Saturday June 03, @10:05AM (1 child)

      by turgid (4318) on Saturday June 03, @10:05AM (#1309568) Journal

      Many years ago, at the job where I learned to write software properly, we had a very thorough software development process which involved testing (unit, regression, static analysis, different compilers) on different hardware architectures and operating systems.

      The process was something like this.

      1. Write the C/C++ on Linux compiled with gcc (32-bit x86) using TDD, and get it working.
      2. Run the simulation of the target under Linux and show that the regression tests work and write any new ones etc.
      3. Build and test on Solaris/SPARC (32-bit big endian). Note the endianness! Many bugs are found here (see the little demo after this list).
      4. Compile on Windows (32-bit) using Visual C++ and attend to any extra warnings and errors that compiler finds.
      5. Then run the static analysis tools for C and C++ and fix any issues.
      6. Plug the development board into the Windows machine and run on representative hardware with the simulation enabled.
      7. Test on the other embedded target (the Linux PowerPC port).
      8. Get code checked into the source control system ready to push (ie rebase and test/fix as necessary).
      9. Get a code review.
      10. Push to the local CI.
      11. Check that the build passes. If not, repeat.
      12. Release upstream to the main project.
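
      The demo I promised for step 3, runnable in any shell, shows why the big-endian pass catches so much:

          # decode the same four bytes as a host-order 32-bit integer:
          # prints 1 on little-endian x86, 16777216 on big-endian SPARC
          printf '\001\000\000\000' | od -An -td4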

      Our software was pretty high quality, and I learned so much from that experience. We were an extremely productive little team because we didn't put new bugs in our code, and we fixed the old ones.

      • (Score: 1, Insightful) by Anonymous Coward on Sunday June 04, @11:20PM

        by Anonymous Coward on Sunday June 04, @11:20PM (#1309817)

        I love step three. One of the biggest recurring problems I have is people ignoring my warnings and trying to be too clever for the sake of speed only to come crawling back when it blows up in their face because their fanciness doesn't work properly once the batch runs on a big endian machine. Usually they ended up costing themselves a big chunk of money and more time than it would have taken to just do it properly first.

  • (Score: 1, Funny) by Anonymous Coward on Saturday June 03, @06:25AM (4 children)

    by Anonymous Coward on Saturday June 03, @06:25AM (#1309541)

    Yuck. Gross. No wonder this country is turning to shit.

    • (Score: 0) by Anonymous Coward on Saturday June 03, @02:26PM (3 children)

      by Anonymous Coward on Saturday June 03, @02:26PM (#1309602)

      Impatient you. They won't be around much longer. And just like a warranty, right after they expire you'll need them.

      • (Score: 1, Touché) by Anonymous Coward on Saturday June 03, @05:43PM (2 children)

        by Anonymous Coward on Saturday June 03, @05:43PM (#1309616)

        Problem is most older folks crap on the younger generations instead of helping them. Not all older techies, and it is a shame that a lot of knowledge will be lost with them, relegated to a smaller and smaller handful of specialists.

        • (Score: 1, Funny) by Anonymous Coward on Saturday June 03, @10:57PM (1 child)

          by Anonymous Coward on Saturday June 03, @10:57PM (#1309648)

          The brats are super-entitled, snotty, arrogant, disrespectful, distracted, impertinent, impatient, all-knowing, self-absorbed, act like everything previous is so ridiculous, closed-minded, need I go on? There are a few good ones, but fsck if I'm going to put an ounce of effort toward any that show any kind of disrespect or ungratefulness.

          • (Score: 1, Insightful) by Anonymous Coward on Friday June 09, @05:52PM

            by Anonymous Coward on Friday June 09, @05:52PM (#1310719)

            I knew it! I'm surrounded by assholes!

            I jest, but it is kind of true. Most humans are exactly as you describe but they also have lots of good qualities. Sounds like you never developed soft skills while feeling super entitled to respect. Demanding all the respect while refusing to help because:

            super-entitled, snotty, arrogant, disrespectful, distracted, impertinent, impatient, all-knowing, self-absorbed, act like everything previous is so ridiculous, closed-minded

            shows that you are at least a few things on that list. I fully understand your complaint, got some teens displaying a lot of those symptoms, but what doesn't help is piling on negativity and giving up on them when they are acting stupid. Be more like duck, water off back. Or get soaked in your cynicism, WWJD?

  • (Score: 3, Insightful) by istartedi on Saturday June 03, @04:38PM (3 children)

    by istartedi (123) on Saturday June 03, @04:38PM (#1309612) Journal

    Docker and other "big business" or "cloud" type stuff is also the last thing I ever want to know about, or care about. It's "because somebody pays me" software to me, and since I'm not working I don't care about it.

    I get that it has a purpose, but if I ever release anything again it'll have project files, plain vanilla makefiles for *NIX, and then you just go to town, son, and let me know if it doesn't build right.

    Things like Docker are a convention. Conventions have their purpose in large organizations where people need to communicate--like that damned 4-space indent everybody's using.

    One thing on the back burner is for me to either find or build a transparent interface to tabs, so I can indent like I want without getting carpal tunnel.

    Such a simple algo! I can outline it right here, in pseudo-code.

    OnLoad: Foreach line, check that each line has a multiple of N-spaces for indent, no trailing white, and no tabs mixed in with printable chars. If these criteria are not met, load un-modified file with a warning. Otherwise, convert N-spaces to tabs and mark file as tab'd.

    OnSave: If file was loaded as un-modified, save. Else, Foreach line, strip trailing whites and convert tabs to N spaces.
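
    In shell, the OnLoad pass is about a dozen lines (untested sketch, assuming N=4 and GNU coreutils; the .tabbed suffix is just for illustration):

        # verify the three criteria, then convert leading space groups to tabs
        N=4
        f="$1"
        if awk -v n="$N" '
            /[ \t]$/ { exit 1 }                             # trailing whitespace
            /\t/     { exit 1 }                             # tabs mixed in
            { match($0, /^ */); if (RLENGTH % n) exit 1 }   # indent not a multiple of N
        ' "$f"
        then
            unexpand --first-only -t "$N" "$f" > "$f.tabbed"  # mark file as tabbed
        else
            echo "warning: loading $f unmodified" >&2
        fi
        # OnSave is the reverse: expand -i -t "$N" turns leading tabs back into spaces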

    This ought to be available in every editor, or perhaps wedged into the repository. There are a few refinements, but the fact that we still argue over things like tabs vs. spaces is also indicative of why we haven't "landed on the Moon".

    --
    Appended to the end of comments you post. Max: 120 chars.
    • (Score: 2) by turgid on Saturday June 03, @06:24PM (2 children)

      by turgid (4318) on Saturday June 03, @06:24PM (#1309622) Journal

      When I first heard of Docker, it was being used as a container for specific applications because it saved the end user having to install all the right libraries on their system. It's a very big sledgehammer to crack a nut. You end up having an entire OS environment to run a single application. In these days of many gigabytes of RAM, terabytes of disk space, multicore CPUs and fast network connections, you can get away with it.

      • (Score: 4, Informative) by istartedi on Saturday June 03, @07:36PM (1 child)

        by istartedi (123) on Saturday June 03, @07:36PM (#1309629) Journal

        Yep. Out of the "DLL hell" frying pan, in to the bloat oven.

        --
        Appended to the end of comments you post. Max: 120 chars.
        • (Score: 3, Funny) by DannyB on Wednesday June 07, @03:02PM

          by DannyB (5839) on Wednesday June 07, @03:02PM (#1310347) Journal

          Now why are you bringing up Java by implying JAR hell without actually saying it?

          --
          If you eat an entire cake without cutting it, you technically only had one piece.
  • (Score: 3, Funny) by DannyB on Wednesday June 07, @03:01PM (1 child)

    by DannyB (5839) on Wednesday June 07, @03:01PM (#1310346) Journal

    My little command line utilities, a few hundred k each compiled, get compiled in their own Docker container. That's hundreds and hundreds and hundreds of megabytes of random junk to compile a few thousand lines of C.

    Please make sure that your command line utility inside that docker container is a flatpack.

    --
    If you eat an entire cake without cutting it, you technically only had one piece.