Subtitle: Angry Old Man Shakes Fist and Shouts at the Sky/Get off my Lawn
Alas, I am not young enough to know everything. Fortunately, I am surrounded at work by people who are, so I am not completely lost.
We had a very confident young hotshot who left some time ago for a very well-paid job "doing AI." He knew much. He knew that Bitbucket was the way to go. So we adopted Bitbucket, and we pay a subscription.
Bitbucket is pretty cool. It's very similar to GitLab. In a previous life I set up and ran my own GitLab server and had my own continuous integration pipeline. I really liked using it.
Now to the present. I have been doing PHB duties, then I was given emergency bug fixes to do on Critical Projects(TM) and all sorts of stuff, and, because reasons, I am writing code again for Critical Projects(TM) with tight deadlines, meanwhile trying to do all sorts of other stuff, including teaching the young ones things about C (everything's Python nowadays).
We had a crazy head of projects from the headless chicken school of management, and some months ago I was given a fortnight to write a suite of command line utilities to process some data in a pipeline from the latest Critical Project(TM). Specifications? Requirements? A rough idea of what might be in the system? Ha! Fortunately, the crazy head of projects got a new job and left.
I wrote this code, in C, from scratch, on my own, and in four days flat I had three working command line utilities, written using test driven development (TDD), with an additional layer of automated tests run by shell scripts, all at the beck and call of make. I cheated and wrote some scripts to help me write the code.
As you can imagine, these utilities are pretty small. We gave them to the new guy to finish off. Six weeks and lots of hand-holding later, I took them back to fix.
However, we have this "continuous integration" setup based on bitbucket. It's awfully like GitLab, which I used some years ago, so there are no great surprises.
Now we come to the super fun part. We build locally, not on Bitbucket's cloud, which is good. The strange thing is that since I got old, Docker has come along.
The young hotshot who knew everything decided that we needed to do all our builds in these Docker containers. Why?
A Docker container is just one of these Linux container things (built on the same kernel namespaces and cgroups machinery that LXC uses), a form of OS-level virtualisation somewhere between chroot jails and a proper VM, where the kernel presents an interface that looks like a whole system on its own (see Solaris 10 Containers). That means that you can run "arbitrary" Linux instances (with their own hostnames and IP addresses) on top of a single kernel. Or can you? Doesn't the userland have to be compatible with (integrated with) the kernel version and build that the host is running?
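A quick way to convince yourself of the shared-kernel part, assuming Docker and the stock alpine image are to hand (the hostname is made up):

    # A container gets its own hostname, filesystem and network stack,
    # but it runs on the host's kernel -- these two print the same version.
    uname -r
    docker run --rm --hostname pretend-host alpine uname -r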
This is a cool feature. You can have many lightweight pretend virtual hosts on a single machine without having a hypervisor. You can also use it to have a user-land environment with a specific configuration nailed down (set of tools, applications, libraries, user accounts). It might be a great way to set up a controlled build environment for software.
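Done properly, a controlled build could look something like this sketch, with a pinned toolchain image and the source bind-mounted in (the image tag and make targets are just examples, not what we actually run):

    # Build in a nailed-down environment: one pinned image, source mounted
    # from the host, so the artifacts land straight back on a real disk.
    docker run --rm \
        -v "$PWD":/src -w /src \
        gcc:12.2 \
        make clean all test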
For the last hundred years or so anyone who knows anything about making stuff (engineering) understands that you need to eliminate as much variation in your process as possible for quality, reliability and predictability.
So here's the thing - do you think our young hotshot and his successors have this sorted out? Think again!
I needed to set up some build pipelines and I was shocked. Apparently we are using a plethora of diverse Docker containers from the public Internet for building various software. But that's OK, they're cached locally...
Never mind that this stuff has to work properly.
Everyone in the team is developing their code on a different configuration. We have people using WSL (seriously) and others running various versions of Ubuntu in VMs. So we have these build pipelines running things like Alpine (because the image is small) which may or may not be anywhere near WSL or Ubuntu versions X to Y.
It gets better. Everything we do, every piece of software we build has its own Docker container. And then it goes onto a VM which gets "spun up" in the Microsoft(R) Azure(TM) cloud.
My little command line utilities, a few hundred k each once compiled, each get built in their own Docker container. That's hundreds and hundreds and hundreds of megabytes of random junk to compile a few thousand lines of C. When I type make on my command line (in my Ubuntu VM), each one takes under a second to build and run the unit tests, then rebuild and run the automated regression tests.
The final thing that takes the cake is that I have to release these tools to another department (which they'll then put in a "pipeline" on "the cloud") and after about a year of having this amazing set-up for continuous integration, the young folk can't tell me (and they haven't figured it out yet) how to get the built binaries out of the build system.
Because the builds are done in Docker containers, the build artifacts are in the containers and the container images are deleted at the end of the build. So tell it not to delete the image? Put a step in the build script to copy the artifacts out onto a real disk volume?
"We don't know how."
There's a reason human beings haven't set foot on the Moon in over 50 years, and the way things are going our own stupidity will be the end of us. Mark my words.
(Score: 3, Insightful) by RS3 on Saturday June 03, @01:08AM (12 children)
Could you mount an NFS share or other network share inside the container and deposit the logs and other build artifacts in there? You'd need to automate it in the scripts, including (probably) different names for each build container.
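Something along these lines, maybe (server address and paths invented):

    # Declare an NFS-backed volume once, then hand it to each build
    # container; the artifacts end up on the NFS server, not in the image.
    docker volume create --driver local \
        --opt type=nfs \
        --opt o=addr=10.0.0.5,rw \
        --opt device=:/srv/build-artifacts \
        buildshare
    docker run --rm -v buildshare:/artifacts some-build-image \
        cp /build/bin/mytool /artifacts/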
(Score: 2) by turgid on Saturday June 03, @09:06AM (2 children)
That's what I said. Young people don't know about NFS.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 1, Informative) by Anonymous Coward on Saturday June 03, @10:52PM (1 child)
One of the problems with tech getting good enough that few people need to troubleshoot is that even young coders get to focus purely on programming, and might have zero knowledge of OS drivers or network issues beyond sending files over the internet.
(Score: 2) by turgid on Sunday June 04, @08:52AM
One huge problem we have is that the business side of the company is obsessed with Microsoft Azure. They are pushing really hard to have everything on Azure. We have the usual Windows network with a single CIFS volume that we can mount from our PeeCees and our Linux instances. It's slow and unreliable and always short of space.
Management has got it into their heads that doing everything on the cloud is cheaper. I spoke to our IT provider about getting some cloud storage specifically for our needs a while back and I was shocked at how much they charged for Azure. It was something on the order of ten times the price of an equivalent SATA disk per month! So it was like buying ten SATA disks a month, forever, just to keep having access to the data, plus it was over the internet (slow), plus they charge extra for NFS (instead of CIFS).
We have a couple of very pathetic old second-hand machines, running Linux, that we use as physical servers, which we begged and pleaded for. You would be utterly shocked and astounded if I told you what sort of hard disks (and how old) we were using. We're waiting for them to break. We should probably put an NFS export on one of those machines. I only found out about this build artifact problem this week, so I think it will be this week's priority to sort out.
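The export itself is hardly any work; something like this, give or take paths and subnets (all placeholders):

    # On the old Linux box: export a directory to the local subnet...
    sudo mkdir -p /srv/build-artifacts
    echo '/srv/build-artifacts 192.168.1.0/24(rw,sync,no_subtree_check)' | \
        sudo tee -a /etc/exports
    sudo exportfs -ra
    # ...and on a client, mount it.
    sudo mkdir -p /mnt/artifacts
    sudo mount -t nfs oldserver:/srv/build-artifacts /mnt/artifacts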
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by turgid on Saturday June 03, @09:27AM (2 children)
About 15 years ago I wrote, in bash, my own build and packaging system, and it ran on my home network. I had the ability to build in chrome jails (always 100% clean) as well as on the plain OS. It would build for various versions of Slackware, plus I had it working on Solaris Nevada. I had NFS volumes for the shared files, including the built packages.
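From memory, the jail part boiled down to something like this (the paths, tarball and tool names are placeholders, and the real scripts did a lot more housekeeping):

    # Unpack a pristine base system, drop the source in, build inside it,
    # pull the result back out. A clean userland every single time.
    JAIL=/build/jails/slackware
    rm -rf "$JAIL" && mkdir -p "$JAIL"
    tar xf slackware-base.tar.xz -C "$JAIL"
    cp -r ~/src/mytool "$JAIL/tmp/mytool"
    chroot "$JAIL" /bin/sh -c 'cd /tmp/mytool && make clean all'
    cp "$JAIL"/tmp/mytool/mytool /srv/packages/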
I was proud of it at the time. I used to have it running from a cron job and I could take the latest packages to install on my work machine the next day. I even had a cutting edge build of gcc, automatically downloaded every night, to play with.
Then life became even more hectic. I reproduced! After a couple of years I looked at the scripts again with the benefit of hindsight and thought, "What on earth was I thinking? That's crazy." It worked, but it was very badly written. I could probably write something better now in two or three weekends.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by turgid on Saturday June 03, @09:36AM (1 child)
Chrome jails? Chroot jails. Autocorrect hasn't heard of chroot apparently, and on an Android device too!
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 1, Funny) by Anonymous Coward on Sunday June 04, @05:22AM
get off my lawn turgid, even with my big thumbs, first thing i do to any mobile device is turn off bloody autocorrect. Turgid old man claims to be an engineer...grumble...mumble...harrumph.
(Score: 2) by DannyB on Wednesday June 07, @02:55PM (5 children)
Can NFS be used over TLS? For that matter, can the finger protocol be used over TLS?
Young people do not know about the finger protocol unless older people explain it to them.
Don't use NFS. Use an "append only" mechanism, such as an HTTP POST operation to push logs to another server that simply accumulates them. That way, a bad actor can only append data, but not removify it, whereas a good actor could win an Oscar.
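Hypothetically, and with a made-up URL, the pushing end is about this exciting:

    # POST the build log to a collector that only ever appends.
    curl --fail --silent \
        -X POST \
        -H 'Content-Type: text/plain' \
        --data-binary @build.log \
        https://logs.example.internal/append/mytool-build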
If you eat an entire cake without cutting it, you technically only had one piece.
(Score: 2) by RS3 on Wednesday June 07, @05:27PM (3 children)
I've never used it, but I believe stunnel [archlinux.org] will work for any TCP port.
Also Red Hat's docs / how-to: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security_guide/sec-configuring_stunnel_wrapper [redhat.com]
The approach I usually use for these kinds of things is to limit the IP access range, so the containerized thing would only allow NFS mounts from local IP addresses, or just one IP (10.x.x.x, 192.168.x.x, etc.). If an Oscar loser gets into the host or one of the VMs, they'll wreak havoc, break statutes, statues, but it will be exciting and make for good news stories.
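For the stunnel part, the shape of it is roughly this sketch (never tested by me; hostnames, ports and cert paths are invented), keeping NFSv4 on its single TCP port so the wrapping stays simple:

    # Server side: terminate TLS and hand traffic to the local NFS port.
    printf '%s\n' \
        'cert = /etc/stunnel/nfs.pem' \
        '[nfs]' \
        'accept  = 0.0.0.0:20490' \
        'connect = 127.0.0.1:2049' \
        > /etc/stunnel/nfs-server.conf
    stunnel /etc/stunnel/nfs-server.conf

    # Client side: a local plaintext port that tunnels out over TLS.
    printf '%s\n' \
        'client = yes' \
        '[nfs]' \
        'accept  = 127.0.0.1:2049' \
        'connect = oldserver:20490' \
        > /etc/stunnel/nfs-client.conf
    stunnel /etc/stunnel/nfs-client.conf

The client would then mount against 127.0.0.1 and the traffic goes out over the TLS tunnel.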
Young people usually learn all about finger on their own, especially when it's against protocol.
(Score: 2) by turgid on Wednesday June 07, @09:21PM
Excellent, more new toys to play with!
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 1, Interesting) by Anonymous Coward on Thursday June 08, @06:24AM (1 child)
I like your idea, but there is a risk of mounting at the wrong level. The guests should be as similar as possible, which means SPMD is usually the right choice. The mount can be done in the guest, but then you can end up duplicating complexity unnecessarily. Also, mounting individual shares in each guest makes it tricky to keep them from clobbering each other, especially if you have to worry about a container misbehaving, though it can simplify key/secret management to a degree. Those considerations mean that one common approach was to use a single share on the host and then map a subdirectory of it, with the proper name, to the same well-known location in each guest. However, I do believe that Docker/Moby (and some other container/VM systems) push that complexity around automatically with "automatic arguments," so that concern may be moot in their situation.
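As a rough illustration of the single-share-plus-subdirectory arrangement (all names, paths and the BUILD_ID variable are invented):

    # One share mounted on the host; each container gets its own
    # subdirectory mapped to the same well-known path inside.
    HOST_SHARE=/mnt/build-artifacts
    mkdir -p "$HOST_SHARE/$BUILD_ID"
    docker run --rm \
        -v "$HOST_SHARE/$BUILD_ID":/out \
        some-build-image \
        sh -c 'cp /build/bin/* /out/'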
(Score: 1, Informative) by Anonymous Coward on Sunday June 11, @05:04AM
You all may not know what SPMD is. In this context, it stands for Single Program Multiple Data. It is an approach to parallelism where the programs are all the same and only their inputs differ. There are a number of ways to accomplish that. The most commonly used implementation sends a different message to each instance. However, most people start by changing something about the environment directly, or by building on an understanding of fork(), since those are the easiest to visualize.
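In shell terms, the fork()-flavoured version is just the same program launched over different inputs (the program and file names are placeholders):

    # Same program, different data: that's the whole SPMD idea.
    for f in inputs/*.dat; do
        ./mytool "$f" > "results/$(basename "$f" .dat).out" &
    done
    wait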
(Score: 0) by Anonymous Coward on Thursday June 08, @05:57AM
Yes you can use any application protocol over TLS, which includes NFS. Some Linux and BSD kernels support it natively, even. Otherwise you have to use a terminator or tunnel. RS3 points out that the most commonly used one is stunnel but people use SSH too. Pushing it down the stack using IPSec is also an option, but isn't exactly nice to configure.